Title: REGULARITY OF GEODESICS OF SINGULAR KÄHLER METRICS
Authors: Jianchun Chu, Nicholas McCleerey
Author affiliations: (none listed)
Venue: (none listed)
Abstract: We show the optimal C^{1,1} regularity of geodesics in a nef and big cohomology class on Kähler manifolds away from the non-Kähler locus, assuming sufficiently regular initial data. As a special case, we prove the C^{1,1} regularity of geodesics of Kähler metrics on compact Kähler varieties away from the singular locus. Our main novelty is an improved boundary estimate for the complex Monge-Ampère equation that does not require strict positivity of the reference form near the boundary. We also discuss the case of some special geodesic rays.
DOI: null
PDF URLs: https://arxiv.org/pdf/1901.02105v1.pdf
Corpus ID: 119297784
arXiv ID: 1901.02105
PDF SHA: ed647dfd7a012470f2bb0370af403ac61ad4ffb5
REGULARITY OF GEODESICS OF SINGULAR KÄHLER METRICS

Jianchun Chu and Nicholas McCleerey

7 Jan 2019

We show the optimal C^{1,1} regularity of geodesics in a nef and big cohomology class on Kähler manifolds away from the non-Kähler locus, assuming sufficiently regular initial data. As a special case, we prove the C^{1,1} regularity of geodesics of Kähler metrics on compact Kähler varieties away from the singular locus. Our main novelty is an improved boundary estimate for the complex Monge-Ampère equation that does not require strict positivity of the reference form near the boundary. We also discuss the case of some special geodesic rays.
Introduction
In [27], Mabuchi introduced a Riemannian structure on the space of Kähler metrics on a compact manifold X without boundary. Later, Semmes [34] and Donaldson [20] independently showed that these geodesics could be given as solutions to the Dirichlet problem for the complex Monge-Ampère operator, and since then there has been a great deal of work to establish regularity and positivity properties of such solutions - see [3,4,11,12,13,15,24,31,32,33,35]. In particular, the recent work of Chu-Tosatti-Weinkove [11] establishes C^{1,1} regularity of solutions (based on their earlier work [10]), which is known to be optimal by examples of Lempert-Vivas [30], Darvas-Lempert [18], and Darvas [14].
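For context, we recall the Semmes-Donaldson reformulation in the standard form (this is a well-known statement recalled for the reader's convenience; the symbols φ_0, φ_1 are generic endpoint potentials, not notation fixed later in this paper). A path (φ_t)_{t∈[0,1]} of potentials for a Kähler form ω on X is a geodesic precisely when its rotation-invariant extension Φ(x, τ), with t = log|τ|, is a nonnegatively curved solution of the homogeneous complex Monge-Ampère equation on the product with an annulus:
\[
(\pi^*\omega + \sqrt{-1}\,\partial\bar{\partial}\Phi)^{n+1} = 0 \ \text{ on } X \times A, \qquad \pi^*\omega + \sqrt{-1}\,\partial\bar{\partial}\Phi \ge 0, \qquad \Phi|_{\log|\tau|=0} = \varphi_0, \quad \Phi|_{\log|\tau|=1} = \varphi_1,
\]
where A = {τ ∈ C : 1 < |τ| < e} and π : X × A → X is the projection. This is the Dirichlet problem referred to throughout the introduction, and it is the formulation used in Section 3 below.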
An obvious follow-up question is to consider geodesics between Kähler metrics on a singular Kähler variety. Using Hironaka's theorem [25] on resolution of singularities, one usually exchanges the singular variety with a strictly positive Kähler metric for a smooth space with a degenerate metric. One then defines a geodesic between these degenerate metrics as in Semmes/Donaldson, i.e. as a solution to a Dirichlet problem for the complex Monge-Ampère operator. It is then natural to ask about regularity for general solutions to this Dirichlet problem, not necessarily those arising as geodesics on a singular variety. In fact, we shall go a step further and investigate regularity for the Dirichlet problem when the reference form is not even semi-positive, but merely nef and big. This will not only cover the previous setup, but will also include the case of geodesics in a nef and big class. We refer the reader to [16,21] for previous work in the semi-positive case, and [22] for the nef and big case.
When working in such generality, there are two fundamental problems that one must overcome: the first is a lack of inherent positivity near the boundary, which prevents previously known boundary estimates [8,23,9] from being applied. The second is that we will need to allow our boundary data to be unbounded from below - our approach is to approximate the unbounded boundary data by smooth functions. Unfortunately, in full generality we will have to leave this as a technical assumption, but we provide a construction of the approximations in the case of geodesics in a nef and big class.
Our main result follows the approach in [2,8,23,6,28] with several technical improvements - we state our main theorem now, delaying some notation and definitions until the end of the section.

Theorem 1.1. Suppose that (M, ω) is a compact Kähler manifold with weakly pseudoconcave boundary. Let α be a smooth, real (1,1)-form on M that is ψ-big and nef, and suppose there is a function ϕ ∈ PSH(M, α), ψ ≤ ϕ ≤ 0, such that:
(a) There exists a sequence of smooth functions ϕ_ε ∈ PSH(M, α + εω) ∩ C^∞(M), decreasing to ϕ, such that we have the bounds:
|∇ϕ_ε|_ω + |∇²ϕ_ε|_ω ≤ C e^{−B_0 ψ}
for each ε > 0, with B_0, C fixed positive constants.
(b) We have the key positivity condition:
(1.1) α + εω + √−1 ∂∂ϕ_ε ≥ c e^{B_0 ψ} ω,
for each ε > 0, with B_0, c fixed positive constants.
Then the envelope:
V := sup{v ∈ PSH(M, α) | v|_∂M ≤ ϕ|_∂M}
is in C^{1,1}_loc(M \ Sing(ψ)).

We then get as immediate corollaries:

Corollary 1.2. Given two cohomologous Kähler metrics ω_1, ω_2 on a singular Kähler variety X, the geodesic connecting them is in C^{1,1}_loc(X_reg × A), where A ⊂ C is an annulus and X_reg is the smooth part of X.

Here, by a compact Kähler variety we mean a reduced, irreducible compact complex analytic space which admits a Kähler metric in the sense of Moishezon [29]. Note that we do not need X to be normal.

Corollary 1.3. Let (X, ω) be a smooth Kähler manifold without boundary, [α] a big and nef class, and ϕ_1, ϕ_2 two α-psh, exponentially smooth functions with the same singularity type that also satisfy the condition (1.1) and ψ ≤ ϕ_1, ϕ_2. Then the geodesic connecting ϕ_1 and ϕ_2 is in C^{1,1}_loc((X \ E_{nK}(α)) × A), where again A ⊂ C is an annulus.
Here we say that a function f is more singular than g if f ≤ g + C for some constant C. Similarly, we say that f and g have the same singularity type if |f − g| ≤ C.
The most interesting case in which to apply Corollary 1.3 is when ϕ_1 and ϕ_2 have minimal singularities. Note however that the corollary covers more singular initial data as well, provided there is an appropriate Kähler current ψ - for instance, it deals with the case when both ϕ_1 and ϕ_2 are also Kähler currents. Of course though, if ψ is singular on a larger set than E_{nK}(α), then one gets worse estimates. Corollary 1.2 was raised as an explicit question at the AIM workshops "The complex Monge-Ampère equation" [5, Question 7] and "Nonlinear PDEs in real and complex geometry" [36, Question 2.8], and Corollary 1.3 confirms an expectation raised in [17, pg. 396].
Note that if ϕ in Theorem 1.1 is actually smooth near the boundary, one can simply take ϕ_ε = max{ϕ, v_ε − C_ε} for all ε > 0, where the v_ε come from the nef condition (see below), and the C_ε are large constants such that ϕ_ε = ϕ near the boundary - this is because we don't actually need the estimates in part (a) to hold everywhere, only near the boundary. In this manner we recover, and actually improve upon, [28, Theorem 1.3].
Remark 1.4. Finally, we note that some of the estimates in this paper can be combined with the technique of [11] to improve upon the main result of [24] - the proof is straightforward but tedious, so we leave the details to the interested reader. The merit of He's technique is that it does not require any positivity of the boundary data beyond that they be quasi-psh. However, it applies only in the setting of geodesics and the overall conclusion is weaker, establishing C^{1,1} regularity in the spatial directions only, i.e. not in the annular directions.

Theorem 1.1 will be proved in Section 2. The new boundary estimate is established in Proposition 2.3 - it is an a priori bound for the tangent-normal derivatives along the Berman path [2]. We prove Corollaries 1.2 and 1.3 in Section 3, and briefly discuss the case of geodesic rays - we mainly observe that the results in [28] still apply in this generality. Finally, we include an appendix containing some estimates for the Dirichlet problem for the ω-Laplacian when the boundary data is degenerating, which will be needed in the proof of Theorem 1.1.

We now set some notation and definitions, which are standard in the case of a manifold without boundary, but do not generally make sense when there is boundary - we shall adopt their use however for convenience and enhanced readability. Throughout, (M^n, ω) will be a compact Kähler manifold with non-empty boundary, of complex dimension n.
First, notation - given a smooth (1,1)-form α on M, and a function f : M° → R, we write:
α_f := α + √−1 ∂∂f.
Abusing notation, we will also mean by f|_∂M the upper-semicontinuous extension of f to ∂M - that is, we define:
f|_∂M(x_0) := lim sup_{x→x_0} f(x) for all x_0 ∈ ∂M.
Note that if f is actually continuous up to the boundary, then our definition agrees with the usual one.
We now make some definitions. First, we say that ϕ : M° → R is α-plurisubharmonic (or α-psh), and write ϕ ∈ PSH(M, α), if ϕ is upper-semicontinuous and satisfies α_ϕ ≥ 0 in the sense of currents. Now, we say that a closed, real (1,1)-form α on M is big if there exists a function ψ and a constant δ > 0 such that:
(1) ψ ≤ 0.
(2) ψ is exponentially smooth - i.e. e^{Cψ} ∈ C^∞(M) for some C. Note that this forces ψ to be bounded above, but not below, even at the boundary.
(3) α_ψ ≥ δω - i.e. α_ψ is a Kähler current.
It is somewhat more proper to call α ψ-big, in order to emphasize that there is no canonical ψ, as there is in the boundary-less case - we will not always do so however, sometimes leaving the ψ implicit. Note also that condition (2) is weaker than assuming ψ has analytic singularities, which is usually done in the case without boundary.
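As a simple illustration of condition (2), and a standard example rather than one taken from this paper: if s_1, ..., s_m are holomorphic sections of Hermitian line bundles (L_k, h_k) over M with common zero locus Z, then
\[
\psi \;=\; \tfrac{1}{2}\log\Big(\sum_{k=1}^{m} |s_k|^2_{h_k}\Big) \;-\; C
\]
is exponentially smooth, since e^{2ψ} = e^{−2C} ∑_k |s_k|²_{h_k} is a smooth function; it is bounded above (so ψ ≤ 0 once C is large enough), while ψ tends to −∞ along Z.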
Finally, we say that α is nef if for every ε > 0 there exists a bounded function v_ε, smooth up to the boundary, such that:
α + εω + √−1 ∂∂v_ε > 0.
C^{1,1}-Estimates for Big and Nef Classes
First, we point out that one can relax the assumptions in (1.1) slightly - throughout, we'll work with this other function F, as it makes the proof slightly easier to follow:

Proposition 2.1. Suppose that F is an exponentially smooth, quasi-psh function such that:
(2.1) α + εω + √−1 ∂∂ϕ_ε ≥ e^F ω for all ε > 0.
Then there exists an exponentially smooth, strictly α-psh function ψ̃ such that (1.1) holds for all ε > 0 and
α + √−1 ∂∂ψ̃ ≥ (δ/2) ω.
Further, ψ̃ will be singular only along Sing(ψ) ∪ Sing(F).
Proof. Let ψ be as in the big condition. We may assume without loss of generality that F ≤ 0. By assumption,
√−1 ∂∂F ≥ −Cω,
so if we define:
ψ̃ := ψ + (δ/(2C)) F,
we have the claim, with c = 1 and B_0 = 2C/δ.
Note that, if we happen to have ϕ_ε = ϕ for all ε > 0 (so that ϕ is actually smooth and α is "semipositive"), then we can scale ω such that ω ≥ α_ϕ. Then the function
F := log((α_ϕ)^n / ω^n)
is exponentially smooth and satisfies (2.1) - to see this, look at the eigenvalues λ_j of α_ϕ with respect to ω in normal coordinates for ω at a point: since ω ≥ α_ϕ forces each λ_k ≤ 1,
λ_j = (∏_{k=1}^n λ_k) / (∏_{k≠j} λ_k) ≥ ∏_{k=1}^n λ_k = e^F.
Unfortunately, such an F will not always be quasi-psh - however, we will show in Section 3 that in the case of a geodesic between two Kähler metrics on a singular Kähler variety, we can always find an exponentially smooth F' ≤ F that will be quasi-psh, and hence we will be able to apply our results in that setting.
Before moving on, observe that being exponentially smooth gives control over all derivatives of F and ψ:
(2.2) |∇ψ|_g + |∇²ψ|_g ≤ C e^{−B_0 ψ}, |∇F|_g + |∇²F|_g ≤ C e^{−B_0 F},
where here g is the Riemannian metric corresponding to ω. Also, by replacing ε with ε/2 and relabeling the ϕ_ε, we can improve condition (2.1) to
(2.3) α + εω + √−1 ∂∂ϕ_ε ≥ (e^F + ε/2) ω,
without changing the problem. Finally, we will also assume without loss of generality that:
(2.4) α ≤ ω.
Proof of Theorem 1.1. Our strategy will be the same as in [28], which is a combination of the techniques in [2] and the estimates in [12]. We begin by approximating V by the following envelopes:
V_ε := sup{v ∈ PSH(M, α + εω) | v|_∂M ≤ ϕ_ε|_∂M}.
Note that the V_ε decrease pointwise to V as ε decreases to 0. We now define the obstacle functions h_ε ∈ C^∞(M) as the solutions to the following Dirichlet problems:
Δ_{2ω} h_ε = −n, h_ε|_∂M = ϕ_ε|_∂M.
By Lemmas A.1 and A.2, we have control over the derivatives of the h_ε:
(2.5) |∇h_ε|_g + |∇²h_ε|_g + |∇³h_ε|_g ≤ C e^{−B_0 ψ},
and we know that they decrease as ε → 0. In particular, for all ε ≤ 1,
h_ε ≤ h_1 ≤ C, independent of ε.
Observe that for any v ∈ PSH(M, α + εω) with v|_∂M ≤ ϕ_ε|_∂M, we have that:
(2.6) v ≤ h_ε
by (2.4) and the weak maximum principle for the ω-Laplacian. We thus see that:
V_ε = sup{v ∈ PSH(M, α + εω) | v ≤ h_ε},
where the inequality now holds on all of M.
We now approximate the V_ε by the smooth solutions to the non-degenerate Dirichlet problem:
(2.7) (α + εω + √−1 ∂∂u_{ε,β})^n = e^{β(u_{ε,β} − h_ε) + n log(ε/4)} ω^n, α + εω + √−1 ∂∂u_{ε,β} > 0, u_{ε,β}|_∂M = ϕ_ε|_∂M.
By [2, Proposition 2.4] and [28, Proposition 4.5], the u_{ε,β} converge uniformly to V_ε as β → ∞ (the convergence is not uniform in ε, but this will not be a problem, as we will send β to infinity before sending ε to zero). Note that as u_{ε,β} ∈ PSH(M, α + εω) with u_{ε,β}|_∂M = ϕ_ε|_∂M, by (2.6), we have:
(2.8) u_{ε,β} − h_ε ≤ 0.
Also note that:
(α + εω + √−1 ∂∂ϕ_ε)^n ≥ (ε/2)^n ω^n ≥ e^{β(ϕ_ε − h_ε) + n log(ε/4)} ω^n,
where we used (2.3) and ϕ_ε ≤ h_ε. This makes ϕ_ε a subsolution of (2.7), so that u_{ε,β} actually exists (by results of [8,23]), and we have
(2.9) ϕ_ε ≤ u_{ε,β}.
Our goal is to establish uniform C²-estimates for u_{ε,β}, independent of ε and β (we drop the subscripts now for ease of notation). We have just shown the requisite C⁰-bound:
(2.10) ψ ≤ ϕ ≤ ϕ_ε ≤ u ≤ h_ε ≤ h_1,
and since ϕ_ε|_∂M = u|_∂M = h_ε|_∂M, it follows that the gradient is bounded on ∂M:
(2.11) |∇u|_g ≤ |∇ϕ_ε|_g + |∇h_ε|_g ≤ C e^{−B_0 ψ} on ∂M.
We now bound the gradient on the interior by simplifying the argument in [12, Lemma 4.1, iii)]:

Lemma 2.2. There exist uniform constants β_0, B, and C such that
|∇u|_g ≤ C e^{−Bψ} for all β ≥ β_0.

Proof. We begin by defining:
ω̃ := α + εω + √−1 ∂∂u.
Then note that, as u ≥ ψ and ψ ≤ 0, we have:
ũ := u − (1 + δ)ψ ≥ −δψ ≥ 0,
and:
ω̃ − √−1 ∂∂ũ = (1 + δ)(α + √−1 ∂∂ψ) − δα + εω ≥ (1 + δ)δω − δα ≥ δ²ω,
by (2.4). We now seek to bound the quantity:
Q := e^{H(ũ)} |∇u|²_g,
by a constant independent of ε and β, where here H(s) is defined for s ≥ 0 as:
H(s) := −Bs + 1/(s + 1),
for some large constant B to be determined (note that H'(s) = −B − (s+1)^{−2} < 0 and H''(s) = 2(s+1)^{−3} > 0). Let x_0 be a maximum point for Q - it cannot be in Sing(ψ), as Q is zero there. If it is on the boundary of M, by (2.11) and ũ ≥ −δψ, we have
Q(x_0) ≤ C e^{−Bũ(x_0) − B_0 ψ(x_0)} ≤ C e^{(Bδ − B_0)ψ(x_0)} ≤ C,
provided we take B > B_0/δ. Thus, suppose that x_0 is an interior point. It suffices to prove that
(2.12) |∇u|²_g(x_0) ≤ C e^{−Bψ(x_0)},
for some uniform constant C - by the same argument as above, this would imply Q is uniformly bounded.
Next, following the estimates in [12, Lemma 4.1, iii)] (replacing δ with δ²), we see that:
0 ≥ e^{−H} Δ_g̃ Q(x_0) ≥ H''|∇u|²_g |∇ũ|²_g + (−(δ²/2)H' − C)|∇u|²_g tr_ω̃ ω + (CH' + 2β)|∇u|²_g − 2β Re⟨∇h_ε, ∇u⟩_g + 2Re⟨∇(n log(ε/4)), ∇u⟩_g + CH' e^{−B_0ψ} + CH' |∇ũ|²_g,
for some uniform constant C > 1, whose exact value will change from line to line (here g̃ is the Riemannian metric corresponding to ω̃). Picking then
B = max{(2/δ²)(C + 1), B_0/δ, 3},
we may use the definition of H to see that for β ≥ β_0 := C(B + 1) + 1, we have:
(2.13) 0 ≥ 2|∇u|²_g |∇ũ|²_g / (ũ + 1)³ + (β + 1)|∇u|²_g − 2β|∇h_ε|_g |∇u|_g − C(B + 1)e^{−B_0ψ} − C(B + 1)|∇ũ|²_g.
We may now assume that at x_0 we have both:
|∇u|²_g ≥ C(B + 1)(ũ + 1)³ and |∇u|_g ≥ 2|∇h_ε|_g.
If either condition fails, we obtain |∇u|²_g(x_0) ≤ C e^{−B_0ψ(x_0)} directly. Otherwise, we still get:
C(B + 1)e^{−B_0ψ} ≥ |∇u|²_g(x_0),
from (2.13), as required.
We now bound the Hessian near the boundary.

Proposition 2.3. Assume that the key condition (2.3) is satisfied for each ε > 0. Then there exist uniform constants B and C such that
|∇²u|_g ≤ C e^{−B(F+ψ)} on ∂M.

Proof. Fix a point p ∈ ∂M, and center coordinates ({z_i}_{i=1}^n, B_R) at p, where here B_R is a ball of radius R. Write z_i = x_{2i−1} + √−1 x_{2i} for 1 ≤ i ≤ n. Let r be a defining function for M in B_R (so that {r ≤ 0} = M ∩ B_R and {r = 0} = ∂M ∩ B_R). After making a quadratic change of coordinates, we may assume that (see [6, Remark 7.13])
r(z) = −x_{2n} + Re(∑_{i,j=1}^n a_{ij} z_i z_j) + O(|z|³) near p.
As in [6, pg. 272], we define the tangent vector fields
D_γ = ∂/∂x_γ − (r_{x_γ}/r_{x_{2n}}) ∂/∂x_{2n}
for 1 ≤ γ ≤ 2n − 1 and the normal vector field
D_{2n} = −(1/r_{x_{2n}}) ∂/∂x_{2n}.
Recall that we are writing:
ω̃ := α + εω + √−1 ∂∂u and ω̃ = √−1 ∑_{i,j=1}^n g̃_{ij̄} dz_i ∧ dz̄_j.
We also denote the inverse matrix of (g̃_{ij̄}) by (g̃^{ij̄}). Throughout, C will be a constant, independent of ε, β, whose exact value may change from line to line.
We split the proof into three steps:

Step 1. The tangent-tangent derivatives.

Since u = ϕ_ε on ∂M, at 0 (p ∈ ∂M), we have
|D_γ D_η u| = |D_γ D_η ϕ_ε| ≤ C e^{−Cψ} for 1 ≤ γ, η ≤ 2n − 1,
as desired.
Step 2. The tangent-normal derivatives.

We define
U := u − h_ε and U_γ := D_γ U.
Now consider the following quantity:
w = µ_1(u − ϕ_ε) + e^{BF} µ_2 |z|² − e^{B(F+ψ)} |U_γ| − e^{B(F+ψ)} U_γ²,
where 1 ≤ γ ≤ 2n − 1 and µ_1, µ_2, and B are large constants to be determined later. Note that the γ derivatives/directions are real, while the i derivatives/directions will be complex.
We claim that
(2.14) w ≥ 0 on B_R.
Given the claim, we can control the tangent-normal derivatives as follows. Dropping the square term, we see:
|U_γ| ≤ µ_1 e^{−B(F+ψ)}(u − ϕ_ε) + µ_2 e^{−Bψ}|z|².
At 0, both sides are 0, so
|D_{2n} U_γ| ≤ D_{2n}(µ_1 e^{−B(F+ψ)}(u − ϕ_ε) + µ_2 e^{−Bψ}|z|²).
Combining this with (2.2) and Lemma 2.2, we see that
|D_{2n} U_γ| ≤ C e^{−(B+B_0)(F+ψ)}.
By the definition of U_γ and (2.5), we then have
|D_{2n} D_γ u| ≤ |D_{2n} U_γ| + |D_{2n} D_γ h_ε| ≤ C e^{−(B+B_0)(F+ψ)},
as desired.
We now prove (2.14). We first show that it holds on the boundary of B_R ∩ M, and then extend it to the interior by a minimum principle argument. ∂(B_R ∩ M) has two components, ∂B_R ∩ M and B_R ∩ ∂M. On the second part, since u = h_ε = ϕ_ε on ∂M, we see w = µ_2 e^{BF}|z|² ≥ 0. On ∂B_R ∩ M, recalling that u ≥ ϕ_ε and taking B to be sufficiently large, we see that
w ≥ e^{BF} µ_2 R² − e^{B(F+ψ)}|U_γ| − e^{B(F+ψ)} U_γ² ≥ e^{BF}(µ_2 R² − C).
Hence, after fixing R and µ_2, we can arrange that
(2.15) w ≥ 0 on ∂(B_R ∩ M).
Suppose then that x_0 is a minimum point of w. If it is the case that either U_γ(x_0) = 0 or e^{F(x_0)} = 0, then clearly w(x_0) ≥ 0, and hence w ≥ 0 on all of B_R ∩ M. Thus we may assume that x_0 is an interior minimum such that U_γ(x_0) ≠ 0 and e^{F(x_0)} > 0 - we will assume that U_γ(x_0) < 0 here, as the alternative is basically the same.
At x_0, we wish to compute
(2.16) Δ_g̃ w = µ_1 Δ_g̃(u − ϕ_ε) + µ_2 Δ_g̃(e^{BF}|z|²) + Δ_g̃(e^{B(F+ψ)} U_γ) − Δ_g̃(e^{B(F+ψ)} U_γ²).
For the first term, we observe that:
Δ_g̃(u − ϕ_ε) = n − tr_ω̃(α + εω + √−1 ∂∂ϕ_ε) ≤ n − (e^F + ε/2) tr_ω̃ ω,
by (2.3), and that
(ε/2) tr_ω̃ ω ≥ (ε/2) n (ω^n / (α + εω + √−1 ∂∂u)^n)^{1/n} = (ε/2) n e^{−(β/n)(u − h_ε) − log(ε/4)},
by the arithmetic-geometric mean inequality and (2.7). As u − h_ε ≤ 0, we have then that
(2.17) (ε/2) tr_ω̃ ω ≥ (ε/2) n e^{−log(ε/4)} = 2n.
Thus,
(2.18) µ_1 Δ_g̃(u − ϕ_ε) ≤ −µ_1(e^F + ε/4) tr_ω̃ ω.
For the second term of (2.16), by (2.2) and a direct calculation, we obtain
(2.19) µ_2 Δ_g̃(e^{BF}|z|²) ≤ C B² e^{(B−B_0)F} tr_ω̃ ω.
For the third term of (2.16), by (2.2), Lemma 2.2 and the Cauchy-Schwarz inequality, we have
Δ_g̃(e^{B(F+ψ)} U_γ) = e^{B(F+ψ)} Δ_g̃(U_γ) + U_γ Δ_g̃(e^{B(F+ψ)}) + B e^{B(F+ψ)} g̃^{ij̄}((F+ψ)_i U_{γj̄} + (F+ψ)_j̄ U_{γi})
≤ e^{B(F+ψ)} Δ_g̃(D_γ u) − e^{B(F+ψ)} Δ_g̃(D_γ h_ε) + C B² e^{(B−B_0)(F+ψ)} tr_ω̃ ω + B e^{B(F+ψ)} g̃^{ij̄}((1/(2B)) U_{γi} U_{γj̄} + 2B (F+ψ)_i (F+ψ)_j̄)
≤ e^{B(F+ψ)} Δ_g̃(D_γ u) + C B² e^{(B−B_0)(F+ψ)} tr_ω̃ ω + (1/2) e^{B(F+ψ)} g̃^{ij̄} U_{γi} U_{γj̄}.
By the definition of D_γ, we have
D_γ = ∂/∂x_γ + a ∂/∂x_{2n}, where a = −r_{x_γ}/r_{x_{2n}}.
For the third order term Δ_g̃(D_γ u), we see that (cf. [6, pg. 275])
Δ_g̃(D_γ u) = tr_ω̃(D_γ √−1 ∂∂u) + u_{x_{2n}} Δ_g̃ a + 2a_{x_{2n−1}} − 2 tr_ω̃(da ∧ ι_{∂/∂x_{2n−1}}(α + εω)),
and applying D_γ to (2.7) gives
tr_ω̃(D_γ(α + εω) + D_γ √−1 ∂∂u) = β U_γ + tr_ω̃(D_γ ω̃),
where we are letting D_γ act on the components of α and ω in the (fixed) z-coordinates. Hence,
Δ_g̃(D_γ u) = β U_γ + tr_ω̃(D_γ ω̃) − tr_ω̃(D_γ(α + εω)) + u_{x_{2n}} Δ_g̃ a + 2a_{x_{2n−1}} − 2 tr_ω̃(da ∧ ι_{∂/∂x_{2n−1}}(α + εω)).
Combining this with Lemma 2.2 and tr_ω̃ ω ≥ 4n (cf. (2.17)), it is clear that
(2.20) Δ_g̃(D_γ u) = β U_γ + Γ,
where Γ denotes a term satisfying |Γ| ≤ C e^{−B_0ψ} tr_ω̃ ω. Using this, we then get the estimate:
(2.21) Δ_g̃(e^{B(F+ψ)} U_γ) ≤ β e^{B(F+ψ)} U_γ + C B² e^{(B−B_0)(F+ψ)} tr_ω̃ ω + (1/2) e^{B(F+ψ)} g̃^{ij̄} U_{γi} U_{γj̄}.
For the fourth term of (2.16), using (2.20) and Lemma 2.2, at the expense of increasing B_0, we compute
−Δ_g̃(U_γ²) = −2U_γ Δ_g̃(U_γ) − 2 g̃^{ij̄} U_{γi} U_{γj̄} ≤ C e^{−B_0ψ} tr_ω̃ ω − 2β U_γ² − 2 g̃^{ij̄} U_{γi} U_{γj̄},
so that we have
−Δ_g̃(e^{B(F+ψ)} U_γ²) = −U_γ² Δ_g̃(e^{B(F+ψ)}) − e^{B(F+ψ)} Δ_g̃(U_γ²) − 2B e^{B(F+ψ)} g̃^{ij̄}((F+ψ)_i U_{γj̄} U_γ + (F+ψ)_j̄ U_{γi} U_γ)
≤ C B² e^{(B−B_0)(F+ψ)} tr_ω̃ ω − 2β e^{B(F+ψ)} U_γ² − 2 e^{B(F+ψ)} g̃^{ij̄} U_{γi} U_{γj̄} + e^{B(F+ψ)} g̃^{ij̄} U_{γi} U_{γj̄} + 4B² e^{B(F+ψ)} U_γ² g̃^{ij̄}(F+ψ)_i (F+ψ)_j̄
≤ C B² e^{(B−B_0)(F+ψ)} tr_ω̃ ω − e^{B(F+ψ)} g̃^{ij̄} U_{γi} U_{γj̄} − 2β e^{B(F+ψ)} U_γ². (2.22)
Combining (2.16), (2.18), (2.19), (2.21) and (2.22), we obtain
Δ_g̃ w ≤ −µ_1(e^F + ε/4) tr_ω̃ ω + C B² e^{(B−B_0)(F+ψ)} tr_ω̃ ω − (1/2) e^{B(F+ψ)} g̃^{ij̄} U_{γi} U_{γj̄} + β e^{B(F+ψ)} U_γ − 2β e^{B(F+ψ)} U_γ².
Using then the fact that U_γ(x_0) < 0, we see that
Δ_g̃ w(x_0) ≤ −µ_1 e^F tr_ω̃ ω + C B² e^{(B−B_0)(F+ψ)} tr_ω̃ ω.
Choosing B, µ_1 sufficiently large, it then follows from the fact that e^{F(x_0)} > 0 that Δ_g̃ w(x_0) < 0. But since x_0 was assumed to be an interior minimum, we must have:
Δ_g̃ w(x_0) ≥ 0,
which is a contradiction. Hence, (2.14) follows.
Step 3. The normal-normal derivatives.

By Steps 1 and 2 we have
|D_γ D_η u(p)| + |D_γ D_{2n} u(p)| ≤ C e^{−B_0(F+ψ)} for 1 ≤ γ, η ≤ 2n − 1.
Thus, to bound the normal-normal derivative, it is sufficient to bound |u_{nn̄}|.
Expanding out the determinant det(g̃_{ij̄})_{1≤i,j≤n}, we see that we already have the bound
(2.23) |det(g̃_{ij̄})_{1≤i,j≤n} − g̃_{nn̄} det(g̃_{ij̄})_{1≤i,j≤n−1}| ≤ C e^{−B_0(F+ψ)}.
Recalling (2.7) and u − h_ε ≤ 0, it is clear that
det(g̃_{ij̄})_{1≤i,j≤n} = e^{β(u−h_ε)+n log(ε/4)} det(g_{ij̄})_{1≤i,j≤n} ≤ C,
so that (2.23) implies
(2.24) g̃_{nn̄} det(g̃_{ij̄})_{1≤i,j≤n−1} ≤ C e^{−B_0(F+ψ)}.
Next we show that there is a uniform lower bound for det(g̃_{ij̄})_{1≤i,j≤n−1}. Note that the holomorphic tangent bundle to ∂M at p, denoted by T^h ∂M, is spanned by {∂/∂z_i}_{i=1}^{n−1}. Then
(2.25) ω̃|_{T^h ∂M} = (α + εω + √−1 ∂∂u)|_{T^h ∂M} = (α + εω + √−1 ∂∂ϕ_ε)|_{T^h ∂M} + √−1 ∂∂(u − ϕ_ε)|_{T^h ∂M} ≥ e^F ω|_{T^h ∂M} + √−1 ∂∂(u − ϕ_ε)|_{T^h ∂M},
where we used (2.3) in the last inequality. Since u − ϕ_ε ≡ 0 on ∂M, by [6, Lemma 7.3],
√−1 ∂∂(u − ϕ_ε)|_{T^h ∂M} = (ν · (u − ϕ_ε)) L_{∂M,ν},
where ν is an outward pointing normal vector field on ∂M and L_{∂M,ν} is the corresponding Levi-form of ∂M. Recalling that ∂M is weakly pseudoconcave, we have L_{∂M,ν} ≤ 0. Since u − ϕ_ε ≥ 0 on M and u − ϕ_ε ≡ 0 on ∂M, we have ν · (u − ϕ_ε) ≤ 0,
so (2.25) implies
ω̃|_{T^h ∂M} ≥ e^F ω|_{T^h ∂M}.
Taking wedges, we then get
det(g̃_{ij̄})_{1≤i,j≤n−1} ≥ (1/C) e^{(n−1)F}.
Combining this with (2.24) and the definition of ω̃ we have
|u_{nn̄}| = |g̃_{nn̄} − α_{nn̄} − ε g_{nn̄}| ≤ C e^{−B(F+ψ)},
at p, as desired.
We can now bound the Laplacian on the interior:

Proposition 2.4. Assume we are in the situation in Proposition 2.3. Then there exist uniform constants β_0, B, and C > 0 such that:
|Δ_g u| ≤ C e^{−Bψ̃} for all β ≥ β_0,
where here ψ̃ is as in Proposition 2.1.

Proof. We may assume without loss of generality that F ≤ 0. By the construction of ψ̃, we have
ψ̃ ≤ ψ ≤ 0 and α + √−1 ∂∂ψ̃ ≥ (δ/2) ω.
Recalling u − ψ ≥ 0 and (2.4), it then follows that u − ψ̃ ≥ 0 and
(2.26) α + εω + (1 + δ/2) √−1 ∂∂ψ̃ ≥ (1 + δ/2)(δ/2) ω − (δ/2) ω ≥ (δ²/4) ω.
The trick is to again use:
ũ := u − (1 + δ/2) ψ̃.
By (2.26), it is clear that
(2.27) −Δ_g̃ ũ ≥ −n + (δ²/4) tr_ω̃ ω.
Consider the following quantity:
Q = log tr_ω ω̃ − B ũ,
where B is a constant to be determined later. We will bound Q above using the maximum principle. Let x_0 be a maximum point of Q. It suffices to prove (tr_ω ω̃)(x_0) ≤ C e^{−Cψ̃(x_0)} for some C, as then:
Q(x_0) ≤ log C − Cψ̃(x_0) − Bũ(x_0) ≤ log C + (Bδ/2 − C)ψ̃(x_0) ≤ C,
as long as B ≥ 2C/δ. Now, if x_0 ∈ ∂M, then we are already done by Proposition 2.3, as:
tr_ω ω̃ = tr_ω(α + εω) + Δ_g u ≤ C e^{−B(ψ+F)} ≤ C e^{−Cψ̃} on ∂M.
Note also that x_0 cannot occur on Sing(ψ̃). We may then compute at x_0, using (2.27) and the estimate of [1,37]:
0 ≥ Δ_g̃ Q(x_0) ≥ (1/tr_ω ω̃)(−C(tr_ω ω̃)(tr_ω̃ ω) − tr_ω Ric(ω̃)) − Bn + B(δ²/4) tr_ω̃ ω
≥ (B(δ²/4) − C) tr_ω̃ ω − (tr_ω(Ric(ω) − β√−1 ∂∂(u − h_ε)))/(tr_ω ω̃) − Bn
≥ (−tr_ω(Ric(ω) − βω̃ + β(α + εω)) − βC e^{−B_0ψ})/(tr_ω ω̃) − Bn
≥ β/2 + (−Cβ − Cβ e^{−B_0ψ})/(tr_ω ω̃),
for B, β sufficiently large. Rearranging gives:
(C + C e^{−B_0ψ(x_0)})/(tr_ω ω̃(x_0)) ≥ 1/2.
It then follows that
tr_ω ω̃(x_0) ≤ 2C(1 + e^{−B_0ψ(x_0)}) ≤ 4C e^{−B_0ψ(x_0)},
as required. Thus, we conclude that Q ≤ C for a uniform C. It follows that:
tr_ω ω̃ ≤ C e^{−Bψ̃}, and hence: Δ_g u = tr_ω ω̃ − tr_ω(α + εω) ≤ C e^{−Bψ̃}.
Proposition 2.5. Assume we are in the situation in Proposition 2.3. Then there exist uniform constants β_0, B, and C > 0 such that
|∇²u|_g ≤ C e^{−Bψ̃} for all β > β_0,
where ψ̃ is as in Proposition 2.1.

Proof. We already have that the Hessian is bounded on the boundary by Proposition 2.3. We may then apply the maximum principle argument in [12, Lemma 4.3] using ψ̃ instead of ψ, which gives us the estimate everywhere, as desired. Note that, although it is assumed in [12] that ψ has analytic singularities, it is easy to see that the proof only needs the weaker assumption of exponential smoothness, in the specific form of (2.2).
For the reader's convenience, we give a brief sketch here. Recalling Lemma 2.2 and Propositions 2.3 and 2.4, it is clear that
(2.28) sup_M(e^{B_0ψ̃}|∇u|²_g + e^{B_0ψ̃}|Δu|) + sup_{∂M}(e^{B_0ψ̃}|∇²u|_g) ≤ C.
Without loss of generality, we assume that ψ̃ ≤ −1. We consider the following quantity:
Q = log λ_1 + ρ(e^{Bψ̃}|∇u|²_g) − Aũ,
where λ_1 is the largest eigenvalue of the real Hessian ∇²u, ũ is as in Proposition 2.4, A and B are positive constants to be determined, and the function ρ is given by
ρ(s) = −(1/2) log(1 + sup_M(e^{Bψ̃}|∇u|²_g) − s).
Let x_0 be a maximum point of Q. By a similar argument to that in Proposition 2.4, it suffices to prove
λ_1(x_0) ≤ e^{−Cψ̃(x_0)}.
If x_0 ∈ ∂M, then we are done by (2.28). Thus, we assume that x_0 is an interior point and Q is smooth at x_0 (otherwise, we just need to apply a perturbation argument as in [12]). We compute everything at x_0. Applying ∂_k to the logarithm of (2.7), we have
g̃^{iī} ∂_k(g̃_{iī}) = β(u_k − (h_ε)_k).
It then follows that
Δ_g̃(e^{Bψ̃}|∇u|²_g) ≥ 2 e^{Bψ̃} ∑_{i,k} g̃^{iī}(|u_{ik}|² + |u_{ik̄}|²) − C B² e^{(B−C)ψ̃} ∑_i g̃^{iī} − Cβ e^{(B−C)ψ̃}
≥ 2 e^{Bψ̃} ∑_{i,k} g̃^{iī}(|u_{ik}|² + |u_{ik̄}|²) − ∑_i g̃^{iī} − β,
after choosing B sufficiently large such that C B² e^{(B−C)ψ̃} ≤ 1. Since ρ' ≤ 1/2, we obtain
(2.29) Δ_g̃(ρ(e^{Bψ̃}|∇u|²_g)) ≥ 2 ρ' e^{Bψ̃} ∑_{i,k} g̃^{iī}(|u_{ik}|² + |u_{ik̄}|²) + ρ'' g̃^{iī}|∂_i(e^{Bψ̃}|∇u|²_g)|² − (1/2) ∑_i g̃^{iī} − β/2.
Applying V_1V_1 to the logarithm of (2.7) and using V_1V_1(u) = λ_1, we have
g̃^{iī} V_1V_1(g̃_{iī}) = g̃^{pp̄} g̃^{qq̄} |V_1(g̃_{pq̄})|² + V_1V_1(log det g) + β V_1V_1(u − h_ε) ≥ g̃^{pp̄} g̃^{qq̄} |V_1(g̃_{pq̄})|² − C + β(λ_1 − C e^{−Cψ̃}),
where we used (2.5) and ψ̃ ≤ ψ in the second inequality. Without loss of generality, we assume that λ_1 ≥ 4C e^{−Cψ̃} + 4C. It then follows that
g̃^{iī} V_1V_1(g̃_{iī}) ≥ g̃^{pp̄} g̃^{qq̄} |V_1(g̃_{pq̄})|² + (β/2) λ_1,
which implies
(2.30) Δ_g̃(log λ_1) ≥ 2 ∑_{α>1} g̃^{iī} |∂_i(u_{V_α V_1})|² / (λ_1(λ_1 − λ_α)) + g̃^{pp̄} g̃^{qq̄} |V_1(g̃_{pq̄})|² / λ_1 − g̃^{iī} |∂_i(u_{V_1 V_1})|² / λ_1² + β/2.
Combining (2.28), (2.29), (2.30) and the rest of the arguments of [12, Lemma 4.3], we obtain λ_1(x_0) ≤ C e^{−Cψ̃(x_0)}, as required.
We may now finish as follows. By [2, Proposition 2.4], we have u_{ε,β} → V_ε in C⁰, where:
V_ε := sup{v ∈ PSH(M, α + εω) | v ≤ h_ε}.
As mentioned earlier, the V_ε decrease pointwise to V as ε decreases to 0. Using (2.10), Lemma 2.2 and Proposition 2.5, we establish a uniform C^{1,1} estimate for u on compact subsets away from E_{nK}(α), which implies V ∈ C^{1,1}_loc(M \ E_{nK}(α)), as required.
Geodesics between Singular Kähler Metrics
We now show that our results apply in the setting of regularity of geodesics between singular Kähler metrics.
Proof of Corollary 1.2. Let (X_0, ω_0) be a compact Kähler variety, without boundary, and let
µ : (X, ω) → (X_0, ω_0)
be a smooth resolution of the singularities of X_0 with simple normal crossings, which exists thanks to Hironaka's theorem [25]. Let µ^{−1}(X_{0,Sing}) = E = ∪_{k=1}^m E_k be the exceptional divisor with smooth irreducible components E_k. Let α_0 := µ*ω_0 ≥ 0, which will be a smooth semi-positive form. It is well-known that [α_0] is a big and nef class, and that E_{nK}(α_0) = Supp(E). Consider smooth Hermitian metrics h_k on O(E_k) and defining sections s_k for each E_k.
Elementary results in several complex variables will now show:
(3.1) α_0^n ≥ b ∏_{k=1}^m |s_k|^{a_k}_{h_k} ω^n,
for fixed constants a_k > 0 and b > 0 depending on ω_0. To see this, we work locally - cover X_0 by open charts U_i such that for each i there exists an embedding:
ι_i : U_i ↪ Ω_i ⊂ C^N,
with N uniformly large, such that ω_0 extends to a smooth Kähler form (which we will also call ω_0) on the open set Ω_i. Relabeling µ to be ι_i ∘ µ, we have by functoriality that µ*ω_0 is unchanged, so we may work with a holomorphic map between smooth spaces. Fix coordinates z on µ^{−1}(Ω_i) and x on Ω_i, and define the Jacobian of µ to be the n × N matrix:
Jac(µ) := (∂µ_k/∂z_j)_{1≤j≤n, 1≤k≤N},
where µ_k is the k-th coordinate function of µ on Ω_i. Putting e_j := √−1 dz_j ∧ dz̄_j, one can then compute that:
α_0^n(z) = det(Jac(µ) · ω_0(µ(z)) · Jac(µ)^T) n! e_1 ∧ ... ∧ e_n,
where we are expressing ω_0(x) as an N × N matrix in the x-coordinates.
Letting c > 0 be a constant such that ω_0 ≥ c ω_Eucl on all charts Ω_i (which we can do, after possibly shrinking them slightly, as there are only finitely many), we then have:
α_0^n(z) ≥ c^n det(Jac(µ) · Jac(µ)^T) n! e_1 ∧ ... ∧ e_n.
We now use [26, Lemma, pg. 304] to see that:
det(Jac(µ) · Jac(µ)^T) = ∑_{n×n minors J_k of Jac(µ)} |det(J_k)|²,
so that:
α_0^n ≥ c^n (∑_{n×n minors J_k of Jac(µ)} |det(J_k)|²) n! e_1 ∧ ... ∧ e_n.
Each determinant in the sum is a holomorphic function, and furthermore, we know that their common zero locus is E_{nK}(α_0) = E, as µ is a local biholomorphism if and only if Jac(µ) has full rank, which is only true when at least one of the determinants is non-zero. Thus, by the Weierstrass preparation theorem and the fact that E has simple normal crossings, we know that we can express each determinant (locally) as a product of the s_i to some powers, as well as some other local holomorphic functions that do not vanish along all of E - up to estimating these, the smooth Hermitian metrics, and ω^n, we then see the claim (3.1).
We may now use the discussion immediately following Proposition 2.1 to see that:
α_0 ≥ c e^F ω,
where:
F := log(b ∏_{k=1}^m |s_k|^{a_k}_{h_k}),
and c depends on an upper bound for α_0 (which always exists as it is a smooth form). Up to shrinking b, we may arrange that F ≤ 0, and note that F has analytic singularities only along E_{nK}(α_0). For a very large constant C then, we have that F ∈ PSH(X, Cω), by the Poincaré-Lelong formula:
√−1 ∂∂F = ∑_{k=1}^m a_k([E_k] − R_k) ≥ −Cω,
so we can apply Proposition 2.1 to get the key condition (1.1). Note that the resulting ψ̃ actually has analytic singularities only along E_{nK}(α_0), so our estimates will be optimal.
To now apply this to the geodesic, we will need to translate this onto the product space X × A, where A is the annulus:
A := {τ ∈ C | 1 < |τ| < e}.
Let π be the projection onto X and p the projection onto A, and define α := π*α_0. Throughout, we will use t := log|τ|.
Consider two Kähler metrics ω_1 and ω_2 on X_0 such that α_1 := µ*ω_1 and α_2 := µ*ω_2 are cohomologous to α_0. Fix a Kähler form ω on X such that α_k ≤ ω for k = 0, 1, 2, constants c, B > 0, and an exponentially smooth, strictly α_0-psh ψ̃ as above such that:
α_k ≥ c e^{Bψ̃} ω for k = 1, 2.
There then exist two smooth functions ϕ_1, ϕ_2 such that:
α_k = α_0 + √−1 ∂∂ϕ_k, k = 1, 2.
The geodesic between α_1 and α_2 is then defined to be the envelope:
V := sup{v ∈ PSH(X × A, α) | v|_{t=0} ≤ π*ϕ_1, v|_{t=1} ≤ π*ϕ_2}.
Fix a large constant C such that ϕ_2 − (C − 1) ≤ ϕ_1 ≤ ϕ_2 + (C − 1). Let f be the solution to the Dirichlet problem on A:
√−1 ∂∂f = ω_Eucl, f|_∂A = 0.
We then define ϕ to be the following subsolution:
ϕ(x, τ) := max{π*ϕ_1(x) − Ct, π*ϕ_2(x) − C(1 − t)} + p*f(τ),
where max here denotes a regularized maximum function with error 1/2. Observe that, for both k = 1, 2, on X × A we have:
α + √−1 ∂∂(π*ϕ_k ± Ct + p*f) ≥ c e^{Bπ*ψ̃} π*ω + √−1 ∂_τ ∂_τ̄(±C log|τ| + p*f) ≥ c e^{Bπ*ψ̃}(π*ω + p*ω_Eucl).
Thus, by an elementary property of the regularized max [19, Lemma 5.18], we have that:
α + √−1 ∂∂ϕ ≥ c e^{Bπ*ψ̃}(π*ω + p*ω_Eucl)
also, which is the key condition (1.1) that we require (as we just take ϕ_ε = ϕ for all ε > 0).
Proof of Corollary 1.3. The idea is to construct a good sequence of Kähler potentials for {α + εω}, and then take regularized maximums with ϕ_1 and ϕ_2, which will preserve the estimates we need. Specifically, recall that (X, ω) is a Kähler manifold without boundary, and [α] is a nef and big class on X. Our sequence of potentials will be the solutions to the following Monge-Ampère equations:
(3.2) (α + (ε/2)ω + √−1 ∂∂v_ε)^n = e^{β_0 v_ε} ω^n, α + (ε/2)ω + √−1 ∂∂v_ε > 0,
where β_0 > 0 is a fixed number such that we have the estimates:
(3.3) |∇v_ε|²_g + |∇²v_ε|²_g ≤ C e^{−Bψ} for all ε > 0.
We can see this by establishing C⁰ bounds for the v_ε - it is immediate from the comparison principle [7, Remark 2.4] that the v_ε are decreasing as ε → 0; if ε_1 > ε_2, then:
∫_{{v_{ε_1} < v_{ε_2}}} (e^{β_0 v_{ε_2}} ω^n + 2^{−n}(ε_1 − ε_2)^n ω^n) ≤ ∫_{{v_{ε_1} < v_{ε_2}}} (α + (ε_1/2)ω + √−1 ∂∂v_{ε_2})^n ≤ ∫_{{v_{ε_1} < v_{ε_2}}} (α + (ε_1/2)ω + √−1 ∂∂v_{ε_1})^n = ∫_{{v_{ε_1} < v_{ε_2}}} e^{β_0 v_{ε_1}} ω^n ≤ ∫_{{v_{ε_1} < v_{ε_2}}} e^{β_0 v_{ε_2}} ω^n,
which is a contradiction unless ω^n({v_{ε_1} < v_{ε_2}}) = 0, in which case v_{ε_2} ≤ v_{ε_1} everywhere, by continuity. In particular, the v_ε are uniformly bounded above by v_1. Further, the same argument shows that the v_ε are bounded below by v_0 solving (α + √−1 ∂∂v_0)^n = e^{β_0 v_0} ω^n. By [7, Theorem 6.1], v_0 has minimal singularities, so there exists a large constant C such that ψ − C ≤ v_0 ≤ v_ε for all ε > 0. The proofs of Lemma 2.2 and Proposition 2.5 now apply directly (one can also probably use easier proofs to obtain (3.3), but it follows immediately from what we have done already - see also [12, Section 4]).
It will be convenient to renormalize all the involved quantities to have supremum zero:
sup_X ϕ_1 = sup_X ϕ_2 = sup_X v_ε = 0,
and assume without loss of generality that ψ ≤ ϕ_1, ψ ≤ ϕ_2. This will not affect the problem in any way, as the diligent reader may check.
We will now pull back everything to the product space X × A as in Corollary 1.2 - define A, t, π, p and f as in that proof, and again fix a constant C such that:
ϕ_2 − (C − 1) ≤ ϕ_1 ≤ ϕ_2 + (C − 1),
where we used that ϕ_1 and ϕ_2 have the same singularity type. The geodesic connecting ϕ_1 and ϕ_2 is defined by
V := sup{v ∈ PSH(X × A, π*α) | v|_{t=0} ≤ π*ϕ_1, v|_{t=1} ≤ π*ϕ_2}.
As in the proof of Corollary 1.2, we define:
ϕ(x, τ) := max{π*ϕ_1(x) − Ct, π*ϕ_2(x) − C(1 − t)} + p*f(τ).
To apply Theorem 1.1, we define the smooth approximates
ϕ_ε(x, τ) := max{π*ϕ_1(x) − Ct, π*ϕ_2(x) − C(1 − t), π*v_ε(x) − C_ε} + p*f(τ),
where C_ε := −log(ε/2) + C + 2. Clearly, the ϕ_ε decrease pointwise to ϕ as ε decreases to 0. We claim that
(3.4) π*(α + εω) + √−1 ∂∂ϕ_ε ≥ e^{π*ψ}(π*ω + p*ω_Eucl),
and
(3.5) |∇ϕ_ε|_g + |∇²ϕ_ε|_g ≤ C e^{−Bπ*ψ}.
Given these, Corollary 1.3 will follow from Theorem 1.1. Let us prove (3.4) first. Let (x_0, τ_0) ∈ X × A and t_0 := log|τ_0|. If we have
π*ϕ_1(x_0) − Ct_0 ≤ π*v_ε(x_0) − C_ε + 1,
then using that ψ ≤ ϕ_1 and v_ε ≤ 0, it follows that
e^{π*ψ} ≤ ε/2
near (x_0, τ_0). Using (3.2), we see that
π*(α + εω) + √−1 ∂∂ϕ_ε ≥ (ε/2) π*ω + p*ω_Eucl ≥ e^{π*ψ}(π*ω + p*ω_Eucl),
near (x_0, τ_0), which implies (3.4) there. If
π*ϕ_1(x_0) − Ct_0 > π*v_ε(x_0) − C_ε + 1
on the other hand, then it follows from the definition of the regularized max that:
ϕ_ε = ϕ
near (x_0, τ_0), and so:
π*(α + εω) + √−1 ∂∂ϕ_ε ≥ e^{π*ψ}(π*ω + p*ω_Eucl),
by our assumptions on ϕ_1 and ϕ_2. We now check (3.5). For ease of notation, write:
b_1 := π*ϕ_1 − Ct, b_2 := π*ϕ_2 − C(1 − t), b_ε := π*v_ε − C_ε.
Recall then the definition of the regularized max:
max(a, b, c) := ∫_{R³} max{y_1, y_2, y_3} θ(y_1 − a) θ(y_2 − b) θ(y_3 − c) dy_1 dy_2 dy_3,
where here θ, 0 ≤ θ ≤ 1, is a cutoff function on R, θ ≡ 1 near 0, with support in [−1/2, 1/2]. Thus,
ϕ_ε = max(b_1, b_2, b_ε) + p*f.
A short calculation shows that
|∇ϕ_ε|_g ≤ C (min{|b_1|, |b_2|, |b_ε|} + 1)(|∇b_1| + |∇b_2| + |∇b_ε|),
as b_1, b_2, b_ε ≤ 0. Using (3.3) and the fact that ϕ_1 and ϕ_2 are both exponentially smooth, we then see
|∇ϕ_ε|_g ≤ C(−π*ϕ_1 + C) e^{−Bπ*ψ} ≤ C(−π*ψ) e^{−Bπ*ψ} ≤ C e^{−Bπ*ψ}.
A similar argument shows that |∇²ϕ_ε|_g ≤ C e^{−Bπ*ψ}, establishing (3.5).
Remark 3.1. Let us briefly note that the above procedure to produce the v_ε does not work in the case of a general α on a general M - this is because one would need to solve a Dirichlet problem, and so would need to have knowledge/control of permissible boundary data, which is unavailable without further assumptions, e.g. as in Corollary 1.3.
We now briefly discuss the case of geodesic rays originating at singular Kähler metrics. Recall that the main result of [28] can be summarized as follows (see that paper for a more specific statement):

Theorem 3.2. Suppose that M is a compact complex manifold with boundary. Let ξ be a function with analytic singularities on M, such that ξ is singular on a divisor E ⊂ M with E ∩ ∂M = ∅. Let R_h be a smooth form cohomologous to [E], and suppose that α is a closed, smooth, real (1,1)-form on M such that α − R_h is ψ-big and nef, with Sing(ψ) ∩ ∂M = ∅. Finally, let ϕ ∈ PSH(M, α) be smooth near the boundary of M and sufficiently regular. Then the envelope:
sup{v ∈ PSH(M, α) | v|_∂M ≤ ϕ|_∂M, v ≤ ξ + O(1)}
is in C^{1,1}_loc(M \ Sing(ξ + ψ)) if the boundary of M is weakly pseudoconcave and α + √−1 ∂∂ϕ ≥ δω on some neighborhood of ∂M.

In [28], the above theorem was used to prove C^{1,1} regularity of certain geodesic rays originating at a Kähler metric by taking M = X × D. Here, we simply remark that the results in Section 2 can be combined with the method in [28] to improve Theorem 3.2, and this can then be used to prove regularity of certain geodesic rays on a singular Kähler variety in the exact same way:

Theorem 3.3. Theorem 3.2 is still valid if we allow Sing(ψ) to intersect ∂M and we weaken the assumptions that ϕ be smooth and strictly α-psh near the boundary to just assuming the existence of a family of ϕ_ε satisfying conditions (a) and (b) in Theorem 1.1.
Acknowledgments. We would like to thank Valentino Tosatti for useful discussions, and for helping improve the clarity of this paper.

Appendix A. Estimates for Δ_ω

Lemma A.1. Let ϕ_ε be as in Theorem 1.1, and let h_ε be the solutions to:
Δ_{2ω} h_ε = −n, h_ε|_∂M = ϕ_ε|_∂M,
for all ε > 0. Then there exist positive constants B, C such that:
|∇h_ε|_g + |∇²h_ε|_g ≤ C e^{−Bψ}.

Proof. First, without loss of generality we may scale ψ such that:
Now note that by the maximum principle, we have that the h_ε are decreasing as ε → 0, and that (cf. (2.10)):
Let b be the solution to:
and define h̃_ε := h_ε − ϕ_ε ≥ 0. We claim that there exists a constant B > 0 such that:
Combining this with the lower bound in (A.1), it follows that |∇h_ε|_g ≤ C e^{−Bψ} at the boundary. To see the claim, using (2.2), we compute:
Combining this with (A.1) and |Δ_ω ϕ_ε| ≤ C e^{−Cψ}, we see that
By the Cauchy-Schwarz inequality, we have
as claimed.
To bound the gradient on the interior, we consider the quantity:
Q = e^{2Bψ}|∇h_ε|²_g + e^{Bψ} h_ε²,
where B is a constant to be determined. Suppose that Q achieves a maximum at x_0. If x_0 ∈ ∂M, then we are already done. Otherwise, we choose holomorphic normal coordinates at x_0 for g (the Riemannian metric corresponding to ω), so that:
∑_i (h_ε)_{kiī} = ∑_i (h_ε)_{iīk} = (Δ_ω h_ε)_k = 0.
This implies
(A.2) Δ_ω(e^{2Bψ}|∇h_ε|²_g) ≥ 2 e^{2Bψ} Re(∑_{i,k}(h_ε)_{kiī}(h_ε)_{k̄}) + e^{2Bψ}|∇²h_ε|²_g − 2B e^{2Bψ}|∇ϕ_ε|_g |∇h_ε|_g |∇²h_ε|_g
≥ e^{2Bψ}|∇²h_ε|²_g − (1/2) e^{2Bψ}|∇²h_ε|²_g − C B² e^{2Bψ}|∇ϕ_ε|²_g |∇h_ε|²_g
≥ (1/2) e^{2Bψ}|∇²h_ε|²_g − C B² e^{(2B−C)ψ}|∇h_ε|²_g.
By the maximum principle, at x_0 we have
0 ≥ Δ_ω Q = Δ_ω(e^{2Bψ}|∇h_ε|²_g) + Δ_ω(e^{Bψ} h_ε²)
≥ (1/2) e^{2Bψ}|∇²h_ε|²_g − C B² e^{(2B−C)ψ}|∇h_ε|²_g + e^{Bψ}|∇h_ε|²_g − C B² e^{(B−C)ψ}.
We can now choose B sufficiently large such that C B² e^{(2B−C)ψ} ≤ (1/2) e^{Bψ}, implying |∇h_ε|²_g ≤ C e^{−Cψ} at x_0.
Using (A.1) then shows that Q(x_0) ≤ C, as desired. Finally, we bound the Hessian of the h_ε. We establish the boundary estimate first. The tangent-tangent derivative estimate is obvious. The tangent-normal derivatives can be bounded in a manner analogous to the proof of Proposition 2.3; very briefly, one considers the quantity:
w = b + e^{Bψ}(µ|z|² − e^{Bψ}|D_γ h_ε| − e^{Bψ}|D_γ h_ε|²),
on a ball B_R(p) of fixed radius R, with p ∈ ∂M, 1 ≤ γ ≤ 2n − 1, and µ and B constants. Choosing µ sufficiently large, one arranges that:
w ≥ 0 on ∂(B_R ∩ M),
and then shows that w cannot have an interior minimum point. We refer the reader to the proof of Proposition 2.3 for more details. Finally, the normal-normal derivative is bounded just by the fact that Δ_ω h_ε = −2n. Thus, we have the second order estimate on the boundary:
|∇²h_ε|_g ≤ C e^{−Cψ} on ∂M.
To bound the Hessian everywhere now, we consider the quantity:
Q = e^{2Bψ}|∇²h_ε|²_g + e^{Bψ}|∇h_ε|²_g,
where B is a constant to be determined. Let x_0 be an interior maximum point of Q. Choosing holomorphic normal coordinates for g at x_0 gives
∑_i (h_ε)_{kl̄iī} = (Δ_ω h_ε)_{kl̄} − ∂_k ∂_l̄(g^{ij̄})(h_ε)_{ij̄} = −∂_k ∂_l̄(g^{ij̄})(h_ε)_{ij̄},
which implies |∑_i (h_ε)_{kl̄iī}| ≤ C|∇²h_ε|_g. Similarly, we also have |∑_i (h_ε)_{kliī}| ≤ C|∇²h_ε|_g. Combining this with the Cauchy-Schwarz inequality, it follows that
Δ_ω(e^{2Bψ}|∇²h_ε|²_g) ≥ 2 e^{2Bψ} Re(∑_{i,k,l}(h_ε)_{kl̄iī}(h_ε)_{k̄l} + ∑_{i,k,l}(h_ε)_{kliī}(h_ε)_{k̄l̄}) + e^{2Bψ}|∇³h_ε|²_g − 2B e^{2Bψ}|∇ϕ_ε|_g |∇²h_ε|_g |∇³h_ε|_g − C e^{2Bψ}|∇²h_ε|²_g − C B² e^{2Bψ}|∇ϕ_ε|²_g |∇²h_ε|²_g
≥ −C B² e^{(2B−C)ψ}|∇²h_ε|²_g.
Using the maximum principle and (A.2), at x_0, we have
0 ≥ Δ_ω Q = Δ_ω(e^{2Bψ}|∇²h_ε|²_g) + Δ_ω(e^{Bψ}|∇h_ε|²_g)
≥ −C B² e^{(2B−C)ψ}|∇²h_ε|²_g + (1/2) e^{Bψ}|∇²h_ε|²_g − C B² e^{(B−C)ψ}|∇h_ε|²_g.
After choosing B sufficiently large, we see that C B² e^{(2B−C)ψ} ≤ (1/4) e^{Bψ} and e^{(B−C)ψ}|∇h_ε|²_g ≤ C. It then follows that |∇²h_ε|²_g ≤ C e^{−Cψ} at x_0, which implies Q(x_0) ≤ C, as desired.
Lemma A.2. Assume we are in the situation in Lemma A.1. For each integer k ≥ 3, there exist positive constants B_k, C_k such that |∇^k h_ε|_g ≤ C_k e^{−B_k ψ}.
Proof. Since ψ is exponentially smooth, there exists a constant C_0 such that e^{C_0ψ} is smooth. Using Δ_ω h_ε = −2n, it is clear that
(A.3) Δ_ω(e^{BC_0ψ} h_ε) = −2n e^{BC_0ψ} + B(B−1) h_ε e^{(B−2)C_0ψ}|∇(e^{C_0ψ})|²_g + B h_ε e^{(B−1)C_0ψ} Δ_ω(e^{C_0ψ}) + 2B e^{(B−1)C_0ψ} Re⟨∇(e^{C_0ψ}), ∇h_ε⟩,
where B is a constant to be determined. Using Lemma A.1 and choosing B sufficiently large, we obtain
‖Δ_ω(e^{BC_0ψ} h_ε)‖_{C^1(M)} + ‖e^{BC_0ψ} ϕ_ε‖_{C^3(∂M)} ≤ C.
Applying the Schauder estimate, it follows that
‖e^{BC_0ψ} h_ε‖_{C^{2,1/2}(M)} ≤ C.
At the expense of increasing B, it follows from (A.3) that
‖Δ_ω(e^{BC_0ψ} h_ε)‖_{C^{1,1/2}(M)} + ‖e^{BC_0ψ} ϕ_ε‖_{C^4(∂M)} ≤ C.
Using the Schauder estimate again, we obtain
‖e^{BC_0ψ} h_ε‖_{C^{3,1/2}(M)} ≤ C.
Repeating the above argument, for any k ≥ 3, there exist constants B_k, C_k such that ‖e^{B_kψ} h_ε‖_{C^{k,1/2}(M)} ≤ C_k, as required.
References

[1] Aubin, T. Équations du type Monge-Ampère sur les variétés kählériennes compactes, Bull. Sci. Math. (2) 102 (1978), no. 1, 63-95.
[2] Berman, R.J. From Monge-Ampère equations to envelopes and geodesic rays in the zero temperature limit, to appear in Math. Z.
[3] Berman, R.J. On the optimal regularity of weak geodesics in the space of metrics on a polarized manifold, in Analysis meets geometry, 111-120, Trends Math., Birkhäuser/Springer, Cham, 2017.
[4] Błocki, Z. On geodesics in the space of Kähler metrics, in Advances in geometric analysis, 3-19, Adv. Lect. Math. (ALM), 21, Int. Press, Somerville, MA, 2012.
[5] Błocki, Z., Pǎun, M., Tosatti, V. The complex Monge-Ampère equation, American Institute of Mathematics Workshop, Palo Alto, California, August 15-19, 2016. Report and open problems available at http://aimath.org/pastworkshops/mongeampere.html
[6] Boucksom, S. Monge-Ampère equations on complex manifolds with boundary, in Guedj, V. (ed.), Complex Monge-Ampère Equations and Geodesics in the Space of Kähler Metrics, Lecture Notes in Math., Vol. 2038, Springer, Heidelberg, 2012, pp. 257-282.
[7] Boucksom, S., Eyssidieux, P., Guedj, V., Zeriahi, A. Monge-Ampère equations in big cohomology classes, Acta Math. 205 (2010), no. 2, 199-262.
[8] Caffarelli, L., Kohn, J.J., Nirenberg, L., Spruck, J. The Dirichlet problem for nonlinear second-order elliptic equations. II. Complex Monge-Ampère, and uniformly elliptic, equations, Comm. Pure Appl. Math. 38 (1985), no. 2, 209-252.
[9] Chen, X.X. The space of Kähler metrics, J. Differential Geom. 56 (2000), no. 2, 189-234.
[10] Chu, J., Tosatti, V., Weinkove, B. The Monge-Ampère equation for non-integrable almost complex structures, to appear in J. Eur. Math. Soc. (JEMS).
[11] Chu, J., Tosatti, V., Weinkove, B. On the C^{1,1} regularity of geodesics in the space of Kähler metrics, Ann. PDE 3 (2017), no. 2, Art. 15.
[12] Chu, J., Tosatti, V., Weinkove, B. C^{1,1} regularity for degenerate complex Monge-Ampère equations and geodesic rays, Comm. Partial Differential Equations 43 (2018), no. 2, 292-312.
[13] Chu, J., Zhou, B. Optimal regularity of plurisubharmonic envelopes on compact Hermitian manifolds, to appear in Sci. China Math.
[14] Darvas, T. Morse theory and geodesics in the space of Kähler metrics, Proc. Amer. Math. Soc. 142 (2014), no. 8, 2775-2782.
[15] Darvas, T. Weak geodesic rays in the space of Kähler potentials and the class E(X, ω), J. Inst. Math. Jussieu 16 (2017), no. 4, 837-858.
[16] Darvas, T. Metric geometry of normal Kähler spaces, energy properness, and existence of canonical metrics, Int. Math. Res. Not. IMRN (2017), no. 22, 6752-6777.
[17] Darvas, T., Di Nezza, E., Lu, C.H. On the singularity type of full mass currents in big cohomology classes, Compos. Math. 154 (2018), no. 2, 380-409.
[18] Darvas, T., Lempert, L. Weak geodesics in the space of Kähler metrics, Math. Res. Lett. 19 (2012), no. 5, 1127-1135.
[19] Demailly, J.-P. Complex analytic and differential geometry, freely accessible book, https://www-fourier.ujf-grenoble.fr/~demailly/manuscripts/agbook.pdf
[20] Donaldson, S.K. Symmetric spaces, Kähler geometry and Hamiltonian dynamics, in Northern California Symplectic Geometry Seminar, 13-33, Amer. Math. Soc., Providence, RI, 1999.
[21] Di Nezza, E., Guedj, V. Geometry and topology of the space of Kähler metrics on singular varieties, Compos. Math. 154 (2018), no. 8, 1593-1632.
[22] Di Nezza, E., Lu, C.H. L^p metric geometry of big and nef cohomology classes, preprint, arXiv:1808.06308.
[23] Guan, B. The Dirichlet problem for complex Monge-Ampère equations and regularity of the pluri-complex Green function, Comm. Anal. Geom. 6 (1998), no. 4, 687-703. Correction: 8 (2000), no. 1, 213-218.
[24] He, W. On the space of Kähler potentials, Comm. Pure Appl. Math. 68 (2015), no. 2, 332-343.
[25] Hironaka, H. Bimeromorphic smoothing of a complex-analytic space, Acta Math. Vietnam. 2 (1977), no. 2, 103-168.
[26] Lu, Y. Holomorphic mappings of complex manifolds, J. Differential Geom. 2 (1968), no. 3, 299-312.
[27] Mabuchi, T. Some symplectic geometry on compact Kähler manifolds. I, Osaka J. Math. 24 (1987), no. 2, 227-252.
[28] McCleerey, N. Envelopes with prescribed singularities, preprint, arXiv:1807.05817.
[29] Moishezon, B. Singular Kählerian spaces, Proc. Int. Conf. on manifolds and related topics in topology, Univ. of Tokyo Press, 1974, 343-351.
[30] Lempert, L., Vivas, L. Geodesics in the space of Kähler metrics, Duke Math. J. 162 (2013), no. 7, 1369-1381.
[31] Phong, D.H., Sturm, J. The Monge-Ampère operator and geodesics in the space of Kähler potentials, Invent. Math. 166 (2006), no. 1, 125-149.
[32] Phong, D.H., Sturm, J. The Dirichlet problem for degenerate complex Monge-Ampère equations, Proc. Amer. Math. Soc. 138 (2010), no. 10, 3637-3650.
[33] Ross, J., Witt Nyström, D. The Dirichlet problem for the complex homogeneous Monge-Ampère equation, to appear in Proceedings of Symposia in Pure Mathematics.
[34] Semmes, S. Complex Monge-Ampère and symplectic manifolds, Amer. J. Math. 114 (1992), no. 3, 495-550.
[35] Tosatti, V. Regularity of envelopes in Kähler classes, Math. Res. Lett. 25 (2018), no. 1, 281-289.
[36] Szekelyhidi, G., Tosatti, V., Weinkove, B. Nonlinear PDEs in real and complex geometry, American Institute of Mathematics Workshop, Palo Alto, California, August 13-17, 2018. Report and open problems available at http://aimpl.org/nonlinpdegeom
[37] Yau, S.-T. On the Ricci curvature of a compact Kähler manifold and the complex Monge-Ampère equation, I, Comm. Pure Appl. Math. 31 (1978), no. 3, 339-411.
Title: Discretized Tikhonov regularization for Robin boundaries localization
Authors: Hui Cao; Sergei V. Pereverzev; Eva Sincich
Author affiliations: School of Mathematics and Computational Science, Sun Yat-sen University, 510275 Guangzhou, P.R. China; Johann Radon Institute for Computational and Applied Mathematics, Austrian Academy of Sciences, A-4040 Linz, Austria; Laboratory for Multiphase Processes, University of Nova Gorica, Slovenia
Venue: (none listed)
Abstract: We deal with a boundary detection problem arising in nondestructive testing of materials. The problem consists in recovering an unknown portion of the boundary, where a Robin condition is satisfied, with the use of a Cauchy data pair collected on the accessible part of the boundary. We combine a linearization argument with a Tikhonov regularization approach for the local reconstruction of the unknown defect. Moreover, we discuss the regularization parameter choice by means of the so-called balancing principle and we present some numerical tests that show the efficiency of our method.
DOI: 10.1016/j.amc.2013.10.036
PDF URLs: https://arxiv.org/pdf/1305.1106v1.pdf
Corpus ID: 2718363
arXiv ID: 1305.1106
PDF SHA: a4a25581f5d9f240c6e57ed318c6ab47c4d96513
Discretized Tikhonov regularization for Robin boundaries localization

Hui Cao
School of Mathematics and Computational Science, Sun Yat-sen University, 510275 Guangzhou, P.R. China

Sergei V. Pereverzev
Johann Radon Institute for Computational and Applied Mathematics, Austrian Academy of Sciences, A-4040 Linz, Austria

Eva Sincich
Laboratory for Multiphase Processes, University of Nova Gorica, Slovenia

Keywords: Tikhonov regularization, Robin boundary condition, free boundary problem, balancing principle, local identification

We deal with a boundary detection problem arising in nondestructive testing of materials. The problem consists in recovering an unknown portion of the boundary, where a Robin condition is satisfied, with the use of a Cauchy data pair collected on the accessible part of the boundary. We combine a linearization argument with a Tikhonov regularization approach for the local reconstruction of the unknown defect. Moreover, we discuss the regularization parameter choice by means of the so-called balancing principle and we present some numerical tests that show the efficiency of our method.
Introduction
In this paper we deal with an inverse problem arising in corrosion detection. We consider a domain Ω ⊂ R 2 which models a 2D transverse section of a thin metallic specimen whose boundary is partly accessible and stays in contact with an aggressive environment. Hence, in order to detect the damage which is expected to occur in such a portion of the boundary, one has to rely on the electrostatic measurements of a potential u performed on the accessible portion.
We are then led to the study of the following elliptic boundary value problem
(1.1)
Δu = 0 in Ω,
∂u/∂ν = Φ on Γ_A,
∂u/∂ν + γu = 0 on Γ_I,
u = 0 on Γ_D.
According to this model, u is the harmonic potential in Ω. We assume that the boundary of Ω is decomposed into three open and disjoint subsets Γ_A, Γ_I, Γ_D. On the portion Γ_A, which is the one accessible to direct inspection, we prescribe a current density Φ and we measure the corresponding voltage potential u|_{Γ_A}. The portion Γ_I, where the corrosion took place, is out of reach. On such a portion the potential u satisfies a homogeneous Robin condition, which models a resistive coupling with the exterior environment by means of the impedance coefficient γ.
In this paper we are interested in the numerical reconstruction of the unknown and damaged boundary Γ_I from the data collected on the accessible part of the boundary Γ_A, that is the Cauchy data pair (u|_{Γ_A}, Φ). Boundary and parameter identification results related to this stationary inverse problem have been provided by many authors [1,2,3,5,9,4,10,11,12,13,14,15,16,17]. Local uniqueness and conditional stability results for the inverse problem at hand are contained in [5] and constitute the theoretical setting on which our numerical analysis relies. The present local determination of corroded boundaries consists in the localization of a small perturbation Γ_{I,θ} of a reference boundary Γ_I. It is convenient to introduce a small vector field θ ∈ C¹₀(Γ_I) so that the damaged domain Ω_θ is such that

∂Ω_θ = Γ_A ∪ Γ_D ∪ Γ_{I,θ},   Γ_{I,θ} = {z ∈ R² : z = w + θ(w), w ∈ Γ_I}.
Such a local approach combined with a linearization argument (see [5]) allows a reformulation of the problem of the localization of the unknown defect Γ_{I,θ} as the identification of the unknown term θ in a boundary condition of the type

∂u'/∂ν + γu' = d/ds( θ·ν du/ds ) + γ θ·ν (γ + 2H) u

at the portion Γ_I, where u' is a harmonic function satisfying homogeneous Neumann and Dirichlet conditions on Γ_A and Γ_D respectively, u is the solution of (1.1), and H denotes the curvature of the reference boundary Γ_I. As in [5] we carry over our analysis under the a priori assumption of a constant γ such that 2H(x) + γ > 0 in Γ_I, and we limit ourselves to the case of positive fluxes Φ only. We linearize the forward map F : θ → u_θ|_{Γ_A}, where u_θ is the solution of the system (2.3) below, by its Fréchet derivative F', and take the voltage contrast on Γ_A as the noisy right-hand term for the considered operator equation,

F'θ = (u_θ − u)|_{Γ_A}.
As in [13], we assume that the unavoidable measurement errors in the voltage contrast are not smaller than the truncation error, o(‖θ‖). Therefore, if the noise level for voltage measurements is assumed to be δ, then the noise level for the right-hand term of the above operator equation can be written as δ̃ = Kδ, where the constant K need not be precisely known. Our method is based on a discretized Tikhonov regularization argument where the regularization parameter is chosen by a balancing principle (cf. [6,8,20]). Such an a posteriori parameter choice can lead to a regularized solution with order-optimal accuracy. At the same time it can provide a reliable estimate for the constant K.
Local identification of the unknown boundary
In this section we shall collect the main identifiability results of which our reconstruction procedure and our numerical tests are a follow up. For a more detailed description we refer to [5]. We denote with ν the outward normal to Γ I and we assume that θ is a vector field in C 1 0 (Γ I ) having a nontrivial normal component θ ν on Γ I . Let the Sobolev space H 1 0 (Ω, Γ D ) be defined as follows
H¹₀(Ω, Γ_D) = {v ∈ H¹(Ω) : v = 0 on Γ_D in the trace sense}.   (2.1)
We introduce the forward map F

F : C¹₀(Γ_I) → H^{1/2}(Γ_A),   θ → u_θ|_{Γ_A},   (2.2)

where u_θ ∈ H¹₀(Ω, Γ_D) is the solution to the elliptic problem

∆u_θ = 0 in Ω_θ,
∂u_θ/∂ν = Φ on Γ_A,
∂u_θ/∂ν + γu_θ = 0 on Γ_{I,θ},
u_θ = 0 on Γ_D.   (2.3)
We recall the following differentiability property for the forward map F.

Lemma 2.1. The operator F in (2.2) is Fréchet differentiable at zero. Indeed, consider the linear operator F' : C¹₀(Γ_I) → H^{1/2}(Γ_A) defined as F'θ = u'|_{Γ_A}, where u' is the solution to the boundary value problem

∆u' = 0 in Ω,
∂u'/∂ν = 0 on Γ_A,
∂u'/∂ν + γu' = d/ds( θ_ν du/ds ) + γ θ_ν (γ + 2H) u on Γ_I,
u' = 0 on Γ_D,   (2.4)

the function u is the solution of (1.1) and H denotes the curvature of the boundary Γ_I. Then,

(1/‖θ‖_{C¹₀(Γ_I)}) ‖F(θ) − F(0) − F'θ‖_{H^{1/2}(Γ_A)} → 0 as θ → 0 in C¹₀(Γ_I).
Let us also recall that a weak solution to (2.4) is a function u' ∈ H¹₀(Ω, Γ_D) such that

∫_Ω ∇u'·∇v + ∫_{Γ_I} γ u' v = ∫_{Γ_I} γ θ_ν (γ + H) u v − ∫_{Γ_I} θ_ν (du/ds)(dv/ds)   (2.5)

for all v ∈ H¹₀(Ω, Γ_D). The following theorem ensures that the operator F' is injective, under some reasonable hypotheses. This property allows us to conclude that the solution θ to our inverse problem is identifiable, at least for small perturbations. Moreover, we recall a conditional Lipschitz type upper bound for θ on a suitable portion of Γ_I in terms of u'|_{Γ_A} = F'θ, thus showing that the inversion of F' is not too ill-behaved.
Theorem 2.2. Let Φ ∈ H^{1/2}(Γ_A) be nonnegative in the sense of distributions. Let us assume that 2H(x) + γ > 0 and θ_ν(x) ≥ 0 for any x ∈ Γ_I. Then F' is injective. Moreover, there exists a positive constant c > 0 such that

‖u'‖_{H^{1/2}(Γ_A)} ≥ c ∫_{Γ̃_I} |θ|,

where Γ̃_I is an inner portion of the boundary Γ_I.
Finally, in the next theorem, we consider L²(Γ_A) as codomain space of the operator F' introduced in Lemma 2.1, stating a compactness result.

Theorem 2.3. The linear operator F' : C¹₀(Γ_I) → L²(Γ_A), θ → u'|_{Γ_A}, where u' is the solution to the boundary value problem (2.4), is compact.
Tikhonov regularization for a local reconstruction and an estimate of the accuracy
Here and in the following, with a slight abuse of notation, we shall denote by F' the compact operator

F' : H¹₀(Γ_I) → L²(Γ_A),   θ → u',   (3.1)

where u' satisfies the weak formulation in (2.5) for any v ∈ H¹₀(Ω, Γ_D). The existence and uniqueness of u' ∈ H¹₀(Ω, Γ_D) follows from standard arguments on elliptic boundary value problems. Moreover, the compactness of F' in (3.1) follows along the lines of Theorem 4.5 in [5]. In view of this compactness property, the issue of the identification of θ may be interpreted as the regularized inversion of the above compact operator F' = F'(Γ_I) between the Hilbert spaces H¹₀(Γ_I) and L²(Γ_A). Such a reformulation allows us to deal with the approximate inversion by the technique of Tikhonov regularization. We are interested in finding the solution to the operator equation

F'θ = r̄ := u'|_{Γ_A},   (3.2)

where instead of the exact data r̄, a noisy version r^δ is known. As in [13], if we linearize the forward map F defined in (2.2) by its Fréchet derivative, then by Lemma 2.1, we obtain

F'θ = F(θ) − F(0) + o(‖θ‖), i.e. F'θ = (u_θ − u)|_{Γ_A} + o(‖θ‖).

Here u_θ|_{Γ_A} and u|_{Γ_A} are voltages measured in experiments. In practice they are usually given in a noisy form as u_θ^δ|_{Γ_A} and u^δ|_{Γ_A}, with δ being the noise level of the unavoidable experimental error in the measurements of the voltage. When θ is rather small, one can assume that these measurement errors in the voltage contrast (u_θ − u)|_{Γ_A} have the same order of magnitude as the truncation error o(‖θ‖). Thus, we take

r^δ := u_θ^δ|_{Γ_A} − u^δ|_{Γ_A}

as the noisy right-hand term for (3.2) and assume that

‖r̄ − r^δ‖_{L²(Γ_A)} ≤ δ̃ = Cδ,   (3.3)
where the constant C is unknown. If Tikhonov regularization is applied to the ill-posed operator equation

F'θ = r^δ,

then the regularized solution solves

((F')* F' + αI) θ = (F')* r^δ,   (3.4)

where α > 0 is the Tikhonov regularization parameter and I is the identity operator on the space H¹₀(Γ_I). It is well known that the solution to (3.4) is the minimizer of the functional

J(θ) := ‖F'θ − r^δ‖²_{L²(Γ_A)} + α‖θ‖²_{H¹₀(Γ_I)}.   (3.5)
Here, we assume the exact solution θ belongs to the source condition set

M_h := { s ∈ H¹₀(Γ_I) : s = h((F')* F') w, ‖w‖ ≤ 1 },   (3.6)

where h is an 'index function' defined on [0, ∞), which is operator monotone (see [18,19]) and satisfies the condition h(0) = 0. Moreover, it has been proven that

sup_{0<λ≤b} ( α/(α + λ) ) h(λ) ≤ h(α) for all α ∈ (0, ᾱ]   (3.7)
and some ᾱ > 0. Let us notice that J(θ) in (3.5) is the standard Tikhonov regularization functional, where the penalty term is naturally imposed in the H¹₀-norm. Such a consideration can facilitate the analysis of the accuracy. Moreover, it is equivalent to the Tikhonov regularization functional considered in [13] with a penalty term based on the derivative of the regularized solution. The discretization of the regularized problem (3.4) is realized by the Galerkin method. The Galerkin approximation of Tikhonov regularization consists in minimizing the above functional J(θ) in a finite-dimensional subspace X_n ⊂ H¹₀(Γ_I). As usual in the Galerkin scheme, the discretized regularized solution θ^δ_{α,n} is characterized by the variational equations

⟨F'θ^δ_{α,n} − r^δ, F'z⟩ + α ⟨θ^δ_{α,n}, z⟩ = 0,   ∀z ∈ X_n,   (3.8)

or, equivalently,

θ^δ_{α,n} = ((F'_n)* F'_n + αI)^{-1} (F'_n)* r^δ,   (3.9)
where F'_n := F' P_n, with P_n being the projection from H¹₀(Γ_I) onto X_n. Let f_1, f_2, . . ., f_n be basis functions of X_n. If one decomposes θ^δ_{α,n} into a linear combination of f_1, f_2, . . ., f_n, i.e. θ^δ_{α,n} = Σ_{i=1}^n c_i f_i, then the coefficient vector c = {c_i}_{i=1}^n can be obtained by solving the linear algebraic system

(M + αG) c = R^δ,

with the following matrices and vector:

M := ( ⟨F'f_i, F'f_j⟩_{L²(Γ_A)} )_{i,j=1}^n,   G := ( ⟨f_i, f_j⟩_{H¹₀(Γ_I)} )_{i,j=1}^n,   R^δ := ( ⟨F'f_i, r^δ⟩_{L²(Γ_A)} )_{i=1}^n.   (3.10)
Remark 3.1. The adjoint operator (F')* is not involved in the construction of θ^δ_{α,n}. Theoretically, F'f_i can be obtained by solving the boundary value system (2.4) and taking the trace on Γ_A, where the function θ is replaced by f_i, for i = 1, . . . , n. Moreover, we do not need each F'f_i in an explicit form, but only its products in (3.10), which can be computed much more accurately than F'f_i itself.
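To illustrate how small the computational core of (3.9)-(3.10) is, the following NumPy sketch assembles and solves the regularized Galerkin system. It assumes the traces F'f_i on Γ_A have already been produced by some forward solver for (2.4) and sampled at quadrature points; all names are illustrative and not taken from the paper.

```python
import numpy as np

def tikhonov_galerkin_solve(Ff, r_delta, G, alpha, weights=None):
    """Assemble and solve (M + alpha*G) c = R_delta, cf. (3.9)-(3.10).

    Ff      : (n, m) array; row i samples the trace F'f_i at m points of Gamma_A
    r_delta : (m,) array of noisy data at the same points
    G       : (n, n) Gram matrix <f_i, f_j> in the H^1_0(Gamma_I) inner product
    alpha   : Tikhonov regularization parameter
    weights : (m,) quadrature weights on Gamma_A (uniform if None)
    """
    m = r_delta.size
    w = np.full(m, 1.0 / m) if weights is None else weights
    # Discrete L^2(Gamma_A) inner products: M_ij = <F'f_i, F'f_j>, R_i = <F'f_i, r^delta>
    M = (Ff * w) @ Ff.T
    R = (Ff * w) @ r_delta
    return np.linalg.solve(M + alpha * G, R)  # coefficient vector c
```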
According to the classical results on Tikhonov regularization for linear ill-posed problems, and in view of (3.3) and (3.7), it holds that

‖θ − θ^δ_{α,n}‖_{H¹₀(Γ_I)} ≤ ‖θ − ((F'_n)* F'_n + αI)^{-1} (F'_n)* r̄‖_{H¹₀(Γ_I)} + ‖((F'_n)* F'_n + αI)^{-1} (F'_n)* (r̄ − r^δ)‖_{H¹₀(Γ_I)}
≤ ‖θ − ((F'_n)* F'_n + αI)^{-1} (F'_n)* r̄‖_{H¹₀(Γ_I)} + Cδ/(2√α).

As in [19], we can estimate the noise-free term as follows,

‖θ − ((F'_n)* F'_n + αI)^{-1} (F'_n)* r̄‖_{H¹₀(Γ_I)}
≤ ‖(I − ((F'_n)* F'_n + αI)^{-1} (F'_n)* F'_n) θ‖_{H¹₀(Γ_I)} + ‖((F'_n)* F'_n + αI)^{-1} (F'_n)* F'(I − P_n) θ‖_{H¹₀(Γ_I)}
≤ ‖(I − ((F'_n)* F'_n + αI)^{-1} (F'_n)* F'_n) h((F'_n)* F'_n) w‖_{H¹₀(Γ_I)} + ‖(I − ((F'_n)* F'_n + αI)^{-1} (F'_n)* F'_n) (h((F')* F') − h((F'_n)* F'_n)) w‖_{H¹₀(Γ_I)} + ‖F'(I − P_n) θ‖_{L²(Γ_A)} / √α
≤ C₁ ( h(α) + h(‖F'(I − P_n)‖²) + ‖F'(I − P_n)‖ / √α ),

where the constant C₁ does not depend on α and n. In view of the best possible order of accuracy without discretization being h(α) + δ/√α, the discretization has to be chosen such that

‖F'(I − P_n) : H¹₀(Γ_I) → L²(Γ_A)‖ ≤ δ.   (3.11)

Summing up the estimates above, we have the following theorem.

‖θ − θ^δ_{α,n}‖_{H¹₀(Γ_I)} ≤ K̄ h(α) + K δ/√α,   (3.12)

where the constants K̄ and K do not depend on α and δ.
Parameter choice rule based on the balancing principle
In this section, we give a regularization parameter choice rule based on the balancing principle developed in [6,7,20]. The essential idea of this principle is to choose the regularization parameter α balancing the two parts in the error estimate (3.12). As an a posteriori parameter choice rule, the balancing principle can select the regularization parameter in an adaptive way without a priori knowledge of the solution set (3.6). That is, the index function h in the bound (3.12), which indicates the smoothness of θ as shown in (3.6), does not need to be known. At the same time, it does not require knowledge of the precise noise level either. In our model problem, the constant C in (3.3) indicating the precise noise level is unknown, which implies that K in (3.12) is also unknown. A reference noise level δ is sufficient for the performance of the balancing principle. The regularization parameter chosen by the balancing principle leads to a regularized solution with an order-optimal accuracy. Assume that the projection P_n is chosen with n = n(F', δ) such that (3.11) is satisfied. Let θ^δ_α := θ^δ_{α,n(F',δ)}. We select the parameter α from the geometric sequence ∆ := {α_n = α₀qⁿ, n = 0, 1, . . . , N}, with q > 1, sufficiently small α₀, and sufficiently large N such that α_{N−1} ≤ 1 < α_N. For any given K, one can choose the parameter from ∆(α₀, q, N) by the following adaptive strategy,
α(K) = max{ α_n ∈ ∆ : ‖θ^δ_{α_n} − θ^δ_{α_m}‖_{H¹₀(Γ_I)} ≤ Kδ ( 3/√α_n + 1/√α_m ), m = 0, 1, . . . , n − 1 }.   (4.1)
We further rely on the assumption that a two-sided estimate

c K δ/√α ≤ ‖θ⁰_α − θ^δ_α‖_{H¹₀(Γ_I)} ≤ K δ/√α   (4.2)

holds for some c ∈ (0, 1), where θ⁰_α is defined by (3.9) with r^δ taken as r̄. The upper estimate in (4.2) is due to (3.12). As to the lower estimate, it just suggests that the noise propagation error is not that small. If the lower estimate is not satisfied, it just means that our estimate of the noise level is too pessimistic. However this will not cause a problem, since later we shall show that under assumption (4.2) the balancing principle can provide an order-optimal accuracy. Now, consider the following hypothesis set of possible values of the constant K,

K = { k_j = k₀ pʲ, j = 0, 1, . . . , M }, p > 1,

and assume that there are two adjacent terms k_l, k_{l+1} ∈ K such that

k_l ≤ cK ≤ K ≤ k_{l+1}.   (4.3)
In fact, each element in K can be viewed as a candidate estimator of the constant K. Our aim is to detect k_{l+1} (or, say, k_l) among the elements of K, and to use k_{l+1} in the adaptive strategy (4.1) to obtain a parameter α.

In view of (4.2) and (4.3), if the hypothesis k_j ∈ K for K is chosen too small, i.e., k_j ≤ k_l, then, as shown in [6,20], the corresponding regularization parameter α(k_j) will be smaller than a threshold depending on α₀ and p. Thus, if

α(k_i) := min{ α(k_j) : α(k_j) ≥ 9α₀ ((p² + 1)/(p − 1))² },   (4.4)

then either i = l or i = l + 1.

In order to guarantee that the regularized solution is stable enough, we choose the final regularization parameter as

α₊ = α(k_{i+1}).
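The adaptive steps (4.1) and (4.4) can be expressed compactly. The sketch below assumes a routine theta_of_alpha that returns the discretized Tikhonov solution for a given α (for instance via the Galerkin solver sketched in Section 3), uses the constants exactly as they appear in (4.1) and (4.4), and is a schematic outline rather than the authors' implementation.

```python
import numpy as np

def balancing_alpha(theta_of_alpha, alphas, K, delta, norm):
    """Adaptive strategy (4.1): the largest alpha_n whose solution stays within
    K*delta*(3/sqrt(alpha_n) + 1/sqrt(alpha_m)) of every previous solution."""
    thetas = [theta_of_alpha(a) for a in alphas]
    chosen = alphas[0]
    for n in range(1, len(alphas)):
        if all(norm(thetas[n] - thetas[m])
               <= K * delta * (3 / np.sqrt(alphas[n]) + 1 / np.sqrt(alphas[m]))
               for m in range(n)):
            chosen = alphas[n]
    return chosen

def select_alpha_plus(theta_of_alpha, alpha0, q, N, k0, p, M, delta, norm):
    """Run (4.1) for every hypothesis k_j, find the first index above the
    threshold of (4.4), and return alpha_plus = alpha(k_{i+1})."""
    alphas = [alpha0 * q ** n for n in range(N + 1)]
    ks = [k0 * p ** j for j in range(M + 1)]
    alpha_k = [balancing_alpha(theta_of_alpha, alphas, k, delta, norm) for k in ks]
    threshold = 9 * alpha0 * ((p ** 2 + 1) / (p - 1)) ** 2
    i = next(j for j, a in enumerate(alpha_k) if a >= threshold)
    return alpha_k[min(i + 1, M)]
```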
With such a choice α₊, we have the following theorem.

Theorem 4.1. Under the assumptions above, there holds

‖θ − θ^δ_{α₊}‖_{H¹₀(Γ_I)} ≤ 6p² √q K̄ h(h̃^{-1}(Kδ)),

where h̃(t) = K̄ h(t) √t, K̄ is the constant from the estimate (3.12), and h̃^{-1} is the inverse function of h̃.

Note that from [18] it follows that the error bound indicated in Theorem 4.1 is order-optimal, i.e., it is only worse by a constant factor 3p²√q than the a priori optimal bound 2K̄ h(h̃^{-1}(Kδ)). If the index function h in the source condition (3.6) is given as h(λ) = cλ^ν, 0 < ν ≤ 1, then h(h̃^{-1}(Kδ)) = O(δ^{2ν/(2ν+1)}), which coincides with the classical rate for Tikhonov regularization.
Remark 4.1. The proof of Theorem 4.1 can be found in [6] or [7]. For a general discussion of the application of the balancing principle with two flexible parameters, one can refer to [20].
Numerical tests
In this section, we present some numerical examples to illustrate the theoretical results obtained above. In Examples 1-3, we consider the corrosion problem in (1.1) with
Ω = (0, π) × (0, 1),   Γ_A = (0, π) × {0},   Γ_I = (0, π) × {1},   Γ_D = {0} × (0, 1) ∪ {π} × (0, 1).

On such a rectangular domain, if the flux Φ = sin(x) is given on Γ_A, then the solution to (1.1) has the form

u(x, y) = ( −sinh(y) + [ (γ sinh(1) + cosh(1)) / (sinh(1) + γ cosh(1)) ] cosh(y) ) sin(x),
with γ > 0. We test the same flux Φ = sin(x) in Examples 1-3.
Example 1.
In this example the vector field θ is given as θ = (0, θ₂(x)) with

θ₂(x) = ∫₀ˣ θ₂'(t) dt,   where θ₂(0) = 0, and

θ₂'(x) = [ −cot(x) + γ √(cot²(x) − γ² + 1) ] / (cot²(x) − γ²),   0 ≤ x < π/2,
θ₂'(x) = [ −cot(x) − γ √(cot²(x) − γ² + 1) ] / (cot²(x) − γ²),   π/2 ≤ x ≤ π,

with the constant γ such that 0 < γ < 1. Such a choice of θ corresponds to u_θ(x, y) = exp(−y) sin(x) solving (2.3). As we mentioned in the Introduction, the impedance coefficient γ, depending on the exterior environment, should be a fixed constant in the model problem (1.1). However, in this particular example, the scale of θ depends on γ.
On the other hand, our linearization approach by truncation can only work when θ is rather small. Thus, in this example we test different values of γ which are all quite close to 1. Figures 1 (a) and (b) illustrate the behaviors of θ₂(x) and θ₂'(x) when γ = 0.999. In order to simulate the error arising in experimental measurements, we add random noise at each grid point involved in the calculation, i.e. we take

r^δ = (u_θ − u)|_{Γ_A} + ξδ,

where ξ is a random variable with range [−1, 1] and the reference noise level δ = 10⁻⁶. The sequence {α(k_j)}_{j=0}^{19} produced by (4.1) with K replaced by k_j results in 4.827 · 10⁻¹¹, 8.157 · 10⁻¹¹, 1.379 · 10⁻¹⁰, 3.937 · 10⁻¹⁰, 1.900 · 10⁻⁹, 9.173 · 10⁻⁹, 3.406 · 10⁻⁸, 7.482 · 10⁻⁸, 9.728 · 10⁻⁸, 1.265 · 10⁻⁷, 1.644 · 10⁻⁷, 2.778 · 10⁻⁷, 3.612 · 10⁻⁷, 6.104 · 10⁻⁷, 1.341 · 10⁻⁶, 2.946 · 10⁻⁶, 8.415 · 10⁻⁶, 1.094 · 10⁻⁵, 1.422 · 10⁻⁵, 1.849 · 10⁻⁵.
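For concreteness, the synthetic data of this example can be produced along the following lines: the exact trace u|_{Γ_A} comes from the closed-form solution of (1.1) above, the perturbed trace is u_θ(x, 0) = sin(x), and uniform noise of level δ is added pointwise. The grid size and function names below are illustrative, not those used for the reported figures.

```python
import numpy as np

def example1_noisy_data(gamma, delta, m=201, rng=None):
    """Noisy voltage contrast r^delta = (u_theta - u)|_{Gamma_A} + xi*delta (Example 1)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.linspace(0.0, np.pi, m)
    # Exact solution of (1.1) on the rectangle, restricted to Gamma_A (y = 0)
    A = (gamma * np.sinh(1.0) + np.cosh(1.0)) / (np.sinh(1.0) + gamma * np.cosh(1.0))
    u_on_A = A * np.sin(x)                # (-sinh(0) + A*cosh(0)) * sin(x)
    u_theta_on_A = np.sin(x)              # u_theta(x, y) = exp(-y) sin(x) at y = 0
    xi = rng.uniform(-1.0, 1.0, size=m)   # pointwise noise with range [-1, 1]
    return u_theta_on_A - u_on_A + xi * delta
```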
For the parameters designed as above, the value of the threshold is calculated as 9α₀((p² + 1)/(p − 1))² = 7.236 · 10⁻⁹. Then α₊ = α(k_7) = 3.406 · 10⁻⁸. At the same time, we obtain an estimate of K as k_7 = 0.0290, which suggests that the true noise level Kδ is about 2.90 · 10⁻⁸. Table 1 summarizes the results for the other values of γ in Example 1.
Here we take a different reference noise level δ according to γ because, as we mentioned, in this particular example γ determines the scale of θ and hence the truncation error. In Table 1, Err_h1 := ‖θ − θ^δ_{α₊}‖_{H¹(Γ_I)} and Err_l2 := ‖θ − θ^δ_{α₊}‖_{L²(Γ_I)} denote the errors in the corresponding norms. The reconstructed functions θ^δ_{α₊} are displayed in Figure 2.
Example 2.
In this example, the vector field θ = (0, θ₂(x)) to be identified is similar to what is considered in [13], where θ₂(x) is a piecewise linear function, as shown in Figure 3. In contrast to Example 1, we do not assume that γ ∈ (0, 1), and test two cases: γ = 1 and γ = 10. The solution of (2.3) and its trace u_θ|_{Γ_A} are generated numerically. We simulate point-wise random noise at each discretization node on Γ_A with reference level δ = 10⁻⁷, and α₊ is chosen according to the balancing principle under such a value of δ.
The approximations θ δ α + are displayed in Figure 3 and the test results are summarized in Table 2.
Example 3.
In this example, we take the vector field θ = (0, θ₂(x)) with

θ₂(x) = −√(h² − (π/2)²) + √(h² − (x − π/2)²),   0 < x < π,
as shown in Figure 4. Here one can change the value of the constant h > 0 to adjust the scale of θ. The solution u_θ and its trace on Γ_A in this example are also obtained numerically. In order to guarantee that the scale of θ is small enough so that the truncation method can work well, we test h = 30π and h = 90π. In both cases, a larger value of γ may make the problem less ill-posed and result in better reconstruction. The approximations θ^δ_{α₊} are displayed in Figure 5 and the test results are summarized in Table 3.

Example 4.

In the last example, we consider a domain Ω given as a half annulus bounded by the following curves (see Figure 6):

Γ_A = {(x, y) : y = √(4 − x²), −2 < x < 2},   Γ_I = {(x, y) : y = √(1 − x²), −1 < x < 1},   Γ_D = {(x, y) : y = 0, 1 < |x| < 2}.

For the flux Φ = y on Γ_A, the function u solving (1.1) can be written as

u(x, y) = Ay + By/(x² + y²), where A = 1 +

with 0 < γ < 1. For the vector field θ = (0, θ₂(x)), we consider θ₂(x) = φ(x) − √(1 − x²), where

φ(x) = (1/γ) √(1 − (γx + γ − 1)²),   −1 < x ≤ 0,
φ(x) = (1/γ) √(1 − (γx − γ + 1)²),   0 ≤ x ≤ 1.
It can be verified that u θ (x, y) = y solves (2.3) in Ω θ .
In this example γ also determines the scale of θ. Thus, we take values of γ very close to 1. The approximations θ^δ_{α₊} are displayed in Figure 7 and the test results are summarized in Table 4. We would like to note that in all considered examples the balancing principle (4.1), (4.4) has been implemented with the same values of the design parameters α₀, p and q, because the domain Ω and the operator F' are the same for all examples. This suggests that in practice, for a given domain Ω, the parameters α₀, p and q can be determined in experiments with a problem (2.4) where the solution is known, and then kept for studying all other problems (2.3) in the given domain Ω. Both the theoretical and numerical results suggest that the linearization approach considered in this paper can perform well for the identification of the corroded boundary only on condition that the scale of this boundary function is quite small. This is the limitation of the approach. However, in practice one certainly does not expect too much corrosion to take place in the metallic specimen.
Figure 1: Functions in Example 1 with γ = 0.999. (c) noise added to (u_θ − u)|_{Γ_A}; (d) comparison of r̄ = u'|_{Γ_A} and r^δ.
Figures 1(c) and (d) show the additional noise ξδ and the comparison of r̄ = u'|_{Γ_A} and r^δ in the case γ = 0.999. In all of the following tests, the discretization level in (3.8) is taken as n = 20 and the regularization parameter α is chosen by the balancing principle described in Section 4. In the case of γ = 0.999, the parameters in the implementation of the balancing principle are set as follows: ∆ = {α_n = α₀qⁿ, n = 0, 1, . . . , N}, α₀ = 1 · 10⁻¹¹, q = 1.3, N = 69; K = {k_j = k₀pʲ, j = 0, 1, . . . , M}, k₀ = 0.006, p = 1.3, M = 19.
Figure 2: The simulated solutions θ^δ_{α₊} in Example 1.
Figure 3: The simulated solutions θ^δ_{α₊} in Example 2.
Figure 4: An illustration of θ₂(x) in Example 3.
Figure 5: The simulated solutions θ^δ_{α₊} in Example 3.
Figure 6: Domain Ω_θ in Example 4.
Figure 7: The simulated solutions θ^δ_{α₊} in Example 4.
Table 1: Test results for Example 1.
Table 2: Test results for Example 2.
γ | δ | K | α₊ | Err_h1 | Err_l2
1 | 10⁻⁷ | 0.3937 | 5.756 · 10⁻⁸ | 0.0031 | 3.565 · 10⁻⁴
10 | 10⁻⁷ | 0.5119 | 9.728 · 10⁻⁸ | 0.0030 | 2.622 · 10⁻⁴
Table 3: Test results for Example 3.
Table 4: Test results for Example 4.
γ | δ | K | α₊ | Err_h1 | Err_l2
0.99 | 10⁻⁵ | 0.0816 | 1.032 · 10⁻⁵ | 0.0057 | 0.0028
0.9 | 10⁻⁴ | 0.0816 | 1.032 · 10⁻⁵ | 0.0708 | 0.0384
[1] G. Alessandrini, L. Del Piero, L. Rondi, Stable determination of corrosion by a single electrostatic measurement, Inverse Problems 19 (2003), 973-984.
[2] G. Alessandrini, E. Sincich, Detecting nonlinear corrosion by electrostatic measurements, Applicable Analysis 85 (2006), 107-128.
[3] G. Alessandrini, E. Sincich, Solving elliptic Cauchy problems and the identification of nonlinear corrosion, J. Comput. Appl. Math. 198 (2007), 307-320.
[4] V. Bacchelli, Uniqueness for the determination of unknown boundary and impedance with homogeneous Robin condition, Inverse Problems 25 (2009), 015004 (4pp).
[5] E. Cabib, D. Fasino, E. Sincich, Linearization of a free boundary problem in corrosion detection, J. Math. Anal. Appl. 378 (2011), 700-709.
[6] H. Cao, S. Pereverzev, Natural linearization for the identification of a diffusion coefficient in a quasi-linear parabolic system from short-time observations, Inverse Problems 22 (2006), 2311-2330.
[7] H. Cao, M. V. Klibanov, S. V. Pereverzev, A Carleman estimate and the balancing principle in the quasi-reversibility method for solving the Cauchy problem for the Laplace equation, Inverse Problems 25 (2009), 035005, 21 pp.
[8] H. Cao, S. V. Pereverzev, Balancing principle for the regularization of elliptic Cauchy problems, Inverse Problems 23 (2007), 1943-1961.
[9] H. Cao, S. V. Pereverzev, E. Sincich, Natural linearization for corrosion identification, Journal of Physics: Conference Series 135 (2008), 012027.
[10] F. Cakoni, R. Kress, Integral equations for inverse problems in corrosion detection from partial Cauchy data, Inverse Problems and Imaging 1 (2007), 229-245.
[11] S. Chaabane, S. Jaoua, Identification of Robin coefficients by means of boundary measurements, Inverse Problems 15 (1999), 1425-1438.
[12] D. Fasino, G. Inglese, Recovering unknown terms in a nonlinear boundary condition for the Laplace's equation, IMA J. Appl. Math. 71 (2006), 832-852.
[13] D. Fasino, G. Inglese, F. Mariani, Corrosion detection in conducting boundaries: II. Linearization, stability and discretization, Inverse Problems 23 (2007), 1101-1114.
[14] P. G. Kaup, F. Santosa, Nondestructive evaluation of corrosion damage using electrostatic measurements, J. Nondestructive Eval. 14 (1995), 127-136.
[15] P. Kügler, E. Sincich, Logarithmic convergence rates for the identification of a nonlinear Robin coefficient, J. Math. Anal. Appl. 259 (2009), 451-463.
[16] E. Sincich, Lipschitz stability for the inverse Robin problem, Inverse Problems 23 (2007), 1311-1326.
[17] E. Sincich, Stability for the determination of unknown boundary and impedance with a Robin boundary condition, SIAM J. Math. Anal. 42 (2010), 2922-2943.
[18] P. Mathé, S. V. Pereverzev, Geometry of linear ill-posed problems in variable Hilbert scales, Inverse Problems 19 (2003), 789-803.
[19] P. Mathé, S. V. Pereverzev, Discretization strategy for linear ill-posed problems in variable Hilbert scales, Inverse Problems 19 (2003), 1263-1277.
[20] R. D. Lazarov, S. Lu, S. V. Pereverzev, On the balancing principle for some problems of numerical analysis, Numer. Math. 106 (2007), 659-689.
Clinical prediction system of complications among COVID-19 patients: a development and validation retrospective multicentre study

Ghadeer O. Ghosheh, Bana Alamad, Kai-Wen Yang, Faisil Syed, Nasir Hayat, Imran Iqbal, Fatima Al Kindi, Sara Al Junaibi, Maha Al Safi, Raghib Ali, Walid Zaher, Mariam Al Harbi, Farah E. Shamout

1 Engineering Division, NYU Abu Dhabi; 2 Abu Dhabi Health Services; 3 G42 Healthcare; * Joint supervision
Background: Existing prognostic tools mainly focus on predicting the risk of mortality among patients with coronavirus disease 2019 (COVID-19). However, clinical evidence suggests that COVID-19 can result in non-mortal complications that affect patient prognosis. To support patient risk stratification, we aimed to develop a prognostic system that predicts complications common to COVID-19.

Methods: In this retrospective study, we used data collected from 3,352 COVID-19 patient encounters admitted to 18 facilities between April 1 and April 30, 2020, in Abu Dhabi (AD), United Arab Emirates. The hospitals were split based on geographical proximity to assess our proposed system's learning generalizability: AD Middle region and AD Western & Eastern regions, A and B, respectively. Using clinical data collected during the first 24 hours of admission, the machine learning-based prognostic system predicts the risk of developing any of seven complications during the hospital stay. The complications include secondary bacterial infection, Acute Kidney Injury (AKI), Acute Respiratory Distress Syndrome (ARDS), and elevated biomarkers linked to increased patient severity, including d-dimer, interleukin-6, aminotransferases, and troponin. During training, the system applies exclusion criteria, hyperparameter tuning, and model selection for each complication-specific model. We assessed its performance using the area under the receiver operating characteristic curve (AUROC) and the area under the precision recall curve.

Findings: The system achieves good accuracy across all complications and both regions. In test set A (587 patient encounters), the system achieves 0.91 AUROC for AKI and > 0.80 AUROC for most of the other complications. In test set B (225 patient encounters), the respective system achieves ≥ 0.90 AUROC for AKI, elevated troponin, and elevated interleukin-6, and > 0.80 AUROC for most of the other complications. The best performing models, as selected by our system, were mainly gradient boosting models and logistic regression.
Introduction
The Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) has led to a global health emergency since the emergence of the coronavirus disease 2019 (COVID-19). Despite containment efforts, more than 55 million confirmed cases have been reported globally, of which 157,785 cases are in the United Arab Emirates (UAE) as of November 21, 2020 [1]. Due to unexpected burdens on healthcare systems, identifying high risk groups using prognostic models has become vital to support patient triage.
Most of the recently published prognostic models focus on predicting mortality, the need for intubation, or admission into the intensive care unit [2]. While the prediction of such adverse events is important for patient triage, clinical evidence suggests that COVID-19 may result in a variety of complications in organ systems that may eventually lead to mortality [3]. For example, Acute Respiratory Distress Syndrome (ARDS) related pneumonia has been reported as a major complication of COVID-19 [4]. Other studies reported alarming percentages among hospitalized COVID-19 patients that have developed hematological complications [5], organ dysfunction [6], or secondary bacterial infection [4]. Table 1 summarizes key studies that reported diagnosed complications or biomarkers which may lead to severe complications across different COVID-19 patient populations. Those findings suggest a pressing need for the development and validation of a prognostic system that predicts such complications in COVID-19 patients to support patient management.
Here, we address this need by proposing an automated prognostic system that learns to predict a variety of non-mortal complications among COVID-19 patients admitted to the Abu Dhabi Health Services (SEHA) facilities, UAE. The system uses multi-variable data collected during the first 24 hours of the patient admission, including vital-sign measurements, laboratory-test results, and baseline information. We particularly focus on seven complications based on the clinical evidence presented in Table 1, which are either based on clinical diagnosis or biomarkers that are indicative of patient severity. To allow for reproducibility and external validation, we made our code and a test set publicly available at: https://github.com/nyuad-cai/COVID19Complications.
Methods
We reported this study following the TRIPOD guidance [16].

Data source
There were 9 facilities in the Middle region, which includes the capital city, and 9 facilities in the Eastern and Western regions. Those regions are highlighted in Figure 1(a), and Figure 1(b) shows the flowchart for the overall dataset.

Defining and labeling of complications
Based on clinical evidence and in collaboration with clinical experts, we focused on predicting seven complications, including three clinically diagnosed events, namely secondary bacterial infection (SBI), Acute Kidney Injury (AKI) [17], and ARDS [18], and four biomarkers that may be indicative of patient severity. In particular, among COVID-19 patients, elevated troponin reflects myocardial injury and has been reported to be associated with a higher risk of mortality [7], elevated d-dimer is associated with thrombotic events [9], elevated interleukin-6 is a proinflammatory cytokine that has been shown to be associated with disease severity and in-hospital mortality [12], and elevated aminotransferases have been reported to be associated with liver injury [11]. For each patient encounter in the training and test sets, we identified the first occurrence (i.e., date and time), if any, of each complication based on the criteria shown in Table 2. The biomarkers-based complications are defined based on elevated laboratory-test results, SBI is defined based on positive cultures, AKI is defined based on the KDIGO classification criteria [17], and ARDS is defined based on the Berlin definition [18], which required the processing of free-text chest radiology reports. Further details on the processing of those reports are described in Supplementary Section A.

Table 2: Criteria used to define the occurrence of the complications that our system aims to predict.
Complication | Definition | Reference
Elevated Troponin | Troponin T ≥ 14 ng/L | [19]
Elevated D-Dimer | D-Dimer ≥ 500 ng/mL | [20]
Elevated Aminotransferases | AST ≥ 40 U/l AND ALT ≥ 40 U/l | *
Input features
We considered data recorded within the first 24 hours of admission as input features to our predictive models. This data included continuous and categorical features related to the patient baseline information and demographics, vital signs, and laboratory-test results. Within the patient's baseline and demographic information, age and body mass index (BMI) were treated as continuous features, while sex, pre-existing medical conditions (i.e., hypertension, diabetes, chronic kidney disease, and cancer), and symptoms recorded at admission (i.e., cough, fever, shortness of breath, sore throat, and rash) were treated as binary features. As for the vital-sign measurements and laboratory-test results, we excluded any variable that was used to define the presence of any complication in order to avoid label leakage. In particular, we considered seven continuous vital-sign features, including systolic blood pressure, diastolic blood pressure, respiratory rate, peripheral pulse rate, oxygen saturation, auxiliary temperature, and the Glasgow Coma Score, and 19 laboratory-test results, including albumin, activated partial thromboplastin time (APTT), bilirubin, calcium, chloride, c-reactive protein, ferritin, hematocrit, hemoglobin, international normalized ratio (INR), lactate dehydrogenase (LDH), lymphocytes count, prothrombin time, procalcitonin, sodium, red blood cell count (RBC), urea, uric acid, and neutrophils count. All vital-sign measurements and laboratory-test results were processed into minimum, maximum, and mean statistics. We also defined seven binary input features to represent whether a complication had occurred within the first 24 hours of admission, to allow the models to learn from any dependencies between the complications.
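As a rough illustration of this preprocessing step, the following pandas sketch collapses long-format measurements from the first 24 hours into per-encounter minimum/maximum/mean features; the column names are hypothetical and do not reflect the study's actual schema.

```python
import pandas as pd

def aggregate_first_24h(measurements: pd.DataFrame) -> pd.DataFrame:
    """Collapse rows (encounter_id, variable, hours_from_admission, value) into one row
    per encounter with min/max/mean of each variable over the first 24 hours."""
    first_day = measurements[measurements["hours_from_admission"] <= 24]
    agg = (
        first_day
        .groupby(["encounter_id", "variable"])["value"]
        .agg(["min", "max", "mean"])
        .unstack("variable")
    )
    # Flatten the (statistic, variable) column MultiIndex into names like "respiratory_rate_mean"
    agg.columns = [f"{var}_{stat}" for stat, var in agg.columns]
    return agg.reset_index()
```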
Predictive modeling
The proposed system predicts the risk of developing each of the complications during the patient's stay after 24 hours of admission. This is represented by a vector y consisting of 7 risk scores, where each risk score is computed by a complication-specific model, such that

y = ( y_{El. troponin}, y_{El. d-dimer}, y_{El. aminotransferases}, y_{El. interleukin-6}, y_{SBI}, y_{AKI}, y_{ARDS} ),

where y_complication ∈ [0, 1].
The overall workflow of the model development is depicted in Figure 2. For each complication-specific model, we excluded from its training and test sets patients who developed that complication prior to the time of prediction. For AKI, we also excluded patients with chronic kidney disease. Then for each complication, our system trains five model ensembles based on five types of base learners: logistic regression (LR), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP) and a light gradient boosting model (LGBM). Missing data was imputed using median imputation for all models except for LGBM, which can natively learn from missing data, and the data was further scaled using min-max scaling for LR and MLP and standard scaling for SVM and KNN.
For each type of base learner, the system performs a stratified k-folds cross-validation using the complication's respective training set with k = 3. We performed a random hyperparameter search for each base learner's hyperparameters [21] over 20 iterations, resulting in 3 trained models per hyperparameter set. The hyperparameter search ranges are described in Supplementary Section B. We selected the top two sets of hyperparameters that achieved the highest average area under the receiver operating characteristic curve (AUROC) on the validation sets, resulting in 6 trained models per ensemble. Then, we selected the ensemble that achieves the highest average AUROC on the validation set. Each model within the selected ensemble was further calibrated using isotonic regression on its respective validation set to ensure non-harmful decision making [22], except for the LR models. The final prediction for each complication consisted of an average of the predictions of all calibrated base learners per ensemble.
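A condensed scikit-learn sketch of this per-complication loop is given below. It is a simplified outline under several assumptions: a single generic base estimator, a user-supplied `param_sampler` standing in for the search ranges of Supplementary Section B, and isotonic calibration applied to every member (the actual system skips calibration for LR).

```python
import numpy as np
from sklearn.base import clone
from sklearn.calibration import CalibratedClassifierCV
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

def train_complication_ensemble(X, y, base_estimator, param_sampler, n_iter=20, seed=0):
    """Return calibrated fold models built from the two best hyperparameter settings."""
    rng = np.random.default_rng(seed)
    cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=seed)
    scored = []
    for _ in range(n_iter):                      # random hyperparameter search
        params = param_sampler(rng)
        aucs = []
        for tr, va in cv.split(X, y):
            model = clone(base_estimator).set_params(**params).fit(X[tr], y[tr])
            aucs.append(roc_auc_score(y[va], model.predict_proba(X[va])[:, 1]))
        scored.append((np.mean(aucs), params))
    top_two = [p for _, p in sorted(scored, key=lambda t: t[0], reverse=True)[:2]]
    ensemble = []
    for params in top_two:                       # refit and calibrate: 2 x 3 = 6 members
        for tr, va in cv.split(X, y):
            model = clone(base_estimator).set_params(**params).fit(X[tr], y[tr])
            calibrated = CalibratedClassifierCV(model, method="isotonic", cv="prefit")
            calibrated.fit(X[va], y[va])         # isotonic calibration on the held-out fold
            ensemble.append(calibrated)
    return ensemble                              # final risk = mean of member probabilities
```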
To understand which input features were most predictive of each complication, we performed post-hoc feature importance analysis using the tree SHapley Additive exPlanations (SHAP) [23]. All analysis was performed using Python (version 3.7.3); the LR, KNN, SVM, and MLP models were trained using the Python scikit-learn package, and the LGBM models were trained using the LightGBM package [24].
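A sketch of this analysis with the shap package is shown below; it assumes the ensemble members are tree-based models (as for the LGBM ensembles) and uses a straightforward averaging scheme, which may differ in detail from the authors' code.

```python
import numpy as np
import shap

def top_shap_features(ensemble_models, X, feature_names, top_k=4):
    """Average mean-|SHAP| importance over the ensemble and return the top features."""
    per_model = []
    for model in ensemble_models:
        shap_values = shap.TreeExplainer(model).shap_values(X)
        if isinstance(shap_values, list):       # some versions return [class 0, class 1]
            shap_values = shap_values[1]
        per_model.append(np.abs(shap_values).mean(axis=0))
    importance = np.mean(per_model, axis=0)
    order = np.argsort(importance)[::-1][:top_k]
    return [(feature_names[i], float(importance[i])) for i in order]
```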
Statistical Analysis
We evaluated each complication ensemble using the AUROC and the area under the precision recall curve (AUPRC) on the test set. Confidence intervals for all of the evaluation metrics were computed using bootstrapping with 1,000 iterations [25]. We also assessed the calibration of the ensemble, after post-hoc calibration of its trained models, using reliability plots and reported calibration intercepts and slopes [22].
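For reference, a minimal bootstrap routine of the kind described above might look as follows; it computes a percentile interval for any threshold-free metric and is only a sketch of the general procedure, not the exact evaluation code of the study.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_ci(y_true, y_score, metric=roc_auc_score, n_boot=1000, level=0.95, seed=0):
    """Point estimate and percentile bootstrap confidence interval for a metric such as AUROC."""
    rng = np.random.default_rng(seed)
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        if len(np.unique(y_true[idx])) < 2:      # resample must contain both classes
            continue
        stats.append(metric(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(stats, [(1 - level) / 2 * 100, (1 + level) / 2 * 100])
    return metric(y_true, y_score), (lo, hi)
```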
Role of Funding Source
The funding source had no role in the study design or data analysis. The study was performed by all co-authors who had access to the anonymized dataset.
Figure 2: Overview of our proposed model development approach and expected application in practice. In the first row, we developed our complication-specific models by first preprocessing the data, identifying the occurrences of the complications based on the criteria shown in Table 2, training and selecting the best-performing models on the validation set, and then evaluating the performance on the test set, retrospectively. As for deployment, we expect our system to predict the risk of developing any of the seven complications for any patient after 24 hours of admission.
Results
A total of 3,352 encounters were included in the study, and the characteristics of the final data splits are presented in Table 3. Across all the data splits, the mean age ranges between 39.3 and 45.5 years and the proportion of males ranges between 84.8% and 88.9%. The mortality rate was also less than 4% across all data splits, ranging between 1.3% and 3.7%. The most prevalent complication across all datasets was elevated d-dimer, although most patients mainly exhibited elevated d-dimer during the first 24 hours of admission. Elevated interleukin-6 was the most prevalent complication developed after 24 hours of admission across all datasets. The incidence of the complications developed after 24 hours was higher in the test sets than in their respective training sets, except for elevated troponin and d-dimer, which were higher in training set A (3.0% and 6.6%, respectively) than in test set A (2.4% and 4.8%, respectively). The distributions of the vital signs and laboratory-test results, in terms of the mean and interquartile ranges, are shown in Table 4. The performance of the models selected by our system across the two test sets in terms of the AUROC and AUPRC is shown in Table 5. The ROC, PRC, and reliability plots are also visualized in Figure 3. Across both test sets, our data-driven approach achieved good accuracy (>0.80 AUROC) for most complications. In test set A, AKI was the best discriminated endpoint at 24 hours from admission, with 0.905 AUROC (95% CI 0.861, 0.946). This is followed by ARDS (0.864 AUROC), SBI (0.862 AUROC), elevated troponin (0.843 AUROC), elevated interleukin-6 (0.820 AUROC), and elevated aminotransferases (0.801 AUROC). The complication with the worst discrimination was elevated d-dimer (0.717 AUROC). In test set B, AKI was also the best discriminated endpoint with 0.958 AUROC (95% CI 0.913, 0.994), followed by elevated troponin (0.913 AUROC) and elevated interleukin-6 (0.899 AUROC). Similar to test set A, elevated d-dimer was the worst discriminated endpoint (0.714 AUROC). We also observe that LGBM was selected as the best performing model on the validation sets for most complications, as shown in Supplementary Section C. LR was selected for AKI in both datasets, for elevated d-dimer in dataset A, and for SBI in dataset B, highlighting its predictive power despite its simplicity compared to the other machine learning models.

Table 3: Summary of the baseline characteristics of the patient cohort in the training sets and test sets and the prevalence of the predicted complications. Note that n represents the total number of patients while % is the proportion of patients within the respective dataset.

The top four important features for each complication are shown in Figure 4 across the two test sets. In test set A, age was among the top predictive features for all the complications except for elevated interleukin-6 and AKI. In test set B, C-reactive protein was among the top predictive features for predicting elevated aminotransferases, elevated d-dimer, elevated interleukin-6, and ARDS. Other features, such as ferritin and LDH, and BMI, were among the top predictive features for several complications across both sets, specifically for AKI and ARDS, respectively. We also visualize the timelines of two patients in Figure 5, along with the predictions of our system. In Figure 5(a), the patient shown developed all seven complications during their hospital stay of 43 days. This highlights the importance of predicting all complications simultaneously, especially for patients who may develop more than one complication. In Figure 5(b), the patient did not develop any complications during their hospital stay of two days. Comparing both patients, the system's predictions for patient (a) were relatively higher than those for patient (b). For example, the AKI predictions were 0.54 and 0.03, respectively, despite the fact that patient (a) developed AKI at around 20 days from admission. This demonstrates the value of our system in predicting the risk of developing complications early during the patient's stay.
Discussion
In this study, we developed a predictive system of commonly occurring complications among COVID-19 patients to support patient triage. During validation, the system was assessed for performance and calibration.
To the best of our knowledge, this is one of the few machine learning studies that predict non-mortal complications secondary to COVID-19 and the first to demonstrate a system that predicts the risk of such complications simultaneously. The system achieves a good performance across all complications, for example, reaching above 0.9 AUROC for AKI across two independent datasets. This study has several strengths and limitations.
One of the main strengths is that we used multicentre data collected from 18 facilities across several regions in Abu Dhabi, UAE. COVID-19 treatment is free for all patients, hence there were no obvious gaps in terms of access to healthcare services in our dataset. Our dataset is diverse, since Abu Dhabi is home to more than 200 nationalities, and only 19.0% of the population is Emirati. Those characteristics of the dataset make our findings relevant to a global audience. This is also the first data-driven study to represent the population in the UAE and one of few studies with large sample sizes (3,352 COVID-19 patient encounters) among COVID-19 related studies, while most previous studies have focused on European or Chinese patient cohorts. Despite the diversity of the dataset, one limitation is that we did not perform validation on a patient cohort external to the UAE. Compared to other international patient cohorts, our patient cohort is relatively younger, with a lower overall mortality rate, suggesting that our system needs to be further validated on populations with different demographic distributions [4,7,13,14]. Our data-driven approach and open-access code can be easily adapted for such purposes.
Several studies reported worse prognosis among COVID-19 infected patients who had multi-organ failure, severe inflammatory response, and other hematological complications [5,6,7,10]. Most existing studies focus on predicting the mortality endpoint [2]. The low mortality rate in our dataset strongly discouraged the development of a mortality risk prediction score, as small sample sizes may lead to biased models [2]. Our work was motivated by predicting the precursors of such severe adverse events, as identified by the World Health Organization [3]. We identified and predicted seven complications indicative of patient severity in order to avoid worse patient outcomes. The prevalence of the predicted complications ranged between 2%-10% and 2%-13% in our training and test sets, respectively. This high class imbalance is reflected in the AUPRC results. Since most of those tasks have not been investigated thoroughly before, our results introduce new benchmarks to evaluate other competing models. Future work should also investigate the use of multi-label deep learning classifiers, while accounting for the exclusion criteria during training.
An important aspect of this study is that the labeling criteria relies on renowned clinical standards and hospital-acquired data to identify the exact time of the occurrence of such complications. In collaboration with the clinical experts, this approach was considered more reliable than relying on International Classification of Disease (ICD) codes, since ICD codes are generally used for billing purposes and their derivation may vary across facilities, especially during a pandemic. One limitation of the labeling procedure is that it could miss patients for whom the data used in identifying a particular complication was not collected. However, this issue is more closely related to data collection practices at institutions and clinical data is often not missing at random. We also avoided label leakage by ensuring that there is no overlap between the set of input features and the features used to identify complications.
The feature importance analysis revealed that age, oxygen saturation, and respiratory rate are highly predictive of several complications. Since COVID-19 is predominantly a pulmonary illness, it was not surprising that oxygen saturation and respiratory rate ranked among the highest predictive features. Such features are routinely collected at hospitals and do not incur any additional data collection costs. We also identified C-reactive protein, ferritin, LDH, procalcitonin, systolic blood pressure, and diastolic blood pressure as markers for severity among COVID-19 patients, which is aligned with clinical literature [20,26]. This analysis demonstrates that our system's learning is clinically meaningful and relevant.
We assessed our models' calibration by reporting the calibration slopes and intercepts with confidence intervals and visualizing the calibration curves. Sufficiently large datasets are usually needed to produce stable calibration curves at model validation stage. Despite the size of our dataset, we found that reporting the calibration slopes and intercepts would provide a concise summary of potential problems with our system's risk calibration, to avoid harmful decision-making [22]. Overall, our results show that our ensemble models were adequately calibrated across all complications, as shown in Table 5 and Figure 3(c). This is also reflected in the sample patient timelines shown in Figure 5, where the predicted risks for the patient who experienced the complications were relatively higher than those predicted for the patient who did not experience any complications. Limiting factors to perfect calibration are the small dataset size and the fact that the ensemble prediction consists of an average of the predictions of the individually calibrated models. Further work should investigate how to improve the calibration of ensemble models.
Our data-driven approach and results highlight the promise of machine learning in predicting the risk of complications among COVID-19 patients. The proposed approach performs well when applied to two independent multicentre training and test sets in the UAE. The system can be easily implemented in practice due to several factors. First, the input features that our system uses are routinely collected by hospitals that accommodate COVID-19 patients as recommended by WHO. Second, training the machine learning models within our system does not require high computational resources. Finally, through feature importance analysis, our system can offer interpretability, and is also fully automated as it does not require any manual interventions. To conclude, we propose a clinically applicable prognostic system that predicts non-mortal complications among COVID-19 patients. Our system can serve as a guide to anticipate the course of COVID-19 patients and to help initiate more targeted and complication-specific decision-making on treatment and triage.
Contributors
GOG, BA, and KWY managed and analyzed the data. FS and IQ extracted, anonymized, and provided the dataset for analysis. GOG, KWY, and NH developed and maintained the experimental codebase. FAK, SAJ, MAS, RA, and MAH provided clinical expertise. WZ, FS, MAH and FES designed the study. MAH and FES supervised the work. GOG, BA, KWY, and FES wrote the manuscript. All authors interpreted the results and revised and approved the final manuscript.
Data sharing
We are unable to share the full dataset used in this study due to restrictions by the data provider. However, to allow for reproducibility and benchmarking on our dataset, we are sharing test set B (n=225), the trained models, and the source code online at https://github.com/nyuad-cai/COVID19Complications.
Supplementary Information
A Details of data pre-processing for labeling the complications

The KDIGO classification was used to classify AKI encounters [17]. The definition has three criteria, and if any of them were satisfied, the patient was assigned a diagnosis of AKI. The three criteria were either an increase in serum creatinine of 0.3 mg/dl within 48 hours, an increase of 1.5 times the baseline serum creatinine measurement, or urine output of less than 0.5 ml/kg/hr for 6 hours [17]. We only assessed the first two definitions, since urine output was not available in our dataset. The patient's first record of serum creatinine was treated as the baseline for that patient. Patients with reported chronic kidney disease were excluded from the training and testing AKI subsets.
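A simplified sketch of the two serum-creatinine checks is given below for a single encounter; the column names and units are illustrative, and the real labeling pipeline handles edge cases not shown here.

```python
import pandas as pd

def first_aki_time(creatinine: pd.DataFrame):
    """First timestamp meeting either creatinine criterion of KDIGO:
    a rise of >= 0.3 mg/dl within 48 hours, or >= 1.5x the first-recorded baseline.
    `creatinine` holds columns ['charttime', 'value'] for one encounter."""
    df = creatinine.sort_values("charttime").reset_index(drop=True)
    baseline = df["value"].iloc[0]
    for i in range(1, len(df)):
        t_i, v_i = df.loc[i, "charttime"], df.loc[i, "value"]
        window = df[(df["charttime"] >= t_i - pd.Timedelta(hours=48)) &
                    (df["charttime"] < t_i)]
        rise_within_48h = len(window) > 0 and (v_i - window["value"].min()) >= 0.3
        if rise_within_48h or v_i >= 1.5 * baseline:
            return t_i
    return None
```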
The Berlin definition was employed to identify the timing and incidence of ARDS [18]. The full ARDS labeling process is illustrated by the flow diagram in Figure 1. Textual chest X-ray reports and CT scan reports were processed using natural language processing (NLP) techniques to identify three categorized key terms: opacity, bilaterality, and ARDS. The lexicon developed was in reference to the Herasevich [27] and ASSIST [28] sniffers, which was further refined and validated based on clinical expertise. To minimize the influence of uncertainty profiles, the negation expression "no" was searched 40 characters prior to the identification of opacity. The ARDS diagnosis was confirmed if either one of the two criteria is satisfied: (1) the ARDS term is present or (2) both terms of bilaterality and opacity are present in the report. We identified the first radiology observation of bilateral opacity, as subsequent reports usually refer to the ones previously conducted for the identical patient instead of repeating the full interpretation and findings. Manual inspection of portions of the reports was done to validate the efficacy of the algorithm.
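The keyword matching with a 40-character negation window can be sketched as follows; the term lists are illustrative placeholders, much smaller than the validated lexicon referenced above.

```python
import re

# Illustrative term lists; the study's validated lexicon (built from [27, 28]) is richer.
OPACITY_TERMS = ["opacity", "opacities", "consolidation", "infiltrate", "ground glass"]
BILATERAL_TERMS = ["bilateral", "both lungs"]
ARDS_TERMS = ["ards", "acute respiratory distress"]

def contains(text, terms, negation_window=0):
    """True if any term occurs and, when a window is given, is not preceded by 'no'."""
    for term in terms:
        for match in re.finditer(re.escape(term), text):
            preceding = text[max(0, match.start() - negation_window):match.start()]
            if negation_window == 0 or not re.search(r"\bno\b", preceding):
                return True
    return False

def meets_imaging_criterion(report: str) -> bool:
    """Berlin imaging criterion from a free-text report: an explicit ARDS mention,
    or bilaterality together with a non-negated opacity term."""
    text = report.lower()
    return contains(text, ARDS_TERMS) or (
        contains(text, BILATERAL_TERMS) and contains(text, OPACITY_TERMS, negation_window=40)
    )
```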
For the oxygenation criteria, 13,862 arterial partial pressure of oxygen (PaO2) measurements acquired through arterial blood gas tests (ABG) were recorded for 358 unique patients. We have confirmed with SEHA clinicians that such a test is only conducted for patients suspected of ARDS or with severe symptoms, and therefore patients without one can be ruled out of ARDS directly. Each PaO2 measurement was matched with the closest prior record of FiO2 (the fraction of inspired oxygen) for the given patient to obtain the P/F ratio. For patients with missing FiO2 measurements, we assumed that they were not on oxygen therapy and assigned a value of 0.2095 (20.95% of oxygen in air). The patients were then labeled as potentially having ARDS if their P/F ratio ≤ 300 mm Hg.
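The matching of each blood gas with the closest prior FiO2 record is essentially a backward as-of join; a pandas sketch, with hypothetical column names and FiO2 assumed to be stored as a fraction, is shown below.

```python
import pandas as pd

ROOM_AIR_FIO2 = 0.2095  # assumed when no FiO2 record precedes the blood gas

def pf_ratio_flags(pao2: pd.DataFrame, fio2: pd.DataFrame) -> pd.DataFrame:
    """Match each PaO2 measurement with the closest prior FiO2 record and flag
    rows meeting the oxygenation criterion PaO2/FiO2 <= 300 mm Hg."""
    pao2 = pao2.sort_values("charttime")
    fio2 = fio2.sort_values("charttime")
    merged = pd.merge_asof(pao2, fio2, on="charttime", direction="backward",
                           suffixes=("_pao2", "_fio2"))
    merged["fio2"] = merged["value_fio2"].fillna(ROOM_AIR_FIO2)
    merged["pf_ratio"] = merged["value_pao2"] / merged["fio2"]
    merged["meets_oxygenation_criterion"] = merged["pf_ratio"] <= 300
    return merged
```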
The earliest recorded time - either arrival time, admission time, or the first time the patient tested positive for COVID-19 - was utilized in lieu of the precise point of clinical insult of respiratory symptoms for the timing criteria of the Berlin definition. To rule out pulmonary edema of other origin, patients with cardiac edema prior to the onset of ARDS were identified from the vitals and excluded. With the criteria and steps delineated herein, 243 patients were identified as having ARDS across both training sets as well as test sets.

Figure 1: The ARDS labeling process in our dataset, in accordance with the four criteria of the Berlin definition [18]: imaging, oxygenation, timing, and origin. The lexicon developed for identifying bilateral opacity in radiology reports is also shown within the table on the left.
B Hyperparameter search
Our system performs a random search over the hyperparameters of the machine learning models and then evaluates their performance on the validation sets. The searched hyperparameters for each of the models are shown in Table B.

Table B: Hyperparameter values considered during the random hyperparameter search. Ranges are indicated with a '-'.

Model | Hyperparameter | Values
Logistic Regression | Regularization parameter C | [0.01, 0.10, 0.1, 10, 25, 50, 100]
Logistic Regression | Max iterations | [50, 100, 200]
K-Nearest Neighbors | Leaf size | [1-50]
K-Nearest Neighbors | Power parameter | [1, 2]
K-Nearest Neighbors | N neighbors | [1-30]
Multi-layer Perceptron | L2 penalty parameter alpha | [0.005, 0.002, 0.01, 0.2, 0.03, 0.05]
Multi-layer Perceptron | Activation function | [Tanh, Relu]
Multi-layer Perceptron | Learning rate | [Constant, Adaptive]
Multi-layer Perceptron | Weight optimization solver | [Sgd, Adam]
Multi-layer Perceptron | Hidden layer sizes | [(50,50,50), (50,100,50), (100,)]
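As an illustration of this step, the sketch below draws random configurations from the logistic-regression grid of Table B and keeps the one with the best validation AUROC; the number of sampled configurations and the variable names are our own choices, not the study's.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import ParameterSampler

# Grid for the logistic-regression base learner, as recovered in Table B.
param_grid = {"C": [0.01, 0.10, 0.1, 10, 25, 50, 100], "max_iter": [50, 100, 200]}

def random_search(X_train, y_train, X_val, y_val, n_iter=10, seed=0):
    best_auc, best_model = -1.0, None
    for params in ParameterSampler(param_grid, n_iter=n_iter, random_state=seed):
        model = LogisticRegression(**params).fit(X_train, y_train)
        auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
        if auc > best_auc:
            best_auc, best_model = auc, model
    return best_model, best_auc
```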
C Model comparison
After preprocessing the data, we compared the performance of 5 ensembles based on 5 types of base learners on the validation sets: Logistic Regression (LR), K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Multi-layer Perceptron (MLP), and Light Gradient Boosting Model (LGBM). The models were compared using the AUROC and AUPRC, and the results are shown in Table C. We selected the ensemble that achieved the highest AUROC on the validation set.
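The two comparison metrics can be computed directly from predicted probabilities. The sketch below shows how AUROC and AUPRC would be obtained for one candidate ensemble on a validation set; averaging the members' predicted probabilities is one common ensembling choice and an assumption on our part rather than a detail taken from the study.

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

def evaluate_ensemble(models, X_val, y_val):
    """Average the member probabilities and score the ensemble on one validation set."""
    probs = np.mean([m.predict_proba(X_val)[:, 1] for m in models], axis=0)
    return {
        "AUROC": roc_auc_score(y_val, probs),
        "AUPRC": average_precision_score(y_val, probs),  # area under the PRC
    }
```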
Figure 1: (a) The UAE map showcasing the location of the healthcare facilities included in this study. (b) Flowchart of the exclusion criteria applied to obtain the final data splits.

Figure 3: The (a) ROC curves, (b) PRC curves, and (c) calibration curves are shown for all model ensembles evaluated on test set A (top) and test set B (bottom). The color legend for all figures is shown on the right. The numerical values for the AUROC, AUPRC, calibration slopes and intercepts can be found in Table 5.

Figure 4: The four most important features are shown for each complication in (a) test set A and (b) test set B. Feature importance was computed using the average SHAP values of the six models per ensemble.

Figure 5: Timeline showing the development of complications with respect to number of days from admission (x-axis).
Table 1: Summary of clinical studies reporting various non-mortal complications in patients with confirmed COVID-19 diagnosis. The alarming incidence rates suggest a pressing need for developing a clinical decision support system that predicts such complications.

Complication | Cohort size | Incidence rate | Location | Reference
Elevated Troponin | 614 | 45.3% | Italy | [7]
Elevated Troponin | 1527 | 8.0% | China | [8]
Elevated D-Dimer | 248 | 74.6% | China | [9]
Elevated D-Dimer | 2377 | 76.0% | United States | [5]
Elevated Aminotransferases | 105 | 21.0% | China | [10]
Elevated Aminotransferases | 5700 | 39.0% & 58.5% * | United States | [11]
Elevated Interleukin-6 | 728 | 16.5% | China | [12]
Secondary Bacterial Infection | 191 | 15.0% | China | [4]
Secondary Bacterial Infection | 338 | 5.6% † | United States | [13]
Acute Kidney Injury | 5449 | 36.6% | United States | [6]
Acute Kidney Injury | 98 | 9.2% | South Korea | [14]
Acute Respiratory Distress Syndrome | 191 | 31.0% | China | [4]
Acute Respiratory Distress Syndrome | 98 | 18.4% | South Korea | [14]
Acute Respiratory Distress Syndrome | 1099 | 3.4% | China | [15]
This study is a retrospective multi-centre study that includes anonymized data recorded within 3,493 COVID-19 hospital encounters at 18 Abu Dhabi Health Services (SEHA) healthcare facilities in Abu Dhabi, United Arab Emirates. The study received approval by the Institutional Review Board (IRB) from the Department of Health (Ref: DOH/CVDC/2020/1125) and New York University (Ref: HRPP-2020-70). Figure 1(b) shows the flowchart as we applied the exclusion criteria to obtain the final data splits. We excluded 127 non-adult encounters and 14 pregnant encounters and split the dataset into training and test sets. Training set A consisted of 1,829 encounters recorded in the Middle region between April 1, 2020 and April 25, 2020. To evaluate for temporal generalizability, test set A included 587 encounters recorded in the Middle region between April 26, 2020 and April 30, 2020. Training set B included 711 encounters admitted to the Eastern and Western regions between April 1, 2020 and April 25, 2020, and test set B included 225 encounters admitted to the same hospitals between April 26, 2020 and April 30, 2020.
Complication | Definition | Reference
Elevated Interleukin-6 | Interleukin-6 ≥ 8.43 pg/mL | *
SBI | Positive blood, urine, throat or sputum cultures within 24 hours of sample collection | *
AKI | Based on the Kidney Disease Improving Global Guidelines (KDIGO) classification: increase in serum creatinine by ≥ 0.3 mg/dl within 48 hours, OR increase in serum creatinine to ≥ 1.5 times the baseline, OR urine volume < 0.5 ml/kg/hr for 6 hours | [17]
ARDS | Based on the Berlin definition: presence of bilateral opacity in radiology reports, AND oxygenation PaO2/FiO2 ≤ 300 mm Hg, AND timing ≤ one week, AND pulmonary origin | [18]

* Based on SEHA's clinical standards.
 | Training set A | Test set A | Training set B | Test set B
Patient Cohort
Encounters, n | 1829 | 587 | 711 | 225
Age, mean (IQR) | 41.7 (17.0) | 45.5 (18.0) | 39.3 (17.0) | 42.7 (20.0)
Male, n (%) | 1582 (86.5) | 522 (88.9) | 622 (87.5) | 191 (84.8)
Arab, n (%) | 295 (16.1) | 89 (15.2) | 120 (16.9) | 43 (19.1)
Non-Arab, n (%) | 1534 (83.9) | 498 (84.8) | 591 (83.1) | 182 (80.9)
Critical Care, n (%) | 332 (18.2) | 83 (14.1) | 63 (8.8) | 26 (11.6)
Mortality, n (%) | 36 (2.0) | 22 (3.7) | 9 (1.3) | 3 (1.3)
Complications
Elevated troponin, n (%) | 101 (5.5) | 50 (8.5) | 33 (4.6) | 19 (8.4)
  Developed within 24 hours from admission, n (%) | 47 (2.6) | 36 (6.1) | 20 (2.8) | 5 (2.2)
  Developed after 24 hours from admission, n (%) | 54 (3.0) | 14 (2.4) | 13 (1.8) | 14 (6.2)
Elevated d-dimer, n (%) | 643 (35.2) | 296 (50.4) | 173 (24.3) | 78 (34.7)
  Developed within 24 hours from admission, n (%) | 523 (28.6) | 268 (45.7) | 130 (18.3) | 60 (26.7)
  Developed after 24 hours from admission, n (%) | 120 (6.6) | 28 (4.8) | 43 (6.0) | 18 (8.0)
Elevated aminotransferases, n (%) | 397 (21.7) | 117 (30.2) | 119 (16.7) | 56 (24.9)
  Developed within 24 hours from admission, n (%) | 287 (15.7) | 133 (22.7) | 72 (10.1) | 35 (15.6)
  Developed after 24 hours from admission, n (%) | 110 (6.0) | 44 (7.5) | 47 (6.6) | 21 (9.3)
Elevated interleukin-6, n (%) | 245 (13.5) | 126 (21.5) | 65 (9.1) | 28 (12.4)
  Developed within 24 hours from admission, n (%) | 57 (3.1) | 49 (8.3) | 7 (1.0) | 1 (0.4)
  Developed after 24 hours from admission, n (%) | 188 (10.3) | 77 (13.1) | 58 (8.2) | 27 (12.0)
SBI, n (%) | 92 (5.0) | 45 (7.7) | 23 (3.2) | 17 (7.6)
  Developed within 24 hours from admission, n (%) | 1 (0.1) | 3 (0.5) | 1 (0.1) | 1 (0.4)
  Developed after 24 hours from admission, n (%) | 91 (5.0) | 42 (7.2) | 22 (3.1) | 16 (7.1)
AKI, n (%) | 126 (6.9) | 52 (8.9) | 32 (4.5) | 16 (7.1)
  Developed within 24 hours from admission, n (%) | 28 (1.5) | 9 (1.5) | 14 (2.0) | 3 (1.3)
  Developed after 24 hours from admission, n (%) | 98 (5.4) | 43 (7.3) | 18 (2.5) | 13 (5.8)
ARDS, n (%) | 117 (6.4) | 57 (9.7) | 45 (6.3) | 24 (10.7)
  Developed within 24 hours from admission, n (%) | 61 (3.3) | 26 (4.4) | 23 (3.2) | 13 (5.8)
  Developed after 24 hours from admission, n (%) | 56 (3.1) | 31 (5.3) | 22 (3.1) | 11 (4.9)
Table 4: Characteristics of the variables that were used as input features to our models. The mean and interquartile ranges are shown for the demographic features, vital-sign measurements, and laboratory-test results. For the comorbidities and symptoms at admission, n denotes the number of patients and % denotes the percentage of patients per the respective dataset.

Variable, unit | Training set A | Test set A | Training set B | Test set B
Demographics, mean (IQR)
Age | 41.7 (17.0) | 45.5 (18.0) | 39.3 (17.0) | 42.7 (20.0)
BMI | 26.9 (5.2) | 26.7 (5.7) | 26.5 (5.7) | 27.9 (6.2)
Comorbidities, n (%)
Hypertension | 550 (30.1) | 213 (36.3) | 168 (23.6) | 71 (31.6)
Diabetes | 427 (23.3) | 221 (37.6) | 121 (17.0) | 73 (32.4)
Chronic kidney disease | 68 (3.7) | 30 (5.1) | 20 (2.8) | 7 (3.1)
Cancer | 30 (1.6) | 7 (1.2) | 12 (1.7) | 8 (3.6)
Symptoms at admission, n (%)
Cough | 851 (46.5) | 338 (57.6) | 259 (36.4) | 99 (44.0)
Fever | 28 (1.5) | 20 (3.4) | 3 (0.4) | 3 (1.3)
Shortness of breath | 190 (10.4) | 99 (16.9) | 71 (10.0) | 34 (15.1)
Sore throat | 238 (13.0) | 89 (15.2) | 118 (16.6) | 28 (12.4)
Rash | 29 (1.6) | 10 (1.7) | 15 (2.1) | 5 (2.2)
Laboratory-test results, mean (IQR)
Albumin, g/L | 38.4 (11.1) | 35.1 (11.9) | 39.9 (6.0) | 38.6 (7.7)
APTT, seconds | 38.8 (5.6) | 38.1 (6.1) | 30.4 (8.3) | 29.5 (5.7)
Bilirubin, micromol/L | 9.9 (5.5) | 9.3 (6.1) | 9.7 (5.5) | 9.4 (5.0)
Calcium, mmol/L | 2.3 (0.2) | 2.2 (0.2) | 2.3 (0.2) | 2.3 (0.2)
Chloride, mmol/L | 100.9 (4.2) | 99.7 (4.4) | 101.3 (3.3) | 100.7 (4.4)
C-reactive protein, mg/L | 30.1 (23.4) | 55.7 (74.6) | 22.2 (7.
Table 5: Performance evaluation of the best performing models on test sets A & B, which were selected based on the average AUROC performance on the validation sets, as shown in Supplementary Section C. Model type indicates the type of the base learners within the final selected ensemble. All the metrics were computed using bootstrapping with 1,000 iterations [25].

Complication | Result | Test Set A | Test Set B
Elevated troponin | Model type | LGBM | LGBM
Elevated troponin | AUROC | 0.843 (0.720, 0.945) | 0.913 (0.788, 0.994)
Elevated troponin | AUPRC | 0.226 (0.106, 0.499) | 0.674 (0.405, 0.898)
Elevated troponin | Calibration Slope | 0.661 (0.134, 1.264) | 1.029 (0.142, 2.536)
Elevated troponin | Calibration Intercept | -0.032 (-0.101, 0.029) | 0.060 (-0.027, 0.257)
Elevated d-dimer | Model type | LR | LGBM
Elevated d-dimer | AUROC | 0.717 (0.618, 0.816) | 0.714 (0.612, 0.810)
Elevated d-dimer | AUPRC | 0.315 (0.167, 0.494) | 0.235 (0.118, 0.397)
Elevated d-dimer | Calibration Slope | 1.592 (0.460, 1.841) | 0.338 (-0.33, 1.481)
Elevated d-dimer | Calibration Intercept | -0.187 (-0.241, 0.023) | 0.071 (-0.102, 0.206)
Elevated aminotransferases | Model type | LGBM | LGBM
Elevated aminotransferases | AUROC | 0.801 (0.741, 0.858) | 0.808 (0.699, 0.894)
Elevated aminotransferases | AUPRC | 0.261 (0.176, 0.391) | 0.396 (0.229, 0.604)
Elevated aminotransferases | Calibration Slope | -0.145 (-0.193, 0.159) | 0.628 (0.172, 1.135)
Elevated aminotransferases | Calibration Intercept | 0.205 (0.110, 0.254) | 0.042 (-0.110, 0.186)
Elevated interleukin-6 | Model type | LGBM | LGBM
Elevated interleukin-6 | AUROC | 0.820 (0.760, 0.872) | 0.899 (0.810, 0.971)
Elevated interleukin-6 | AUPRC | 0.514 (0.403, 0.635) | 0.776 (0.623, 0.900)
Elevated interleukin-6 | Calibration Slope | 0.777 (0.540, 0.980) | 1.120 (0.879, 1.299)
Elevated interleukin-6 | Calibration Intercept | 0.018 (-0.046, 0.094) | -0.094 (-0.193, 0.034)
SBI | Model type | LGBM | LR
SBI | AUROC | 0.862 (0.802, 0.920) | 0.843 (0.721, 0.960)
SBI | AUPRC | 0.486 (0.339, 0.645) | 0.612 (0.384, 0.847)
SBI | Calibration Slope | 0.977 (0.566, 1.298) | 1.583 (0.846, 1.865)
SBI | Calibration Intercept | -0.021 (-0.126, 0.095) | -0.075 (-0.155, 0.051)
AKI | Model type | LR | LR
AKI | AUROC | 0.905 (0.861, 0.946) | 0.958 (0.913, 0.994)
AKI | AUPRC | 0.377 (0.238, 0.574) | 0.637 (0.344, 0.901)
AKI | Calibration Slope | 0.683 (0.313, 1.032) | 1.127 (-0.022, 1.808)
AKI | Calibration Intercept | -0.030 (-0.129, 0.079) | 0.076 (-0.106, 0.216)
ARDS | Model type | LGBM | LGBM
ARDS | AUROC | 0.864 (0.809, 0.910) | 0.808 (0.621, 0.972)
ARDS | AUPRC | 0.340 (0.202, 0.496) | 0.570 (0.273, 0.842)
ARDS | Calibration Slope | 1.346 (1.082, 1.419) | 2.023 (0.486, 2.
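For reference, the bootstrapped evaluation described in the caption can be sketched as below: each of the 1,000 iterations resamples the test set with replacement and recomputes AUROC, AUPRC, and the calibration slope and intercept (obtained here, as is common, by regressing the outcome on the log-odds of the predicted risk). The exact resampling and calibration procedure used in the study may differ, so treat this as an illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score, roc_auc_score

def calibration_slope_intercept(y_true, p_pred, eps=1e-6):
    # Logistic recalibration: regress the outcome on the log-odds of the prediction.
    p = np.clip(p_pred, eps, 1 - eps)
    logit = np.log(p / (1 - p)).reshape(-1, 1)
    lr = LogisticRegression(C=1e6).fit(logit, y_true)  # large C: effectively no penalty
    return lr.coef_[0][0], lr.intercept_[0]

def bootstrap_metrics(y_true, p_pred, n_iter=1000, seed=0):
    rng = np.random.default_rng(seed)
    y_true, p_pred = np.asarray(y_true), np.asarray(p_pred)
    rows = []
    for _ in range(n_iter):
        idx = rng.integers(0, len(y_true), len(y_true))
        if len(np.unique(y_true[idx])) < 2:
            continue  # skip resamples containing a single class
        slope, intercept = calibration_slope_intercept(y_true[idx], p_pred[idx])
        rows.append((roc_auc_score(y_true[idx], p_pred[idx]),
                     average_precision_score(y_true[idx], p_pred[idx]),
                     slope, intercept))
    rows = np.array(rows)
    # Point estimate plus a 95% percentile interval for each metric.
    return {name: (rows[:, i].mean(), *np.percentile(rows[:, i], [2.5, 97.5]))
            for i, name in enumerate(["AUROC", "AUPRC", "cal_slope", "cal_intercept"])}
```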
[Figure 1 (flow diagram): starting from all 3,493 patients, the imaging, oxygenation, origin, and timing criteria were applied in sequence, yielding 243 ARDS patients (123 identified within 24 hours of admission and 120 after 24 hours); the remaining 3,250 patients were labeled as not having ARDS.]

Lexicon used to identify bilateral opacity and ARDS in radiology reports; a report is flagged when a bilaterality term and an opacity term are both present, or when an ARDS term is present:
Bilaterality terms: bilateral, biapical, bibasilar, bibasal, widespread, diffuse, perihilar, parahilar, multifocal, extensive, symmetrical, both lung, "left" and "right".
Opacity terms: opacity, opacities, opacification, shadowing, infiltrate, infiltration, consolidate, consolidation, pneumonia, aspiration, groundglass, ground glass, reticular, cyst.
ARDS terms: ARDS, acute respiratory distress syndrome.
Table C: Performance comparison for the different ensembles on the validation sets. Best performance is shown in bold.

Complication | Model | Validation Set A AUROC | Validation Set A AUPRC | Validation Set B AUROC | Validation Set B AUPRC
Elevated troponin | LR | 0.908 | 0.409 | 0.977 | 0.557
Elevated troponin | KNN | 0.839 | 0.298 | 0.882 | 0.327
Elevated troponin | SVM | 0.829 | 0.175 | 0.956 | 0.243
Elevated troponin | MLP | 0.816 | 0.201 | 0.937 | 0.291
Elevated troponin | LGBM | 0.919 | 0.430 | 0.990 | 0.686
Elevated d-dimer | LR | 0.742 | 0.291 | 0.724 | 0.329
Elevated d-dimer | KNN | 0.704 | 0.220 | 0.652 | 0.163
Elevated d-dimer | SVM | 0.658 | 0.198 | 0.691 | 0.217
Elevated d-dimer | MLP | 0.700 | 0.239 | 0.669 | 0.184
Elevated d-dimer | LGBM | 0.741 | 0.247 | 0.753 | 0.327
Elevated aminotransferases | LR | 0.841 | 0.361 | 0.850 | 0.597
Elevated aminotransferases | KNN | 0.792 | 0.236 | 0.802 | 0.483
Elevated aminotransferases | SVM | 0.756 | 0.237 | 0.831 | 0.268
Elevated aminotransferases | MLP | 0.754 | 0.208 | 0.842 | 0.525
Elevated aminotransferases | LGBM | 0.849 | 0.375 | 0.884 | 0.5786
Elevated interleukin-6 | LR | 0.874 | 0.550 | 0.905 | 0.602
Elevated interleukin-6 | KNN | 0.846 | 0.445 | 0.869 | 0.574
Elevated interleukin-6 | SVM | 0.823 | 0.380 | 0.905 | 0.575
Elevated interleukin-6 | MLP | 0.853 | 0.467 | 0.881 | 0.587
Elevated interleukin-6 | LGBM | 0.907 | 0.649 | 0.9464 | 0.690
SBI | LR | 0.873 | 0.360 | 0.945 | 0.625
SBI | KNN | 0.783 | 0.291 | 0.918 | 0.464
SBI | SVM | 0.825 | 0.226 | 0.911 | 0.490
SBI | MLP | 0.802 | 0.274 | 0.935 | 0.462
SBI | LGBM | 0.904 | 0.501 | 0.931 | 0.417
AKI | LR | 0.907 | 0.459 | 0.829 | 0.225
AKI | KNN | 0.809 | 0.240 | 0.740 | 0.319
AKI | SVM | 0.818 | 0.235 | 0.807 | 0.262
AKI | MLP | 0.775 | 0.218 | 0.809 | 0.283
AKI | LGBM | 0.889 | 0.449 | 0.828 | 0.225
ARDS | LR | 0.891 | 0.310 | 0.950 | 0.488
ARDS | KNN | 0.835 | 0.178 | 0.901 | 0.364
ARDS | SVM | 0.825 | 0.220 | 0.922 | 0.326
ARDS | MLP | 0.770 | 0.143 | 0.931 | 0.415
ARDS | LGBM | 0.911 | 0.360 | 0.960 | 0.654
Acknowledgments
We would like to thank NYU Abu Dhabi for the generous funding. We would also like to thank Waqqas Zia and Benoit Marchand from the Dalma team at New York University Abu Dhabi for supporting data management and access to computational resources. This study was supported through the data resources and staff expertise provided by Abu Dhabi Health Services.
References
[1] E. Dong, H. Du, and L. Gardner. An interactive web-based dashboard to track COVID-19 in real time. The Lancet Infectious Diseases, 20(5):533-534, 2020.
[2] L. Wynants, B. Van Calster, G. S. Collins, R. D. Riley, G. Heinze, E. Schuit, M. M. J. Bonten, D. L. Dahly, J. A. A. Damen, T. P. A. Debray, et al. Prediction models for diagnosis and prognosis of COVID-19: systematic review and critical appraisal. BMJ, 369, 2020.
[3] J. C. Marshall, S. Murthy, J. Diaz, N. Adhikari, D. C. Angus, Y. M. Arabi, K. Baillie, M. Bauer, S. Berry, B. Blackwood, et al. A minimal common outcome measure set for COVID-19 clinical research. The Lancet Infectious Diseases, 2020.
[4] F. Zhou, T. Yu, R. Du, G. Fan, Y. Liu, Z. Liu, J. Xiang, Y. Wang, B. Song, X. Gu, L. Guan, Y. Wei, H. Li, X. Wu, J. Xu, S. Tu, Y. Zhang, H. Chen, and B. Cao. Clinical course and risk factors for mortality of adult inpatients with COVID-19 in Wuhan, China: a retrospective cohort study. The Lancet, 395(10229):1054-1062, 2020.
[5] J. S. Berger, D. Kunichoff, S. Adhikari, T. Ahuja, N. Amoroso, Y. Aphinyanaphongs, M. Cao, R. Goldenberg, A. Hindenburg, J. Horowitz, et al. Prevalence and outcomes of d-dimer elevation in hospitalized patients with COVID-19. Arteriosclerosis, Thrombosis, and Vascular Biology, 40(10):2539-2547, 2020.
[6] J. S. Hirsch, J. H. Ng, D. W. Ross, P. Sharma, H. H. Shah, R. L. Barnett, A. D. Hazzan, S. Fishbane, K. D. Jhaveri, M. Abate, et al. Acute kidney injury in patients hospitalized with COVID-19. Kidney International, 98(1):209-218, 2020.
[7] C. M. Lombardi, V. Carubelli, A. Iorio, R. M. Inciardi, A. Bellasi, C. Canale, R. Camporotondo, F. Catagnano, L. A. Dalla Vecchia, S. Giovinazzo, et al. Association of troponin levels with mortality in Italian patients hospitalized with coronavirus disease 2019: results of a multicenter study. JAMA Cardiology, 2020.
[8] B. Li, J. Yang, F. Zhao, L. Zhi, X. Wang, L. Liu, Z. Bi, and Y. Zhao. Prevalence and impact of cardiovascular metabolic diseases on COVID-19 in China. Clinical Research in Cardiology, 109(5):531-538, 2020.
[9] Y. Yao, J. Cao, Q. Wang, Q. Shi, K. Liu, Z. Luo, X. Chen, S. Chen, K. Yu, Z. Huang, et al. D-dimer as a biomarker for disease severity and mortality in COVID-19 patients: a case control study. Journal of Intensive Care, 8(1):1-11, 2020.
[10] Q. Wang, H. Zhao, L.-G. Liu, Y.-B. Wang, T. Zhang, M.-H. Li, Y.-L. Xu, G.-J. Gao, H.-F. Xiong, Y. Fan, et al. Pattern of liver injury in adult patients with COVID-19: a retrospective analysis of 105 patients. Military Medical Research, 7(1):1-8, 2020.
[11] S. Richardson, J. S. Hirsch, M. Narasimhan, J. M. Crawford, T. McGinn, K. W. Davidson, and the Northwell COVID-19 Research Consortium. Presenting characteristics, comorbidities, and outcomes among 5700 patients hospitalized with COVID-19 in the New York City area. JAMA, 323(20):2052-2059, 2020.
[12] Z. Liu, J. Li, D. Chen, R. Gao, W. Zeng, S. Chen, Y. Huang, J. Huang, W. Long, M. Li, L. Guo, X. Wang, and X. Wu. Dynamic interleukin-6 level changes as a prognostic indicator in patients with COVID-19. Frontiers in Pharmacology, 11:1093, 2020.
[13] P. Goyal, J. J. Choi, L. C. Pinheiro, E. J. Schenck, R. Chen, A. Jabri, M. J. Satlin, T. R. Campion, M. Nahid, J. B. Ringel, K. L. Hoffman, M. N. Alshak, H. A. Li, G. T. Wehmeyer, M. Rajan, E. Reshetnyak, N. Hupert, E. M. Horn, F. J. Martinez, R. M. Gulick, and M. M. Safford. Clinical characteristics of COVID-19 in New York City. New England Journal of Medicine, 382(24):2372-2374, 2020.
[14] K. S. Hong, K. H. Lee, J. H. Chung, K.-C. Shin, E. Y. Choi, H. J. Jin, J. G. Jang, W. Lee, and J. H. Ahn. Clinical features and outcomes of 98 patients hospitalized with SARS-CoV-2 infection in Daegu, South Korea: a brief descriptive study. Yonsei Medical Journal, 61(5):431, 2020.
[15] W.-J. Guan, Z.-Y. Ni, Y. Hu, W.-H. Liang, C.-Q. Ou, J.-X. He, L. Liu, H. Shan, C.-L. Lei, D. S. C. Hui, et al. Clinical characteristics of coronavirus disease 2019 in China. New England Journal of Medicine, 382(18):1708-1720, 2020.
[16] G. S. Collins, J. B. Reitsma, D. G. Altman, and K. G. M. Moons. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): the TRIPOD statement. Circulation, 131(2):211-219, 2015.
[17] A. Khwaja. KDIGO clinical practice guidelines for acute kidney injury. Nephron Clinical Practice, 120(4):c179-c184, 2012.
[18] The ARDS Definition Task Force. Acute respiratory distress syndrome: the Berlin definition. JAMA, 307(23):2526-2533, 2012.
[19] Z. Zhelev, C. Hyde, E. Youngman, M. Rogers, S. Fleming, T. Slade, H. Coelho, T. Jones-Hughes, and V. Nikolaou. Diagnostic accuracy of single baseline measurement of Elecsys troponin T high-sensitive assay for diagnosis of acute myocardial infarction in emergency department: systematic review and meta-analysis. BMJ, 350:h15, 2015.
[20] I. Huang, R. Pranata, M. A. Lim, A. Oehadian, and B. Alisjahbana. C-reactive protein, procalcitonin, d-dimer, and ferritin in severe coronavirus disease-2019: a meta-analysis. Therapeutic Advances in Respiratory Disease, 14:1753466620937175, 2020.
[21] J. Bergstra and Y. Bengio. Random search for hyper-parameter optimization. The Journal of Machine Learning Research, 13(1):281-305, 2012.
[22] B. Van Calster, D. Nieboer, Y. Vergouwe, B. De Cock, M. J. Pencina, and E. W. Steyerberg. A calibration hierarchy for risk models was defined: from utopia to empirical data. Journal of Clinical Epidemiology, 74:167-176, 2016.
[23] S. M. Lundberg, G. Erion, H. Chen, A. DeGrave, J. M. Prutkin, B. Nair, R. Katz, J. Himmelfarb, N. Bansal, and S.-I. Lee. From local explanations to global understanding with explainable AI for trees. Nature Machine Intelligence, 2(1):2522-5839, 2020.
[24] G. Ke, Q. Meng, T. Finley, T. Wang, W. Chen, W. Ma, Q. Ye, and T.-Y. Liu. LightGBM: a highly efficient gradient boosting decision tree. In Advances in Neural Information Processing Systems 30, pages 3146-3154. Curran Associates, Inc., 2017.
[25] T. J. DiCiccio and B. Efron. Bootstrap confidence intervals. Statistical Science, pages 189-212, 1996.
[26] J. Ran, Y. Song, Z. Zhuang, L. Han, S. Zhao, P. Cao, Y. Geng, L. Xu, J. Qin, D. He, et al. Blood pressure control and adverse outcomes of COVID-19 infection in patients with concomitant hypertension in Wuhan, China. Hypertension Research, pages 1-10, 2020.
[27] V. Herasevich, M. Yilmaz, H. Khan, R. D. Hubmayr, and O. Gajic. Validation of an electronic surveillance system for acute lung injury. Intensive Care Medicine, 35(6):1018-1023, 2009.
[28] H. C. Azzam, S. S. Khalsa, R. Urbani, C. V. Shah, J. D. Christie, P. N. Lanken, and B. D. Fuchs. Validation study of an automated electronic acute lung injury screening tool. Journal of the American Medical Informatics Association, 16(4):503-508, 2009.
TRIVIALIZING NUMBER OF POSITIVE KNOTS

Kazuhiko Inoue

In this paper, we give the trivializing number of all minimal diagrams of positive 2-bridge knots and study the relation between the trivializing number and the unknotting number for a part of these knots.

Date: September 9, 2017. (arXiv preprint, 6 Dec 2015: https://arxiv.org/pdf/1512.01793v1.pdf)
Introduction
The trivializing number is a numerical invariant of knots which, like the unknotting number, measures a certain complexity of a knot. In general, it is known that the trivializing number is greater than or equal to twice the unknotting number. Furthermore, Hanaki has conjectured that the trivializing number of a positive knot is exactly twice its unknotting number. Indeed, for any positive knot with up to 10 crossings, the equality tr(K) = 2u(K) holds, where tr(K) is the trivializing number of a knot and u(K) is the unknotting number of the same knot (see [1]), and our result gives a partial positive answer to this conjecture. This paper is organized as follows: In Section 2, we define the trivializing number of a diagram and the trivializing number of a knot. In Section 3, we briefly review positive knots and 2-bridge knots. In Section 4, we determine the standard diagrams of positive 2-bridge knots, and in Section 5, we determine the trivializing number of minimal diagrams of positive 2-bridge knots. In Section 6, we show that for some positive 2-bridge knots, the equation tr(K) = 2u(K) holds. In Section 7, we introduce positive pretzel knots, and show that the equation above also holds for some of them.
Preliminaries
We work in the PL category. Throughout this paper, all knots are oriented. A projection of a knot K in R 3 is a regular projection image of K in R 2 ∪ {∞} = S 2 . A diagram of K is a projection endowed with over/under information for its double points. A crossing is a double point with over/under information, and a precrossing is a double point without over/under information. A pseudo-diagram of K is a projection of K whose double points are either crossings or pre-crossings. See Figure 1.
A pseudo-diagram is said to be trivial if we always get a diagram of a trivial knot after giving arbitrary over/under information to all the pre-crossings. An example is given in Figure 2. It is known that we can change every projection into a trivial pseudo-diagram by giving appropriate over/under information to some of the pre-crossings. For example in the case as shown in Figure 3, we can get a trivial pseudo-diagram by transforming two pre-crossings 1 and 2 of P into crossings; however, it can be easily checked that we cannot get a trivial pseudo-diagram by transforming only one pre-crossing of P into a crossing. Therefore, we have tr(D) = tr(P ) = 2.
Positive 2-bridge knots
Generally speaking, the trivializing number of a knot is not always realized by its minimal diagram (a diagram that has the minimal number of crossings); in fact we have counter examples (see [1]). Moreover, even for a given diagram, determining its trivializing number is not so easy in general. In Section 5, we give the trivializing numbers of all minimal diagrams of positive 2-bridge knots.
Let D be an oriented diagram of a knot. To each of its crossings, we associate sign + or − as shown in Figure 4(1). If all the crossings in D have the same sign + (resp. −), then we say that D is a positive diagram (resp. negative diagram). When D is a positive diagram, the mirror image of D, which is obtained by changing the over/under information of all crossings of D and is denoted by D * , is a negative diagram. Since D and D * correspond to the same projection, we have tr(D) = tr(D * ). A positive knot is a knot which has a positive diagram.
For a finite sequence a 1 , a 2 , . . . , a m of integers, let us consider the knot (or link) diagram D(a 1 , a 2 , . . . , a m ) as shown in Figure 5. In the figure, a rectangle in the upper row (resp. lower row), depicted by double lines (resp. simple lines), with integer a represents a left-hand (resp. right-hand) horizontal half-twists if a ≥ 0, and |a| right-hand (resp. left-hand) horizontal half-twists if a < 0. See Figure 6 for some explicit examples. We say a rectangle in the upper row (resp. lower row), an upper rectangle (resp. a lower rectangle) for short. A knot which is represented by such a diagram is called a 2-bridge knot. If a i > 0 for all i with 1 ≤ i ≤ m or if a i < 0 for all i, then the diagram D(a 1 , a 2 , . . . , a m ) is reduced and alternating, and hence is a minimal diagram (see [2]). We call such a diagram a standard diagram of the knot.
It is known that every 2-bridge knot has a unique standard diagram (see, for example, [2]). Therefore, a positive (resp. negative) 2-bridge knot is a positive (resp. negative) alternating knot. A positive alternating knot may not necessarily have a diagram which is both positive and alternating in general. However, Nakamura has shown the following. By the theorem abobe, the standard diagram of a positive 2-bridge knot is necessarily positive.
In order to study the trivializing number of the standard diagram D of a positive or negative 2-bridge knot, by taking the mirror image, we may assume a i > 0 for all i. Note that a positive knot may turn into a negative one by this operation.
standard diagrams of positive 2-bridge knots
In this section, we determine the standard diagrams of positive 2-bridge knots. (1) When m is even, say m = 2n, we have either (a1) a 2i is even for 1 ≤ i ≤ n − 1, (a2) a 2n is odd, and (a3) n i=1 a 2i−1 is even, or (b1) a 1 is odd, (b2) a 2i−1 is even for 2 ≤ i ≤ n, and (b3) n i=1 a 2i is even.
(2) When m is odd, say m = 2n + 1, we have either (a1) a 2i−1 is even for 2 ≤ i ≤ n, (a2) a 1 and a 2n+1 are odd, and (a3) n i=1 a 2i is odd, or (b1) a 2i is even for 1 ≤ i ≤ n, and (b2) n+1 i=1 a 2i−1 is odd. Let us consider a rectangle with integer a i > 0, as in Figure 5, which corresponds to a i left-hand (resp. right-hand) half-twists if it is in the upper (resp. lower) row. In the following, such a rectangle will sometimes be denoted by (a i ). If its crossings all have the same sign +, then the orientation of the two arcs are of a form as in Figure 7. If the crossings all have sign −, then they are of a form as in Figure 8. Furthermore, we adopt the symbolic convention as depicted in Figure 9. (1) We may assume that the orientation of the diagram is as depicted in Figure 10, since it is a diagram of a knot.
When the crossings in (a i ) with 1 ≤ i ≤ 2n all have the same sign +, the orientations of the two arcs of (a 2n ) are as shown in Figure 11 1 or 2 . Then the orientation of the diagram is as shown in Figure 11 4 . Since the orientations of the arcs of (a 2n−1 ) must be as shown in Figure 11 3 , the orientations of the arcs of (a 2n ) must be as in Figure 11 1 . In particular, a 2n is necessarily odd. Due to the orientation of (a 2n−1 ), we see that the orientation of the other a 2i−1 , 1 ≤ i ≤ n are of the form as in Figure 11 3 . Furthermore, by chasing the oriented arcs, we can determine the orientations of all arcs in the remaining rectangles. Hence, the oriented diagram is as depicted in Figure 12.
Then, we see that a 2i , 1 ≤ i ≤ n − 1, are all even. Since this is a knot diagram, the oriented strand passing through 1 and then 2 needs to pass through 3 . On the other hand, when the signs of crossings in (a i ) are all −, by turning the diagram as shown in Figure 13(1) on the plane by 180 degrees, and by reversing orientaitions of all arcs, we can get the mirror image of the diagram with the sign + as shown in Figure 13(2). Consequently, we know that the conditions (b1), (b2) and (b3) are satisfied. (2)When all crossings in (a i ) with 1 ≤ i ≤ 2n + 1 have the same sign +, we can consider in a similar way to the case where m is even. We may assume that the orientation of the diagram is as shown in Figure 14.
Since the orientations of the two arcs of (a 2n+1 ) are as shown in Figure 15 1 or 2 , the orientation of the diagram is like as Figure 15 4 . Furthermore, the orientation of the arcs of (a 2n ) is as shown Figure 15 3 , the orientation of the arcs of (a 2n+1 ) must be as shown in Figure 15 1 . In particular, a 2n+1 is necessarily odd. Due to the orientation of (a 2n ), we see that the orientations other (a 2i ), 1 ≤ i ≤ n − 1 are of the form as in Figure 15 3 . Hence, the oriented diagram is as depicted in Figure 16. This shows that the conditions (a1), (a2) and (a3) are satisfied. If the signs of the crossings in (a i ) are all negative, then the orientation of a 2n+1 is as shown in Figure 17 1 . Therefore, the orientations of the other a 2i+1 1geqi ≥ n−1 must be of the same form as shown in Figure 17 2 . Thus the diagram is naturally as shown in Figure 17 3 , and we can easily see that the conditions (b1) and (b2) are satisfied. This completes the proof.
Beside, we classify these diagrams into four types, that is type of 1a, type of 1b, type of 2a, and type of 2b. Remark that these four types correspond not only to the proposition above but also to Main Theorem.
Main Theorem
For determining the trivializing number of a diagram, we can make use of the chord diagram. Let P be a projection with n pre-crossings. A chord diagram of P , denoted by CD P , is a circle with n chords marked on it by dotted line segment where the preimage of each pre-crossing is connected by a chord (see [1]). We provide an example of the chord diagram as shown in Figure 18.
A chord diagram is said to be parallel if all the chords in it have no intersection. For example, the rightmost chord diagram in Figure 18 is made of the chords which correspond to the crossings 1, 2, 3, and is parallel. In the situation above, next theorem holds.
Theorem 5.1 (Hanaki [1]). For a chord diagram, the following holds.
• The number of chords which are taken away from a chord diagram is even.
• tr(D) = min {the number of chords which must be taken away from a chord diagram in order to get a parallel chord diagram}
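To make this characterization concrete, the following brute-force sketch computes the quantity in the theorem directly from a chord diagram: chords are given by their two endpoint positions on the circle, two chords cross exactly when their endpoints interleave, and we search for the largest pairwise non-crossing subset (equivalently, the fewest chords to delete). This is purely illustrative, uses our own data representation, and is exponential in the number of chords.

```python
from itertools import combinations

def chords_cross(c1, c2):
    # Two chords cross iff their endpoints interleave around the circle.
    a, b = sorted(c1)
    c, d = sorted(c2)
    return (a < c < b < d) or (c < a < d < b)

def trivializing_number(chords):
    """chords: list of (i, j) endpoint pairs on a circle labeled 0..2n-1."""
    n = len(chords)
    for keep in range(n, -1, -1):  # try to keep as many chords as possible
        for subset in combinations(range(n), keep):
            if all(not chords_cross(chords[i], chords[j])
                   for i, j in combinations(subset, 2)):
                return n - keep  # number of chords that must be deleted
    return n
```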
We now consider the sub-chord diagram corresponding to each (a_i).
Lemma 5.2. Let SC ai be the sub-chord diagram corresponding to the rectangle (a i ).
(1) If two arcs of (a i ) enter from the same side, (left or right), as shown in Figure 19 1 and 2 , then any two chords in SC ai certainly cross each other (i.e. any two chards have an intersection) as shown in Figure 19 7 .
(2) If one of two arcs enters from the left-hand side of (a i ) and the other enters from the right-hand side as shown in Figure19 3 , 4 , 5 and 6 , then there are no intersections in SC ai as shown in Figure 19 8 . That is to say SC ai is parallel. Figure 19. The orientations of two arcs in (a i ) and the sub-chord diagram corresponding to (a i )
Proof. First we name the crossings in (a i ), 1, 2, . . . , k from left to right.
(1) If an arc enters from the left side, it passes the crossings 1, 2, . . . , k in order. Since the arc enters from the left side again, it passes the crossings in the same order. Therefore, the sub chord diagram corresponding to (a i ) is as shown in Figure 20(1 ). (2) On the otherhand, when the arcs enter from the different sides of (a i ), the sub-chord diagram corresponding to (a i ) is naturally as shown in Figure 20( 2 ). From the lemma above, we can consider a sub-chord diagram of a rectangle as one chord. That is to say, we can gather all chords in the sub-chord diagram corresponding to (a i ) into one chord denoted by a i . Furthermore, in the case of (1) in Lemma 5.2, we name this chord I-chord then represent it by a dotted line, while in the case of (2) we name this chord P-chord and represent it by a solid line.
Moreover, we determine the chord diagram CD P corresponding to the diagram D. When D is of type 1a, by thinking over the orientation of each rectangle, we can see that any a 2i for 1 ≤ i ≤ n is a P-chord, and any a 2i−1 for 1 ≤ i ≤ n is an I-chord. If every a 2i−1 , 1 ≤ i ≤ n, is even, then we obtain the diagram as shown in Figure21(1), and also obtain the chord diagram as shown in Figure21 (2). (For convenience, we represent a chord diagram not by a circle but by a quadrangle.) Figure 21. An oriented diagram D of type 1a with every a 2i−1 even, and the chord diagram corresponding to D Otherwise, the way to round a diagram depends on whether a 2i−1 is odd or even. So we rename the lower rectangles which consist of odd number half-twists (the number of these tangles is even), (b 1 ), (b 2 ), . . . , (b 2r ) from left to right in the diagram. Moreover we also rename all upper rectangles as following:
• the upper rectangles on the left-hand side of (b 1 ); (c 1 0 ), (c 2 0 ), . . . , (c q0 0 )
• the upper rectangles between (b j ) and (b j+1 ); (c 1 j ), (c 2 j ), . . . , (c qj j )
• the upper rectangles on the right-hand side of (b 2r ); (c 1 2r ), (c 2 2r ), . . . , (c q2r 2r ) Then we can obtain the sub chord diagram corresponding to the rectangles between (b j ) and (b j+1 ) as shown in Figure22, where c k j (1 ≤ k ≤ q j ) consists of some parallel chords which correspond to the crossings in rectangle (c k j ). Furthermore, any two P-chords in c 1 j , c 2 j , . . . , c qj j does not cross each other, so we can bundle them again into one P-chord. Now we represent them by a solid line. In other words, we consider that c j = c 1 j + c 2 j + . . . + c qj j . Since the chords which correspond to the lower tangles between (b j ) and (b j+1 ) are all I-chords, by Theorem 5.1 the number of chords which we can leave is at most only one. Besides, the I-chord which does not cross c j is only b j as shown in Figure 22(1) or b j+1 as shown in Figure 22(2). So we only need to consider the chord diagram in which all I-chords between b j and b j+1 were already delated as shown in Figure 23.
In the case of the diagram of type 1b, we can consider in a similar fashion to type 1a. When every a 2i is even, the diagram is as shown in Figure 24(1), and the chord diagram is as shown in Figure 24(2). Otherwise, the sub chord diagram between (b j ) and (b j+1 ) is also as shown in Figure 22, and we can get the chord diagram as shown in Figure 25. On the other hand, in the case of the diagrams of type 2a and type 2b, when j is an even number, the sub chord diagram is as shown in Figure 22(1), and when j is an odd number, the sub chord diagram is as shown in Figure 22(2). Thus we can get the chord diagrams as shown in Figure 26 and Figure 27.
In the situation above, we can obtain the theorem below.
tr(D) = min n i=1 a 2i−1 + r j=1 c 2j−1 n i=1 a 2i−1 + p j=1 c 2j−1 + r j=p+1 c 2j − 1 (2) When D is of type 1b.
(a) If every a 2i is even, then tr(
D) = n i=1 a 2i . (b) Otherwise, tr(D) = min n i=1 a 2i + s j=1 c 2j−1 n i=1 a 2i + p j=0 c 2j + s j=p+2 c 2j−1 − 1 (3) When D is of type 2a. tr(D) = min n i=1 a 2i + t j=0 c 2j+1 n i=1 a 2i + p j=0 c 2j + t j=p+1 c 2j+1 (4) When D is of type 2b. tr(D) = min{ n i=1 a 2i+1 + p j=0 c 2j+1 + u j=p+2 c 2j − 1} Proof 5.4.
(1) When D is of type 1a. (This means that c 2r crosses all chords corresponding to the crossings in lower rectangles). So if we leave c 2r then we must delete all these chords which cross c 2r . That is to say the number of chords we must delete is n i=1 a 2i−1 + r j=1 c 2j−1 . See Figure 28(1). When we delete c 2r , we can leave all chords in P-chord c 2r−1 and only one chord in I-chord b 2r−1 . So the number of chords we need to delete is n i=1 a 2i−1 + r−1 j=1 c 2j−1 + c 2r − 1. See Figure 28(2). Figure 28. The operation of deleting some P-chords
Next we attempt to delete the P-chords which correspond to c 2j (1 ≤ j ≤ r) step by step in the way as following: {c 2r } → {c 2r , c 2r−2 } → {c 2r , c 2r−2 , c 2r−4 } → · · · → {c 2r , c 2r−2 , · · · , c 2 }. By these operations we can also get a trivial chord diagram even if we leave the P-chords which correspond to c 2j−1 (1 ≤ j ≤ r) step by step in the way as following:
{c 2r−1 } → {c 2r−1 , c 2r−3 } → {c 2r−1 , c 2r−3 , c 2r−5 } → · · · → {c 2r−1 , c 2r−3 · · · c 1 }.
There is an one-to-one correlation between these two operations. Consequently, the minimum of these numbers is the trivializing number of the diagram, and the following holds.
tr(D) = min n i=1 a 2i−1 + r j=1 c 2j−1 n i=1 a 2i−1 + p j=1 c 2j−1 + r j=p+1 c 2j − 1 (2) When D is of type 1b. (a)tr(D) = min n i=1 a 2i + s j=1 c 2j−1 n i=1 a 2i + p j=0 c 2j + s j=p+2 c 2j−1 − 1 (3)
When D is of type 2a. In this case, the chord diagram is as shown in Figure 26, and we see that every I-chord which corresponds to (b j ) (1 ≤ j ≤ 2t+ 1) necessarily crosses two P-chords which correspond to (c 0 ) and (c 2t+1 ). So we can leave none of these I-chords unless we delete both c 0 and c 2t+1 . In addition, we consider the relation of P-chords which correspond to (c j ) (1 ≤ j ≤ 2t + 1). If we leave every c 2j (0 ≤ j ≤ t), then we must delete every c 2j+1 (0 ≤ j ≤ t). Hence the number of chords which we need to delete is the following: n i=1 a 2i + t j=0 c 2j+1 . Furthermore there exists a relation in the P-chords in this chord diagram. That is, if we delete c 0 then we can leave c 1 , if we delete {c 0 , c 2 } then we can leave {c 1 , c 3 }, and so on. Because of this, the following holds. tr(D) = min{ n i=1 a 2i+1 + p j=0 c 2j+1 + u j=p+2 c 2j − 1} We have just completed the proof of Main Theorem.
tr(D) = min n i=1 a 2i + t j=0 c 2j+1 n i=1 a 2i + p j=0 c 2j + t j=p+1 c 2j+1 (4) When D is of type 2b,
The relation between trivializing number and unknotting number
In this section we study the relation between the trivializing number and the unknotting number. The definitions of the unknotting number of a diagram and the unknotting number of a knot are the following:
Definition 6.1. The unknotting number of a diagram D, denoted by u(D), is the minimal number of crossings of D whose over/under information should be changed for getting a diagram of a trivial knot.
Definition 6.2. The unknotting number of a knot K, denoted by u(K), is the minimum of u(D), where the minimum is taken over all diagrams D of K.
There is a relation between the unknotting number and the signature. (About the signature there is a detailed explanation in [2]). The signature is an invariant of knots, and in general the following holds. About the relation between the trivializing number and the unknotting number, it is known that 2u(D) ≤ tr(D) (2u(K) ≤ tr(K)) holds in general. However, particularly for positive knots, there exists a conjecture that 2u(K) = tr(K) ( [1]). And as the partial positive answer of this, we have the next corollary to Theorem 5.3 and Theorem 6.3.
Corollary 6.4. Let K be a positive 2-bridge knot and has a diagram D = C(a 1 , a 2 , . . . , a 2n ) a i > 0 for any i (1 ≤ i ≤ 2n). If a 2i−1 is even for any i (1 ≤ i ≤ n), or a 2i is even for any i (1 ≤ i ≤ n), then 2u(K) = tr(K).
Proof. First we prove the case where any a 2i−1 is an even number. In this case, D is a minimal diagram of K, so by Theorem 3.1, D is an positive and alternating diagram. Besides, by the Proposition 4.1, a 2n must be an odd number and other a 2i (1 ≤ i ≤ n − 1) are necessarily all even numbers. Moreover, the sign of any crossing is +, thereby w(D) = 2n i=1 a i . The checkerboard coloring is like as shown in Figure 30, and we know W = n i=1 a 2i + 1, B = n i=1 a 2i−1 + 1.
Therefore,
$$\sigma(D) = -\frac{1}{2}w(D) + \frac{1}{2}(W - B) = \frac{1}{2}\Big(-\sum_{i=1}^{2n} a_i + \sum_{i=1}^{n} a_{2i} - \sum_{i=1}^{n} a_{2i-1}\Big) = -\sum_{i=1}^{n} a_{2i-1}.$$
Furthermore, by Theorem 6.3, we can see that $\frac{1}{2}|\sigma(D)| = \frac{1}{2}\sum_{i=1}^{n} a_{2i-1} \le u(K) \le u(D)$. In fact, as shown in Figure 31, we can obtain a diagram of a trivial knot by changing some of the crossings which correspond to the lower rectangles, and the number of these crossing changes is $\frac{1}{2}\sum_{i=1}^{n} a_{2i-1}$.
In the case that any a 2i is an even number, we can also gain this equality in a similar fashion. This result is for the special case of positive 2-bridge knots. So whether 2u(D) = tr(D) holds for any minimal diagram of positive 2-bridge knots or not, and whether 2u(K) = tr(K) holds or not, these questions are our theme of the future.
Positive pretzel knot
For positive pretzel knots we can get the following:
Theorem 7.1. Let K be a pretzel knot P (p 1 , p 2 , . . . , p 2n ) p i > 0 for any i (1 ≤ i ≤ 2n), p 2n is even and other p i s are all odd (1 ≤ i ≤ 2n − 1), then the following holds.
$$tr(K) = 2u(K) = \sum_{i=1}^{2n} p_i - 2n + 1.$$
Proof. A diagram D of the knot K is as shown in Figure 32, and we know this diagram is positive and alternating. Besides, the sub-chord diagram which corresponds to each p_i is an I-chord. So the chord diagram of D is as shown in Figure 33.
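As a quick illustration of the formula (our own example, obtained simply by substituting into the statement above), take the pretzel knot P(3, 3, 3, 2), so that 2n = 4, p_1 = p_2 = p_3 = 3 are odd and p_4 = 2 is even:
$$tr(K) = 2u(K) = \sum_{i=1}^{4} p_i - 4 + 1 = (3+3+3+2) - 3 = 8, \qquad u(K) = 4.$$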
Figure 1 .
1Projection, diagram, and pseudo-diagram
Figure 2 .
2Example of a trivial pseudo-diagram Definition 2.1. The trivializing number of a projection P , denoted by tr(P ), is the minimal number of pre-crossings of P which should be transformed into crossings for getting a trivial pseudo-diagram.
Definition 2 . 2 .
22The trivializing number of a diagram D, denoted by tr(D), is by definition the trivializing number of the associated projection which is obtained from D by ignoring the over/under information.
Figure 3 .
3An operation for getting a trivial pseudo-diagram Definition 2.3. The trivializing number of a knot K, denoted by tr(K), is the minimum of tr(D), where the minimum is taken over all diagrams D of K.
Figure 4 .
4Sign of a crossing and an example of a positive diagram
Figure 5 .
52-bridge knot diagrams
Figure 6 .
6Examples of 2-bridge knot diagrams
Theorem 3 . 1 (
31Nakamura[3]). A reduced alternating diagram of a positive alternating knot is positive.
Proposition 4 . 1 .
41Let D = D(a 1 , a 2 , . . . , a m ) be a positive diagram or a negative diagram of a 2-bridge knot such that a i > 0 for all i with 1 ≤ i ≤ m. Then D must be one of the following forms.
Figure 7 .
7The orientations of two arcs with positive crossings Proof of Proposition 4.1.
Figure 8 .
8The orientations of two arcs with negative crossings
Figure 9 .
9The symbolic convention
Figure 10 .
10Orientation of diagram D = D(a 1 , a 2 , . . . , a m ), m is even.
Figure 11 .
11The orientation of arcs of rectangles
Figure 12 .
12Orientations 2i−1 must necessarily be even. This shows that the conditions (a1), (a2) and (a3) are satisfied.
Figure 13 .
13Orientations of arcs in D with all signs −
Figure 14 .Figure 15 .
1415Orientation of diagram D =D(a 1 , a 2 , . . . , a m ), m is odd,
Figure 16 .
16The orientation of a positive diagram D = D(a 1 , a 2 , . . . , a m ), m is odd
Figure 17 .
17The orientation of a negative diagram D= D(a 1 , a 2 , . . . , a m ), m is odd
Figure 18 .
18A chord diagram
Figure 20 .
20The sub-chord diagram corresponding to (a i )
Figure 22 .
22The sub chord diagram between (b i ) and (b j+1 )
Figure 23 .
23The chord diagram corresponding to the diagrams of type 1a.
Figure 24 .Figure 25 .
2425An oriented diagram D of type 1b with every a 2i even, and the chord diagram corresponding to D The chord diagram corresponding to the diagrams of type 1b.
Figure 26 .
26The chord diagram corresponding to the diagrams of type 2a.
Figure 27 .
27The chord diagram corresponding to the diagrams of type 2b.Theorem 5.3. Let D = D(a 1 , a 2 , . . . , a m ) be a positive diagram or a negative diagram of a 2-bridge knot such that a i > 0 for all i with 1 ≤ i ≤ m. Then for the trivializing number of D, the following holds. (1) When D is of type 1a. (a) If every a 2i−1 is even, then tr(D) = n i=1 a 2i−1 . (b) Otherwise,
(a) If every a 2i−1 is even, then we have the chord diagram as shown in Figure21(2). Since any two I-chords in this chord diagram cross each other, we can leave at most only one I-chord when we attempt to gain a trivial chord diagram. Moreover every two chords in any Ichord also cross each other. This means that the number of the chords corresponding to the crossings in lower rectangles which we can leave is at most only one. In addition, the P-chords corresponding to the crossings in upper rectangles are all parallel and any P-chord crosses at least one I-chord. Hence the minimal number of the chords which we must delate in order to get a trivial chord diagram is the number of all the chords which correspond to the crossings in lower rectangles. Therefore, we have the following:tr(D) = n i=1 a 2i−1 (b)Otherwise, the chord diagram as shown inFigure 23, and we can see the P-chord represented by c 2r crosses all P-chords represented by c 2j−1 (1 ≤ j ≤ r) and all I-chords represented by b k (1 ≤ k ≤ r).
If every a 2i is even, then we can consider in a similar fashion to type 1a and can easily see tr(D) = n i=1 a 2i . (b) Otherwise, from the chord diagram as shown in Figure25, we know the P-chord c 2r in Figure23 is replaced by c 0 in Figure 25. In this case, if we delete the P-chords represented by c 2j (0 ≤ j ≤ s) step by step in the way {c 0 } → {c 0 , c 2 } → {c 0 , c 2 , c 4 } . . ., then we can leave {c 1 } → {c 1 , c 3 } → {c 1 , c 3 , c 5 } . . ., by way of compensation. Thus the following holds.
the chord diagram is as shown inFigure 27. In this chord diagram, c 0 and c 2u+1 dose not cross each other. Moreover they dose not cross any other P-chord or I-chord. Therefore, we can leave both c 0 and c 2u+1 . However for I-chords b j (1 ≤ j ≤ 2u + 1), any two of them cross each other, so we can leave at most only one I-chord among {b j }. Thus, if we leave all P-chords corresponding to (c 2k ) (1 ≤ k ≤ u), we must delete all P-chords corresponding to (c 2k−1 ) (1 ≤ k ≤ u). Hence the number of all chords which we must delete isn i=1 a 2i+1 + u−1 j=0 c 2j+1 − 1.Besides, if we orderly delete some P-chords step by step such as {c 2u } → {c 2u , c 2u−2 } → {c 2u , c 2u−2 , c 2u−4 } · · · , we can leave other P-chords such as {c 2u−1 } → {c 2u−1 , c 2u−3 } → {c 2u−1 , c 2u−3 , c 2u−5 } · · · by way of compensation. Finally the following holds.
(K)| ≤ u(K) ≤ u(D) In addition for a alternating diagram, it is known that σ(D) = −w(D)/2 + (W − B)/2 ([4]), where w(D) is the sum of local writhes of all crossings, B is the number of domains colored with a grayish color when we give checkerboard coloring as shown in Figure 29, and W is the number of domains which are not colored. For example, in the case as shown in Figure 29, the number of + crossings is 2, and that of − crossings is 4, then σ(D) = 2 + (−4) = −2, and W = 5, B = 3. Therefore we can get σ(D) = σ(K) = −(−2)/2 + (5 − 3)/2 = 2, and |σ(D)|/2 = 1 ≤ u(K) ≤ u(D).In actually, we can obtain a diagram of a trivial knot with one crossing change, hence u(D) = u(K) = 1.
Figure 29 .
29An example of the checkerboard coloring and local writhes
Figure 30 .
30The checkerboard coloring of a 2-bridge knot diagram in which any a 2i−1 is an even number Therefore, next equality holds. σ(D) = − 1 2 (w(D)) + 1 2
Figure 31 .
31By some crossing changes, we can obtain a trivial knot diagram.
Figure 32 .Figure 33 .
3233The standard diagram D of K The chord diagram of K Then we can easily obtain the trivializing number of D. Namely, tr(D) = 2n i=1 p i − 2n + 1. Furthermore, by the checkerboard coloring as shown inFigure 34, the signature of K is the following:σ(K) = σ(D) = − 1 2 w(D) + 1 2 (W − B) = −( 2n i=1 p i − 2n + 1)By the inequality |σ(K)| ≤ 2u(K) ≤ tr(K) ≤ tr(D), we can conclude tr(K) = 2u(K).This completes the proof.
Figure 34 .
34An example of checkerboard coloring
References
[1] R. Hanaki, Trivializing number of knots, J. Math. Soc. Japan, to appear.
[2] K. Murasugi, Knot theory and its applications, translated by Bohdan Kurpita, (2010).
[3] T. Nakamura, Positive alternating links are positively alternating, J. Knot Theory Ramifications 9 (2000), 107-112.
[4] P. Traczyk, A combinatorial formula for the signature of alternating diagrams, Fund. Math. 184 (2004), 311-316.

Graduate School of Mathematics, Kyushu University, 744 Motooka, Nishi-ku, Fukuoka 819-0395, Japan
|
[] |
[] |
[
"J Ferretti \nInstitute of Theoretical Physics\nCAS Key Laboratory of Theoretical Physics\nChinese Academy of Sciences\n100190BeijingChina\n",
"E Santopinto \nINFN\nSezione di Genova\nVia Dodecaneso 3316146GenovaItaly\n",
"M N Anwar \nInstitute of Theoretical Physics\nCAS Key Laboratory of Theoretical Physics\nChinese Academy of Sciences\n100190BeijingChina\n\nUniversity of Chinese Academy of Sciences\n100049BeijingChina\n",
"M A Bedolla \nINFN\nSezione di Genova\nVia Dodecaneso 3316146GenovaItaly\n\nInstituto de Física y Matemáticas\nUniversidad Michoacana de San Nicolás de Hidalgo\nCiudad UniversitariaC-3, 58040MoreliaEdificio, MichoacánMéxico\n"
] |
[
"Institute of Theoretical Physics\nCAS Key Laboratory of Theoretical Physics\nChinese Academy of Sciences\n100190BeijingChina",
"INFN\nSezione di Genova\nVia Dodecaneso 3316146GenovaItaly",
"Institute of Theoretical Physics\nCAS Key Laboratory of Theoretical Physics\nChinese Academy of Sciences\n100190BeijingChina",
"University of Chinese Academy of Sciences\n100049BeijingChina",
"INFN\nSezione di Genova\nVia Dodecaneso 3316146GenovaItaly",
"Instituto de Física y Matemáticas\nUniversidad Michoacana de San Nicolás de Hidalgo\nCiudad UniversitariaC-3, 58040MoreliaEdificio, MichoacánMéxico"
] |
[] |
The baryo-quarkonium picture for hidden-charm and bottom pentaquarks and LHCb P c (4380) and P c (4450) statesWe study baryo-charmonium [ηc-and J/ψ-N * , ηc(2S)-, ψ(2S)-and χc(1P )-N ] and baryobottomonium [η b (2S)-, Υ(2S)-and χ b (1P )-N ] bound states, where N is the nucleon and N * a nucleon resonance. In the baryo-quarkonium model, the five qqqQQ quarks are arranged in terms of a heavy quarkonium core, QQ, embedded in light baryonic matter, qqq, with q = u or d. The interaction between the QQ core and the light baryon can be written in terms of the QCD multipole expansion. The spectrum of baryo-charmonium states is calculated and the results compared with the existing experimental data. In particular, we can interpret the recently discovered Pc(4380)and Pc(4450) pentaquarks as ψ(2S)-N and χc2(1P )-N bound states, respectively. We observe that in the baryo-bottomonium sector the binding energies are, on average, slightly larger than those of baryo-charmonia. Because of this, the hidden-bottom pentaquarks are more likely to form than their hidden-charm counterparts. We thus suggest the experimentalists to look for five-quark states in the hidden-bottom sector in the 10.4 − 10.9 GeV energy region.
|
10.1016/j.physletb.2018.09.047
|
[
"https://arxiv.org/pdf/1807.01207v3.pdf"
] | 119,426,596 |
1807.01207
|
06a386134ba7cc35242d8e4a8a512a5182c14380
|
6 Jul 2018
J Ferretti
Institute of Theoretical Physics
CAS Key Laboratory of Theoretical Physics
Chinese Academy of Sciences
100190BeijingChina
E Santopinto
INFN
Sezione di Genova
Via Dodecaneso 3316146GenovaItaly
M N Anwar
Institute of Theoretical Physics
CAS Key Laboratory of Theoretical Physics
Chinese Academy of Sciences
100190BeijingChina
University of Chinese Academy of Sciences
100049BeijingChina
M A Bedolla
INFN
Sezione di Genova
Via Dodecaneso 3316146GenovaItaly
Instituto de Física y Matemáticas
Universidad Michoacana de San Nicolás de Hidalgo
Ciudad UniversitariaC-3, 58040MoreliaEdificio, MichoacánMéxico
6 Jul 2018
PACS numbers: 12.39.Mk, 12.40.Yx, 13.75.Lb, 14.40.Rt
The baryo-quarkonium picture for hidden-charm and bottom pentaquarks and LHCb P c (4380) and P c (4450) statesWe study baryo-charmonium [ηc-and J/ψ-N * , ηc(2S)-, ψ(2S)-and χc(1P )-N ] and baryobottomonium [η b (2S)-, Υ(2S)-and χ b (1P )-N ] bound states, where N is the nucleon and N * a nucleon resonance. In the baryo-quarkonium model, the five qqqQQ quarks are arranged in terms of a heavy quarkonium core, QQ, embedded in light baryonic matter, qqq, with q = u or d. The interaction between the QQ core and the light baryon can be written in terms of the QCD multipole expansion. The spectrum of baryo-charmonium states is calculated and the results compared with the existing experimental data. In particular, we can interpret the recently discovered Pc(4380)and Pc(4450) pentaquarks as ψ(2S)-N and χc2(1P )-N bound states, respectively. We observe that in the baryo-bottomonium sector the binding energies are, on average, slightly larger than those of baryo-charmonia. Because of this, the hidden-bottom pentaquarks are more likely to form than their hidden-charm counterparts. We thus suggest the experimentalists to look for five-quark states in the hidden-bottom sector in the 10.4 − 10.9 GeV energy region.
I. INTRODUCTION
Recently, LHCb reported the observation of two new resonances, P_c^+(4380) and P_c^+(4450), in Λ_b → J/ψΛ* and Λ_b → P_c^+ K^− → (J/ψp)K^− decays [1]. Their quark structure is |P_c^+⟩ = |uudcc̄⟩, whence the name pentaquarks. The pentaquarks were introduced in the LHCb analysis of Λ_b decays to improve the fit upon the experimental data, because the use of known Λ* states alone was not sufficient to get a satisfactory description of the J/ψp spectrum [2]. The pentaquark masses, resulting from the LHCb best fit, are M_{P_c^+(4380)} = 4380 ± 8 ± 29 and M_{P_c^+(4450)} = 4449.8 ± 1.7 ± 2.5 MeV, with widths Γ_{P_c^+(4380)} = 205 ± 18 ± 86 and Γ_{P_c^+(4450)} = 39 ± 5 ± 19 MeV. The preferred J^P quantum numbers are reported in Ref. [1]; indeed, all the preferred fits from LHCb require pentaquarks with opposite parities.
From a theoretical point of view, there are a few possible interpretations for a five-quark bound state, including: I) Baryon-meson molecules [3][4][5][6][7][8][9][10][11][12], such as Σ + cD * 0 , the P + c (4450) lying 10 MeV below the Σ + cD * 0 threshold. Other molecular model assignments, like Σ * cD , are also possible; II) Diquark-diquark-antiquark states [13][14][15][16], made up of a charm antiquark,c, a heavy-light diquark, [cq], and a light-light one, [qq], where q = u or d; III) Baryo-charmonium systems [17][18][19], like ψ(2S)-N , bounded by gluon-exchange forces; IV) The result of kinematical or threshold-rescattering effects [20][21][22], like in the case of χ c1 p resonances or anomalous threshold singularities; V) The bound state of open-color configurations [23]; VI) The bound states of a soliton and two pseudoscalar mesons, D andD [24]. For a review, see Refs. [25][26][27][28]. In this work, we discuss the baryocharmonium (baryo-quarkonium) one.
The hypothesis of charmonium-nuclei bound states dates back to the early nineties. At that time, it was shown that QCD van der Waals-type interactions, due to multiple gluon-exchange, may provide a strong enough binding to produce charmonium-nuclei bound states if A 4 [29][30][31][32], where A is the atomic mass number. On studying the charmonium-nucleon systems, the interaction, though attractive [O(10) MeV], is too weak to produce a cc-N bound state. Notwithstanding, it is still unclear if a similar interaction may give rise to cc-qqq bound states if the nucleon is replaced by its radial or orbital excitations, or the charmonium ground-state by its radial excitations. These possibilities are worth to be investigated in the baryo-charmonium picture.
By analogy with four-quark hadro-charmonia [33][34][35][36][37][38][39], namely cc-qq states, the baryo-charmonium is a pentaquark configuration, where a compact cc state, ψ, is embedded in light baryonic matter, B [17][18][19]. The interaction between the two components, ψ and B, takes place via a QCD analog of the van der Waals force of molecular physics. It can be written in terms of the multipole expansion in QCD [40], with the leading term being the E1 interaction with chromo-electric field E a .
In the present manuscript, we use the baryocharmonium model to discuss the possible emergence of η c -and J/ψ-N * , η c (2S)-, ψ(2S)-and χ c (1P )-N bound states, where N is the nucleon and N * a nucleon resonance. The energies of baryo-charmonia are computed by solving the Schrödinger equation for the baryocharmonium potential [33,39]. This is approximated as a finite well whose width and size can be expressed as a function of the N (N * ) radius and the charmonium chromo-electric polarizability, α ψψ . The baryocharmonium masses and quantum numbers are compared with the existing experimental data and some tentative assignments are discussed; in particular, we can interpret the recently discovered P c (4380) and P c (4450) pentaquarks as ψ(2S)-N and χ c2 (1P )-N bound states, respectively.
Furthermore, we extend the previous calculations to the bottom sector and calculate the spectrum of bottomonium-N bound states. Our results are compatible with the emergence of 2S and 1P bottomoniumnucleon bound states, with binding energies of the order of a few hundreds of MeV. We observe that in the baryobottomonium sector the binding energies are, on average, slightly larger than those of baryo-charmonia. Because of this, the hidden-bottom pentaquarks are more likely to form than their hidden-charm counterparts. We thus suggest the experimentalists to look for five-quark states in the hidden-bottom sector in the 10.4−10.9 GeV energy region.
II. BARYO-QUARKONIUM HAMILTONIAN
The baryo-quarkonium is a particular pentaquark configuration, where five quarks are arranged in terms of a compact QQ state embedded in light baryonic matter.
The interaction between the quarkonium core, Q, and the gluonic field inside the light-baryon, B, can be written in terms of the QCD multipole expansion [40,41]. In particular, one considers as leading term the E1 interaction with chromo-electric field E [31,33],
H_eff = −(1/2) α^{ij} E^i · E^j ,   (1)
α^{ij} being the quarkonium chromo-electric polarizability. In order to calculate the baryo-quarkonium masses, one has to compute the expectation value of Eq. (1) on |QB⟩ states. The chromo-electric field matrix elements can be calculated in terms of the QCD energy-momentum tensor, θ^µ_µ ≈ (9/16π²) E² [42]. Its expectation value on a nonrelativistic normalized |B⟩ at rest gives the mass of this state [33],
⟨B| θ^µ_µ (q = 0) |B⟩ ≃ M_B .   (2)
The baryo-quarkonium effective potential, V_bq, describing the coupling between Q and B, can be approximated by a finite well [33,39],
∫_0^{R_B} d³r V_bq ≈ −(8π²/9) α_QQ M_B ,   (3)
where R_B is the radius of B [2,43] and α_QQ the quarkonium diagonal chromo-electric polarizability. Thus, we have:
V_bq(r) = −2π α_QQ M_B / (3 R_B³)  for r < R_B ,   V_bq(r) = 0  for r > R_B .   (4)
The kinetic energy term is
T_bq = k² / (2µ) ,   (5)
where k is the relative momentum (with conjugate coordinate r) between Q and B, and µ the reduced mass of the QB system. Finally, the total baryo-quarkonium Hamiltonian is:
H_bq = M_Q + M_B + V_bq(r) + T_bq .   (6)
The baryo-quarkonium quantum numbers are obtained by combining those of the quarkonium core, Q, and light baryon, B, as
|Φ_bq⟩ = |(L_Q, L_B) L_bq , (S_Q, S_B) S_bq ; (J_bq, ℓ_bq) J^P_tot⟩ ,   (7)
where the baryo-quarkonium parity is P = (−1) ℓ bq P Q P B , and ℓ bq is the relative angular momentum between Q and B. From now on, unless we indicate an explicit value, we will assume ℓ bq = 0.
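To make the eigenvalue problem of Eq. (6) concrete, the following Python sketch solves the s-wave (ℓ_bq = 0) radial Schrödinger equation for the finite-well potential of Eq. (4) with a simple finite-differences discretization. This is our illustration, not the code used in the paper; the numerical inputs (α_QQ, M_B, R_B, the quarkonium mass) are placeholder values chosen only for illustration, and all quantities are in natural units (GeV, with ħc ≈ 0.1973 GeV·fm used to convert fm into GeV⁻¹).

```python
# Sketch (not the authors' code): s-wave bound states of the finite well, Eq. (4).
# Radial equation for u(r) = r*psi(r):  -u''/(2*mu) + V(r) u = E u,  u(0) = 0.
import numpy as np
from scipy.linalg import eigh_tridiagonal

hbarc = 0.1973                 # GeV*fm, to convert fm into GeV^-1
M_Q, M_B = 3.686, 0.938        # GeV: psi(2S) core and nucleon (placeholder choice)
alpha = 18.0                   # GeV^-3: chromo-electric polarizability (placeholder)
R_B = 0.8 / hbarc              # baryon radius of 0.8 fm, in GeV^-1 (placeholder)

mu = M_Q * M_B / (M_Q + M_B)                        # reduced mass, Eq. (5)
V0 = -2.0 * np.pi * alpha * M_B / (3.0 * R_B ** 3)  # well depth, Eq. (4)

N, r_max = 3000, 15.0 * R_B                         # finite-difference grid
r = np.linspace(r_max / N, r_max, N)
h = r[1] - r[0]
V = np.where(r < R_B, V0, 0.0)

diag = 1.0 / (mu * h ** 2) + V                      # tridiagonal Hamiltonian
off = -0.5 / (mu * h ** 2) * np.ones(N - 1)
energies, _ = eigh_tridiagonal(diag, off)

bound = energies[energies < 0.0]                    # E < 0: bound baryo-quarkonia
print("well depth V0 [GeV]          :", V0)
print("binding energies [MeV]       :", 1000.0 * bound)
print("baryo-quarkonium masses [GeV]:", M_Q + M_B + bound)
```

The output is purely illustrative of the procedure; the predictions quoted in the tables of this paper depend on the specific polarizabilities and radii adopted there.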
III. CHROMO-ELECTRIC POLARIZABILITY
In this section, we depict two different procedures for the quarkonium diagonal chromo-electric polarizabilities.
A. Chromo-electric polarizabilities of charmonia as pure Coulombic systems
There are several possible approaches for the quarkonium diagonal chromo-electric polarizability. One possibility is to calculate it by considering quarkonia as pure Coulombic systems. The perturbative result in the framework of the 1/N c expansion is [44]
α_ψψ(nS) = 16π n² c_n a_0³ / (3 g_c² N_c²) .   (8)
Here, n is the radial quantum number; c_1 = 7/4 and c_2 = 251/8; N_c = 3 is the number of colors; g_c = √(4πα_s) ≃ 2.5, and α_s is the QCD running coupling constant at the charm quark mass-scale; finally,
a_0 = 2 / (m_c C_F α_s)   (9)
is the Bohr radius of nonrelativistic charmonium [37], with color factor C_F = (N_c² − 1)/(2N_c), and m_c = 1.5 GeV is the charm quark mass. A nonperturbative calculation of the chromo-electric polarizability was carried out in Refs. [45,46]. The result is
α_ψψ(nℓ) = 2 n⁶ ε_nℓ / (m_c³ β⁴) ,   (10)
where ℓ is the orbital angular momentum, β = (4/3) α_s and the ε_nℓ's are numerical coefficients, with ε_10 = 1.468. The use of Eqs. (8,9) or (10) provides the same result. One obtains [39]
α^Coul_ψψ(1S) ≃ 4.1 GeV⁻³   (11)
and
α^Coul_ψψ(2S) ≃ 296 GeV⁻³ .   (12)
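As a quick numerical cross-check of Eqs. (8)–(9), the short Python sketch below (ours, not from the paper) evaluates the perturbative Coulombic polarizabilities for charmonium with the inputs quoted above (g_c ≃ 2.5, m_c = 1.5 GeV); it reproduces values close to Eqs. (11) and (12), with small differences coming from the rounding of α_s.

```python
# Sketch: perturbative chromo-electric polarizabilities from Eqs. (8)-(9).
import numpy as np

N_c = 3
g_c = 2.5                            # strong coupling at the charm mass scale
alpha_s = g_c ** 2 / (4 * np.pi)
m_c = 1.5                            # GeV
C_F = (N_c ** 2 - 1) / (2 * N_c)
a_0 = 2.0 / (m_c * C_F * alpha_s)    # Bohr radius, Eq. (9), in GeV^-1

c = {1: 7.0 / 4.0, 2: 251.0 / 8.0}   # c_1 and c_2 quoted after Eq. (8)

def alpha_coulombic(n):
    """Eq. (8): alpha_psi_psi(nS) in GeV^-3."""
    return 16 * np.pi * n ** 2 * c[n] * a_0 ** 3 / (3 * g_c ** 2 * N_c ** 2)

print("alpha(1S) ~ %.1f GeV^-3" % alpha_coulombic(1))   # close to 4.1 GeV^-3, Eq. (11)
print("alpha(2S) ~ %.0f GeV^-3" % alpha_coulombic(2))   # close to 296 GeV^-3, Eq. (12)
```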
B. Chromo-electric polarizabilities from charmonium-nucleon scattering lengths
The second approach to extract charmonium diagonal chromo-electric polarizabilities is to fit them to results of charmonium-nucleon scattering lengths. The latter can be written as [47,Eq. (104)]
a_Nψ ≈ −(4π M_N / 9) µ α_ψψ ,   (13)
where M N is the nucleon mass and µ the charmoniumnucleon reduced mass. The results are shown in Table I. It is interesting to observe that the calculated values of α ψψ (1S) span a wide interval, α ψψ ∈ [0.25 − 3.8] GeV −3 .
In particular, a global fit to both differential and total cross sections from available data on J/ψp scattering provides a value a pJ/ψ = −0.046 ± 0.005 fm [48], which is consistent with the value a N ηc = −0.05 fm from Ref. [31]. The corresponding value of the binding energy of J/ψ in nuclear matter is ≈ 3 MeV, which is close to the deuteron binding energy. On the other hand, the binding energy of J/ψ in nuclear matter was found to be 21 MeV for α ψψ (1S) = 2 GeV −3 [32,49], corresponding to a scattering length of a N J/ψ = −0.37 fm. An even larger value for the charmonium-nucleon scattering length was obtained by means of quenched lattice QCD calculations, a N ψ ≈ −0.7 fm [50].
Finally, we extract the values of the chromo-electric polarizabilities of 2S and 1P charmonia. That of 2S states can be estimated as four times the ratio between c_2 = 251/8 and c_1 = 7/4. One gets
α_ψψ(2S) = (502/7) α_ψψ(1S) .   (14)
The chromo-electric polarizability of 1P charmonia can be estimated by means of Eq. (10) and Ref. [45, Table 1]. This means
α_ψψ(1P) = (ε_21 / ε_20) α_ψψ(2S) ,   (15)
where ε_20 = 1.585 and ε_21 = 0.998. It is still not possible to fit the α_ψψ(nℓ) to the experimental data. So, as previously discussed, the α_ψψ(nℓ)'s have to be estimated phenomenologically. This could be one of the main sources of theoretical uncertainty on our results.
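The first row of Table I can be reproduced directly from Eqs. (13)–(15). The following sketch (ours; the scattering length is the input quoted from Refs. [31,48], and the J/ψ–nucleon reduced mass is used) inverts Eq. (13) and applies the 2S and 1P scaling relations.

```python
# Sketch: polarizabilities fitted to a charmonium-nucleon scattering length
# via Eq. (13), then rescaled with Eqs. (14)-(15).
import numpy as np

hbarc = 0.1973                       # GeV*fm
M_N, M_Jpsi = 0.938, 3.097           # GeV
mu = M_N * M_Jpsi / (M_N + M_Jpsi)   # charmonium-nucleon reduced mass

a = -0.05 / hbarc                    # scattering length of Refs. [31,48], in GeV^-1

alpha_1S = -9.0 * a / (4.0 * np.pi * M_N * mu)   # invert Eq. (13)
alpha_2S = (502.0 / 7.0) * alpha_1S              # Eq. (14)
alpha_1P = (0.998 / 1.585) * alpha_2S            # Eq. (15)

# Compare with ~0.25, 18 and 11 GeV^-3 in Table I; small differences
# come from the rounding of the scattering length and of the reduced mass.
print("alpha(1S) ~ %.2f GeV^-3" % alpha_1S)
print("alpha(2S) ~ %.0f GeV^-3" % alpha_2S)
print("alpha(1P) ~ %.0f GeV^-3" % alpha_1P)
```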
C. Chromo-electric polarizabilities of bottomonia
Finally, we calculate bottomonium diagonal chromoelectric polarizabilities by considering them as pure Coulombic systems. See Eqs. (8) and (9), where we substitute the charm-quark mass, m c , with the bottom one, m b = 5.0 GeV, and evaluate α s at the m b mass-scale. We get
α max ΥΥ (1S) ≃ 0.47 GeV −3(16)
and
α max ΥΥ (2S) ≃ 33 GeV −3 .(17)
Similar values are obtained by using the nonperturbative results of Refs. [45,46]. On the contrary, if we define the bottomonium Bohr radius as [17,18]
a_0 = 16π / (g_b² N_c m_b) ,   (18)
we get
α min ΥΥ (1S) ≃ 0.33 GeV −3(19)
and
α min ΥΥ (2S) ≃ 23 GeV −3 .(20)
The chromo-electric polarizabilities of 1P bottomonia can be estimated using Eq. (15); we get
α max ΥΥ (1P ) ≃ 21 GeV −3(21)
and
α min ΥΥ (1P ) ≃ 14 GeV −3 .(22)
IV. BARYO-CHARMONIA AND THE Pc(4380) AND Pc(4450) PENTAQUARKS
In this section, we give results for the binding energies of charmonium-N and N * bound states. The previous observables are computed by using the values of the charmonium chromo-electric polarizabilities from Sec. III.
The spectrum of η_c- and J/ψ-N*, η_c(2S)-, ψ(2S)- and χ_c(1P)-N bound states is calculated in the baryo-charmonium picture by solving the eigenvalue problem of Eq. (6). See Table II and Fig. 1. The results strongly depend on the values of the charmonium diagonal chromo-electric polarizabilities, α_ψψ(nℓ). These values are not defined unambiguously, but span a wide interval. Up to now, the α_ψψ(nℓ)'s cannot be fitted to the experimental data; they have to be estimated phenomenologically. Because of this, they represent one of the main sources of theoretical uncertainty on our results. We have thus decided to present two sets of results for the baryo-charmonium spectrum.
In the first case, we use the values α scatt ψψ (1S) = 0.25 GeV −3 , α scatt ψψ (2S) = 18 GeV −3 and α scatt ψψ (1P ) = 11 GeV −3 , corresponding to a charmonium-nucleon scattering length a N J/ψ ≃ a N ηc ≃ −0.05 fm [31,48]. These values of α ψψ (nℓ) are of the same order of magnitude as those of [17,Eq. (4)]. Baryo-charmonium bound states will give rise due to the interaction between N and 2S and 1P charmonia.
In the second case, we use α Coul ψψ (1S) = 4.1 GeV −3 and α Coul ψψ (2S) = 296 GeV −3 . These values are extracted by considering charmonia as pure Coulombic systems [44][45][46]. Similar values of α Coul ψψ (1S) are obtained by fitting the 1S chromo-electric polarizability to the quenched lattice QCD result for the charmonium-nucleon scattering length of Ref. [50]. Thus, the baryo-charmonium bound states will give rise due to the interaction between N * and 1S charmonia, α Coul ψψ (2S) being too large [39]. It is worth noticing that we are able to make at least a clear assignment in the α scatt ψψ (2S) case. In particular, the P c (4380) pentaquark is interpreted as a ψ(2S)⊗N bound state with J P = 3 2 − quantum numbers. Besides, we can also speculate on assigning the P c (4450) to a χ c2 (1P )⊗N baryo-charmonium state although, in this second case, the theoretical prediction for the mass falls outside the experimental mass interval.
However, our predictions do not agree with the baryocharmonium results of Refs. [17,18], where the P c (4450) pentaquark is interpreted as a ψ(2S) ⊗ N bound state. This difference could be related to the different choices on α ψψ (2S): our values are calculated with 18 GeV −3 and theirs with 12 GeV −3 .
V. BARYO-BOTTOMONIUM PENTAQUARK STATES
Below, we calculate the spectrum of η b (2S)-, Υ(2S)and χ b (1P )-N bound states in the baryo-bottomonium picture by solving the eigenvalue problem of Eq. (6). The results are enlisted in Table III. The baryo-bottomonium quantum numbers, shown in the third column of Table III, are obtained by means of the prescription of Eq. (7). In order to show that the binding energies of the bb − qqq system are strongly dependent on the values of the chromo-electric polarizabilities, we present two sets of results for the spectrum of this system. The results are listed on the fourth column of Table III. In the first case, we use α ΥΥ (2S) and α ΥΥ (1P ) from Eqs. (20) and (22), respectively; in the second case, the values of α ΥΥ (2S) and α ΥΥ (1P ) are given by Eqs. (17) and (21), respectively. The two sets of α ΥΥ 's values are calculated by using two different definitions of the Bohr radius of bottomonium; see Eqs. (9) and (18).
Our results for the baryo-bottomonium pentaquarks span a wide energy interval, 10.4 − 10.9 GeV. The presence of a heavier (nonrelativistic) QQ pair is expected to make the system more stable: this is why the hiddenbottom pentaquarks are more tightly bounded than their hidden-charm counterparts. See Tables II and III. Our conjecture agrees with an unitary coupled-channel model [55], and with a molecular model [11] approach. For this reason, after the recent observation of hidden-charm P c (4380) and P c (4450) pentaquarks, we suggest to experimentalists that looking for pentaquark states in the hidden-bottom sector may be essential on the analysis of new bound states. Moreover, in some cases, the baryobottomonium potential well is deep enough to give rise to a ground state plus its excited state, as can be observed in the case of η b (2S) ⊗ N and Υ(2S) ⊗ N from the lower half of Table III, though the binding energy of the excited state is just a few tens of MeV. The emergence of these excitations is a consequence of the value of the bottomonium chromo-electric polarizability. If the value of α ΥΥ (2S) is decreased from 33 to 23 GeV −3 , these excitations disappear.
Once the values of the 2S and 1P bottomonium chromo-electric polarizabilities are fitted to available experimental data, it will be interesting to discuss the possible emergence of deeply bound baryo-bottomonium pentaquarks by using more realistic values of α ΥΥ (nℓ).
VI. CONCLUSION
We adopted the baryo-charmonium model to discuss the possible emergence of η c -and J/ψ-N * , η c (2S)-, ψ(2S)-and χ c (1P )-N bound states, where N is the nucleon and N * a nucleon resonance. The energies of baryocharmonia were computed by solving the Schrödinger equation for the baryo-charmonium potential [33,39], which was approximated as a finite well whose width and size could be expressed as a function of the N (N * ) radius and the charmonium chromo-electric polarizability, α ψψ . The baryo-charmonium masses and quantum numbers were compared with the existing experimental data, so that we could interpret the recently discovered P c (4380) and P c (4450) pentaquarks as ψ(2S) ⊗ N and χ c2 (1P ) ⊗ N baryo-charmonia, respectively.
We also provided results for bottomonium-nucleon bound states, where the beauty partners of the LHCb pentaquarks, P b , were found to be more deeply bound. In some cases, the potential well describing the interaction between the bottomonium core and the baryonic matter was found to be deep enough to give rise to a ground-plus excited state.
For this reason, we believe that it is more probable to detect hidden-bottom pentaquarks than their hiddencharm counterparts. We suggest to experimentalists to look for five-quark states in the hidden-bottom sector in the 10.4 − 10.9 GeV energy region.
In conclusion, a possible way to disentangle the in-
FIG. 1: Baryo-charmonium spectrum of ηc(2S)-, ψ(2S)-, and χc(1P)-N bound states (dotted lines), calculated with α^scatt_ψψ(2S) = 18 GeV⁻³ and α^scatt_ψψ(1P) = 11 GeV⁻³, and ηc- and J/ψ-N* bound states (black lines), calculated with α^Coul_ψψ(1S) = 4.1 GeV⁻³. The theoretical results are compared to the experimental masses of the Pc(4380) and Pc(4450) pentaquarks (boxes) [1].
FIG. 2: Baryo-bottomonium spectrum of η_b(2S)-, Υ(2S)- and χ_b(1P)-N bound states, calculated with: 1) α^scatt_ΥΥ(2S) = 33 GeV⁻³ and α^scatt_ΥΥ(1P) = 21 GeV⁻³ (dotted lines); 2) α^scatt_ΥΥ(2S) = 23 GeV⁻³ and α^scatt_ΥΥ(1P) = 14 GeV⁻³ (black lines).
The time-independent Schrödinger equation is solved numerically by means of a finite differences algorithm [54, Vol. 3, Sec. …].

TABLE I: The chromo-electric polarizabilities, α^scatt_ψψ, are fitted to charmonium-nucleon scattering lengths, a_Nψ, via Eq. (13).

Source | a_Nψ [fm] | α^scatt_ψψ(1S) [GeV⁻³] | α^scatt_ψψ(2S) [GeV⁻³] | α^scatt_ψψ(1P) [GeV⁻³]
Refs. [31, 48] | ≈ −0.05 | 0.25 | 18 | 11
Ref. [32, 49] | −0.37 | 2.0 | 143 | 90
Ref. [50] | −0.70 ± 0.66 | 3.8 | 273 | 172
Ref. [51] | −0.24 | 1.3 | 93 | 59
Ref. [52] | −0.10 ± 0.02 | 0.54 | 39 | 25
Ref. [53] | ≈ 0.3 | 1.6 | 115 | 72

TABLE II: Baryo-charmonium model predictions (fourth and fifth columns), calculated by solving the Schrödinger equation (6) with the chromo-electric polarizabilities α^Coul_ψψ(1S) (upper part of the table) or α^scatt_ψψ(2S) and α^scatt_ψψ(1P) (lower part).

Composition | α_ψψ(nℓ) [GeV⁻³] | J^P_tot | Binding [MeV] | Mass [MeV] | Assignment
ηc ⊗ N(1440) | 4.1 | 1/2⁻ | −16 | 4397 | -
ηc ⊗ N(1520) | 4.1 | 1/2⁺ or 3/2⁺ | −22 | 4476 | -
ηc ⊗ N(1535) | 4.1 | … | … | … | …
ternal structure of the baryo-quarkonium pentaquarks is to study their strong decay patterns in the baryoquarkonium picture. This will be the argument of a subsequent paper.
. R Aaij, LHCb CollaborationPhys. Rev. Lett. 11572001R. Aaij et al. [LHCb Collaboration], Phys. Rev. Lett. 115, 072001 (2015).
. C Patrignani, Particle Data GroupChin. Phys. C. 40100001C. Patrignani et al. (Particle Data Group), Chin. Phys. C 40, 100001 (2016).
. J J Wu, R Molina, E Oset, B S Zou, Phys. Rev. Lett. 105232001J. J. Wu, R. Molina, E. Oset and B. S. Zou, Phys. Rev. Lett. 105, 232001 (2010);
. Phys. Rev. C. 8415202Phys. Rev. C 84, 015202 (2011);
. C W Shen, F K Guo, J J Xie, B S Zou, Nucl. Phys. A. 954393C. W. Shen, F. K. Guo, J. J. Xie and B. S. Zou, Nucl. Phys. A 954, 393 (2016).
. Z C Yang, Z F Sun, J He, X Liu, S L Zhu, Chin. Phys. C. 366Z. C. Yang, Z. F. Sun, J. He, X. Liu and S. L. Zhu, Chin. Phys. C 36, 6 (2012).
. A Feijoo, V K Magas, A Ramos, E Oset, Phys. Rev. D. 9239905Phys. Rev. DA. Feijoo, V. K. Magas, A. Ramos and E. Oset, Phys. Rev. D 92, 076015 (2015) Erratum: [Phys. Rev. D 95, 039905 (2017)].
. J He, Phys. Lett. B. 753547J. He, Phys. Lett. B 753, 547 (2016).
. M Karliner, J L Rosner, Phys. Rev. Lett. 115122001M. Karliner and J. L. Rosner, Phys. Rev. Lett. 115, 122001 (2015).
. R Chen, X Liu, X Q Li, S L Zhu, Phys. Rev. Lett. 115132002R. Chen, X. Liu, X. Q. Li and S. L. Zhu, Phys. Rev. Lett. 115, 132002 (2015).
. H X Chen, W Chen, X Liu, T G Steele, S L Zhu, Phys. Rev. Lett. 115172001H. X. Chen, W. Chen, X. Liu, T. G. Steele and S. L. Zhu, Phys. Rev. Lett. 115, 172001 (2015).
. K Azizi, Y Sarac, H Sundu, Phys. Rev. D. 9594030K. Azizi, Y. Sarac and H. Sundu, Phys. Rev. D 95, 094016 (2017); 96, 094030 (2017);
. Phys. Lett. B. 782694Phys. Lett. B 782, 694 (2018).
. Y Yamaguchi, E Santopinto, Phys. Rev. D. 9614018Y. Yamaguchi and E. Santopinto, Phys. Rev. D 96, 014018 (2017);
. Y Yamaguchi, A Giachino, A Hosaka, E Santopinto, S Takeuchi, M Takizawa, Phys. Rev. D. 96114031Y. Yamaguchi, A. Giachino, A. Hosaka, E. Santopinto, S. Takeuchi and M. Takizawa, Phys. Rev. D 96, 114031 (2017).
. P G Ortega, D R Entem, F Fernndez, Phys. Lett. B. 764207P. G. Ortega, D. R. Entem and F. Fernndez, Phys. Lett. B 764, 207 (2017).
. L Maiani, A D Polosa, V Riquer, Phys. Lett. B. 749289L. Maiani, A. D. Polosa and V. Riquer, Phys. Lett. B 749, 289 (2015).
. R F Lebed, Phys. Lett. B. 749454R. F. Lebed, Phys. Lett. B 749, 454 (2015).
. Z G Wang, Eur. Phys. J. C. 7670Z. G. Wang, Eur. Phys. J. C 76, 70 (2016).
. G N Li, X G He, M He, JHEP. 1512128G. N. Li, X. G. He and M. He, JHEP 1512, 128 (2015).
. M I Eides, V Y Petrov, M V Polyakov, Phys. Rev. D. 9354039M. I. Eides, V. Y. Petrov and M. V. Polyakov, Phys. Rev. D 93, 054039 (2016);
. Eur. Phys. J. C. 7836Eur. Phys. J. C 78, 36 (2018).
. I A Perevalova, M V Polyakov, P Schweitzer, Phys. Rev. D. 9454024I. A. Perevalova, M. V. Polyakov and P. Schweitzer, Phys. Rev. D 94, 054024 (2016).
. M Alberti, G S Bali, S Collins, F Knechtli, G Moir, W Söldner, Phys. Rev. D. 9574501M. Alberti, G. S. Bali, S. Collins, F. Knechtli, G. Moir and W. Söldner, Phys. Rev. D 95, 074501 (2017).
. X H Liu, Q Wang, Q Zhao, Phys. Lett. B. 757231X. H. Liu, Q. Wang and Q. Zhao, Phys. Lett. B 757, 231 (2016).
. U G Meißner, J A Oller, Phys. Lett. B. 75159U. G. Meißner and J. A. Oller, Phys. Lett. B 751, 59 (2015).
. F K Guo, U G Meißner, W Wang, Z Yang, Phys. Rev. D. 9271502F. K. Guo, U. G. Meißner, W. Wang and Z. Yang, Phys. Rev. D 92, 071502 (2015).
. A Mironov, A Morozov, Pisma Zh. Eksp. Teor. Fiz. 102271JETP Lett.A. Mironov and A. Morozov, JETP Lett. 102, 271 (2015) [Pisma Zh. Eksp. Teor. Fiz. 102, 302 (2015)].
. N N Scoccola, D O Riska, M Rho, Phys. Rev. D. 9251501N. N. Scoccola, D. O. Riska and M. Rho, Phys. Rev. D 92, 051501 (2015).
. H X Chen, W Chen, X Liu, S L Zhu, Phys. Rept. 6391H. X. Chen, W. Chen, X. Liu and S. L. Zhu, Phys. Rept. 639, 1 (2016).
. A Ali, J S Lange, S Stone, Prog. Part. Nucl. Phys. 97123A. Ali, J. S. Lange and S. Stone, Prog. Part. Nucl. Phys. 97, 123 (2017).
. S L Olsen, T Skwarnicki, D Zieminska, Rev. Mod. Phys. 9015003S. L. Olsen, T. Skwarnicki and D. Zieminska, Rev. Mod. Phys. 90, 015003 (2018).
. F K Guo, C Hanhart, U G Meiner, Q Wang, Q Zhao, B S Zou, Rev. Mod. Phys. 9015004F. K. Guo, C. Hanhart, U. G. Meiner, Q. Wang, Q. Zhao and B. S. Zou, Rev. Mod. Phys. 90, 015004 (2018)
. S J Brodsky, I A Schmidt, G F De Teramond, Phys. Rev. Lett. 641011S. J. Brodsky, I. A. Schmidt and G. F. de Teramond, Phys. Rev. Lett. 64, 1011 (1990).
. M E Luke, A V Manohar, M J Savage, Phys. Lett. B. 288355M. E. Luke, A. V. Manohar and M. J. Savage, Phys. Lett. B 288, 355 (1992).
. A B Kaidalov, P E Volkovitsky, Phys. Rev. Lett. 693155A. B. Kaidalov and P. E. Volkovitsky, Phys. Rev. Lett. 69, 3155 (1992).
. A Sibirtsev, M B Voloshin, Phys. Rev. D. 7176005A. Sibirtsev and M. B. Voloshin, Phys. Rev. D 71, 076005 (2005).
. S Dubynskiy, M B Voloshin, Phys. Lett. B. 666344S. Dubynskiy and M. B. Voloshin, Phys. Lett. B 666, 344 (2008).
. F K Guo, C Hanhart, U G Meißner, Phys. Lett. B. 66526F. K. Guo, C. Hanhart and U. G. Meißner, Phys. Lett. B 665, 26 (2008);
. Phys. Rev. Lett. 102242004Phys. Rev. Lett. 102, 242004 (2009).
. M B Voloshin, Phys. Rev. D. 8791501M. B. Voloshin, Phys. Rev. D 87, 091501 (2013);
. X Li, M B Voloshin, Mod. Phys. Lett. A. 291450060X. Li and M. B. Voloshin, Mod. Phys. Lett. A 29, 1450060 (2014).
. Q Wang, M Cleven, F K Guo, C Hanhart, U G Meißner, X G Wu, Q Zhao, Phys. Rev. D. 89334001Q. Wang, M. Cleven, F. K. Guo, C. Hanhart, U. G. Meißner, X. G. Wu and Q. Zhao, Phys. Rev. D 89, no. 3, 034001 (2014);
. M Cleven, F K Guo, C Hanhart, Q Wang, Q Zhao, Phys. Rev. D. 9214005M. Cleven, F. K. Guo, C. Han- hart, Q. Wang and Q. Zhao, Phys. Rev. D 92, 014005 (2015).
. N Brambilla, G Krein, J Castellà, A Vairo, Phys. Rev. D. 9354002N. Brambilla, G. Krein, J. Tarrús Castellà and A. Vairo, Phys. Rev. D 93, 054002 (2016).
. J Y Panteleeva, I A Perevalova, M V Polyakov, P Schweitzer, arXiv:1802.09029J. Y. Panteleeva, I. A. Perevalova, M. V. Polyakov and P. Schweitzer, arXiv:1802.09029.
. J Ferretti, Phys. Lett. B. 782702J. Ferretti, Phys. Lett. B 782, 702 (2018).
. K Gottfried, Phys. Rev. Lett. 40598K. Gottfried, Phys. Rev. Lett. 40, 598 (1978);
. M B Voloshin, Nucl. Phys. B. 154365M. B. Voloshin, Nucl. Phys. B 154, 365 (1979);
. T M Yan, Phys. Rev. D. 221652T. M. Yan, Phys. Rev. D 22, 1652 (1980).
. M N Anwar, Y Lu, B S Zou, Phys. Rev. D. 95114031M. N. Anwar, Y. Lu and B. S. Zou, Phys. Rev. D 95, 114031 (2017).
. M B Voloshin, V I Zakharov, Phys. Rev. Lett. 45688M. B. Voloshin and V. I. Zakharov, Phys. Rev. Lett. 45, 688 (1980).
. E Santopinto, J Ferretti, Phys. Rev. C. 9225202E. Santopinto and J. Ferretti, Phys. Rev. C 92, 025202 (2015).
. M E Peskin, Nucl. Phys. B. 156365M. E. Peskin, Nucl. Phys. B 156, 365 (1979);
. G Bhanot, M E Peskin, Nucl. Phys. B. 156391G. Bhanot and M. E. Peskin, Nucl. Phys. B 156, 391 (1979).
. H Leutwyler, Phys. Lett. 98447H. Leutwyler, Phys. Lett. 98B, 447 (1981).
. M B Voloshin, Sov. J. Nucl. Phys. 36143M. B. Voloshin, Sov. J. Nucl. Phys. 36, 143 (1982).
. G Krein, A W Thomas, K Tsushima, Prog. Part. Nucl. Phys. 100161G. Krein, A. W. Thomas and K. Tsushima, Prog. Part. Nucl. Phys. 100, 161 (2018).
. O Gryniuk, M Vanderhaeghen, Phys. Rev. D. 9474001O. Gryniuk and M. Vanderhaeghen, Phys. Rev. D 94, 074001 (2016).
. M B Voloshin, Prog. Part. Nucl. Phys. 61455M. B. Voloshin, Prog. Part. Nucl. Phys. 61, 455 (2008).
. K Yokokawa, S Sasaki, T Hatsuda, A Hayashigaki, Phys. Rev. D. 7434504K. Yokokawa, S. Sasaki, T. Hatsuda and A. Hayashigaki, Phys. Rev. D 74, 034504 (2006).
. S J Brodsky, G A Miller, Phys. Lett. B. 412125S. J. Brodsky and G. A. Miller, Phys. Lett. B 412, 125 (1997).
. A Hayashigaki, Prog. Theor. Phys. 101923A. Hayashigaki, Prog. Theor. Phys. 101, 923 (1999).
. T Kawanai, S Sasaki, Phys. Rev. D. 8291501T. Kawanai and S. Sasaki, Phys. Rev. D 82, 091501 (2010);
. PoS. 2010156PoS LATTICE 2010, 156 (2010).
R P Feynman, R B Leighton, M L Sands, The Feynman Lectures on Physics. Addison-Wesley Pub. CoR. P. Feynman, R. B. Leighton and M. L. Sands, The Feynman Lectures on Physics, Addison-Wesley Pub. Co. (1963-1965).
. C W Shen, D Rönchen, U G Meißner, B S Zou, Chin. Phys. C. 4223106C. W. Shen, D. Rönchen, U. G. Meißner and B. S. Zou, Chin. Phys. C 42, 023106 (2018).
|
[] |
[
"METROPOLIS-HASTINGS VIEW ON VARIATIONAL INFERENCE AND ADVERSARIAL TRAINING",
"METROPOLIS-HASTINGS VIEW ON VARIATIONAL INFERENCE AND ADVERSARIAL TRAINING"
] |
[
"Neklyudov Kirill \nSamsung AI Center Higher School of Economics Moscow\nSamsung AI Center Higher School of Economics Moscow\nSamsung AI Center Higher School of Economics Moscow\nRussia, Russia, Russia\n",
"Pavel Shvechikov [email protected] \nSamsung AI Center Higher School of Economics Moscow\nSamsung AI Center Higher School of Economics Moscow\nSamsung AI Center Higher School of Economics Moscow\nRussia, Russia, Russia\n",
"Dmitry Vetrov [email protected] \nSamsung AI Center Higher School of Economics Moscow\nSamsung AI Center Higher School of Economics Moscow\nSamsung AI Center Higher School of Economics Moscow\nRussia, Russia, Russia\n"
] |
[
"Samsung AI Center Higher School of Economics Moscow\nSamsung AI Center Higher School of Economics Moscow\nSamsung AI Center Higher School of Economics Moscow\nRussia, Russia, Russia",
"Samsung AI Center Higher School of Economics Moscow\nSamsung AI Center Higher School of Economics Moscow\nSamsung AI Center Higher School of Economics Moscow\nRussia, Russia, Russia",
"Samsung AI Center Higher School of Economics Moscow\nSamsung AI Center Higher School of Economics Moscow\nSamsung AI Center Higher School of Economics Moscow\nRussia, Russia, Russia"
] |
[] |
In this paper we propose to view the acceptance rate of the Metropolis-Hastings algorithm as a universal objective for learning to sample from target distribution -given either as a set of samples or in the form of unnormalized density. This point of view unifies the goals of such approaches as Markov Chain Monte Carlo (MCMC), Generative Adversarial Networks (GANs), variational inference. To reveal the connection we derive the lower bound on the acceptance rate and treat it as the objective for learning explicit and implicit samplers. The form of the lower bound allows for doubly stochastic gradient optimization in case the target distribution factorizes (i.e. over data points). We empirically validate our approach on Bayesian inference for neural networks and generative models for images.
| null |
[
"https://arxiv.org/pdf/1810.07151v1.pdf"
] | 53,160,152 |
1810.07151
|
9c9630d85f6d2b8023eb0c0bb540e3b522113519
|
METROPOLIS-HASTINGS VIEW ON VARIATIONAL INFERENCE AND ADVERSARIAL TRAINING
Neklyudov Kirill
Samsung AI Center Higher School of Economics Moscow
Samsung AI Center Higher School of Economics Moscow
Samsung AI Center Higher School of Economics Moscow
Russia, Russia, Russia
Pavel Shvechikov [email protected]
Samsung AI Center Higher School of Economics Moscow
Samsung AI Center Higher School of Economics Moscow
Samsung AI Center Higher School of Economics Moscow
Russia, Russia, Russia
Dmitry Vetrov [email protected]
Samsung AI Center Higher School of Economics Moscow
Samsung AI Center Higher School of Economics Moscow
Samsung AI Center Higher School of Economics Moscow
Russia, Russia, Russia
METROPOLIS-HASTINGS VIEW ON VARIATIONAL INFERENCE AND ADVERSARIAL TRAINING
Under review as a conference paper at ICLR 2019
In this paper we propose to view the acceptance rate of the Metropolis-Hastings algorithm as a universal objective for learning to sample from target distribution -given either as a set of samples or in the form of unnormalized density. This point of view unifies the goals of such approaches as Markov Chain Monte Carlo (MCMC), Generative Adversarial Networks (GANs), variational inference. To reveal the connection we derive the lower bound on the acceptance rate and treat it as the objective for learning explicit and implicit samplers. The form of the lower bound allows for doubly stochastic gradient optimization in case the target distribution factorizes (i.e. over data points). We empirically validate our approach on Bayesian inference for neural networks and generative models for images.
INTRODUCTION
Bayesian framework and deep learning have become more and more interrelated during recent years. Recently Bayesian deep neural networks were used for estimating uncertainty (Gal & Ghahramani, 2016), ensembling (Gal & Ghahramani, 2016) and model compression (Molchanov et al., 2017). On the other hand, deep neural networks may be used to improve approximate inference in Bayesian models (Kingma & Welling, 2014).
Learning modern Bayesian neural networks requires inference in the spaces with dimension up to several million by conditioning the weights of DNN on hundreds of thousands of objects. For such applications, one has to perform the approximate inference -predominantly by either sampling from the posterior with Markov Chain Monte Carlo (MCMC) methods or approximating the posterior with variational inference (VI) methods.
MCMC methods are non-parametric, provide the unbiased (in the limit) estimate but require careful hyperparameter tuning especially for big datasets and high dimensional problems. The large dataset problem has been addressed for different MCMC algorithms: stochastic gradient Langevin dynamics (Welling & Teh, 2011), stochastic gradient Hamiltonian Monte Carlo , minibatch Metropolis-Hastings algorithms (Korattikara et al., 2014;Chen et al., 2016). One way to address the problem of high dimension is the design of a proposal distribution. For example, for the Metropolis-Hastings (MH) algorithm there exists a theoretical guideline for scaling the variance of a Gaussian proposal (Roberts et al., 1997;2001). More complex proposal designs include adaptive updates of the proposal distribution during iterations of MH algorithm (Holden et al., 2009;Giordani & Kohn, 2010).
Variational inference is extremely scalable but provides a biased estimate of the target distribution. Using the doubly stochastic procedure (Titsias & Lázaro-Gredilla, 2014;Hoffman et al., 2013) VI can be applied to extremely large datasets and high dimensional spaces, such as a space of neural network weights (Kingma et al., 2015;Gal & Ghahramani, 2015;2016). The bias introduced by variational approximation can be mitigated by using flexible approximations (Rezende & Mohamed, 2015) and resampling (Grover et al., 2018).
Generative Adversarial Networks (Goodfellow et al., 2014) (GANs) is a different approach to learn samplers. Under the framework of adversarial training different optimization problems could be solved efficiently (Arjovsky et al., 2017;Nowozin et al., 2016). The shared goal of "learning to sample" inspired the connection of GANs with VI (Mescheder et al., 2017) and MCMC (Song et al., 2017).
In this paper, we propose a novel perspective on learning to sample from a target distribution by optimizing parameters of either explicit or implicit probabilistic model. Our objective is inspired by the view on the acceptance rate of the Metropolis-Hastings algorithm as a quality measure of the sampler. We derive a lower bound on the acceptance rate and maximize it with respect to parameters of the sampler, treating the sampler as a proposal distribution in the Metropolis-Hastings scheme.
We consider two possible forms of the target distribution: unnormalized density (density-based setting) and a set of samples (sample-based setting). Each of these settings reveals a unifying property of the proposed perspective and the derived lower bound. In the density-based setting, the lower bound is the sum of forward and reverse KL-divergences between the true posterior and its approximation, connecting our approach to VI. In the sample-based setting, the lower bound admit a form of an adversarial game between the sampler and a discriminator, connecting our approach to GANs.
The closest work to ours is of Song et al. (2017). In contrast to their paper our approach (1) is free from hyperparameters; (2) is able to optimize the acceptance rate directly; (3) avoids minimax problem in the density based setting.
Our main contributions are as follows:
1. We introduce a novel perspective on learning to sample from the target distribution by treating the acceptance rate in the Metropolis-Hastings algorithm as a measure of sampler quality.
2. We derive the lower bound on the acceptance rate allowing for doubly stochastic optimization of the proposal distribution in case when the target distribution factorizes (i.e. over data points).
3. For sample-based and density-based forms of target distribution we show the connection of the proposed algorithm to variational inference and GANs.
The rest of the paper is organized as follows. In Section 2 we introduce the lower bound on the AR. Special forms of target distribution are addressed in Section 3. We validate our approach on the problems of approximate Bayesian inference in the space of high dimensional neural network weights and generative modeling in the space of images in Section 4. We discuss results and directions of the future work in Section 5.
ACCEPTANCE RATE FOR METROPOLIS-HASTINGS ALGORITHM
PRELIMINARIES
In the MH algorithm we need to sample from a target distribution p(x) while we are only able to sample from a proposal distribution q(x' | x). One step of the MH algorithm can be described as follows.

1. sample a proposal point x' ∼ q(x' | x), given the previously accepted point x
2. accept x' if p(x') q(x | x') / [p(x) q(x' | x)] > u, where u ∼ Uniform[0, 1]; otherwise keep x

If the proposal distribution q(x' | x) does not depend on x, i.e. q(x' | x) = q(x'), the algorithm is called the independent MH algorithm.
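For concreteness, a minimal NumPy sketch of the independent MH step described above is given below. This is our illustration rather than code from the paper; the target and proposal densities used at the bottom are placeholders.

```python
# Minimal sketch of the independent Metropolis-Hastings algorithm.
import numpy as np

def independent_mh(log_p, log_q, sample_q, n_steps, x_init):
    """log_p: unnormalized target log-density; log_q / sample_q: proposal."""
    x, samples, accepted = x_init, [], 0
    for _ in range(n_steps):
        x_new = sample_q()
        # log of p(x')q(x) / (p(x)q(x')) for an independent proposal
        log_ratio = (log_p(x_new) + log_q(x)) - (log_p(x) + log_q(x_new))
        if np.log(np.random.rand()) < min(0.0, log_ratio):
            x, accepted = x_new, accepted + 1
        samples.append(x)
    return np.array(samples), accepted / n_steps

# Placeholder example: bimodal target, independent Gaussian proposal N(0, 2^2).
log_p = lambda x: np.logaddexp(-0.5 * (x + 2) ** 2, -0.5 * (x - 2) ** 2)
log_q = lambda x: -0.5 * (x / 2.0) ** 2
sample_q = lambda: 2.0 * np.random.randn()
samples, acc_rate = independent_mh(log_p, log_q, sample_q, 5000, 0.0)
print("empirical acceptance rate:", acc_rate)
```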
The quality of the proposal distribution is measured by acceptance rate and mixing time. Mixing time defines the speed of convergence of the Markov chain to the stationary distribution. The acceptance rate of the MH algorithm is defined as
AR = E_ξ min{1, ξ} = ∫∫ dx dx' p(x) q(x' | x) min{ 1, p(x') q(x | x') / [p(x) q(x' | x)] } ,   (1)
where
ξ = p(x') q(x | x') / [p(x) q(x' | x)] ,   x ∼ p(x), x' ∼ q(x' | x).   (2)
In case of independent proposal distribution we show that the acceptance rate defines a semimetric in distribution space between p and q (see Appendix B).
OPTIMIZING THE LOWER BOUND ON ACCEPTANCE RATE
Although, we can maximize the acceptance rate of the MH algorithm (Eq. 1) directly w.r.t. parameters φ of the proposal distribution q φ (x | x), we propose to maximize the lower bound on the acceptance rate. As our experiments show (see Section 4) the optimization of the lower bound compares favorably to the direct optimization of the acceptance rate. To introduce this lower bound we first express the acceptance rate in terms of total variation distance.
Theorem 1. For the random variable ξ = p(x') q(x | x') / [p(x) q(x' | x)], x ∼ p(x), x' ∼ q(x' | x),
E_ξ min{1, ξ} = 1 − (1/2) E_ξ |ξ − 1| = 1 − TV( p(x') q(x | x'), p(x) q(x' | x) ) ,   (3)
where TV is the total variation distance.
The proof of Theorem 1 can be found in Appendix A. This reinterpretation in terms of total variation allows us to lower bound the acceptance rate through the Pinsker's inequality
AR ≥ 1 − √( (1/2) · KL( p(x') q(x | x') || p(x) q(x' | x) ) ) .   (4)
The maximization of this lower bound can be equivalently formulated as
KL( p(x') q_φ(x | x') || p(x) q_φ(x' | x) ) → min_φ .   (5)
In the following sections, we show the benefits of this optimization problem in two different settings -when the target distribution is given in a form of unnormalized density and as a set of samples.
OPTIMIZATION OF PROPOSAL DISTRIBUTION
From now on we consider only optimization problem Eq. 5 but the proposed algorithms can be also used for the direct optimization of the acceptance rate (Eq. 1).
To estimate the loss function (Eq. 5) we need to evaluate the density ratio. In the density-based setting unnormalized density of the target distribution is given, so we suggest to use explicit proposal distribution to compute the density ratio explicitly. In the sample-based setting, however, we cannot compute the density ratio, so we propose to approximate it via adversarial training (Goodfellow et al., 2014). The brief summary of constraints for both settings is shown in Table 1.
The following subsections describe the algorithms in detail.
Table 1: Summary of the constraints in the two settings.
Density-based setting: the target is given as a density p̃(x) ∝ p(x); the proposal is an explicit model q(x'); the density ratio is computed explicitly.
Sample-based setting: the target is given as a set of samples X ∼ p(x); the proposal is an implicit model q(x') or q(x' | x); the density ratio is estimated by a learned discriminator.
DENSITY-BASED SETTING
In the density-based setting, we assume the proposal to be an explicit probabilistic model, i.e. the model that we can sample from and evaluate its density at any point up to the normalization constant. We also assume that the proposal is reparametrisable (Kingma & Welling, 2014;Rezende et al., 2014;Gal, 2016).
If the proposal belongs to a parametric family, e.g. q φ (x | x) = N (x | x, σ) we might face the collapsing to the delta-function problem. To tackle this problem one can properly choose a parametric family of the proposal, or make the proposal independent q φ (x | x) = q φ (x ). In Appendix C we provide an intuition that shows why the Markov chain proposal can collapse to delta-function and the independent proposal can't. In this section, we consider only the independent proposal. We also provide empirical evidence in section 4 that collapsing to the delta-function does not happen for independent proposal distribution.
Considering q φ (x ) as the proposal, optimization problem 5 takes the form
L(p, q_φ) = KL( p(x') q_φ(x) || p(x) q_φ(x') ) = E_{x ∼ p(x), x' ∼ q_φ(x')} log[ p(x) q_φ(x') / (p(x') q_φ(x)) ] → min_φ .   (6)
Explicit form of the proposal q φ (x ) and the target p(x) distributions allows us to obtain density ratios q φ (x)/q φ (x ) and p(x )/p(x) for any points x, x . But to estimate the loss in Eq. 6 we also need to obtain samples from the target distribution x ∼ p(x) during training. For this purpose, we use the current proposal q φ and run the independent MH algorithm. After obtaining samples from the target distribution it is possible to perform optimization step by taking stochastic gradients w.r.t. φ. Pseudo-code for the obtained procedure is shown in Algorithm 1.
Algorithm 1 Optimization of proposal distribution in density-based case
Require: explicit probabilistic model q φ (x )
Require: density of target distribution p̃(x) ∝ p(x)
while φ not converged do
    sample {x'_k}_{k=1}^K ∼ q_φ(x')
    sample {x_k}_{k=1}^K ∼ p(x) using independent MH with current proposal q_φ
    L(p, q_φ) ← (1/K) Σ_{k=1}^K log[ p(x_k) q_φ(x'_k) / (p(x'_k) q_φ(x_k)) ]    ▷ approximate loss with a finite number of samples
    φ ← φ − α ∇_φ L(p, q_φ)    ▷ perform gradient descent step
end while
return optimal parameters φ

Algorithm 1 could also be employed for the direct optimization of the acceptance rate (Eq. 1). Now we apply this algorithm to the Bayesian inference problem and show that during optimization of the lower bound we can use minibatches of data, while this is not the case for the direct optimization of the acceptance rate. We consider the Bayesian inference problem for a discriminative model on a dataset
D = {(x_i, y_i)}_{i=1}^N, where x_i is the feature vector of the i-th object and y_i is its label. For the discriminative model we know the likelihood p(y_i | x_i, θ) and the prior distribution p(θ). In order to obtain predictions for some object x_i, we need to evaluate the predictive distribution
p(y_i | x_i) = E_{p(θ | D)} p(y_i | x_i, θ) .   (7)
To obtain samples from posterior distribution p(θ | D) we suggest to learn proposal distribution q φ (θ) and perform independent MH algorithm. Thus the optimization problem 6 can be rewritten as follows.
L( p(θ | D), q_φ(θ) ) = KL( p(θ | D) q_φ(θ') || p(θ' | D) q_φ(θ) ) → min_φ   (8)
Note that due to the usage of independent proposal, the minimized KL-divergence splits up into the sum of two KL-divergences.
KL( p(θ | D) q_φ(θ') || p(θ' | D) q_φ(θ) ) = KL( q_φ(θ) || p(θ | D) ) + KL( p(θ | D) || q_φ(θ') ) → min_φ   (9)
Minimization of the first KL-divergence corresponds to the variational inference procedure:
KL( q_φ(θ) || p(θ | D) ) = −E_{θ∼q_φ(θ)} Σ_{i=1}^N log p(y_i | x_i, θ) + KL( q_φ(θ) || p(θ) ) + log p(D)   (10)
The second KL-divergence contains only one term that depends on φ. Thus we obtain the following optimization problem:
−E_{θ∼q_φ(θ)} Σ_{i=1}^N log p(y_i | x_i, θ) + KL( q_φ(θ) || p(θ) ) − E_{θ∼p(θ | D)} log q_φ(θ) → min_φ .   (11)
The first summand here contains the sum over all objects in the dataset D. We follow doubly stochastic variational inference and suggest performing unbiased estimation of the gradient in Eq. 11 using only minibatches of data. Moreover, we can use recently proposed techniques (Korattikara et al., 2014; Chen et al., 2016) that perform the independent MH algorithm using only minibatches of data. Combination of these two techniques allows us to use only minibatches of data during iterations of Algorithm 1. In the case of the direct optimization of the acceptance rate, straightforward usage of minibatches results in biased gradients. Indeed, for the direct optimization of the acceptance rate (Eq. 1) we have the product over all the training data inside the min function.
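A sketch of how one stochastic estimate of Eq. (11) can be computed for a fully factorized Gaussian proposal is given below. This is our PyTorch illustration, not the authors' implementation; the log-likelihood function, the minibatch and the posterior samples (which in Algorithm 1 would come from the minibatch MH procedure) are placeholders supplied by the caller.

```python
# Sketch (ours): one stochastic estimate of the objective in Eq. (11)
# for q_phi(theta) = N(mu, diag(sigma^2)) and a standard normal prior.
import math
import torch

def lower_bound_loss(mu, log_sigma, minibatch_log_lik, posterior_thetas, N, B):
    """
    mu, log_sigma     : variational parameters of q_phi (tensors with grad)
    minibatch_log_lik : theta -> sum_i log p(y_i | x_i, theta) over a minibatch of size B
    posterior_thetas  : (S, d) samples from p(theta | D), e.g. from minibatch MH
    N, B              : dataset size and minibatch size
    """
    sigma = log_sigma.exp()
    theta = mu + sigma * torch.randn_like(mu)        # reparameterization trick
    data_term = (N / B) * minibatch_log_lik(theta)   # unbiased estimate of the sum over D
    kl = 0.5 * torch.sum(sigma ** 2 + mu ** 2 - 1.0 - 2.0 * log_sigma)
    log_q = (-0.5 * ((posterior_thetas - mu) / sigma) ** 2
             - log_sigma - 0.5 * math.log(2 * math.pi)).sum(dim=1).mean()
    return -data_term + kl - log_q                   # Eq. (11), minimized w.r.t. phi
```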
SAMPLE-BASED SETTING
In the sample-based setting, we assume the proposal to be an implicit probabilistic model, i.e. the model that we can only sample from. As in the density-based setting, we assume that we are able to perform the reparameterization trick for the proposal.
In this subsection we consider only Markov chain proposal q φ (x | x), but everything can be applied to independent proposal q φ (x ) by simple substitution q φ (x | x) with q φ (x ). From now we will assume our proposal distribution to be a neural network that takes x as its input and outputs x . Considering proposal distribution parameterized by a neural network allows us to easily exclude delta-function from the space of solutions. We avoid learning the identity mapping by using neural networks with the bottleneck and noisy layers. For the detailed description of the architectures see Appendix E.
The set of samples from the true distribution X ∼ p(x) allows for the Monte Carlo estimation of the loss
L(p, q_φ) = E_{x ∼ p(x), x' ∼ q_φ(x' | x)} log[ p(x) q_φ(x' | x) / (p(x') q_φ(x | x')) ] → min_φ .   (12)
To compute the density ratio
p(x) q_φ(x' | x) / [ p(x') q_φ(x | x') ]
we suggest using the well-known technique of density ratio estimation via training a discriminator network. Denoting the discriminator output as D(x, x'), we suggest the following optimization problem for the discriminator:
−E_{x ∼ p(x), x' ∼ q_φ(x' | x)} log D(x, x') − E_{x ∼ p(x), x' ∼ q_φ(x' | x)} log(1 − D(x', x)) → min_D   (13)
Speaking informally, such discriminator takes two images as input and tries to figure out which image is sampled from true distribution and which one is generated by the one step of proposal distribution. It is easy to show that optimal discriminator in problem 13 will be
D(x, x') = p(x) q_φ(x' | x) / [ p(x) q_φ(x' | x) + p(x') q_φ(x | x') ] .   (14)
Note that for optimal discriminator we have D(x, x ) = 1 − D(x , x). In practice, we have no optimal discriminator and these values can differ significantly. Thus, we have four ways for density ratio estimation that may differ significantly.
p(x) q_φ(x' | x) / [p(x') q_φ(x | x')] ≈ D(x, x') / (1 − D(x, x')) ≈ (1 − D(x', x)) / D(x', x) ≈ (1 − D(x', x)) / (1 − D(x, x')) ≈ D(x, x') / D(x', x)   (15)
To avoid the ambiguity we suggest using a discriminator of a special structure. Let D̃(x, x') be a convolutional neural network with scalar output. Then the output of the discriminator D(x, x') is defined as follows.
D(x, x') = exp( D̃(x, x') ) / [ exp( D̃(x, x') ) + exp( D̃(x', x) ) ]   (16)
In other words, such a discriminator can be described by the following procedure. For a single neural network D̃(·, ·) we evaluate two outputs, D̃(x, x') and D̃(x', x). Then we take the softmax operation over these values. Summing up all the steps, we obtain Algorithm 2.
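Before stating Algorithm 2, a minimal PyTorch sketch of this symmetrized discriminator is given below. It is our illustration; `backbone` stands for any network with scalar output that accepts an ordered pair of inputs (its architecture is left unspecified here).

```python
# Sketch of the pairwise discriminator of Eq. (16): D(x, x') is a softmax
# over the two orderings of a single scalar-output network D_tilde.
import torch
import torch.nn as nn

class PairwiseDiscriminator(nn.Module):
    def __init__(self, backbone: nn.Module):
        super().__init__()
        self.backbone = backbone          # maps an ordered pair (x, x') to a scalar

    def forward(self, x, x_prime):
        d_xy = self.backbone(x, x_prime)  # D_tilde(x, x')
        d_yx = self.backbone(x_prime, x)  # D_tilde(x', x)
        # softmax over the two orderings yields D(x, x') and guarantees
        # D(x, x') + D(x', x) = 1 by construction
        return torch.softmax(torch.stack([d_xy, d_yx], dim=-1), dim=-1)[..., 0]
```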
Algorithm 2 Optimization of proposal distribution in sample-based case
Require: implicit probabilistic model q_φ(x' | x)
Require: large set of samples X ∼ p(x)
for n iterations do
    sample {x_k}_{k=1}^K ∼ X
    sample {x'_k}_{k=1}^K ∼ q_φ(x' | x_k)
    train discriminator D by optimizing (13)
    L(p, q_φ) ≈ (1/K) Σ_{k=1}^K log[ D(x_k, x'_k) / (1 − D(x_k, x'_k)) ]    ▷ approximate loss with a finite number of samples
    φ ← φ − α ∇_φ L(p, q_φ)    ▷ perform gradient descent step
end for
return parameters φ

Algorithm 2 could also be employed for direct optimization of the acceptance rate (Eq. 1). However, in Appendix F we provide an intuition for this setting that the direct optimization of the acceptance rate may suffer from vanishing gradients.
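Putting Algorithm 2 together with the symmetrized discriminator above, one training iteration could look like the following sketch. This is our illustration only; `proposal`, `disc`, the optimizers and the minibatch are placeholders, and the exact update schedule is a design choice not prescribed by the paper.

```python
# Sketch of one iteration of Algorithm 2 (sample-based setting).
import torch

def train_step(x, proposal, disc, opt_disc, opt_prop, disc_steps=1):
    # x: minibatch of real samples from X ~ p(x)
    for _ in range(disc_steps):                      # discriminator update, Eq. (13)
        x_prime = proposal(x).detach()
        d = disc(x, x_prime)                         # estimate of D(x, x')
        loss_d = -(torch.log(d + 1e-8)
                   + torch.log(1 - disc(x_prime, x) + 1e-8)).mean()
        opt_disc.zero_grad(); loss_d.backward(); opt_disc.step()

    x_prime = proposal(x)                            # proposal update, Eq. (12)
    d = disc(x, x_prime)
    loss_p = (torch.log(d + 1e-8) - torch.log(1 - d + 1e-8)).mean()  # log D/(1-D)
    opt_prop.zero_grad(); loss_p.backward(); opt_prop.step()
    return loss_d.item(), loss_p.item()
```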
EXPERIMENTS
In this section, we provide experiments for both density-based and sample-based settings, showing the proposed procedure is applicable to high dimensional target distributions. Code for reproducing all of the experiments will be published with the camera-ready version of the paper. This experiment shows that it is possible to optimize the acceptance rate, optimizing its lower bound.
For the target distribution we consider the bimodal Gaussian p(x) = 0.5 · N(x | −2, 0.5) + 0.5 · N(x | 2, 0.7); for the independent proposal we consider a unimodal Gaussian q(x) = N(x | µ, σ). We perform stochastic gradient optimization from the same initialization for both objectives (Fig. 1) and obtain approximately the same local maxima.
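The following self-contained sketch (ours) reproduces the spirit of this toy experiment for the lower-bound objective of Eq. (6): it fits the reparameterized Gaussian proposal by stochastic gradient descent, drawing the required target samples directly from the known mixture for simplicity instead of running the inner MH step of Algorithm 1, and interpreting the second parameter of each mixture component as its standard deviation.

```python
# Toy 1D sketch: fit q = N(mu, sigma) to the bimodal target by minimizing
# the symmetric-KL objective of Eq. (6) with the reparameterization trick.
import torch

def log_p(x):  # log density of 0.5*N(-2, 0.5^2) + 0.5*N(2, 0.7^2)
    comp1 = torch.distributions.Normal(-2.0, 0.5).log_prob(x)
    comp2 = torch.distributions.Normal(2.0, 0.7).log_prob(x)
    return torch.logsumexp(torch.stack([comp1, comp2]), dim=0) - torch.log(torch.tensor(2.0))

mu = torch.tensor(0.5, requires_grad=True)
log_sigma = torch.tensor(0.0, requires_grad=True)
opt = torch.optim.Adam([mu, log_sigma], lr=1e-2)

for step in range(2000):
    sigma = log_sigma.exp()
    x_prop = mu + sigma * torch.randn(256)           # x' ~ q_phi (reparameterized)
    # For simplicity the "posterior" samples come straight from the known target;
    # in Algorithm 1 they would come from independent MH with the current q_phi.
    comp = torch.randint(0, 2, (256,))
    x_targ = torch.where(comp == 0,
                         -2.0 + 0.5 * torch.randn(256),
                         2.0 + 0.7 * torch.randn(256))
    log_q = torch.distributions.Normal(mu, sigma).log_prob
    loss = (log_p(x_targ) + log_q(x_prop) - log_p(x_prop) - log_q(x_targ)).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print("learned mu, sigma:", mu.item(), log_sigma.exp().item())
```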
DENSITY-BASED SETTING
In density-based setting, we consider Bayesian inference problem for the weights of a neural network. In our experiments we consider approximation of predictive distribution (Eq. 7) as our main goal. To estimate the goodness of the approximation we measure negative log-likelihood and accuracy on the test set.
In subsection 3.1 we show that lower bound on acceptance rate can be optimized more efficiently than acceptance rate due to the usage of minibatches. But other questions arise.
1. Does the proposed objective in Eq. 11 allow for better estimation of predictive distribution compared to the variational inference?
2. Does the application of the MH correction to the learned proposal distribution allow for better estimation of the predictive distribution (Eq. 7) than estimation via raw samples from the proposal?
To answer these questions we consider reduced LeNet-5 architecture (see Appendix D) for classification task on 20k images from MNIST dataset (for test data we use all of the MNIST test set). Even after architecture reduction we still face a challenging task of learning a complex distribution in 8550-dimensional space. For the proposal distribution we use fully-factorized gaussian q φ (θ) = d j=1 N (θ j | µ j , σ j ) and standard normal distribution for prior p(θ) = d j=1 N (θ j | 0, 1). For variational inference, we train the model using different initialization and pick the model according to the best ELBO. For our procedure, we do the same and choose the model by the maximum value of the acceptance rate lower bound. In algorithm 1 we propose to sample from the posterior distribution using the independent MH and the current proposal. It turns out in practice that it is better to use the currently learned proposal q φ (θ) = N (θ | µ, σ) as the initial state for random-walk MH algorithm. That is, we start with the mean µ as an initial point, and then use random-walk proposal q(θ | θ) = N (θ | θ, σ) with the variances σ of current independent proposal. This should be considered as a heuristic that improves the approximation of the loss function. In both procedures we apply the independent MH algorithm to estimate the predictive distribution.
The optimization of the acceptance rate lower bound results in the better estimation of predictive distribution than the variational inference (see Fig. 2). Optimization of acceptance rate for the same number of epochs results in nearly 30% accuracy on the test set. That is why we do not report results for this procedure in Fig. 2.
To answer the second question we estimate predictive distribution in two ways. The first way is to perform 100 accept/reject steps of the independent MH algorithm with the learned proposal q φ (θ) after each epoch, i.e. perform MH correction of the samples from the proposal. The second way is to take the same number of samples from q φ (θ) without MH correction. For both estimations of predictive distribution, we evaluate negative log-likelihood on the test set and compare them.
The MH correction of the learned proposal improves the estimation of predictive distribution for the variational inference (right plot of Fig. 3) but does not do so for the optimization of the acceptance rate lower bound (left plot of Fig. 3). This fact may be considered as an implicit evidence that our procedure learns the proposal distribution with higher acceptance rate.
SAMPLE-BASED SETTING
In the sample-based setting, we estimate density ratio using a discriminator. Hence we do not use the minibatching property (see subsection 3.1) of the obtained lower bound, and optimization problems for the acceptance rate and for the lower bound have the same efficiency in terms of using data. That is why our main goal in this setting is to compare the optimization of the acceptance rate and the optimization of the lower bound. Also, in this setting, we have Markov chain proposal that is interesting to compare with the independent proposal. Summing up, we formulate the following questions:
1. Does the optimization of the lower bound has any benefits compared to the direct optimization of the acceptance rate?
2. Do we have mixing issue while learning Markov chain proposal in practice?
3. Could we improve the visual quality of samples by applying the MH correction to the learned proposal?
We use DCGAN architecture for the proposal and discriminator (see Appendix E) and apply our algorithm to MNIST dataset. We consider two optimization problems: direct optimization of the acceptance rate and its lower bound. We also consider two ways to obtain samples from the approximation of the target distribution -use raw samples from the learned proposal, or perform the MH algorithm, where we use the learned discriminator for density ratio estimation.
In case of the independent proposal, we show that the MH correction at evaluation step allows to improve visual quality of samples -figures 4(a) and 4(b) for the direct optimization of acceptance rate, figures 4(c) and 4(d) for the optimization of its lower bound. Note that in Algorithm 2 we do not apply the independent MH algorithm during training. Potentially, one can use the MH algorithm considering any generative model as a proposal distribution and learning a discriminator for density ratio estimation. Also, for this proposal, we demonstrate the negligible difference in visual quality of samples obtained by the direct optimization of acceptance rate (see Fig. 4(a)) and by the optimization of the lower bound (see Fig. 4(c)). Figure 4: Samples from the learned independent proposal obtained via optimization: of acceptance rate (4(a), 4(b)) and its lower bound (4(c), 4(d)). In Fig. 4(b), 4(d) we show raw samples from the learned proposal. In Fig. 4(a), 4(c) we show the samples after applying the independent MH correction to the samples, using the learned discriminator for density ratio estimation.
In the case of the Markov chain proposal, we show that the direct optimization of acceptance rate results in slow mixing (see Fig. 5(a)) -most of the time the proposal generates samples from one of the modes (digits) and rarely switches to another mode. When we perform the optimization of the lower bound the proposal switches between modes frequently (see Fig. 5(b)). Figure 5: Samples from the chain obtained via the MH algorithm with the learned proposal and the learned discriminator for density ratio estimation. Fig. 5(a) corresponds to the direct optimization of the acceptance rate. Fig. 5(b) -to optimization of the lower bound on acceptance rate. Samples in the chain are obtained one by one from left to right from top to bottom.
To show that the learned proposal distribution has the Markov property rather than being totally independent, we show samples from the proposal conditioned on two different points in the dataset (see Fig. 6). The difference in samples from two these distributions ( Fig. 6(a), 6(a)) reflects the dependence on the conditioning.
Additionally, in Appendix G we present samples from the chain after 10000 accepted images and also samples from the chain that was initialized with noise.
DISCUSSION AND FUTURE WORK
This paper proposes to use the acceptance rate of the MH algorithm as the universal objective for learning to sample from some target distribution. We also propose the lower bound on the acceptance rate that should be preferred over the direct maximization of the acceptance rate in many cases. The proposed approach provides many ways of improvement by the combination with techniques from the recent developments in the field of MCMC, GANs, variational inference. For example
• The quality of a sampler in density-based setting could be improved with the normalizing flows (Rezende & Mohamed, 2015).
• We can use stochastic Hamiltonian Monte Carlo for the loss estimation in Algorithm 1.
• In sample-based setting one can use more advanced techniques of density ratio estimation.
Another interesting direction of further research is the design of the family of explicit Markov chain proposals resistant to the collapsing to the delta-function problem. Application of the MH algorithm to improve the quality of generative models also requires exhaustive further exploration and rigorous treatment.
A PROOF OF THEOREM 1
Remind that we have random variables
ξ = p(x )q(x | x ) p(x)q(x | x) , x ∼ p(x), x ∼ q(x | x)
and u ∼ Uniform[0, 1], and want to prove the following equalities.
E ξ min{1, ξ} = P{ξ > u} = 1 − 1 2 E ξ |ξ − 1|(17)
Equality E ξ min{1, ξ} = P{ξ > u} is obvious.
E ξ min{1, ξ} = ∞ 0 p ξ (x) min{1, x}dx = x≥1 p ξ (x)dx + x<1 p ξ (x)xdx (18) P{ξ > u} = ∞ 0 dxp ξ (x) x 0 [0 ≤ u ≤ 1]du = x≥1 p ξ (x)dx + x<1 p ξ (x)xdx(19)
Equality P{ξ > u} = 1 − 1 2 E ξ |ξ − 1| can be proofed as follows.
P{ξ > u} = 1 0 du +∞ u p ξ (x)dx = 1 0 (1 − F ξ (u))du = (20) = 1 − uF ξ (u) 1 0 − 1 0 up ξ (u)du = 1 − F ξ (1) + 1 0 up ξ (u)du,(21)
where F ξ (u) is CDF of random variable ξ. Note that F ξ (0) = 0 since ξ ∈ (0, +∞]. Eq. 21 can be rewritten in two ways.
1 − F ξ (1) + 1 0 up ξ (u)du = 1 + 1 0 (u − 1)p ξ (u)du = 1 − 1 0 |u − 1|p ξ (u)du(22)
To rewrite Eq. 21 in the second way we note that Eξ = 1. (23) Summing equations 22 and 23 results in the following formula
1−F ξ (1)+ 1 0 up ξ (u)du = +∞ 1 p ξ (u)du+1− +∞ 1 up ξ (u)du = 1− +∞ 1 |u−1|p ξ (u)duP{ξ > u} = 1 − 1 2 E ξ |ξ − 1|.(24)
Using the form of ξ we can rewrite the acceptance rate as
1 − 1 2 E ξ |ξ − 1| = 1 − TV p(x )q(x | x ) p(x)q(x | x) .(25)
B ACCEPTANCE RATE OF INDEPENDENT MH DEFINES SEMIMETRIC IN
DISTRIBUTION SPACE
In independent case we have ξ = p(x )q(x) p(x)q(x ) , x ∼ p(x), x ∼ q(x ) and we want to prove that E ξ |ξ − 1| is semimetric (or pseudo-metric) in space of distributions. For this appendix, we denote D(p, q) = E ξ |ξ − 1|. The first two axioms for metric obviously holds
But weaker inequality can be proved.
D(p, s) + D(q, s) = |p(x)s(y) − p(y)s(x)|dydx + |q(x)s(y) − q(y)s(x)|dydx = (27) = | p(x)s(y)q(z) a − p(y)s(x)q(z) b | + | q(x)s(y)p(z) c − q(y)s(x)p(z) d | dxdydz (28) D(p, s) + D(q, s) = |p(z)s(y)q(x) − p(y)s(z)q(x)|dxdydz+ (29) + |q(x)s(z)p(y) − q(z)s(x)p(y)|dxdydz ≥ q(x)s(y)p(z) c − p(y)s(x)q(z) b dxdydz (30) D(p, s) + D(q, s) = |p(z)s(x)q(y) − p(x)s(z)q(y)|dxdydz+ (31) + |q(y)s(z)p(x) − q(z)s(y)p(x)|dxdydz ≥ q(y)s(x)p(z) d − p(x)s(y)q(z) a dxdydz(32)
Summing up equations 28, 30 and 32 we obtain
3(D(p, s) + D(q, s)) ≥ dxdydz |a − b| + |c − d| + |c − b| + |d − a| ≥ 2 dxdydz|d − b| = (33) = 2 dxdydzs(x) q(y)p(z) − q(z)p(y) = 2D(p, q) (34) D(p, s) + D(q, s) ≥ 2 3 D(p, q)(35)
C ON COLLAPSING TO THE DELTA-FUNCTION
Firstly, let's consider the case of gaussian random-walk proposal q(x | x) = N (x | x, σ). The optimization problem for the acceptance rate takes the form
AR = dxdx p(x)N (x | x, σ) min 1, p(x ) p(x) → max σ .(36)
It is easy to see that we can obtain acceptance rate arbitrarly close to 1, taking σ small enough.
In the case of the independent proposal, we don't have the collapsing to the delta-function problem. In our work, it is important to show non-collapsing during optimization of the lower bound, but the same hold for the direct optimization of the acceptance rate. To provide such intuition we consider one-dimensional case where we have some target distribution p(x) and independent proposal q(x) = N (x | µ, σ). Choosing σ small enough, we approximate sampling with the independent MH as sampling on some finite support x ∈ [µ − a, µ + a]. For this support, we approximate the target distribution with the uniform distribution (see Fig. 7).
For such approximation, optimization of lower bound takes the form
KL(p(x) q(x)) + KL(q(x) p(x)) → min q (37) KL(Uniform[−a, a] N (x | 0, σ, −a, a)) + KL(N (x | 0, σ, −a, a) Uniform[−a, a]) → min σ(38)
Here N (x | 0, σ, −a, a) is truncated normal distribution. The first KL-divergence can be written as follows.
KL(Uniform[−a, a] N (x | 0, σ, −a, a)) = − 1 2a a −a dx log N (x | 0, σ, −a, a) − log 2a = (39) = − 1 2a − 2a log(σZ) − a log 2π − 1 2σ 2 2a 3 3 − log 2a =(40)
= log σ + log Z + a 2 6σ 2 + 1 2 log 2π − log 2a Here Z is normalization constant of truncated log normal distribution and
Z = Φ(a/σ) − Φ(−a/σ), where Φ(x) is CDF of standard normal distribution. The second KL-divergence is KL(N (x | 0, σ, −a, a) Uniform[−a, a]) = (42) = − 1 2 log(2πe) − log σ − log Z + a √ 2πσZ exp − a 2 2σ 2 + log 2a(43)
Summing up two KL-divergencies and taking derivative w.r.t. σ we obtain
∂ ∂σ KL(Uniform[−a, a] N (x | 0, σ, −a, a)) + KL(N (x | 0, σ, −a, a) Uniform[−a, a]) = (44) = − a 2 3σ 3 + a 3 √ 2πσ 4 Z exp − a 2 2σ 2 + a √ 2π exp − a 2 2σ 2 − 1 σ 2 Z − 1 σZ 2 −2a σ 2 √ 2π exp − a 2 2σ 2 = (45) = 1 a − a 3 3σ 3 + a 2 √ 2πσ 2 Z exp − a 2 2σ 2 a 2 σ 2 − 1 + 2a √ 2πσZ exp − a 2 2σ 2(46)
To show that the derivative of the lower bound w.r.t. σ is negative, we need to prove that the following inequality holds for positive x.
− 1 3 x 3 + x 2 √ 2π(Φ(x) − Φ(−x)) exp(−x 2 /2) x 2 −1+ 2x √ 2π(Φ(x) − Φ(−x)) exp(−x 2 /2) < 0, x > 0 (47) Defining φ(x) = x 0 e −t 2 /2 dt and noting that 2φ(x) = √ 2π(Φ(x) − Φ(−x)) we can rewrite in- equality 47 as 1 φ(x) e −x 2 /2 x 2 − 1 + 2xe −x 2 /2 φ(x) < 2x 3 , x > 0(48)
By the fundamental theorem of calculus, we have
xe −x 2 /2 = x 0 e −t 2 /2 (1 − t 2 )dt(49)
Hence,
φ(x) − xe −x 2 /2 = x 0 e −t 2 /2 t 2 dt ≥ e −x 2 /2 x 0 t 2 dt = e −x 2 /2 x 3 3(50)
Or equivalently,
φ(x) ≥ e −x 2 /2 x 3 + 3x 3(51)
Using this inequality twice, we obtain
e −x 2 /2 φ(x) ≤ 3 x(x 2 + 3)(52)
and
x 2 − 1 + xe −x 2 /2 φ(x) ≤ x 2 − 1 + 3 x 2 + 3 = x 2 (2 + x 2 ) x 2 + 3(53)
Thus, the target inequality can be verified by the verification of
3x(2 + x 2 ) (x 2 + 3) 2 ≤ 2x 3 .(54)
F INTUITION FOR BETTER GRADIENTS IN SAMPLE-BASED SETTING
In this section, we provide an intuition for sample-based setting that the loss function for lower bound has better gradients than the loss function for acceptance rate. Firstly, we remind that in the sample-based setting we use a discriminator for density ratio estimation.
D(x, x ) = p(x)q(x | x) p(x)q(x | x) + p(x )q(x | x )(55)
For this purpose we use the discriminator of special structure D(x, x ) = exp( D(x, x )) exp( D(x, x )) + exp( D(x , x)) = 1
1 + exp − ( D(x, x ) − D(x , x))(56)
We denote d(x, x ) = D(x, x ) − D(x , x) and consider the case when the discriminator can easily distinguish fake pairs from valid pairs. So D(x, x ) is close to 1 and d(x, x ) 0 for x ∼ p(x) and x ∼ q(x | x). To evaluate gradients we consider Monte Carlo estimations of each loss and take gradients w.r.t. x in order to obtain gradients for parameters of proposal distribution. We do not introduce the reparameterization trick to simplify the notation but assume it to be performed. For the optimization of the acceptance rate we have
dxdx p(x)q(x | x) p(x )q(x | x ) p(x)q(x | x) − 1 p(x )q(x | x ) p(x)q(x | x) − 1 (57) L AR = p(x )q(x | x ) p(x)q(x | x) − 1 ≈ 1 − D(x, x ) D(x, x ) − 1 (58) ∂L AR ∂x = 1 D 2 (x, x ) ∂D(x, x ) ∂x = exp(−d(x, x )) ∂d(x, x ) ∂x(59)
While for the optimization of the lower bound we have
dxdx p(x)q(x | x) log p(x)q(x | x) p(x )q(x | x ) log p(x)q(x | x) p(x )q(x | x )(60)L LB = − log p(x )q(x | x ) p(x)q(x | x) ≈ − log 1 − D(x, x ) D(x, x )(61)∂L LB ∂x = 1 (1 − D(x, x ))D(x, x ) ∂D(x, x ) ∂x = ∂d(x, x ) ∂x(62)
Now we compare Eq. 59 and Eq. 62. We see that in case of strong discriminator we have vanishing gradients in Eq. 59 due to exp(−d(x, x )), while it is not the case for Eq. 62.
G ADDITIONAL FIGURES FOR MARKOV CHAIN PROPOSALS IN
SAMPLE-BASED SETTING
In this section, we show additional figures for Markov chain proposals. In Fig. 8 we show samples from the chain that was initialized by the noise. In Fig. 9 we show samples from the chain after 10000 accepted samples. To obtain samples we use the MH algorithm with the learned proposal and the learned discriminator for density ratio estimation. In Fig. 5(a) we use proposal and discriminator that are learned during optimization of acceptance rate. In Fig. 5(b) we use proposal and discriminator that are learned during the optimization of the acceptance rate lower bound. Samples in the chain are obtained one by one from left to right from top to bottom starting with noise (first image in the figure). Figure 9: Samples from the chain after 10000 accepted samples. To obtain samples we use the MH algorithm with the learned proposal and the learned discriminator for density ratio estimation. In Fig. 5(a) we use proposal and discriminator that are learned during optimization of acceptance rate. In Fig. 5(b) we use proposal and discriminator that are learned during the optimization of the acceptance rate lower bound. Samples in chain are obtained one by one from left to right from top to bottom.
Figure 1 :
1Level-plots in parameter space for the toy problem. Left: level-plot for the acceptance rate of the MH algorithm. Right: level-plot for the lower bound of the acceptance rate.
Figure 2 :
2Negative log-likelihood (left) and accuracy (right) on test set of MNIST dataset for variational inference (blue lines) and the optimization of the acceptance rate lower bound (orange lines).
Figure 3 :
3Test negative log-likelihood for two approximations of the predictive distribution based on samples: from proposal distribution nll q and after MH correction nll M H . Left figure corresponds to the optimization of the acceptance rate lower bound, right figure corresponds to the variational inference.
Figure 6 :
6Samples from the proposal distribution and conditioned on the digit in the red box. The proposal was optimized according to the lower bound on the acceptance rate.
1 .
1D(p, q) = 0 ⇐⇒ p = q 2. D(p, q) = D(q, p) There is an example when triangle inequality does not hold. For distributions p = Uniform[0, 2/3], q = Uniform[1/3, 1], s = Uniform[0, 1] D(p, s) + D(q, s) = 4 3 < 3 2 = D(p, q).
Figure 7 :
7In this figure we show schematic view of approximation of of target distribution with uniform distribution. Red bounding box is made bigger for better comprehension.
Figure 8 :
8Samples from the chain initialized with noise.
Table 1 :
1Constraints for two settings of learning sampling algorithmsSetting
Target distribution
Proposal distribution Density Ratio
E ARCHITECTURES OF NEURAL NETWORKS IN SAMPLE-BASED SETTINGIn sample-based setting we use usual DCGAN architecture for independent proposal distribution And a little be modified acrhitecture for Markov chain proposal distribution For both proposals we use the proposed discriminator with the following architecture. yx = torch.cat([y, x], dim=1) for module in self.children(): yx = module(yx) return F.softmax(torch.cat([xy, yx], dim=1), dim=1)Thus, we show that partial derivative of our lower bound w.r.t. σ is negative. Using that knowledge
we can improve our loss by taking a bigger value of σ. Hence, such proposal does not collapse to
delta-function.
D ARCHITECTURE OF THE REDUCED LENET-5
class LeNet5(BayesNet):
def __init__(self):
super(LeNet5, self).__init__()
self.num_classes = 10
self.conv1 = layers.ConvFFG(1, 10, 5, padding=0)
self.relu1 = nn.ReLU(True)
self.pool1 = nn.MaxPool2d(2, padding=0)
self.conv2 = layers.ConvFFG(10, 20, 5, padding=0)
self.relu2 = nn.ReLU(True)
self.pool2 = nn.MaxPool2d(2, padding=0)
self.flatten = layers.ViewLayer([20 * 4 * 4])
self.dense1 = layers.LinearFFG(20 * 4 * 4, 10)
self.relu3 = nn.ReLU()
self.dense2 = layers.LinearFFG(10, 10)
class Generator(layers.ModuleWrapper):
def __init__(self):
super(Generator, self).__init__()
self.fc = nn.Linear(100, 128 * 8 * 8)
self.unflatten = layers.ViewLayer([128, 8, 8])
self.in1 = nn.InstanceNorm2d(128)
self.us1 = nn.ConvTranspose2d(128, 128, 2, 2)
self.conv1 = nn.Conv2d(128, 128, 3, stride=1, padding=1)
self.in2 = nn.InstanceNorm2d(128, 0.8)
self.lrelu1 = nn.LeakyReLU(0.2, inplace=True)
self.us2 = nn.ConvTranspose2d(128, 128, 2, 2)
self.conv2 = nn.Conv2d(128, 64, 3, stride=1, padding=1)
self.in3 = nn.InstanceNorm2d(64, 0.8)
self.lrelu2 = nn.LeakyReLU(0.2, inplace=True)
self.conv3 = nn.Conv2d(64, 1, 3, stride=1, padding=1)
self.tanh = nn.Tanh()
class Generator(layers.ModuleWrapper):
def __init__(self):
super(Generator, self).__init__()
self.d_conv1 = nn.Conv2d(1, 16, 5, stride=2, padding=2)
self.d_lrelu1 = nn.LeakyReLU(0.2, inplace=True)
self.d_do1 = nn.Dropout2d(0.5)
self.d_conv2 = nn.Conv2d(16, 4, 5, stride=2, padding=2)
self.d_in2 = nn.InstanceNorm2d(4, 0.8)
self.d_lrelu2 = nn.LeakyReLU(0.2, inplace=True)
self.d_do2 = nn.Dropout2d(0.5)
self.b_view = layers.ViewLayer([4 * 8 * 8])
self.b_fc = nn.Linear(4 * 8 * 8, 256)
self.b_lrelu = nn.LeakyReLU(0.2, inplace=True)
self.b_fc = nn.Linear(256, 128 * 8 * 8)
self.b_do = layers.AdditiveNoise(0.5)
self.e_unflatten = layers.ViewLayer([128, 8, 8])
self.e_in1 = nn.InstanceNorm2d(128, 0.8)
self.e_us1 = nn.ConvTranspose2d(128, 128, 2, 2)
self.e_conv1 = nn.Conv2d(128, 128, 3, stride=1, padding=1)
self.e_in2 = nn.InstanceNorm2d(128, 0.8)
self.e_lrelu1 = nn.LeakyReLU(0.2, inplace=True)
self.e_us2 = nn.ConvTranspose2d(128, 128, 2, 2)
self.e_conv2 = nn.Conv2d(128, 64, 3, stride=1, padding=1)
self.e_in3 = nn.InstanceNorm2d(64, 0.8)
self.e_lrelu2 = nn.LeakyReLU(0.2, inplace=True)
self.e_conv3 = nn.Conv2d(64, 1, 3, stride=1, padding=1)
self.e_tanh = nn.Tanh()
class Discriminator(nn.Module):
def __init__(self):
super(Discriminator, self).__init__()
self.conv1 = nn.Conv2d(2, 16, 3, 2, 1)
self.lrelu1 = nn.LeakyReLU(0.2, inplace=True)
self.conv2 = nn.Conv2d(16, 32, 3, 2, 1)
self.lrelu2 = nn.LeakyReLU(0.2, inplace=True)
self.in2 = nn.InstanceNorm2d(32, 0.8)
self.conv3 = nn.Conv2d(32, 64, 3, 2, 1)
self.lrelu3 = nn.LeakyReLU(0.2, inplace=True)
self.in3 = nn.InstanceNorm2d(64, 0.8)
self.conv4 = nn.Conv2d(64, 128, 3, 2, 1)
self.lrelu4 = nn.LeakyReLU(0.2, inplace=True)
self.in4 = nn.InstanceNorm2d(128, 0.8)
self.flatten = layers.ViewLayer([128 * 2 * 2])
self.fc = nn.Linear(128 * 2 * 2, 1)
def forward(self, x, y):
xy = torch.cat([x, y], dim=1)
for module in self.children():
xy = module(xy)
. Martin Arjovsky, Soumith Chintala, Léon Bottou, arXiv:1701.07875Wasserstein gan. arXiv preprintMartin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein gan. arXiv preprint arXiv:1701.07875, 2017.
An efficient minibatch acceptance test for metropolis-hastings. Haoyu Chen, Daniel Seita, Xinlei Pan, John Canny, arXiv:1610.06848arXiv preprintHaoyu Chen, Daniel Seita, Xinlei Pan, and John Canny. An efficient minibatch acceptance test for metropolis-hastings. arXiv preprint arXiv:1610.06848, 2016.
Stochastic gradient hamiltonian monte carlo. Tianqi Chen, Emily Fox, Carlos Guestrin, International Conference on Machine Learning. Tianqi Chen, Emily Fox, and Carlos Guestrin. Stochastic gradient hamiltonian monte carlo. In International Conference on Machine Learning, pp. 1683-1691, 2014.
Uncertainty in deep learning. Yarin Gal, University of CambridgePhD thesisYarin Gal. Uncertainty in deep learning. PhD thesis, University of Cambridge, 2016.
Yarin Gal, Zoubin Ghahramani, arXiv:1506.02158Bayesian convolutional neural networks with bernoulli approximate variational inference. arXiv preprintYarin Gal and Zoubin Ghahramani. Bayesian convolutional neural networks with bernoulli approx- imate variational inference. arXiv preprint arXiv:1506.02158, 2015.
Dropout as a bayesian approximation: Representing model uncertainty in deep learning. Yarin Gal, Zoubin Ghahramani, international conference on machine learning. Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In international conference on machine learning, pp. 1050-1059, 2016.
Adaptive independent metropolis-hastings by fast estimation of mixtures of normals. Paolo Giordani, Robert Kohn, Journal of Computational and Graphical Statistics. 192Paolo Giordani and Robert Kohn. Adaptive independent metropolis-hastings by fast estimation of mixtures of normals. Journal of Computational and Graphical Statistics, 19(2):243-259, 2010.
Generative adversarial nets. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio, Advances in neural information processing systems. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural infor- mation processing systems, pp. 2672-2680, 2014.
Variational rejection sampling. Aditya Grover, Ramki Gummadi, Miguel Lazaro-Gredilla, Dale Schuurmans, Stefano Ermon, Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics. Amos Storkey and Fernando Perez-Cruzthe Twenty-First International Conference on Artificial Intelligence and StatisticsPlaya Blanca, Lanzarote, Canary IslandsProceedings of Machine Learning ResearchAditya Grover, Ramki Gummadi, Miguel Lazaro-Gredilla, Dale Schuurmans, and Stefano Ermon. Variational rejection sampling. In Amos Storkey and Fernando Perez-Cruz (eds.), Proceed- ings of the Twenty-First International Conference on Artificial Intelligence and Statistics, vol- ume 84 of Proceedings of Machine Learning Research, pp. 823-832, Playa Blanca, Lanzarote, Canary Islands, 09-11 Apr 2018. PMLR. URL http://proceedings.mlr.press/v84/ grover18a.html.
Stochastic variational inference. D Matthew, Hoffman, M David, Chong Blei, John Wang, Paisley, The Journal of Machine Learning Research. 141Matthew D Hoffman, David M Blei, Chong Wang, and John Paisley. Stochastic variational infer- ence. The Journal of Machine Learning Research, 14(1):1303-1347, 2013.
Adaptive independent metropolis-hastings. Lars Holden, Ragnar Hauge, Marit Holden, The Annals of Applied Probability. 191Lars Holden, Ragnar Hauge, Marit Holden, et al. Adaptive independent metropolis-hastings. The Annals of Applied Probability, 19(1):395-413, 2009.
Auto-encoding variational bayes. ICLR. P Diederik, Max Kingma, Welling, Diederik P Kingma and Max Welling. Auto-encoding variational bayes. ICLR, 2014.
Variational dropout and the local reparameterization trick. P Diederik, Tim Kingma, Max Salimans, Welling, Advances in Neural Information Processing Systems. Diederik P Kingma, Tim Salimans, and Max Welling. Variational dropout and the local reparame- terization trick. In Advances in Neural Information Processing Systems, pp. 2575-2583, 2015.
Austerity in mcmc land: Cutting the metropolishastings budget. Anoop Korattikara, Yutian Chen, Max Welling, International Conference on Machine Learning. Anoop Korattikara, Yutian Chen, and Max Welling. Austerity in mcmc land: Cutting the metropolis- hastings budget. In International Conference on Machine Learning, pp. 181-189, 2014.
Lars Mescheder, Sebastian Nowozin, Andreas Geiger, arXiv:1701.04722Adversarial variational bayes: Unifying variational autoencoders and generative adversarial networks. arXiv preprintLars Mescheder, Sebastian Nowozin, and Andreas Geiger. Adversarial variational bayes: Unifying variational autoencoders and generative adversarial networks. arXiv preprint arXiv:1701.04722, 2017.
Dmitry Molchanov, Arsenii Ashukha, Dmitry Vetrov, arXiv:1701.05369Variational dropout sparsifies deep neural networks. arXiv preprintDmitry Molchanov, Arsenii Ashukha, and Dmitry Vetrov. Variational dropout sparsifies deep neural networks. arXiv preprint arXiv:1701.05369, 2017.
f-gan: Training generative neural samplers using variational divergence minimization. Sebastian Nowozin, Botond Cseke, Ryota Tomioka, Advances in Neural Information Processing Systems. Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-gan: Training generative neural sam- plers using variational divergence minimization. In Advances in Neural Information Processing Systems, pp. 271-279, 2016.
Danilo Jimenez Rezende, Shakir Mohamed, arXiv:1505.05770Variational inference with normalizing flows. arXiv preprintDanilo Jimenez Rezende and Shakir Mohamed. Variational inference with normalizing flows. arXiv preprint arXiv:1505.05770, 2015.
Stochastic backpropagation and approximate inference in deep generative models. Danilo Jimenez Rezende, Shakir Mohamed, Daan Wierstra, ICMLDanilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. ICML, 2014.
Weak convergence and optimal scaling of random walk metropolis algorithms. The annals of applied probability. O Gareth, Andrew Roberts, Gelman, Walter R Gilks, 7Gareth O Roberts, Andrew Gelman, Walter R Gilks, et al. Weak convergence and optimal scaling of random walk metropolis algorithms. The annals of applied probability, 7(1):110-120, 1997.
Optimal scaling for various metropolis-hastings algorithms. O Gareth, Jeffrey S Roberts, Rosenthal, Statistical science. 164Gareth O Roberts, Jeffrey S Rosenthal, et al. Optimal scaling for various metropolis-hastings algo- rithms. Statistical science, 16(4):351-367, 2001.
A-nice-mc: Adversarial training for mcmc. Jiaming Song, Shengjia Zhao, Stefano Ermon, Advances in Neural Information Processing Systems. Jiaming Song, Shengjia Zhao, and Stefano Ermon. A-nice-mc: Adversarial training for mcmc. In Advances in Neural Information Processing Systems, pp. 5140-5150, 2017.
Doubly stochastic variational bayes for non-conjugate inference. Michalis Titsias, Miguel Lázaro-Gredilla, International Conference on Machine Learning. Michalis Titsias and Miguel Lázaro-Gredilla. Doubly stochastic variational bayes for non-conjugate inference. In International Conference on Machine Learning, pp. 1971-1979, 2014.
Bayesian learning via stochastic gradient langevin dynamics. Max Welling, Yee W Teh, Proceedings of the 28th International Conference on Machine Learning (ICML-11). the 28th International Conference on Machine Learning (ICML-11)Max Welling and Yee W Teh. Bayesian learning via stochastic gradient langevin dynamics. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 681-688, 2011.
|
[] |
[
"An open-source simulation package for power electronics education",
"An open-source simulation package for power electronics education"
] |
[
"Mahesh B Patil \nDepartment of Electrical Engineering\nIndian Institute of Technology\nBombay\n",
"V V S Pavan ",
"Kumar Hari \nDepartment of Energy Science and Engineering\nIndian Institute of Technology\nBombay\n",
"Ruchita D Korgaonkar \nDepartment of Electrical Engineering\nIndian Institute of Technology\nBombay\n",
"Kumar Appaiah \nDepartment of Electrical Engineering\nIndian Institute of Technology\nBombay\n"
] |
[
"Department of Electrical Engineering\nIndian Institute of Technology\nBombay",
"Department of Energy Science and Engineering\nIndian Institute of Technology\nBombay",
"Department of Electrical Engineering\nIndian Institute of Technology\nBombay",
"Department of Electrical Engineering\nIndian Institute of Technology\nBombay"
] |
[] |
Extension of the open-source simulation package GSEIM [1] for power electronics applications is presented. Recent developments in GSEIM, including those oriented specifically towards power electronic circuits, are described. Some examples of electrical element templates, which form a part of the GSEIM library, are discussed.Representative simulation examples in power electronics are presented to bring out important features of the simulator. Advantages of GSEIM for educational purposes are discussed. Finally, plans regarding future developments in GSEIM are presented.Recent developments in GSEIMThe currently available GSEIM program [8] allows the user to enter the schematic diagram of the system of interest using a graphical user interface (GUI), specify component values, run simulation, and plot results interactively. In addition, it allows the user to create new elements (blocks) either in terms of equations or 1 arXiv:2204.12924v1 [eess.SY]
|
10.48550/arxiv.2204.12924
|
[
"https://arxiv.org/pdf/2204.12924v1.pdf"
] | 248,405,614 |
2204.12924
|
5a036466d064abf7101786fe9a36df4f009c0cd2
|
An open-source simulation package for power electronics education
April 28, 2022
Mahesh B Patil
Department of Electrical Engineering
Indian Institute of Technology
Bombay
V V S Pavan
Kumar Hari
Department of Energy Science and Engineering
Indian Institute of Technology
Bombay
Ruchita D Korgaonkar
Department of Electrical Engineering
Indian Institute of Technology
Bombay
Kumar Appaiah
Department of Electrical Engineering
Indian Institute of Technology
Bombay
An open-source simulation package for power electronics education
April 28, 2022
Extension of the open-source simulation package GSEIM [1] for power electronics applications is presented. Recent developments in GSEIM, including those oriented specifically towards power electronic circuits, are described. Some examples of electrical element templates, which form a part of the GSEIM library, are discussed.Representative simulation examples in power electronics are presented to bring out important features of the simulator. Advantages of GSEIM for educational purposes are discussed. Finally, plans regarding future developments in GSEIM are presented.Recent developments in GSEIMThe currently available GSEIM program [8] allows the user to enter the schematic diagram of the system of interest using a graphical user interface (GUI), specify component values, run simulation, and plot results interactively. In addition, it allows the user to create new elements (blocks) either in terms of equations or 1 arXiv:2204.12924v1 [eess.SY]
Introduction
Simulation can be a very effective tool for improving students' understanding of fundamental concepts since it allows verification of the concepts as well as quick exploration of several "what-if" scenarios. In the context of power electronics, for example, simulation allows the student to view the effect of changing a duty ratio or an inductance value on the voltage and current waveforms in the circuit under discussion, thus reinforcing the concepts being taught in class. Several commercial simulation tools are currently being used for teaching power electronics, including PSIM [2], PSCAD [3], Matlab Simulink/Simscape [4], and PLECS [5]. While academic versions of these packages at a lower cost or free student versions (with limitations) are generally available, open-source options are certainly advantageous, especially for engineering colleges in developing countries.
There are currently few open-source options for power electronics.
Of those, Openmodelica [6] is based on the hardware description language Modelica, while GeckoCIRCUIT [7] is a java-based platform. Open-source tools are currently not being used for power electronics education on a large scale probably because of attractive features such as ease of use and customer support associated with commercial packages.
An open-source simulator GSEIM was recently reported [1]. In the first version, GSEIM was aimed at simulation of power electronic systems which can be represented by a flow-graph, e.g., V/f control of an induction motor. Subsequently, GSEIM has been extended, both in terms of GUI features and numerical engine, to enable simulation of a number of power electronic circuits covered in typical undergraduate and postgraduate courses. It is the purpose of this paper to report the current status of the GSEIM package and point out its potential as an open-source tool for power electronics education.
The paper is organised as follows. In Sec. 2, recent developments in GSEIM are reported. The most important development, viz., addition of electrical elements in the form of templates, is described in Sec. 3 as hierarchical blocks made up of elements already available in the library. Applications are limited to power electronic systems which can be represented as a flow-graph, with each element having input and output nodes. The primary objective of the new GSEIM version presented in this paper is to allow simulation of electrical circuits. For convenience, we will call the new GSEIM version GSEIM-Electrical (GSEIM-E) and the original GSEIM program described in [1] as GSEIM-Flowchart (GSEIM-F). In the following, we summarise the salient features of GSEIM-E.
A. Numerical engine: The numerical engine (C++) of the GSEIM-F program was extended to handle electrical elements. The modified nodal analysis (MNA) approach, along with the Newton-Raphson method for nonlinear circuits, was implemented.
When the system being simulated has electrical elements, only implicit methods -backward Euler or trapezoidal method with constant or variable time steps -are allowed for numerical integration. The details of these techniques can be found in [9] and references therein.
B. Steady-state waveform (SSW) analysis: In several converter applications, the steady-state waveform is of interest. In priniciple, transient simulation performed for a sufficiently large number of cycles can yield the steady-state solution. However, this process can take too long if the circuit time constants are large. The Newton-Raphson time-domain steady-state waveform (NRTDSSW) method described in [10] is implemented in GSEIM-E for directly obtaining the steady-state solution. An example would be presented in Sec. 4.
C.
Rectilinear wiring and electrical nodes: In GSEIM-F, the GUI was built by making suitable changes in the GNURadio [11] GUI, and like its predecessor, the GSEIM-F GUI allowed only curved wires (using splines). For electrical circuits, rectilinear wires were incorporated in GSEIM-E, and electrical nodes (ports) were added. D. Element symbols: In GSEIM-F, elements (blocks) were displayed using rectangles, with the type of the element appearing inside the rectangle. In GSEIM-E, circuit symbols, such as resistor and capacitor, are also incorporated. A symbol is rendered in the GUI using a python file associated with that symbol. The user can add a new symbol by simply adding a python file with a suitable name, without making any changes in the GUI code. Fig. 1 shows the python file associated with the capacitor symbol. The code between #begin cord and #end cord prepares the points involved in the symbol, and the code between #begin draw and #end draw does the rendering.
# begin_coord delx = 60 dely = 24 k_width = 0.06 k_height = 0.5 dely0 = int(round(0.5*dely)) dely1 = int(round(k_height*dely)) dely2a = dely0 -dely1 dely2b = dely0 + dely1 In some aspects, GSEIM is similar to SEQUEL [9].
delx1 = int(round(0.5*delx)) delx2 = int(round(k_width*delx)) delx2a = delx1 -delx2 delx2b = delx1 + delx2 c_ = [] c_.append((0, dely0)) # 0 c_.append((delx2a, dely0)) # 1 c_.append((delx2a, dely2a)) # 2 c_.append((delx2a, dely2b)) # 3 c_.append((delx2b, dely2a)) # 4 c_.append((delx2b, dely2b)) # 5 c_.append((delx2b, dely0)) # 6 c_.append((delx, dely0)) # 7 # end_coord #
However, the organisation of GSEIM is significantly different. In particular, the SEQUEL library involves both basic and compound elements, whereas the GSEIM library involves only basic elements, the compound elements being treated through the hierarchical block facility provided by the GSEIM GUI. The other important difference is that GSEIM is oriented mainly toward power electronics while SEQUEL is more general.
Electrical element templates
As discussed in [1], the equations governing the behaviour of an element is incorporated in GSEIM in the form of "templates." Some of the flow-graph type element templates have been discussed in [1]. Here, we look at a few electrical basic element (ebe) templates.
A.
Resistor: Fig. 2 shows the resistor template.
The terminal currents are given by
i p = v p − v n R , i n = − v p − v n R .
The derivatives of these functions ∂i p ∂v p , ∂i p ∂v n , etc. are constants, and that is indicated by the Jacobian statement. The nodes and the rparms statements specify the nodes and real parameters of the element, respectively. The outparms statement specifies the quantities made available by this element to the user for plotting.
The main program expects three types of functions to be supplied by an electrical element template.
(a) Functions f 1 , f 2 , · · · are related to terminal currents in transient simulation. If the element has N nodes, the first N of these equations give the node currents, while the remaining equations (if any) are auxiliary equations.
(b) Functions g 1 , g 2 , · · · are related to state variables, as we will see with respect to the capacitor template.
(c) Functions h 1 , h 2 , · · · are related to "start-up" simulation, which involves solving the circuit equations while holding state variables such as capacitor voltages and inductor currents constant, at some specified values [9]. For a resistor, there are no state variables, and therefore the f and h equations are identical.
The statement n f=2 conveys that there are two f functions for this element. The statements starting with f 1: and f 2: indicate which variables these ebe name=r Jacobian: constant nodes: p n rparms: r=1.0 g=0 k_scale=1 Fig. 3 are used to implement these equations. In start-up simulation, the capacitor behaves like a dc voltage source, satisfying the equations, i p = i 1 , i n = −i 1 , and v p − v n = V 0 , where the current i 1 is an auxiliary variable, and V 0 is a start-up parameter. Implementation of these equations is shown in the start-up part of the capacitor template (see Fig. 4) where the variable cur p is used to denote i 1 .
outparms: i v n_f=2 f_1: v(p) v(n) f_2: v(p) v(n) n_g=0 n_h=2 h_1: v(p) v(n) h_2: v(p) v(n) C
Simulation examples
We now present a few simulation examples to demonstrate the capabilities of GSEIM-E. As explained in [1], simulation of a circuit with GSEIM-F involves drawing the circuit, assigning component values, setting output variables for plotting, and preparing an appropriate "solve block" to specify parameters related to a specific simulation. This procedure remains the same for GSEIM-E except for minor changes to handle electrical elements. The details would be explained in the on-line GSEIM-E documentation, currently under preparation.
The circuit schematics shown in this section are taken directly from the GSEIM-E GUI, by exporting them to pdf files. Apart from the pdf format, the GSEIM-E GUI, like its predecessor GNURadio, also allows circuit schematics to be exported in svg and png formats. This feature is useful in preparing presentations or reports.
A. V/ f control of an induction motor: This example has been described in [1]. Here, we show only the schematic diagram as it appears in the GSEIM-E GUI in order to demonstrate some of the new features of the GUI, viz, rectilinear wiring and the use of element symbols. B. Buck converter: The buck converter circuit, shown in Fig. 6 was simulated for different values of duty ratio D and inductance L. In each case, i L = 0 A and v C = 0 V is taken as the starting point. The output voltage V o (t) is plotted in Fig. 7 for three cases. As seen from the figure, the output voltage takes some time to settle to its steady-state value. Typically, when teaching a power electronics course, the steady-state situation is of interest, and not the trajectory of the circuit to the steady state. Following the transient simulation approach for this circuit -and also several other converter circuitsis therefore wasteful. From Fig. 7, we see that, for the component values specified, the circuit takes about 10 msec or 250 cycles to reach the steady state, whereas only the last one cycle is of interest. Furthermore, the time taken to reach the steady state depends on the parameter values and is generally not know a priori. In the classroom, if a teacher wants to demonstrate, for example, continuous and discontinuous conduction by changing L or C or D, transient simulation is clearly not a good option, and a method to directly obtain the steady-state solution is desirable.
GSEIM-E incorporates the Newton-Raphson time-domain steady-state waveform (NRTDSSW) approach described in [10] for steady-state waveform (SSW) computation. GSEIM-E SSW results for the inductor current are shown in Fig. 8 for the same parameter sets as in Fig. 7.
To summarise, the SSW approach offers two major advantages over transient simulation: (a) it is much faster, (b) it does not require the user to guess the number of cycles required to reach the steady state. We expect the SSW feature of GSEIM-E to become one of its most useful features for power electronics education. To the authors' best knowledge, SEQUEL [9] and PLECS [5] are the only other simulation packages with direct SSW computation capability in the context of power electronic circuits.
C. Neutral point clamped inverter: Fig. 9 shows the schematic diagram of a neutral clamped inverter. The clock generation blocks and the switch-diode blocks are implemented using subcircuits (hierarchical blocks). For this circuit, the Fourier spectrum of the load current is of interest. Fig. 10 shows the spectrum for the load current, as obtained with GSEIM's plotting GUI.
The above examples, together with the machine control examples presented in [1], represent the current focus and scope of GSEIM. We have been able to perform simulation speed comparison for a set of problems involving electrical machines. We found GSEIM to be 2 to 5 times faster than Simulink in this study [12]. A more detailed comparison with Simulink/Simscape and other commonly used commercial packages is planned; the results will be presented elsewhere.
GSEIM as an open-source package
The examples presented in Sec. 4 bring out the potential of GSEIM in teaching power electronics courses.
Furthermore, the open-source nature of GSEIM is advantageous in several ways:
A. Vendors of commercial packages are constrained from revealing several implementation details. As a result, their documentation is mostly about "know-how" rather than "know-why". Creators of open-source packages are not limited by the need for intellectual property protection and can therefore afford to make their documentation richer and academically far more rewarding for the users.
As an example, consider the thyristor block from Simscape [13] as shown in Fig. 11. The purpose served by the inductor L on is not explained. Apart from that, consider the following statements in the documentation for this block: elements, element symbols, hierarchical blocks (subcircuits), simulation examples as well as documentation, with the hope that the package will grow into a valuable resource for power electronics education.
C. Commercial packages often tend to hide, apart from implementation details, even data files created by the package, thus forcing the user to use a commercial tool -generally a part of the same package -for viewing the results. Open-source packages on the other hand are generally designed keeping in mind free exchange of the output files generated by the package in ASCII or csv format, for example. GSEIM creates output files in ASCII format, and they can be viewed not only with the plotting GUI provided with GSEIM, but also with any other plotting program including open-source programs like gnuplot and matplotlib. Figure 9: Schematic diagram of neutral point clamped inverter. particularly in developing countries, cannot afford licenses for commercial packages. As a consequence, teachers are unable to assign home-work exercises involving simulation, and students are deprived of the precious learning experience offered by simulation. Open-source packages completely remove this constraint since no licenses are involved. On the other hand, if two commercial packages are combined, the user has to pay for each of them, e.g., see [14].
Figure 1 :
1Python file associated with the capacitor symbol.E. Plotting: The following post-processing features have been added to the plotting GUI: (i) Average and rms values (ii) Fourier spectrum and total harmonic distortion (THD)
Figure 2 :
2Resistor template (partial).functions depend on. The main program passes two objects to the template: (a) X which carries information about the specific element being called, and (b) G which carries global information such as the current time point. By checking the flags of G, the template computes appropriate quantities, and passes them to the main program by assigning suitable variables of X. Some flags of G are listed below.(a) i one time parms:compute "one-time" Capacitor: A capacitor involves a time derivative and therefore calls for a very different treatment as compared to a resistor. The terminal currents can be written as i p = dQ p dt , i n = dQ m dt , where Q p = C (v p − v n ) and Q m = −Q p are state variables. The functions f 1 , f 2 , g 1 , g 2 in the capacitor template shown in
Figure 3 :
3Capacitor template (partial).
Figure 4 :
4Start-up section of the capacitor template.
(a ) "
)The Inductance Lon parameter is normally set to 0 except when the Resistance Ron parameter is set to 0." (b) "The Thyristor block cannot be connected in series with an inductor, a current source, or an open circuit, unless its snubber circuit is in use."From the user's perspective, these statements appear esoteric and create the (wrong) impression that circuit simulation is very complex. On the other hand, if the reasons behind these limitations were explained, it would have led to a far better understanding of the simulation process.
Figure 5 :
5Schematic diagram for V/ f control of an induction motor.
Figure 6 :Figure 7 :Figure 8 :
678Schematic Output voltage versus time for the buck converter of Fig. 6. (a) D = 0.4, L = 600 µH, (b) D = 0.6, L = 600 µH, (c) D = 0.6, L = 200 µH. B. Contributions from users to library elements and simulation examples can be easily incorporated in an open-source package like GSEIM. Indeed, the basic philosophy behind open-source packages is user involvement in not only using the package but also its continuous evolution. In the development of GSEIM, special care has been taken in order to allow users' contribution in terms of new basic Steady-state inductor current versus time for the buck converter of Fig. 6. (a) D = 0.4, L = 600 µH, (b) D = 0.6, L = 600 µH, (c) D = 0.6, L = 200 µH.
D.
Students in several engineering colleges,
Figure 10 :
10Fourier spectrum for neutral point clamped inverter ofFig. 9.
Figure 11 :
11E. Open-source packages can take advantage of other open-source tools such as compilers, libraries, and Thyristor block from Simscape[13].plotting programs. This can lead to improved capabilities, performance, and implementation.
F.
Open-source packages can be combined with other open-source packages to create new capabilities at no cost to the user. For example, GSEIM can be easily called by an optimisation package for circuit design.
education, has been presented in this paper. The organisation and features of GSEIM have been described. Incorporation of new elements in the GSEIM library has been discussed with the help of specific examples. A few simulation examples have been considered to illustrate the potential of GSEIM in power electronics education. Future plans for GSEIM include the following. (a) manual preparation and uploading of the revised GSEIM version on github [8] (b) video tutorials for new users (c) course material development based on GSEIM simulation examples (d) additional features such as "bus" connections, real-time plotting of simulation results.
, with the help of examples. In Sec. 4, simulation examples are presented to bring out the scope and capabilities of the program. Advantages of GSEIM as an open-source package have been pointed out in Sec. 5. Finally, in Sec. 6, conclusions of this work are summarised, and future developments envisaged in GSEIM are listed.
X.val_nd[nnd_p]; vn = X.val_nd[nnd_n]; if (G.flags[G.i_trns]) { g = X.rprm[nr_g]; if (G.flags[G.i_function]) { X.f[nf_1] = g*(vp-vn); X.f[nf_2] = -X.f[nf_1]; } if (G.flags[G.i_jacobian]) {:
variables:
double vp,vn,r_eff;
source:
if (G.flags[G.i_one_time_parms]) {
r = X.rprm[nr_r];
k_scale = X.rprm[nr_k_scale];
r_eff = r*k_scale;
g = 1.0e0/r_eff;
X.rprm[nr_g] = g;
}
vp = J.dfdv[nf_1][nnd_p] = g;
J.dfdv[nf_1][nnd_n] = -g;
J.dfdv[nf_2][nnd_p] = -g;
J.dfdv[nf_2][nnd_n] = g;
}
}
if (G.flags[G.i_outvar]) {
g = X.rprm[nr_g];
X.outprm[no_v] = vp-vn;
X.outprm[no_i] = g*(vp-vn);
}
endC
GSEIM: a general-purpose simulator with explicit and implicit methods. M B Patil, R D Korgaonkar, K Appaiah, Sādhanā. 464M. B. Patil, R. D. Korgaonkar, and K. Appaiah, "GSEIM: a general-purpose simulator with explicit and implicit methods," Sādhanā, vol. 46, no. 4, pp. 1-13, 2021.
. Psim, PSIM. [Online]. Available: https://powersimtech. com/products/psim/capabilities-applications/
. Pscad, PSCAD. [Online]. Available: https://www.pscad. com/
Simulink -Simulation and Model-Based Design. Simulink -Simulation and Model-Based Design. [Online]. Available: https://in.mathworks.com/ products/simulink.html
PLECS. PLECS. [Online]. Available: https://www.plexim. com/products/plecs
. Openmodelica, OpenModelica. [Online]. Available: https: //openmodelica.org/
Successful online education-geckocircuits as open-source simulation platform. A Müsing, J W Kolar, 2014 International Power Electronics Conference. IPEC-Hiroshima 2014-ECCE ASIAA. Müsing and J. W. Kolar, "Successful online education-geckocircuits as open-source simulation platform," in 2014 International Power Electronics Conference (IPEC-Hiroshima 2014-ECCE ASIA).
. IEEE. IEEE, 2014, pp. 821-828.
GSEIM. GSEIM. [Online]. Available: https://github.com/ gseim/gseim
SEQUEL Users' Manual: Part-1. M B Patil, M.B. Patil. SEQUEL Users' Manual: Part-1. [Online]. Available: http://www.ee.iitb.ac.in/ ∼ sequel
Computation of steady-state response in power electronic circuits. M B Patil, M C Chandorkar, B G Fernandes, K Chatterjee, IETE journal of research. 486M. B. Patil, M. C. Chandorkar, B. G. Fernandes, and K. Chatterjee, "Computation of steady-state response in power electronic circuits," IETE journal of research, vol. 48, no. 6, pp. 471-477, 2002.
. C++ Gnu Radio Manual, Api Reference, GNU Radio Manual and C++ API Reference. [Online]. Available: http://www.gnuradio.org
Comparison of GSEIM with Simulink with respect to simulation speed. A Nandan, M Patil, International Conference on Signal Processing. IEEECommunication and Energy SystemsA. Nandan and M. Patil, "Comparison of GSEIM with Simulink with respect to simulation speed," in International Conference on Signal Processing, Informatics, Communication and Energy Systems. IEEE, 2022.
. Thyristor, Thyristor. [Online].
Available: https. Available: https:
. Plecs, Blockset, PLECS Blockset. [Online]. Available: https: //www.plexim.com/download/blockset
|
[] |
[
"SOME REMARKS ON A GENERALIZED VECTOR PRODUCT",
"SOME REMARKS ON A GENERALIZED VECTOR PRODUCT"
] |
[
"Primitivo B Acosta-Humánez ",
"ANDMoisés Aranda ",
"Reinaldo Núñez "
] |
[] |
[] |
In this paper we use a generalized vector product to construct an exterior form ∧ :Finally, for n = k − 1 we introduce the reversing operation to study this generalized vector product over palindromic and antipalindromic vectors.MSC 2010. Primary 15A75, Secondary, 15A72
| null |
[
"https://arxiv.org/pdf/1111.1116v2.pdf"
] | 54,861,179 |
1111.1116
|
99b80746dda1041277a88f7afcb967b2a72afe25
|
SOME REMARKS ON A GENERALIZED VECTOR PRODUCT
14 Mar 2012
Primitivo B Acosta-Humánez
ANDMoisés Aranda
Reinaldo Núñez
SOME REMARKS ON A GENERALIZED VECTOR PRODUCT
14 Mar 2012and Phrases Alternating multilinear functionantipalindromic vectorexterior productpalindromic vectorreversingvector product
In this paper we use a generalized vector product to construct an exterior form ∧ :Finally, for n = k − 1 we introduce the reversing operation to study this generalized vector product over palindromic and antipalindromic vectors.MSC 2010. Primary 15A75, Secondary, 15A72
Introduction
It is well known that the vector product over R 3 is an alternating 2-linear function from R 3 × R 3 onto R 3 . Although this vector product is a natural topic to be studied in any course of basic linear algebra, there is a plenty of textbooks on this subject in where it is not considered over R n . The following definition, with interesting remarks, can be found also in [3,7,8]. Let A1 = (a11, a12, . . . , a1n) , . . . , An−1 = a (n−1)1 , a (n−1)2 , . . . , a (n−1)n be n − 1 vectors in R n . The vector product over R n is a function × : (R n ) n−1 → R n such that × (A1, A2, . . . , An−1) = A1 × A2 × · · · × An−1 = n k=1
(−1) 1+k det (X k ) e k ,(1)
where e k is the k−th unity vector of the standard basis of R n and X k is the square matrix obtained through the deleting of the k−th column of the (aij) (n−1)×n . Notice that in this case the function is not binary and sends a matrix M of size (n − 1) × n to a vector of its n n−1 maximal minors. One aim of this paper is to give an algorithm to construct, using elementary techniques, a function with domain in (R n ) k and codomain R ( n k ) which will be an alternating k-linear function that obviously generalizes the previous vector product defined over R n .
Using techniques and methods of algebraic geometry we can see that the vector product obtained here, without signs, corresponds to the Plücker coordinates of the matrix M , see [4,5]. Although this vector product is known and useful to define the concept of Grassmanian variety, see [4], we present an alternative construction, avoiding algebraic geometry, which lead us to known results that can be found as for example in [6].
Another aim of this work, following [1,2], is the presentation of some original results concerning to the vector product for n = k − 1 in palindromic and antipalindromic vectors by means of reversing operation.
The way as is presented this paper can allow to students and teachers of basic linear algebra the implementation of these results on their courses, this is our final aim.
A generalized vector product
In this section we set some preliminaries, properties and the Cramer´s rule as application of the generalized vector product.
1.1. Preliminaries. Following [3,7] we define the generalized vector product over R n as the function × : (R n ) n−1 → R n such that for A1 = (a11, a12, . . . , a1n) , . . . , An−1 = a (n−1)1 , a (n−1)2 , . . . , a (n−1)n , n − 1 vectors of R n , their vector product is given by
× (A1, A2, . . . , An−1) = A1 × A2 × · · · × An−1 = n k=1 (−1) 1+k det (X k ) e k ,(2)
where e k is the k−th element of the canonical basis for R n and X k is the square matrix obtained after the elimination of the k−th column of the matrix (aij ) (n−1)×n . The definition presented in expression (2) corresponds to a natural generalization of the vector product of two vectors belonging to R 3 .
1.2. Some properties. Let A1, A2, . . . , An be vectors of R n . The following statements hold. 1) × (A1, A2, . . . , An−1) is an orthogonal vector for the given vectors.
2) Assume α, β ∈ R, Bi ∈ R n :
A1 × A2 × · · · × (αAi + βBi) × · · · × An−1 = A1 × A2 × · · · × αAi × · · · × An−1 + A1 × A2 × · · · × βBi × · · · × An−1.
3) Let the matrix A given by A = (A1, A2, . . . , An):
det A = A1 · (A2 × · · · × An) = (−1) 1+j Aj · (A1 × · · · × Aj−1 × Aj+1 × An) .
4) The vectors A1, A2, . . . , An−1 are n − 1 linearly dependent vectors for R n if and only if A1 × A2 × · · · × An−1 = 0. It is well known that these properties can be proven using the properties of the determinant, see for example [3,7]. 1.3. Cramer's rule. Consider the following system of linear equations a11x1 + a12x2 + · · · + a1nxn = b1 a21x1 + a22x2 + · · · + a2nxn = b2 . . .
an1x1 + an2x2 + · · · + annxn = bn that can be expressed in vectorial way as
x1A1 + x2A2 + · · · + xnAn = B,(3)
being Ai = (a1i, a2i, . . . , ani) with i = 1, 2, . . . , n and B = (b1, b2, . . . , bn). Suppose that det (A1, A2, . . . , An) = 0. For instance such system has unique solution that can be obtained applying the scalar product between the equation (3) and A2 × A3 × · · · × An, so we obtain
(x1A1 + x2A2 + · · · + xnAn) · A2 × A3 × · · · × An = B · A2 × A3 × · · · × An x1A1 · A2 × A3 × · · · × An = B · A2 × A3 × · · · × An
since Aj · A2 × A3 × · · · × An = 0 for j = 2, 3, . . . , n. Therefore
x1 = B · A2 × A3 × · · · × An A1 · A2 × A3 × · · · × An = det (B, A2, A3, · · · , An) det(A1, A2, A3, · · · , An) .(4)
In a general way, we can obtain
xi = B · A1 × A2 × · · · × Ai−1 × Ai+1 × · · · × An Ai · A1 × A2 × · · · Ai−1 × Ai+1 × · · · × An = (−1) i+1 det (A1, A2, . . . , Ai−1, B, Ai+1, · · · , An) (−1) i+1 det(A1, A2, A3, · · · , An) = det (A1, A2, . . . , Ai−1, B, Ai+1, · · · , An) det(A1, A2, A3, · · · , An) ,
that is, the well-known Cramer's rule.
Didactic way to define ∧: algorithm and properties
In this section we propose a didactic way to define the exterior product ∧. To do this, we set an algorithm to the construction of ∧ and as consequence of this construction arise some properties.
2.1.
Algorithm to the construction of ∧. Here we present an algorithm and some simple examples to illustrate it.
Step 1. Consider $n \in \mathbb{N}$ and an integer $k$ with $1 \leq k \leq n$. We define
$$I = \{\, i_1 i_2 \cdots i_k : 1 \leq i_1 < i_2 < \cdots < i_k \leq n \,\},$$
that is, the elements of $I$ are strings of increasing indices.
Step 2. We order $I$ lexicographically,
$$I^{(1)} < I^{(2)} < \cdots < I^{\left(\binom{n}{k}\right)}.$$
In this way, if $I_s \in I$, then there exists a unique $p$ such that $I_s = I^{(p)}$. We call $p$ the rank of $I_s$ and denote it by $r(I_s) = p$; that is, $p$ is the position of $I_s$ in $I$ as a lexicographically ordered set.
In Example 1 we can see that r (234) = 7, r (345) = 10. The same for Example 2, r (25) = 7, r (35) = 9.
Step 3. Let $u_1 = (u_{11}, u_{12}, \ldots, u_{1n}), \ldots, u_k = (u_{k1}, u_{k2}, \ldots, u_{kn})$ be $k$ vectors of $\mathbb{R}^n$, with $k \leq n$. Consider the matrix $U = (u_{ij})$ of order $k \times n$ formed by these vectors. For $i_1 i_2 \cdots i_k \in I$, let $U_{i_1 i_2 \cdots i_k}$ be the matrix of order $k$ formed by the columns $i_1, i_2, \ldots, i_k$ of $U$. From now on, $U$ will always be a matrix of this kind. Notice that choosing a particular set of columns of $U$ corresponds exactly to deleting from $U$ the non-selected columns.
Step 4. Consider $(\mathbb{R}^n)^k := \mathbb{R}^n \times \mathbb{R}^n \times \cdots \times \mathbb{R}^n$ ($k$ times). We define the exterior product $\wedge : (\mathbb{R}^n)^k \to \mathbb{R}^{\binom{n}{k}}$ as follows:
$$\wedge(U) = \sum_{i \in I} (-1)^{\binom{n}{k} - r(i)} \det(U_i)\, e_{\binom{n}{k} - r(i) + 1},$$
where $e_{\binom{n}{k} - r(i) + 1}$ is the $\left(\binom{n}{k} - r(i) + 1\right)$-th vector of the standard basis of $\mathbb{R}^{\binom{n}{k}}$. For convenience, we write
$$\wedge(U) = \wedge(u_1, u_2, \ldots, u_k) = u_1 \wedge u_2 \wedge \cdots \wedge u_k.$$
As we can see (cf. Example 5 below), the set $B = \{e_1 \wedge e_2,\ e_1 \wedge e_3,\ e_1 \wedge e_4,\ e_2 \wedge e_3,\ e_2 \wedge e_4,\ e_3 \wedge e_4\} \subset \mathbb{R}^6$ is a basis for $\mathbb{R}^6$.
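The four steps above translate almost directly into code. Below is a minimal NumPy sketch of the construction; it relies on the fact that itertools.combinations yields the index sets in lexicographic order (as in Step 2), so the rank r(i) is simply the position in that enumeration. The function name is ours.

```python
import numpy as np
from itertools import combinations
from math import comb

def exterior(*vectors):
    """Exterior product of k vectors of R^n following Steps 1-4 above."""
    U = np.array(vectors, dtype=float)                # k x n matrix
    k, n = U.shape
    N = comb(n, k)                                    # dimension of the target space
    subsets = list(combinations(range(n), k))         # lexicographic order, so rank = index + 1
    result = np.zeros(N)
    for p, cols in enumerate(subsets, start=1):       # p = r(i)
        U_i = U[:, list(cols)]                        # the k x k matrix U_{i_1 ... i_k}
        result[N - p] = (-1) ** (N - p) * np.linalg.det(U_i)   # coefficient of e_{N - r(i) + 1}
    return result

print(exterior([2, 3, -1, 5], [4, 7, 2, 0]))          # Example 4: a vector of R^6
print(exterior([1, 0, 0, 0], [0, 1, 0, 0]))           # Example 5: e1 ^ e2 = -e6 in R^6
```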
Notice that, given a basis $B$ for $\mathbb{R}^n$, the exterior products of its elements taken in sets of $k$ elements without repetition constitute a basis $B'$ for $\mathbb{R}^{\binom{n}{k}}$. 6) If $u_1, \ldots, u_k$ are $k\ (\leq n)$ linearly dependent vectors of $\mathbb{R}^n$, then $\wedge(u_1, \ldots, u_k) = 0 \in \mathbb{R}^{\binom{n}{k}}$ (this is item 6 of the list of properties in Section 2.2).
Proof. We proceed according to each item.
1) Assuming $k = n$ we have $\binom{n}{k} = \binom{n}{n} = 1$ and $r(i) = 1$ (since $I$ has only one element). Hence
$$\wedge(U) = \sum_{i \in I} (-1)^{\binom{n}{k} - r(i)} \det(U_i)\, e_{\binom{n}{k} - r(i) + 1} = \det(U).$$
Trivially, for $\mathbb{R}$ we have $e_1 = 1$.
2) Assuming $k = n-1$, we have $\binom{n}{k} = \binom{n}{n-1} = n$, so $I$ has $n$ elements.
Owing to the symmetry of $\binom{n}{k}$, the choice of $n-1$ columns of the matrix $U$ corresponds to the elimination of one column of $U$ (precisely the column avoided in the choice). In other words, we can see that
$$U_i = X_{n - r(i) + 1},$$
where $X_{n-r(i)+1}$ is the matrix obtained from $U$ by deleting the $(n - r(i) + 1)$-th column, so that
$$\wedge(U) = \sum_{i \in I} (-1)^{n - r(i)} \det(U_i)\, e_{n - r(i) + 1} = \sum_{i \in I} (-1)^{(n - r(i) + 1) + 1} \det(U_i)\, e_{n - r(i) + 1} = \sum_{j=1}^{n} (-1)^{j+1} \det(X_j)\, e_j = u_1 \times \cdots \times u_k.$$
3) For $n = 2p$ and $k = 1$, we have $\binom{2p}{1} = 2p$, thus the cardinality of $I$ is even and $I = \{1, 2, \ldots, p, p+1, \ldots, 2p\}$.
Furthermore, $r(i) = i$. In this way, $\wedge(U) \in \mathbb{R}^{2p}$. On the other hand, writing $U = (u_1, u_2, \ldots, u_{2p})$ and $\wedge(U) = (v_1, v_2, \ldots, v_{2p})$, we obtain
$$\wedge(U) = \sum_{i \in I} (-1)^{2p - i} \det(U_i)\, e_{2p - i + 1} = \sum_{i=1}^{2p} (-1)^{i} u_i\, e_{2p - i + 1} = (u_{2p}, -u_{2p-1}, \ldots, u_2, -u_1),$$
from which it follows that $v_j = (-1)^{j+1} u_{2p - j + 1}$ for $j = 1, 2, \ldots, 2p$. Therefore $U \cdot \wedge(U) = 0$ (the computation is displayed below). Items 4), 5) and 6) can be proven using the properties of the determinant in a similar way to the previous ones.
Reversing operation over ∧
The reversing operation has been applied successfully to rings and vector spaces, see [1,2]. In this section we apply the reversing operation to obtain some results that involve the exterior product with palindromic and antipalindromic vectors. The following results correspond to a generalization of some results presented in [2]. Consider the matrix $M = (m_{i,j})$ of size $m \times n$. The reversing of $M$, denoted by $\overleftarrow{M}$, is given by
$$\overleftarrow{M} = (\overleftarrow{m}_{i,j}), \qquad \text{where } \overleftarrow{m}_{i,j} = m_{i,\,n-j+1}.$$
We can see that the size of $\overleftarrow{M}$ is $m \times n$ too. We denote by $J_n = \overleftarrow{I_n}$ the reversing of the identity matrix $I_n$ of size $n$. Thus, the following properties can be proven, see [2].
1. The double reversing:
$$\overleftarrow{\overleftarrow{M}} = (\overleftarrow{\overleftarrow{m}}_{i,j}) = (\overleftarrow{m}_{i,\,n-j+1}) = (m_{i,\,n-(n-j+1)+1}) = (m_{i,j}) = M.$$
As we can see, a palindromic matrix $M$ (see the definitions below) satisfies $m_{i,j} = m_{i,\,n-j+1}$, and hence $M$ has at least $n/2$ pairs of equal columns when $n$ is even (and $(n-1)/2$ when $n$ is odd).
This fact leads us to the following result (Proposition 6, stated below). Proof. We proceed by induction on $n$. For $n = 1$ we have $I_n = 1$ and $J_n = 1$, thus $\det(J_n) = 1 = (-1)^{\frac{1+3}{2}}$. Assume the proposition is true for $n$; we prove that it is also true for $n+1$. First consider $n$ even, so we get
$$\det(J_{n+1}) = 1 \cdot (-1)^{1+(n+1)} \det(J_n) = (-1)^{n+2}\,(-1)^{\frac{n}{2}} = (-1)^{\frac{n}{2}} = (-1)^{\frac{(n+1)+3}{2}}.$$
Now, considering $n$ as a positive odd integer, we have
$$\det(J_{n+1}) = 1 \cdot (-1)^{1+(n+1)} \det(J_n) = (-1)^{n+2}\,(-1)^{\frac{n+3}{2}} = (-1)\,(-1)^{\frac{n+3}{2}} = (-1)^{\frac{n+5}{2}} = (-1)^{\frac{n+1}{2}}.$$
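A small sketch of the reversing operation, of $J_n$, and of the palindromic/antipalindromic tests (defined below) may help in experimenting with these notions; the function names are ours, and the determinant check simply illustrates Proposition 6 numerically.

```python
import numpy as np

def reversing(M):
    """Reversing of a matrix: column j is sent to column n - j + 1."""
    return np.asarray(M)[:, ::-1]

def J(n):
    """J_n, the reversing of the identity matrix I_n."""
    return reversing(np.eye(n))

def is_palindromic(M):
    return np.array_equal(reversing(M), np.asarray(M))

def is_antipalindromic(M):
    return np.array_equal(reversing(M), -np.asarray(M))

M = np.array([[1, 2, 2, 1],
              [3, 0, 0, 3]])
print(is_palindromic(M))                              # True
print(np.array_equal(reversing(reversing(M)), M))     # double reversing returns M
print(np.linalg.det(J(4)), np.linalg.det(J(5)))       # both +1 (up to rounding), cf. Proposition 6
```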
Now we study the relationship between the exterior product ∧ and the reversing operation. We start by considering $k = n-1$, that is, the generalized vector product over $\mathbb{R}^n$. Consider $M_1 = (m_{11}, m_{12}, \ldots, m_{1n}), \ldots, M_{n-1} = (m_{(n-1)1}, m_{(n-1)2}, \ldots, m_{(n-1)n})$, $n-1$ vectors of $\mathbb{R}^n$. The generalized vector product is given by equation (1), therefore we obtain
$$\times(M_1, M_2, \ldots, M_{n-1}) = \sum_{k=1}^{n} (-1)^{1+k} \det\big(M^{(k)}\big)\, e_k, \qquad (5)$$
where $e_k$ is the $k$-th element of the canonical basis of $\mathbb{R}^n$ and $M^{(k)}$ is the square matrix obtained after deleting the $k$-th column of the matrix $M = (m_{ij})_{(n-1)\times n}$. The matrix $M^{(k)}$ is a square matrix of size $(n-1) \times (n-1)$ and is given by
$$M^{(k)} = \big(m^{(k)}_{i,j}\big) = \begin{cases} m_{i,j}, & \text{if } j < k\\ m_{i,j+1}, & \text{if } j \geq k. \end{cases} \qquad (6)$$
Proposition 7. If we consider $M = (m_{ij})_{(n-1)\times n}$, then
$$\overleftarrow{M}^{(k)} = M^{(n-k+1)} J_{n-1}, \qquad \text{for } 1 \leq k \leq n.$$
Proof. We know that
$\overleftarrow{M} = M J_n$, that is, $(\overleftarrow{m}_{i,j}) = (m_{i,\,n-j+1})$, $1 \leq j \leq n$. Therefore
$$\overleftarrow{M}^{(k)} = \big(\overleftarrow{m}^{(k)}_{i,j}\big) = \begin{cases} \overleftarrow{m}_{i,j}, & \text{if } j < k\\ \overleftarrow{m}_{i,j+1}, & \text{if } j \geq k \end{cases} = \begin{cases} m_{i,\,n-j+1}, & \text{if } j < k\\ m_{i,\,n-(j+1)+1}, & \text{if } j \geq k. \end{cases}$$
On the other hand,
$$M^{(n-k+1)} = \big(m^{(n-k+1)}_{i,j}\big) = \begin{cases} m_{i,j}, & \text{if } j < n-k+1\\ m_{i,j+1}, & \text{if } j \geq n-k+1. \end{cases} \qquad (7)$$
Now, applying $J_{n-1}$ to (7) and comparing the resulting cases with those obtained above for $\overleftarrow{M}^{(k)}$ (the computation is displayed further below), the two expressions coincide, which completes the proof. Thus, in general, the exterior product does not satisfy $\wedge(U) = (-1)^p\, \wedge(\overleftarrow{U})$ for some $p \in \mathbb{Z}$.
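Reading $\overleftarrow{M}^{(k)}$ as the $k$-th minor of the reversed matrix, which is what the proof computes, Proposition 7 can also be checked numerically. The sketch below reuses the reversing and J helpers from the earlier listing; minor_matrix and the random test matrix are ours.

```python
import numpy as np

def minor_matrix(M, k):
    """M^(k): the matrix obtained by deleting the k-th column (1-based) of M, as in (6)."""
    return np.delete(np.asarray(M), k - 1, axis=1)

# Numerical check of Proposition 7 on a random (n-1) x n matrix.
rng = np.random.default_rng(0)
n = 5
M = rng.integers(-5, 5, size=(n - 1, n)).astype(float)
for k in range(1, n + 1):
    lhs = minor_matrix(reversing(M), k)               # k-th minor of the reversed matrix
    rhs = minor_matrix(M, n - k + 1) @ J(n - 1)       # M^(n-k+1) J_{n-1}
    assert np.allclose(lhs, rhs)
print("Proposition 7 verified for n =", n)
```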
Finally, although this paper is presented in a didactic way, there are original results corresponding to the relations between the reversing operation and the generalized vector product.
Example 1. For $n = 5$ and $k = 3$ we have $I = \{123, 124, 125, 134, 135, 145, 234, 235, 245, 345\}$. As we can see, $\#I = \binom{n}{k} = \binom{5}{3} = 10$.
Example 2. For $n = 5$ and $k = 2$, we obtain $\binom{5}{2} = 10$, and hence $I$ is given by $I = \{12, 13, 14, 15, 23, 24, 25, 34, 35, 45\}$.
Example 4. Consider the vectors $(2, 3, -1, 5), (4, 7, 2, 0) \in \mathbb{R}^4$. The vector $(2, 3, -1, 5) \wedge (4, 7, 2, 0)$ belongs to $\mathbb{R}^{\binom{4}{2}} = \mathbb{R}^6$. In this case $I = \{12, 13, 14, 23, 24, 34\}$.
Example 5. Consider the canonical basis for $\mathbb{R}^4$, that is, $e_1 = (1, 0, 0, 0)$, $e_2 = (0, 1, 0, 0)$, $e_3 = (0, 0, 1, 0)$ and $e_4 = (0, 0, 0, 1)$. Thus, the exterior product $e_i \wedge e_j$ for $i < j$ can be computed directly from the definition; for instance, $e_1 \wedge e_2 = -e_6 \in \mathbb{R}^6$.
2.2. Some properties of ∧. The following properties are satisfied by ∧:
1) If $k = n$, then $\wedge(U) = \det(U)$.
2) If $k = n-1$, then ∧ is the generalized vector product.
3) If $n$ is even and $k = 1$, then $U$ is orthogonal to $\wedge(U)$.
4) ∧ is $k$-linear: $\wedge(u_1, \ldots, u_i + b, \ldots, u_k) = \wedge(u_1, \ldots, u_i, \ldots, u_k) + \wedge(u_1, \ldots, b, \ldots, u_k)$.
5) If $M_p$ is a permutation of two rows of $M$ (the other rows being fixed), then $\wedge(M_p) = -\wedge(M)$.
$$U \cdot \wedge(U) = (u_1, u_2, \ldots, u_{2p-1}, u_{2p}) \cdot (u_{2p}, -u_{2p-1}, \ldots, u_2, -u_1) = u_1 u_{2p} - u_2 u_{2p-1} + \cdots + u_{2p-1} u_2 - u_{2p} u_1 = (u_1 u_{2p} - u_{2p} u_1) + \cdots + (-1)^{p+1}(u_p u_{p+1} - u_{p+1} u_p) = 0.$$
2. $J_n J_n = I_n$.
The following definitions were introduced in [2]. A matrix $M$ is called palindromic if it satisfies $\overleftarrow{M} = M$; in the same way, a matrix $M$ is called antipalindromic if it satisfies $\overleftarrow{M} = -M$. In particular, for $m = 1$, we get palindromic and antipalindromic vectors, respectively.
Proposition 6.
$$\det(J_n) = \begin{cases} (-1)^{n/2}, & n = 2k,\ k \in \mathbb{Z}^+\\ (-1)^{\frac{n+3}{2}}, & n = 2k-1,\ k \in \mathbb{Z}^+. \end{cases}$$
(Continuation of the proof of Proposition 7.)
$$\big(M^{(n-k+1)} J_{n-1}\big)_{i,j} = m^{(n-k+1)}_{i,\,n-j} = \begin{cases} m_{i,\,n-j}, & \text{if } n-j < n-k+1\\ m_{i,\,(n-j)+1}, & \text{if } n-j \geq n-k+1 \end{cases} = \begin{cases} m_{i,\,n-j}, & \text{if } j > k-1\\ m_{i,\,n-j+1}, & \text{if } j \leq k-1 \end{cases} = \begin{cases} m_{i,\,n-j}, & \text{if } j \geq k\\ m_{i,\,n-j+1}, & \text{if } j < k, \end{cases}$$
which coincides with the expression obtained above for $\overleftarrow{M}^{(k)}$.
The following proposition is a generalization of a result presented in [2], where the reversing of the vector product in $\mathbb{R}^3$ was analyzed. From now on, for convenience we denote $M = (M_1, M_2, \ldots, M_{n-1})$, i.e., $M$ is the matrix that has as rows the vectors $M_1, M_2, \ldots, M_{n-1}$.
Acknowledgements. The first author is partially supported by MICIIN/FEDER grant number MTM2009-06973 and by Universidad del Norte. The second author is supported by Pontificia Universidad Javeriana. The third author is partially supported by Universidad Sergio Arboleda. The authors thank the anonymous referees for their useful comments and suggestions.
References
[1] P. Acosta-Humánez, A. Chuquen & A. Rodríguez, Pasting and Reversing operations over some rings, Boletín de Matemáticas, 17 (2010), 143-164.
[2] P. Acosta-Humánez, A. Chuquen & A. Rodríguez, Pasting and Reversing operations over some vector spaces, Preprint (2011).
[3] M. Aranda & R. Núñez, The Cramer's rule via generalized vector product over R^n (Spanish), Universitas Scientiarum, 8, Investigaciones Matemáticas (2003), 13-15.
[4] J. Harris, Algebraic Geometry, A First Course, Springer, New York, 1992.
[5] W. V. D. Hodge & D. Pedoe, Methods of Algebraic Geometry, vol. I, Cambridge University Press, 1994.
[6] S. Lang, Linear Algebra, Undergraduate Texts in Mathematics, Springer, New York, 1987.
[7] M. Marmolejo, Vector product over R^n: The Lagrange's general identity (Spanish), Matemáticas Enseñanza Universitaria, 3 (1994), 109-117.
[8] J. Olivert, Structures of multilinear algebra (Spanish), Universidad de Valencia, Valencia, 1996.

Primitivo Acosta-Humánez, Departamento de Matemáticas y Estadística, Universidad del Norte, Barranquilla, Colombia. E-mail: [email protected]
Moisés Aranda, Departamento de Matemáticas, Pontificia Universidad Javeriana, Bogotá, Colombia. E-mail: [email protected]
Reinaldo Núñez, Escuela de Matemáticas, Universidad Sergio Arboleda, Bogotá, Colombia. E-mail: [email protected]
The orbital parameters of the gamma-ray binary LMC P3

B. van Soelen^1†, N. Komin^2‡, A. Kniazev^{3,4}, P. Väisänen^{3,4}

^1 Department of Physics, University of the Free State, PO Box 339, 9300 Bloemfontein, South Africa
^2 School of Physics, University of the Witwatersrand, 1 Jan Smuts Avenue, 2050 Braamfontein, Johannesburg, South Africa
^3 South African Astronomical Observatory, PO Box 9, 7935 Observatory, Cape Town, South Africa
^4 Southern African Large Telescope, PO Box 9, 7935 Observatory, Cape Town, South Africa

MNRAS 000, 1-6 (2017). Accepted XXX. Received YYY; in original form ZZZ. Preprint 1 March 2019, compiled using the MNRAS LaTeX style file v3.0. doi:10.1093/mnras/stz289; arXiv:1901.08911.

Keywords: gamma rays: stars; binaries: spectroscopic; stars: massive; stars: neutron

Based on observations made with the Southern African Large Telescope (SALT) under program 2016-1- .
† E-mail: [email protected]
‡ E-mail: [email protected]

ABSTRACT
LMC P3 is the most luminous gamma-ray binary discovered to date and the first detected outside of the Galaxy, with an orbital period of 10.301 d. We report on optical spectroscopic observations undertaken with the Southern African Large Telescope (SALT) using the High Resolution spectrograph (HRS). We find the binary is slightly eccentric, e = 0.40 ± 0.07, and place the time of periastron at HJD 2457412.13 ± 0.29. Stellar model fitting finds an effective temperature of T_eff = 36351 ± 53 K. The mass function, f = 0.0010 ± 0.0004 M⊙, favours a neutron star compact object. The phases of superior and inferior conjunctions are 0.98 and 0.24, respectively (where phase 0 is at the Fermi-LAT maximum), close to the reported maxima in the GeV and TeV light curves.
INTRODUCTION
Gamma-ray binaries are a distinct class of high mass binary systems, defined by having spectral energy distributions that peak (in a νF ν distribution) in the gamma-ray regime (see e.g. Dubus 2013, for a detailed review of these sources). There are only seven such systems known, the most recent of which, PSR J2032+4127, was recently detected at very high energies around periastron in November 2017 (Ho et al. 2017;Mirzoyan & Mukherjee 2017;The VERITAS Collaboration et al. 2018). In this paper we present high resolution spectroscopic optical observations of the recently discovered source LMC P3, the most luminous of the gamma-ray binaries and the first detected outside of the Galaxy, lying in the Large Magellanic Cloud (LMC; Corbet et al. 2016).
All gamma-ray binaries consist of a compact object, within the mass range of a neutron star or a black hole, which is in orbit around an O or B type star (Dubus 2013; Corbet et al. 2016). However, for only two systems, PSR B1259−63/LS 2883 and the recently detected PSR J2032+4127, is the nature of the compact object known, since they have been detected as pulsars (Johnston et al. 1992; Abdo et al. 2009). In these systems, the non-thermal emission is believed to arise due to the particle acceleration that occurs at the shock that forms between the pulsar and stellar winds. In the other sources a black hole compact object cannot be ruled out, and microquasar scenarios are still considered. LMC P3 was a point-like source "P3" detected in Fermi-LAT observations of the LMC (Ackermann et al. 2016). The binary nature of LMC P3 was discovered through a search for periodicity in Fermi-LAT observations, finding a 10.301 ± 0.002 d period (Corbet et al. 2016). LMC P3 is associated with the previously detected point-like X-ray source CXOU 053600.0−673507 located in the supernova remnant DEM L241 (Bamba et al. 2006). It was previously suggested by Seward et al. (2012) that, based on the variability of the X-ray flux and small variations of the radial velocity of an O5III(f) star (V = 13.5) coincident with this X-ray source, the object was a High-Mass X-ray Binary (HMXB) with a period of tens of days. X-ray and radio observations confirmed the multi-wavelength modulation on the same 10.301 d period, while optical radial velocity measurements of the O5III(f) star (the earliest type of any gamma-ray binary) also showed a variation consistent with this period (Corbet et al. 2016). The binary solution to the radial velocities found a mass function of f(M) = 1.3 (+1.1/−0.6) × 10^{-3} M⊙; however, the eccentricity of the system could not be constrained.
The radio and X-ray light curves are in phase, but are in anti-phase with the Fermi-LAT light curves (Corbet et al. 2016). The H.E.S.S. telescope has subsequently reported detection at TeV energies (though only in a single phase bin) which is also in anti-phase with the Fermi-LAT observations (HESS Collaboration et al. 2018). This is very similar to what is observed for the gamma-ray binary LS 5039 (Aharonian et al. 2005; Kishishita et al. 2009; Abdo et al. 2009). Key to understanding the gamma-ray emission is obtaining a clear solution for the binary parameters of the source. Here we report on optical spectroscopic observations of LMC P3 undertaken with the Southern African Large Telescope (SALT) using the High Resolution spectrograph (HRS), to establish the binary parameters.
LMC P3 was successfully observed 24 times with the High Resolution Spectrograph (HRS; Bramall et al. 2010, 2012; Crause et al. 2014) on the Southern African Large Telescope (SALT; Buckley et al. 2006) between 2016 September 14 and 2017 February 06. The HRS is a dual-beam fibre-fed échelle spectrograph, housed within a vacuum tank, inside a thermo-stable room.
The HRS is designed for extra-solar planet searches with velocity accuracies of 5 m s−1 in the High Stability Mode. As part of the HRS calibration plan, flats and ThAr hollow-cathode lamp spectra are obtained weekly through both the object and sky fibres. Observations of radial velocity standards are taken as part of the HRS calibration plan.¹ Observations were undertaken using the Low Resolution Mode (R = 14 000), with each observation consisting of two camera exposures of 1220 s (except for two nights where the exposure was increased to 2 × 1640 s). The different orders of the HRS spectra were extracted and wavelength calibrated using the HRS pipeline discussed in Kniazev et al. (2016). Each individual order of the spectrum was normalized and merged into a single one-dimensional spectrum using the standard iraf/pyraf packages. Heliocentric correction was performed for each individual exposure using rvcorrect/dopcor and then nightly observations were averaged together.
HRS observations are undertaken with a 2.2 fibre placed on the target and a separate "sky fibre" that must be placed at least 16 away from the target. Because the target lies within a nebula the sky lines are dominated by the Balmer emission lines arising from the nebula, and the background sky measured in the sky fibre was significantly different from the sky as measured at the target. As a result, the sky subtraction was not able to properly correct for the nebula emission and introduced more noise into the spectrum. For this reason, no sky subtraction was performed and the analysis was restricted to the "blue" arm of the HRS where the nebula and sky contamination was minimal.
Radial velocity determination
The radial velocity was investigated by fitting the position of individual lines and by cross-correlating the spectrum to a template.
The observed central wavelength of different absorption lines was determined by fitting Gaussian profiles, which showed that different line species have different radial velocities. This effect has previously been noted in O-type stars, and is most likely due to the contamination by the stellar wind (see e.g. Casares et al. 2005;Sarty et al. 2011;Puls et al. 1996;Waisberg & Romani 2015).
Because of the different velocities found for different lines the radial velocity was determined by cross-correlating individual spectra to a template, using the rvsao/xcsao package (Kurtz & Mink 1998). We followed a similar process to that described in, for example, Manick et al. (2015); Foellmi et al. (2003); Monageng et al. (2017), and created the template from the available observations. To create the reference template, first an average of all observations was found and the velocity shift between each observation and the average spectrum was determined through cross-correlation. Next, all the individual spectra were corrected by the shift to the average spectrum and the template was produced by averaging these velocity corrected spectra. The final template spectrum, in the 4150-4600Å wavelength range used for the cross-correlation, is shown in Fig. 1.
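For illustration only, the following Python/NumPy sketch mimics the iterative template construction described above (the actual analysis used the rvsao/xcsao package); it assumes the spectra have already been continuum-normalized and rebinned onto a common uniform log-wavelength grid, and the function names are ours.

```python
import numpy as np

C_KMS = 299792.458                                    # speed of light in km/s

def shift_kms(spec, template, dlnlam):
    """Velocity shift of `spec` relative to `template` from the cross-correlation peak.

    Both spectra are assumed continuum-normalized and rebinned onto the same
    uniform log-wavelength grid with step `dlnlam`, so one pixel = c * dlnlam km/s.
    """
    a = spec - spec.mean()
    b = template - template.mean()
    cc = np.correlate(a, b, mode="full")
    lag = np.argmax(cc) - (len(b) - 1)                # lag of the peak, in pixels
    return lag * dlnlam * C_KMS

def build_template(spectra, dlnlam, n_iter=2):
    """Average, measure shifts against the average, re-align, and re-average."""
    template = np.mean(spectra, axis=0)
    for _ in range(n_iter):
        shifts = [shift_kms(s, template, dlnlam) for s in spectra]
        aligned = [np.roll(s, -int(round(v / (dlnlam * C_KMS)))) for s, v in zip(spectra, shifts)]
        template = np.mean(aligned, axis=0)
    return template
```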
In order to determine the zero-velocity of the template spectrum, we used the ULySS program (Koleva et al. 2009) with a medium spectral-resolution MILES library to simultaneously determine the line-of-sight velocity for the star, and its T eff , log g and [Fe/H]. A fit over the 4160-5000Å wavelength range finds a redshift of cz = 320.7 ± 0.7 km s −1 (with a dispersion of 58.9±1.2 km s −1 ). The stellar model fitting method also provides a best fit to the atmospheric properties of the star, finding an effective temperature of T eff = 36351 ± 53 K, a surface gravity log g = 3.4 ± 0.1 [log(cm s −2 )], and a metallicity of [Fe/H] = 0.25 ± 0.01. This is compatible with an OIII type star, though the values are lower than those of an O5 III star in, for example, Martins et al. (2005). However, the exclusion of parts of the Balmer lines prevents us from undertaking more detailed stellar atmospheric modelling.
The final radial velocity, relative to the template, was calculated by performing the cross-correlation analysis in the 4150-4600Å wavelength range which contains four He lines and the Hγ line, and which has limited contamination from the sky lines and the nebula. The wavelength region 4343.5-4346.5Å was ignored in the cross-correlation analysis as it contained significant contamination from a narrow nebula emission line superimposed on the stellar absorption line.
Orbital parameter determination
The binary orbital parameters were determined from the fit to the radial velocities using the helio_rv package which is part of the IDL Astronomy Library (Landsman 1993).² All reported errors on the fitted binary parameters are calculated by scaling the errors in the radial velocity measurements to achieve a reduced χ² of exactly 1 (Lampton et al. 1976). This scaling factor was ∼1.2−5.2 depending on the data. This does not change the values of the fitted parameters, but gives a more accurate estimate of the error, since additional systematic errors are better accounted for. Table 1 shows the orbital parameters determined from the velocities of the individual lines. All fits were performed assuming a fixed period of 10.301 d. We find different velocities from the He I and He II lines, as was noted above. The measurements of the individual lines do suffer from lower signal-to-noise and, for Hβ, contamination from a nebula emission line.
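The fit itself was performed with the IDL helio_rv routine; purely as a sketch of the underlying model, the following NumPy code evaluates a Keplerian radial-velocity curve and the error rescaling to a reduced χ² of exactly 1. Parameter names and the fixed-point Kepler solver are our choices, not those of the original analysis.

```python
import numpy as np

def radial_velocity(t, T0, P, gamma, K, e, omega_deg):
    """Keplerian radial-velocity curve v = gamma + K [cos(nu + omega) + e cos(omega)]."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    omega = np.radians(omega_deg)
    M = 2.0 * np.pi * (((t - T0) / P) % 1.0)          # mean anomaly
    E = M.copy()
    for _ in range(50):                               # solve Kepler's equation by iteration
        E = M + e * np.sin(E)
    nu = 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                          np.sqrt(1 - e) * np.cos(E / 2))   # true anomaly
    return gamma + K * (np.cos(nu + omega) + e * np.cos(omega))

def error_scale(residuals, errors, n_free):
    """Factor by which the RV errors are multiplied so that reduced chi^2 = 1."""
    chi2_red = np.sum((residuals / errors) ** 2) / (len(residuals) - n_free)
    return np.sqrt(chi2_red)
```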
The solution using the radial velocities determined from cross-correlation is shown in Table 2. If the orbital period is kept as a free parameter, the best-fitting orbital period is 10.314 ± 0.044 days, which is consistent with the 10.301 ± 0.002 d period found from the Fermi-LAT data (Corbet et al. 2016). However, searches for periodicity using the Lomb-Scargle technique could not detect a statistically significant period and we have, therefore, adopted an orbital period of 10.301 days for our final result.
We do note that the systemic velocity we find, Γ = 321.18 ± 0.85 km s−1, is higher than the 295.8 ± 2.0 km s−1 previously reported (Corbet et al. 2016). We undertook additional analysis to confirm that the wavelength calibration performed by the HRS pipeline was correct and that the comparison to the radial velocity standards was accurate to within the expected performance. We found no evidence of any discrepancy in the calibration, nor any long term systematic shift in the wavelength calibration over the period of observations. We believe that there are two possible reasons for this difference: there may be a systematic offset arising from fitting the high-resolution template to the medium-resolution MILES libraries (however, there is no significant offset to the radial velocity standards), or there may have been a systematic offset in the zero velocity of the field O-type star used in the previous analysis. However, this difference does not change the main results of determining the orbital parameters. If this difference in the systemic velocity is removed the radial velocities are in agreement with the previous results. The final radial velocity curve and the best-fitting model are shown in Fig. 2, and the data are given in Table 3. We find the binary is slightly eccentric, e = 0.40 ± 0.07, and place the time of periastron at HJD 2457412.13 ± 0.29.
DISCUSSION
Mass of the compact object
The binary parameter solution gives a mass function of f = 0.0010 ± 0.0004 M⊙ and the constraints on the mass of the compact object are shown in Fig. 3. The mass of an O5III(f) star is ∼40 M⊙ (Martins et al. 2005), but due to the binary evolution it could be different. A mass range of 25−42 M⊙ was considered in Seward et al. (2012). For this mass range both a neutron star and a black hole mass are compatible with the mass function, though a neutron star is favoured. For a 40 M⊙ star, the mass of the compact object will be > 5 M⊙ for inclinations i ≤ 15 ± 2°, while the inclination must be i ≤ 11 ± 1° for a 25 M⊙ optical companion. Assuming the compact object is a pulsar, with a mass of 1.4 M⊙, the inclination will lie between i = 39 ± 6° and i = 59 ± 11° for a 25 M⊙ and 40 M⊙ star, respectively.
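The inclination limits quoted above follow from the binary mass function f = (M_c sin i)³ / (M_c + M_⋆)², and can be reproduced with a few lines of Python; the helper name compact_mass is ours and scipy is assumed to be available.

```python
import numpy as np
from scipy.optimize import brentq

def compact_mass(f, M_star, incl_deg):
    """Compact-object mass (M_sun) from f = (M_c sin i)^3 / (M_c + M_star)^2."""
    sini = np.sin(np.radians(incl_deg))
    g = lambda Mc: (Mc * sini) ** 3 / (Mc + M_star) ** 2 - f
    return brentq(g, 1e-4, 1e3)

# Rough consistency check against the limits quoted above (f = 0.0010 M_sun):
for M_star in (25.0, 40.0):
    for i in (11.0, 15.0, 39.0, 59.0):
        print(M_star, i, round(compact_mass(0.0010, M_star, i), 2))
```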
Binary orientation
The orientation of the binary system is shown in Fig. 4. The system parameters are calculated assuming the optical star has a mass of M = 33.5 M⊙ and a radius of R = 14.5 R⊙ (the average of the reported mass range and the corresponding average radius; e.g. Martins et al. 2005) and a mass of M_p = 1.4 M⊙ for the compact object. This would correspond to an inclination of i ≈ 50°. Following Corbet et al. (2016), phase φ = 0 is assumed to be at MJD = 57410.25. Superior conjunction (when the compact object is behind the optical star) occurs at φ_sup = 0.98 and inferior conjunction is at φ_inf = 0.24, with periastron occurring at φ_per = 0.13.
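As a hedged sketch, the conjunction and periastron phases quoted above can be recovered from the fitted elements by solving Kepler's equation. The code below adopts the convention that inferior and superior conjunction occur at true anomalies ν = 90° − ω and 270° − ω respectively, which reproduces the quoted values, and it neglects the small difference between HJD and JD when converting the phase-zero epoch.

```python
import numpy as np

def phase_of_true_anomaly(nu_deg, e, T0, P, phase0):
    """Orbital phase (zero at `phase0`) at which the true anomaly `nu_deg` is reached."""
    nu = np.radians(nu_deg)
    E = 2.0 * np.arctan(np.sqrt((1 - e) / (1 + e)) * np.tan(nu / 2))   # eccentric anomaly
    M = E - e * np.sin(E)                                              # Kepler's equation
    t = T0 + M / (2.0 * np.pi) * P                                     # epoch of that anomaly
    return ((t - phase0) / P) % 1.0

e, omega = 0.40, 11.3
T0, P = 2457412.13, 10.301                     # HJD of periastron and period (d)
phase0 = 57410.25 + 2400000.5                  # MJD 57410.25 as JD (HJD-JD difference neglected)

print(round(phase_of_true_anomaly(0.0, e, T0, P, phase0), 2))            # periastron, ~0.13
print(round(phase_of_true_anomaly(90.0 - omega, e, T0, P, phase0), 2))   # inferior conjunction, ~0.24
print(round(phase_of_true_anomaly(270.0 - omega, e, T0, P, phase0), 2))  # superior conjunction, ~0.98
```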
Implications for gamma-ray emission
The SALT HRS observations have shown that LMC P3 has an eccentricity of ≈ 0.4 and established the orientation (longitude of periastron) of the system. Both the eccentricity and the orientation play an important role in the modulation of the observed gamma-ray emission, since the inverse Compton scattering is dependent on the energy density of the target photons and the angle of scattering. In the cases of the highly eccentric systems PSR B1259−63/LS 2883 and PSR J2032+4127, gamma-ray emission is only detected near periastron (e.g. H.E.S.S. Collaboration et al. 2013; The VERITAS Collaboration et al. 2018). The Fermi-LAT observations of LMC P3 show a flux peaking slightly after phase φ ∼ 0 (Corbet et al. 2016), while the H.E.S.S. telescope has only detected the VHE emission in a single phase bin of φ_bin = 0.2−0.4 (HESS Collaboration et al. 2018). Such an out of phase light curve between the GeV and TeV gamma-ray emission can arise since γγ absorption can modulate very high energy emission (e.g. Dubus 2006; Böttcher & Dermer 2005). In this scenario, because of the strong angular dependence of inverse Compton scattering and γγ absorption, both the maximum in the inverse Compton emission and γγ absorption should occur around superior conjunction. This could lead to a maximum in the GeV light curve near superior conjunction since the photons are below the pair-production threshold energy, while the maximum in the TeV light curve will occur around inferior conjunction where the γγ opacity is lowest. The binary solution found in this paper supports this for LMC P3, with superior conjunction lying at φ ≈ 0.98 at the peak of the Fermi-LAT light curve, while inferior conjunction, φ ≈ 0.24, is around the peak in the H.E.S.S. light curve.
Additionally Doppler boosting of the emission may play a role. The tail of the shock may obtain a relativistic bulk velocity, as is evident from hydrodynamical simulations of gamma-ray binaries (e.g. Bogovalov et al. 2008Bogovalov et al. , 2012. If, due to γγ absorption, we predominately observe TeV gamma-ray emission originating from this region (with the GeV emission originating from the apex of the shock), this would lead to an enhancement in the observed TeV emission near inferior conjunction when the material is directed towards us (see e.g. Dubus et al. 2010;Zabalza et al. 2013).
CONCLUSION
We have undertaken SALT/HRS observations of LMC P3 and established the best orbital parameters for this system so far. The best-fitting solution shows the binary has an eccentricity of e = 0.40 ± 0.07, which makes it similar to LS 5039. The best orbital parameter fit places superior conjunction at orbital phase φ = 0.98, close to the maximum in the Fermi-LAT light curve, while inferior conjunction is at phase φ = 0.24. This orientation may explain the antiphase between the GeV and TeV light curves. The determined mass function, f = 0.0010 ± 0.0004 M⊙, favours a neutron star compact object, and subsequently favours a pulsar wind driven and not accretion driven system.
Figure 1. Template spectrum created from averaging over all observations. The gap in the Hγ line is where a section was excluded from the cross-correlation calculation because of contamination by an emission line from the nebula.
Figure 2. Radial velocity of the O5III star in the binary system, as determined from cross-correlation. The solid line shows the best fit to this data, with a fixed 10.301 day orbital period. For clarity the plot is reported over two orbital phases with phase φ = 0 at MJD = 57410.25, which corresponds to the given phase in Corbet et al. (2016). The error bars show the statistical errors reported by the rvsao package.
Figure 3. The constraint on the mass of the compact object in LMC P3. The shaded area marks the range of assumed masses of the optical companion.
Figure 4. The binary orientation of LMC P3. This is calculated assuming a mass of M = 33.5 M⊙, a radius of R = 14.5 R⊙ and M_p = 1.4 M⊙. The blue circle shows the relative size of the optical star while the black line traces the orbit of the compact object. The positions and orbital phases of superior and inferior conjunction are marked by + while the position and phase of periastron is marked by a ×.
¹ The full details of the calibration plan and the stability of the radial velocity determinations are given in the SALT proposal call documentation http://pysalt.salt.ac.za/proposal calls/current/ProposalCall.html.
(Figure 1 marks the He II/Ne III, He I 4387.9, Hγ, He I 4471.48 and He II 4541.59 lines over the 4150-4600 Å range; wavelength in Angstrom on the horizontal axis.)
Table 1. Orbital parameters as determined from the radial velocities calculated by Gaussian fits to individual lines.

Parameter | Hβ | He I 4471 | He I 4921 | He II 4541 | He II 5411
Time of periastron (HJD) | 2457412.15 ± 0.50 | 2457412.45 ± 0.21 | 2457411.89 ± 0.36 | 2457411.99 ± 0.21 | 2457412.14 ± 0.33
Orbital period (fixed, days) | 10.301 ± 0.000 | 10.301 ± 0.000 | 10.301 ± 0.000 | 10.301 ± 0.000 | 10.301 ± 0.000
Systemic velocity (km/s) | 298.43 ± 1.26 | 331.02 ± 0.75 | 335.95 ± 2.77 | 340.00 ± 0.48 | 337.33 ± 0.71
K (velocity semi-amplitude) | 13.80 ± 2.84 | 15.14 ± 2.10 | 24.23 ± 26.31 | 11.30 ± 1.21 | 11.87 ± 1.68
Eccentricity | 0.40 ± 0.12 | 0.52 ± 0.07 | 0.69 ± 0.28 | 0.42 ± 0.06 | 0.46 ± 0.08
Longitude of periastron (degrees) | 5.9 ± 20.9 | 29.9 ± 10.4 | 354.5 ± 10.9 | 12.0 ± 8.9 | 8.1 ± 13.34
Mass function (M⊙) | 0.0021 ± 0.0014 | 0.0023 ± 0.0010 | 0.0058 ± 0.0201 | 0.0011 ± 0.0004 | 0.0013 ± 0.0006
Table 2. Orbital parameters determined from the velocities calculated by cross-correlation. Note the systemic velocity is relative to the template file used.

Parameter | Free | Fixed (adopted)
Time of periastron (HJD) | 2457411.77 ± 1.34 | 2457412.13 ± 0.29
Orbital period (days) | 10.314 ± 0.044 | 10.301 ± 0.000
Systemic velocity relative to template (km/s) | 0.73 ± 0.59 | 0.68 ± 0.55
Systemic velocity (km/s) | 321.23 ± 0.88 | 321.18 ± 0.85
K (velocity semi-amplitude) | 10.69 ± 1.24 | 10.69 ± 1.23
Eccentricity | 0.39 ± 0.08 | 0.40 ± 0.07
Longitude of periastron (degrees) | 12.9 ± 12.8 | 11.3 ± 12.0
Mass function (M⊙) | 0.0010 ± 0.0004 | 0.0010 ± 0.0004
Table 3. Radial velocities obtained from cross-correlation.

HJD | Radial velocity (km s−1)
2457655.60791 | 316.64 ± 1.35
2457665.60399 | 320.00 ± 1.48
2457691.50654 | 322.48 ± 0.94
2457708.46002 | 318.79 ± 1.20
2457711.56664 | 330.17 ± 1.03
2457722.44354 | 324.03 ± 0.87
2457723.39659 | 317.48 ± 1.96
2457724.46868 | 314.13 ± 1.11
2457725.46161 | 317.11 ± 0.76
2457728.46743 | 319.75 ± 0.74
2457730.47147 | 331.31 ± 0.87
2457731.53120 | 334.28 ± 1.04
2457732.40171 | 329.11 ± 1.15
2457733.38795 | 317.89 ± 1.32
2457734.39469 | 316.07 ± 0.86
2457736.37916 | 313.45 ± 1.65
2457737.37704 | 316.03 ± 1.64
2457738.38493 | 317.63 ± 1.48
2457774.33596 | 317.78 ± 1.51
2457776.41725 | 313.64 ± 1.06
2457780.33657 | 313.18 ± 0.99
2457781.31273 | 323.20 ± 0.89
2457788.31389 | 314.58 ± 0.82
2457791.30522 | 327.68 ± 1.04
² https://idlastro.gsfc.nasa.gov/
ACKNOWLEDGEMENTS
The authors are grateful to P.A. Charles, A. Odendaal, A.F. Rajoelimanana and L.J. Townsend for valuable discussions. All of the observations reported in this paper were obtained with the Southern African Large Telescope (SALT). BvS and NK acknowledge that this work was supported by the Department of Science and Technology and the National Research Foundation of South Africa through a block grant to the South African Gamma-Ray Astronomy Consortium. AK and PV acknowledge the support of the National Research Foundation of South Africa.
This paper has been typeset from a TeX/LaTeX file prepared by the author.
REFERENCES
Abdo A. A., et al., 2009, Science, 325, 840
Ackermann M., et al., 2016, A&A, 586, A71
Aharonian F., et al., 2005, Science, 309, 746
Bamba A., Ueno M., Nakajima H., Mori K., Koyama K., 2006, A&A, 450, 585
Bogovalov S. V., Khangulyan D. V., Koldoba A. V., Ustyugova G. V., Aharonian F. A., 2008, MNRAS, 387, 63
Bogovalov S. V., Khangulyan D., Koldoba A. V., Ustyugova G. V., Aharonian F. A., 2012, MNRAS, 419, 3426
Böttcher M., Dermer C. D., 2005, ApJ, 634, L81
Bramall D. G., et al., 2010, in Ground-based and Airborne Instrumentation for Astronomy III, p. 77354F, doi:10.1117/12.856382
Bramall D. G., et al., 2012, in Ground-based and Airborne Instrumentation for Astronomy IV, p. 84460A, doi:10.1117/12.925935
Buckley D. A. H., Swart G. P., Meiring J. G., 2006, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, p. 62670Z, doi:10.1117/12.673750
Casares J., Ribó M., Ribas I., Paredes J. M., Martí J., Herrero A., 2005, MNRAS, 364, 899
Corbet R. H. D., et al., 2016, ApJ, 829, 105
Crause L. A., et al., 2014, in Ground-based and Airborne Instrumentation for Astronomy V, p. 91476T, doi:10.1117/12.2055635
Dubus G., 2006, A&A, 451, 9
Dubus G., 2013, A&ARv, 21, 64
Dubus G., Cerutti B., Henri G., 2010, A&A, 516, A18
Foellmi C., Moffat A. F. J., Guerrero M. A., 2003, MNRAS, 338, 360
H.E.S.S. Collaboration et al., 2013, A&A, 551, A94
HESS Collaboration et al., 2018, A&A, 610, L17
Ho W. C. G., Ng C.-Y., Lyne A. G., Stappers B. W., Coe M. J., Halpern J. P., Johnson T. J., Steele I. A., 2017, MNRAS, 464, 1211
Johnston S., Manchester R. N., Lyne A. G., Bailes M., Kaspi V. M., Qiao G., D'Amico N., 1992, ApJ, 387, L37
Kishishita T., Tanaka T., Uchiyama Y., Takahashi T., 2009, ApJ, 697, L1
Kniazev A. Y., Gvaramadze V. V., Berdnikov L. N., 2016, MNRAS, 459, 3068
Koleva M., Prugniel P., Bouchard A., Wu Y., 2009, A&A, 501, 1269
Kurtz M. J., Mink D. J., 1998, PASP, 110, 934
Lampton M., Margon B., Bowyer S., 1976, ApJ, 208, 177
Landsman W. B., 1993, in Hanisch R. J., Brissenden R. J. V., Barnes J., eds, Astronomical Society of the Pacific Conference Series Vol. 52, Astronomical Data Analysis Software and Systems II, p. 246
Manick R., Miszalski B., McBride V., 2015, MNRAS, 448, 1789
Martins F., Schaerer D., Hillier D. J., 2005, A&A, 436, 1049
Mirzoyan R., Mukherjee R., 2017, The Astronomer's Telegram, 10971
Monageng I. M., McBride V. A., Townsend L. J., Kniazev A. Y., Mohamed S., Böttcher M., 2017, ApJ, 847, 68
Puls J., et al., 1996, A&A, 305, 171
Sarty G. E., et al., 2011, MNRAS, 411, 1293
Seward F. D., Charles P. A., Foster D. L., Dickel J. R., Romero P. S., Edwards Z. I., Perry M., Williams R. M., 2012, ApJ, 759, 123
The VERITAS Collaboration et al., 2018, preprint, arXiv:1810.05271
Waisberg I. R., Romani R. W., 2015, ApJ, 805, 18
Zabalza V., Bosch-Ramon V., Aharonian F., Khangulyan D., 2013, A&A, 551, A17
MoveSteg: A Method of Network Steganography Detection

Krzysztof Szczypiorski and Tomasz Tyl
The authors are with the Warsaw University of Technology, Warsaw, Poland.
doi:10.1515/eletel-2016-0046; arXiv:1610.01955

Abstract: This article presents a new method for detecting the source point of time-based network steganography, called MoveSteg. An example of a steganography carrier is a multimedia stream made of packets; these packets are delayed intentionally to send hidden information using time-based steganography methods. The presented analysis describes a method that allows finding the source of a steganography stream in a network that is under our management.

Keywords: CBM, network steganography, information hiding, network detection of new attacks
methods of detection of hidden information [2][4][7]. This refers both to information hidden in multimedia carriers such as images, movies or music, and to information hidden in network traffic. This paper focuses on another aspect of steganalysis, namely the detection of the point from which hidden information is sent. Almost all of the newest solutions and security measures in cybersecurity treat protection from threats as a passive act, which consists in building a wall, as effective as possible, between the attackers and the sensitive resources we wish to protect. Using the method described in this paper, it is possible to find the point which is the source of a steganographic transmission in a network managed by us. This allows us to take appropriate measures at the source.
II. AN IDEA OF MOVESTEG
A. Description of the method
The method referred to in this paper was initially described in [6]. The main phenomenon occurring in telecommunication networks, which is used in this method, is the blurring of time dependencies between consecutive packets in a stream as the number of passes through transmission devices grows along the full path from the transmission source to its destination. This happens irrespective of the transmission medium; therefore, in transmission systems which require very accurate synchronization (e.g. SDH), a strong emphasis is put on very accurate synchronization of the devices building a network.
An example of this is an IP network and a multimedia stream sent through it, using the RTP protocol to transmit sound. Due to limited network efficiency and capacity, packets sent at equal intervals at the source reach the addressee with various delays. This is the reason for using buffers on the receiver's side, which compensate for the effects of receiving parts of the signal irregularly. When the network is heavily overloaded with the packets sent, it may happen that a part of the signal reaches the receiver too late, that is, after a consecutive part has reached it.
The phenomenon of blurring time dependencies may be used to find the source of network steganography. As is demonstrated in the following part of this paper, by examining the delay between consecutive packets we are able to state that parameters such as the minimum, maximum and average delay and the standard deviation vary depending on the distance from the source. In addition, by analyzing a histogram of packet delays, we are able to assign packets belonging to one stream to one of several groups (the number of groups depends on the type of the steganographic method). By analyzing the distribution of delays, we are able to state where the beginning and the end of the steganographic channel are. An example of a histogram of delays between consecutive packets in a stream, along with an instruction on how to interpret it, is presented in Section 4 of this paper.
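A minimal sketch of the per-node analysis described above: for one stream observed at one node, compute the inter-packet delays, their minimum, maximum, mean and standard deviation, and a 100-bin histogram. The function name and data layout are ours, not part of the original tool.

```python
import numpy as np

def delay_stats(timestamps_s, bins=100):
    """Inter-packet delays (ms) of one stream at one node, with summary statistics."""
    t = np.sort(np.asarray(timestamps_s, dtype=float))
    delays_ms = np.diff(t) * 1000.0
    stats = {"min": delays_ms.min(), "max": delays_ms.max(),
             "mean": delays_ms.mean(), "std": delays_ms.std()}
    hist, edges = np.histogram(delays_ms, bins=bins)   # 100 bins, as in the analysis below
    return delays_ms, stats, (hist, edges)
```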
B. Assumptions and limitations
The success and efficiency of this method depend on several assumptions which have to be met. The first one is that it is necessary to analyze every stream between any pair of communicating hosts as a separate stream. Information, that is the source host, the target host, the sequence number, and the time of receiving the last packet, must be stored for every stream flowing through every node participating in the measurements. Another assumption which has to be met is that there has to be one central place in which all the paths in our network are known, which allows results from the entire route of a given stream used as a steganographic carrier to be correlated.
Another assumption, equally important to the previous one, is the ability to detect steganography via the nodes used for analysis. The last assumption is the possibility to accurately measure the time interval between consecutive packets of a given stream. Depending on the frequency of sending packets used as a steganographic carrier, measuring time with appropriate accuracy is necessary.
This method has several limitations which decrease its efficiency or even make it impossible to use. The first and most important limitation of the method is the type of steganography detected. The method works only for steganography using time dependencies between consecutive stream packets, irrespective of whether these are methods directly using delays or hybrid ones. Another limitation is the negative impact of multipath traffic on the method's efficiency. This means that if a selected steganography method allows for transmission via several separate paths (for instance, the steganographic carrier is several parallel multimedia streams, each of which is sent via a different route), such traffic is much more difficult to analyze. An accurate synchronization and correlation of measurements taken from various nodes is then necessary, which is not always possible. The last limitation worth mentioning is the impact of nodes disturbing time dependencies on the method's efficiency. As demonstrated in the next section of the paper, nodes which modify delays (for instance, as a result of their overload) cause total blurring of time relations between the packets.
Fig. 1. Line topology consisting of 50 hosts
III. SIMULATION ENVIRONMENT
In order to examine the efficiency of the method presented in the previous section, a simulation environment was created, allowing the input parameters and the topology of the network used to be defined dynamically. The environment consists of a group of virtual machines which are managed by one central server connected to a database storing measurement results. VirtualBox 5.0 was used as virtualization software, the managing server was written in Python 2.7, the database engine is MySQL Community 5.7, and the operating system of the virtual machines is Alpine Linux 3.2. The entire environment was launched on one physical machine with a quad core 3.2 GHz processor and 12 GB of RAM; the operating system launched on this machine was Ubuntu 14.04 LTS.
The main and most important part of the environment was the server which dealt with the setup of hosts and a virtualizer, but was also responsible for defining experiments, sending commands and collecting results of a completed test. The virtual machines were set up to send traffic in line with the routing table known to them, obtained through an OSPF protocol launched on every host. The network setup in the virtualizer involved creating a relevant number of private networks consisting of exactly two interfaces, one from each directly connected host.
This paper focuses on two network topologies, that is a line and a Manhattan-type network. The former makes it easy to present the phenomena occurring in the network when network steganography using time dependencies between packets is sent, and the latter presents the impact of network topology complexity on the efficiency of the method. Both topologies referred to above are schematically presented in Fig. 1 and Fig. 2. It should be highlighted that for the Manhattan-type topology, the network structure from the perspective of, for instance, host number 0, taking into account the paths traced by the OSPF protocol, is as depicted in Fig. 3.
IV. RESULTS AND CONCLUSIONS
To depict the options the method provides, simulations of two network topologies described above are presented below, that is a line topology consisting of 50 nodes and a Manhattan-type topology consisting of 36 nodes. The former will allow to explain how the method works in a simple way, and the latter will demonstrate the options and efficiency of the method involving a more complicated network.
A. Line topology
The first case presented below will use the LACK steganographic method. The source node will be node 0, and the receiver node will be node 49. The steganographic channel extends over the entire transmission length. The steganographic carrier is UDP packets sent at the source at equal intervals. All the parameters are presented in Table I.
Table I. Input parameters for the first case: T1 = 20 ms; T2 = 30 ms; P = 15 %.
The meaning of the parameters is the following: T1 is the nominal value of the delay between consecutive packets sent from the source. T2 is the value by which packets used for the steganographic transmission are delayed. Both parameters referred to above have been selected according to the values in [3]. The last parameter is P, the percentage of traffic carrying hidden information in relation to all packets of a given stream. In the majority of cases this parameter assumes much lower values, within the range of a few per cent, but the value assumed here will allow us to better present how the method works. The simulation duration for all cases described in this paper was assumed to be 120 seconds.
The X axis of the histogram contains node numbers, that is values from 1 to 48 (the sending and the receiving node are not included in the traffic analysis). The Y axis contains the value of delay between consecutive packets expressed in microseconds. The Z axis presents the number of packets which match a given range, that is a number of packets whose delay versus the previous packet fell within the range of a given bin. The number of bins in every histogram is 100. The interval of bins for every case described in this paper was established by dividing the difference between the maximum and the minimum value of delay by 100. Groups into which the stream used for steganographic purposes is divided are marked with letters A, B and C. It should be highlighted here that the histogram described above refers to the traffic related to only one UDP stream sent from node 0 to node 49. A group of packets containing hidden information has been market with letter A. These packets are delayed by 10 milliseconds on average with respect to the preceding packet. This is the case because, as it has already been mentioned above, these packets are intentionally delayed by additional 30 milliseconds, so they are sent after their successor already at the source. Sending packets at regular intervals of 20 ms and then delaying one of them by additional 30 ms is a reason for some 10 millisecond delay with respect to the predecessor. In line with the input parameters, in this case this is some 15% of all packets of the stream under analysis. The most numerous group of packets is marked with letter B -these are packets which were not intentionally delayed, therefore their average delay is some 20 ms. In addition, they do not carry stenograms. This group contains some 70% of all packets of the stream. Packets which were sent just before the ones containing hidden information are marked with letter C. Their average delay with respect to their predecessor is some 40 ms, that is twice as much as the value of T1. This group's number corresponds to group A, which is also some 15% of the entire traffic. As expected, the minimum value of delay slightly decreases, the maximum delay increases and the average value changes the least and also increases. Movement of the average value towards longer delays is due to the fact that the time of a packet processing in each node, irrespective of the packet, is non-zero. The marginal values (minimum and maximum) go with every consecutive node to the extreme, because when packets are sent at equal intervals at the source, after the first node most of them still preserve time dependence with respect to the neighbouring packets, yet after several subsequent nodes, due to random events occurring in the network and in every node, the dependencies become levelled out. As depicted in the histogram, the distribution of delays resembles a bell in its shape. Initially, the bell is slender and tall, then it becomes lower and broader with the route length. The next plot presents a standard deviation of delay of packets which belong to group A. As presented in the plot above (Fig. 6), the value which changes the most along with the way covered by the packets, is the standard deviation of the delay between consecutive packets. This confirms the change in the shape of the histogram for nodes located at the end of the route. Another example in this topology is a case using modulation of delay between consecutive packets in order to assign "zeros" and "ones". 
This method consists in intentionally delaying a packet to signal a logical "one" to the party receiving the steganogram, or in intentionally advancing it (a buffer is required in which a specific number of messages is stored) in order to assign a logical "zero" (the assignment of "zeros" and "ones" may be reversed). Similarly to the previous case, nodes 1 to 48 will be collecting information on time dependencies between consecutive packets. As an additional element in this case, we introduce an unintentional delay in node no. 15. The purpose of this is to simulate an overloaded network node, which inadvertently introduces delays to all packets that pass through it. The delay is added on the outgoing interface of the host with a tool available in Linux, namely Traffic Control [5]. The distribution of the delay is normal, with an average value of 30 ms and a standard deviation of 15 ms. The other input parameters are described in Table II.
Analogously to the previous case, a histogram was calculated for every node, and the histograms were then combined into one three-dimensional histogram presented in Fig. 7. Similarly to the previous example, the main part of the multimedia stream is initially focused on one value equal to 50 ms (that is, the value of parameter T1). The packets used for modulation of the bits which form the steganogram constitute a much smaller part of the whole. Small peaks are visible near the delays of 25 ms and 75 ms (T1 ± T2). Both peaks comprise 5% of all packets. At node 15, which disturbs time dependencies, we can see that the groups which have been clearly distinguishable so far have been totally mixed with each other, and the number of packets allocated to each bin is similar. The vertical edge visible for delays equal to 0 ms is due to the fact that the Traffic Control tool is unable to accelerate the packets (impose a negative delay on them); all these packets were sent immediately after their arrival at the node.
Let us look now at the diagrams presenting the minimum, maximum and average value of the delay between consecutive packets. In this case, all packets of the multimedia stream are taken into consideration. The plot in Fig. 8 presents the extent of the impact which strong noise and disturbances created in heavily loaded or damaged network devices have on the method's efficiency and on the possibility of identifying the groups which are part of the stream. Below, a diagram of the standard deviation is presented, which is also strongly disturbed from node 15 onward by the introduced disturbance of time dependencies.
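A sketch of the delay-modulation encoding described above, with T1 = 50 ms and T2 = 25 ms as in this case; the choice of modulating every 20th packet is our assumption for the example, not a parameter taken from Table II.

```python
import numpy as np

def modulated_send_times(bits, T1_ms=50.0, T2_ms=25.0, every=20):
    """Send times (ms): every `every`-th packet is shifted by +T2 (logical one)
    or -T2 (logical zero); all other packets stay on the regular T1 grid."""
    n = (len(bits) + 1) * every
    times = np.arange(n) * T1_ms
    for i, bit in enumerate(bits):
        idx = (i + 1) * every
        times[idx] += T2_ms if bit else -T2_ms
    return times

t = modulated_send_times([1, 0, 1, 1, 0])
# The inter-packet gaps take the values T1 and T1 +/- T2 (here 50, 25 and 75 ms),
# matching the peaks described for this simulation case.
print(np.unique(np.diff(t)))
```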
B. Manhattan-type topology
A more complicated case, whose network topology is depicted in Fig. 2, will now be presented. The node sending the multimedia (and steganographic) stream is node number 0. The recipients of the transmission are the two hosts number 5 and 35. Every topology may be presented from the perspective from which a given host sees it; in this case it is depicted in Fig. 3. As has been stated before, this transformation is a result of the paths traced by the OSPF routing protocol. The nodes collecting data on packet delays are nodes number 1, 6, 7, 3, 9, 15, 21, 20, 19, 18, 34, 28, 29 (the selected nodes create three "walls" between node 0 and 35 in the topology diagram). LACK is the steganographic method used. An additional element is a disturbing node, no. 12, which delays every packet by 30 ms ± 15 ms with a correlation between the value of the added delay and the previous one equal to 15%. The simulation duration is identical to the previous ones and amounts to 120 seconds; the other parameters are presented in the table below. Identically to the two previous cases, a histogram for every network node was created (Fig. 10).

Fig. 10. Histogram of delays between consecutive packets for every node. Case no. 2
The analysis of the histogram itself shows that in this case it is difficult to conclude unambiguously that the time dependencies blur along the subsequent nodes of the path. The first several nodes in which we collect measurements have very similar delay histograms; then everything becomes blurred by the disturbances introduced in node no. 12. With such a small number of nodes collecting measurements and under the influence of external disturbance, we are unable to conclude where the source of the steganographic stream is, even despite the two separate streams, which theoretically should demonstrate that in the proximity of node 0 the time dependencies are precisely established, and that, moving away from it both in the direction of node 5 and of node 35, the time dependencies blur (similarly as in the first case presented).
V. CONCLUSIONS
The method MoveSteg, previously proposed in [6], in line with what has been described in the previous sections, allows one to observe the process of blurring of the time dependencies between packets. This phenomenon occurs irrespective of whether a given packet is used as a steganographic carrier. The conclusions presented in this paper may serve to detect the source sending steganography in our network.
The scenario for using this method in the real world is the following. With a server in our network seized by a criminal (hacker), from which a steganographic stream is sent in several directions via different paths, we are able to find the source of steganography by analyzing the traffic at a sufficiently large number of nodes. The seized server may introduce disturbances to time dependencies not only in streams which originate in it, but also in the ones that pass through it. If we are able to analyze the traffic for every such passing stream, we will notice the groups of packets which were described in Section 4.1. With the growing distance from such a server, every group will undergo an increasing blurring. By collecting and correlating several such observations, we will be able to indicate which node in the network is conducting steganographic transmission. A weakness of this method is its dependence on the number of probes (measurement points) on the path of the stream used as a steganographic carrier. With a small number of hosts collecting measurements, we are unable to state with sufficient certainty whether, with the increasing distance from the suspicious node, the time dependencies in the streams become blurred. Another weakness of this method is the sensitivity of the analysis to disturbances introduced unintentionally by other elements of the network, for instance heavily overloaded or damaged transmission devices. There may be a case when a node suspected of sending steganography is in fact an old server with insufficient resources.
Fig. 2. Manhattan-type topology consisting of 36 hosts.
Fig. 3. Manhattan-type topology from the perspective of node 0, taking into account the routes traced by a routing protocol, consisting of 36 hosts. Such a transformation of the scheme allows to considerably simplify the traffic analysis.
Fig. 5. Plot showing the minimum, maximum and average value in case no. 1.
Fig. 6. Standard deviation in case no. 1.
Fig. 7. Histogram of delays between consecutive packets for every node. Case no. 2.
Fig. 8. Diagram presenting the minimum, maximum and average value of case no. 2.
Fig. 9. Diagram of standard deviation in case no. 2.
TABLE I. Input parameters, case no. 1

Parameter   Value   Unit
T1          20      ms
T2          30      ms
P

TABLE II. Input parameters, case no. 2

Parameter   Value   Unit
T1          50      ms
T2          25      ms
P           5       %
L           100

TABLE III. Input parameters, case no. 3

Parameter   Value   Unit
T1          20      ms
T2          30      ms
P           5       %
[1] V. Berk, A. Giani, G. Cybenko, "Detection of Covert Channel Encoding in Network Packet Delays", Technical Report TR2005-536, 2005.
[2] B. Jankowski, W. Mazurczyk, K. Szczypiorski, "Information Hiding Using Improper Frame Padding", Proc. of 14th International Telecommunications Network Strategy and Planning Symposium (Networks 2010), 2010.
[3] W. Mazurczyk, J. Lubacz, K. Szczypiorski, "On Steganography in Lost Audio Packet", International Journal of Security and Communication Networks, John Wiley & Sons, doi: 10.1002/sec.388, ISSN: 1939-0114.
[4] Bo Xu, Jiazhen Wang, Deyun Peng, "Practical Protocol Steganography: Hiding Data in IP Header", Modelling & Simulation, 2007 (AMS '07), First Asia International Conference, 2007, pp. 584-588.
[5] Linux Foundation, netem tool, 2009, http://www.linuxfoundation.org/collaborate/workgroups/networking/netem (date of access: 13.06.2016).
[6] K. Szczypiorski, A. Janicki, S. Wendzel, "The Good, The Bad And The Ugly: Evaluation of Wi-Fi Steganography", 2015.
[7] W. Mazurczyk, P. Szaga, K. Szczypiorski, "Using Transcoding for Hidden Communication in IP Telephony", 2005.
Polarized antiquark distributions from chiral quark-soliton model: summary of the results

K. Goeke (a), P.V. Pobylitsa (a,b), M.V. Polyakov (a,b), D. Urbano (a,c)

(a) Institute for Theoretical Physics II, Ruhr University Bochum, Germany
(b) Petersburg Nuclear Physics Institute, 188350 Gatchina, St. Petersburg, Russia
(c) Faculdade de Engenharia da Universidade do Porto, 4000 Porto, Portugal

arXiv:hep-ph/0003324v1, 31 Mar 2000
In these short notes we present a parametrization of the results obtained in the chiral quark-soliton model for polarized antiquark distributions ∆ū, ∆d and ∆s at a low normalization point around µ = 0.6 GeV.
The aim of these short notes is to summarize the results for the polarized antiquark distributions ∆ū, ∆d and ∆s obtained in refs. [1,2,3] in the framework of the chiral quark-soliton model.
The chiral quark-soliton model [4] is a low-energy field theoretical model of the nucleon structure which allows a consistent calculations of leading twist quark and antiquark distributions [1]. Due to its field theoretical nature the quark and antiquark distributions obtained in this model satisfy all general QCD requirements: positivity, sum rules, inequalities, etc.
A remarkable prediction of the chiral quark soliton model, noted first in ref. [1], is the strong flavour asymmetry of polarized antiquarks, the feature which is missing in other models like, for instance, pion cloud models (for discussion of this point see Ref. [5]).
The fits below are based on the calculations of Refs. [1,2,3], generalized to the case of three flavours. The results of these calculations are fitted by the form inspired by quark counting rules discussed in Ref. [6]:
∆q̄(x) = (1/x^{α_q}) [ A_q (1 − x)^5 + B_q (1 − x)^6 ],    (1)
which leads to
α_u = 0.0542,  α_d = 0.0343,  α_s = 0.0169,
A_u = 0.319,   A_d = −0.185,  A_s = −0.0366,
B_u = 0.589,   B_d = −0.672,  B_s = −0.316.    (2)
In Fig. 1 we plot the resulting distribution functions. We note that these functions, obtained in the framework of the chiral quark soliton model, refer to the normalization point of about µ = 0.6 GeV.
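For convenience, the parametrization (1)-(2) can be evaluated numerically with a few lines of Python; the snippet below (which assumes numpy and uses our own function and variable names) reproduces the combinations x∆q̄(x) shown in Fig. 1.

import numpy as np

# Fit parameters of eqs. (1)-(2), valid at the low scale mu ~ 0.6 GeV
ALPHA = {"u": 0.0542, "d": 0.0343, "s": 0.0169}
A     = {"u": 0.319,  "d": -0.185, "s": -0.0366}
B     = {"u": 0.589,  "d": -0.672, "s": -0.316}

def delta_qbar(x, flavour):
    # Polarized antiquark distribution of eq. (1) for flavour in {"u","d","s"}
    x = np.asarray(x, dtype=float)
    return x ** (-ALPHA[flavour]) * (A[flavour] * (1 - x) ** 5
                                     + B[flavour] * (1 - x) ** 6)

x = np.linspace(0.01, 0.99, 200)
curves = {f: x * delta_qbar(x, f) for f in ("u", "d", "s")}   # x * Delta(x)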
A few comments are in order here:
• The model calculations are not justified at x close to zero and one. Therefore the small-x and x → 1 behaviours obtained in the fit above should be considered as an educated guess only, not as a model prediction. Measurements of the flavour asymmetry of polarized antiquarks, say, in semi-inclusive DIS [5] or in Drell-Yan reactions with polarized protons [7], would allow to discriminate between different pictures of the nucleon.
Figure 1: Results for x∆ū(x), x∆d̄(x) and x∆s̄(x) at the low normalization point obtained in the chiral quark-soliton model.

• We estimate that the theoretical errors related to the approximations (1/N_c corrections, m_s corrections, etc.) done in the model calculations are at the level of 20%-30% for ∆ū and ∆d̄, and around 50% for ∆s̄. The value of the normalization point is not known exactly; the most favoured value is µ = 0.6 GeV.
[1] D.I. Diakonov, V.Yu. Petrov, P.V. Pobylitsa, M.V. Polyakov and C. Weiss, Nucl. Phys. B 480 (1996) 341; Phys. Rev. D 56 (1997) 4069.
[2] M. Penttinen, M.V. Polyakov, K. Goeke, hep-ph/9909489, to appear in Phys. Rev. D.
[3] K. Goeke et al., Preprint RUB-TPII-18/99, hep-ph/0001272.
[4] D.I. Diakonov, V.Yu. Petrov, and P.V. Pobylitsa, Nucl. Phys. B 306 (1988) 809. For a review of the foundations of this model see: D.I. Diakonov, hep-ph/9802298.
[5] B. Dressler, K. Goeke, M.V. Polyakov, and C. Weiss, hep-ph/9909541.
[6] S.J. Brodsky, M. Burkhardt and I. Schmidt, Nucl. Phys. B441 (1995) 197.
[7] B. Dressler et al., hep-ph/9910464.
Lightweight merging of compressed indices based on BWT variants

Lavinia Egidi, University of Eastern Piedmont, Alessandria, Italy
Giovanni Manzini, University of Eastern Piedmont, Alessandria, Italy, and IIT-CNR, Pisa, Italy

arXiv:1903.01465, 4 Mar 2019

Keywords and phrases: multi-string BWT, Longest Common Prefix array, XBWT, trie compression, circular patterns

In this paper we propose a flexible and lightweight technique for merging compressed indices based on variants of the Burrows-Wheeler transform (BWT), thus addressing the need for algorithms that compute compressed indices over large collections using a limited amount of working memory. Merge procedures make it possible to use an incremental strategy for building large indices, based on merging indices for progressively larger subcollections. Starting with a known lightweight algorithm for merging BWTs [Holt and McMillan, Bioinformatics 2014], we show how to modify it in order to merge, or compute from scratch, also the Longest Common Prefix (LCP) array. We then expand our technique for merging compressed tries and circular/permuterm compressed indices, two compressed data structures for which there were hitherto no known merging algorithms.

ACM Subject Classification: Theory of computation → Design and analysis of algorithms
Introduction
The Burrows Wheeler transform (BWT), originally introduced as a tool for data compression [4], has found application in the compact representation of many different data structures. After the seminal works [31] showing that the BWT can be used as a compressed full text index for a single string, many researchers have proposed variants of this transformation for string collections [5,24], trees [9,10], graphs [3,27,35], and alignments [30,29]. See [13] for an attempt to provide a unified view of these variants. In this paper we consider the problem of constructing compressed indices for string collections based on BWT variants. A compressed index is obviously most useful when working with very large amounts of data. Therefore, a fundamental requirement for construction algorithms, in order to be of practical use, is that they are lightweight in the sense that they use a limited amount of working space, i.e. space in addition to the space used for the input and the output. Indeed, the construction of compressed indices in linear time and small working space is an active and promising area of research, see [1,12,28] and references therein.
A natural approach when working with string collections is to build the indexing data structure incrementally, that is, for progressively larger subcollections. For example, when additional data should be added to an already large index, the incremental construction appears much more reasonable, and often works better in practice, than rebuilding the complete index from scratch, even when the from-scratch option has better theoretical bounds. Indeed, in [33] and [26] the authors were able to build the largest indices in their respective fields using the incremental approach.
Along this path, Holt and McMillan [16,15] proposed a simple and elegant algorithm, that we call the H&M algorithm from now on, for merging BWTs of collections of sequences. For collections of total size n, their fastest version takes O(n aveLcp 01 ) time where aveLcp 01 is the average length of the longest common prefix between suffixes in the collection. The average length of the longest common prefix is O(n) in the worst case but O(log n) for random strings and for many real world datasets [22]. However, even when aveLcp 01 = O(log n) the H&M algorithm is not theoretically optimal since computing the BWT from scratch takes O(n) time. Despite its theoretical shortcomings, because of its simplicity and small space usage, the H&M algorithm is competitive in practice for collections with relatively small average LCP. In addition, since the H&M algorithm accesses all data by sequential scans, it has been adapted to work on very large collections in external memory [16].
In this paper we revisit the H&M algorithm and we show that its main technique can be adapted to solve the merging problem for three different compressed indices based on the BWT.
First, in Section 4 we describe a procedure to merge, in addition to the BWTs, the Longest Common Prefix (LCP) arrays of string collections. The LCP array is often used to provide additional functionalities to indices based on the BWT [31], and the issue of efficiently computing and storing LCP values has recently received much attention [14,20]. Our algorithm has the same O(n aveLcp) complexity as the H&M algorithm.
Next, in Section 5 we describe a procedure for merging compressed labelled trees (tries) as produced by the eXtended BWT transform (XBWT) [9,10]. This result is particularly interesting since at the moment there are no time and space optimal algorithms for the computation from scratch of the XBWT. Our algorithm takes time proportional to the number of nodes in the output tree times the average node height.
Finally, in Section 6 we describe algorithms for merging compressed indices for circular patterns [17], and compressed permuterm indices [11]. The time complexity of these algorithms is proportional to the total collection size times the average circular LCP, a notion that naturally extends the LCP to the modified lexicographic order used for circular strings.
Our algorithms are based on the H&M technique specialized to the particular features of the different compressed indices given as input. They all make use of techniques to recognize blocks of the input that become irrelevant for the computation and skip them in successive iterations. Because of the skipping of irrelevant blocks we call our merging procedures Gap algorithms. Our algorithms are all lightweight in the sense that, in addition to the input and the output, they use only a few bitarrays of working space and the space for handling the irrelevant blocks. The latter amount of space can be significant for pathological inputs, but in practice we found it takes between 2% and 9% of the overall space, depending on the alphabet size.
The Gap algorithms share with the H&M algorithm the feature of accessing all data by sequential scans and are therefore suitable for implementation in external memory. In [7] an external memory version of the Gap algorithm for merging BWT and LCP arrays is engineered, analyzed, and extensively tested on collections of DNA sequences. The results reported there show that the external memory version of Gap outperforms the known external memory algorithms for BWT/LCP computation when the average LCP of the collection is relatively small or when the strings of the input collection have widely different lengths.
To the best of our knowledge, the problem of incrementally building compressed indices via merging has been previously addressed only in [34] and [26]. Sirén presents in [34] an algorithm that maintains a BWT-based compressed index in RAM and incrementally merges new collections to it. The algorithm is the first that makes it possible to build indices for Terabytes of data without using a specialized machine with a lot of RAM. However, Sirén's algorithm is specific for a particular compressed index (which doesn't use the LCP array), while ours can be more easily adapted to build different flavors of compressed indices as shown in this paper. In [26] the authors present a merge algorithm for colored de Bruijn graphs. Their algorithm is also inspired by the H&M algorithm and the authors report a threefold reduction in working space compared to the state of the art methods for from scratch de Bruijn graphs. Inspired by the techniques introduced in this paper, we are currently working on an improved de Bruijn graph merging algorithm [6] that also supports the construction of succinct Variable Order de Bruijn graph representations [2].
Background
Let t[1, n] denote a string of length n over an alphabet Σ of constant size σ.
We write t[i, j] to denote the substring t[i]t[i + 1] · · · t[j]. If j ≥ n we assume t[i, j] = t[i, n]. If i > j or i > n then t[i, j]
is the empty string. Given two strings t and s we write t ⪯ s (t ≺ s) to denote that t is lexicographically (strictly) smaller than s. We denote by LCP(t, s) the length of the longest common prefix between t and s.
The suffix array sa [1, n] associated to t is the permutation of [1, n] giving the lexicographic order of t's suffixes, that is,
for i = 1, . . . , n−1, t[sa[i], n] ≺ t[sa[i+1]
, n]. The longest common prefix array lcp[1, n + 1] is defined for i = 2, . . . , n by
lcp[i] = LCP(t[sa[i − 1], n], t[sa[i], n]);(1)
the lcp array stores the length of the longest common prefix between lexicographically consecutive suffixes. For convenience we define lcp [1] = lcp[n + 1] = −1. We also define the maximum and average LCP as:
maxLcp = max_{1<i≤n} lcp[i],    aveLcp = ( Σ_{1<i≤n} lcp[i] ) / n.    (2)
The Burrows-Wheeler transform bwt[1, n] of t is defined by

    bwt[i] = t[n]           if sa[i] = 1,
    bwt[i] = t[sa[i] − 1]   if sa[i] > 1.

bwt is best seen as the permutation of t in which the position of t[j] coincides with the lexicographic rank of t[j+1, n] (or of t[1, n] if j = n) in the suffix array. We call the string t[j+1, n] the context of t[j]. See Figure 1 for an example.

Figure 1: LCP array and BWT for t0 = abcab$0 and t1 = aabcabc$1, and multi-string BWT and corresponding LCP array for the same strings. Column id shows, for each entry of bwt01 = bc$1cc$0aaaaabbb, whether it comes from t0 or t1.

The longest common prefix (LCP) array and the Burrows-Wheeler transform (BWT) can be generalized to the case of multiple strings. Historically, the first of such generalizations is the circular BWT [24] considered in Section 6. Here we consider the generalization proposed in [5], which is the one most used in applications. Let t0[1, n0] and t1[1, n1] be such that t0[n0] = $0 and t1[n1] = $1, where $0 < $1 are two symbols not appearing elsewhere in t0 and t1 and smaller than any other symbol. Let sa01[1, n0+n1] denote the suffix array of the concatenation t0 t1. The multi-string BWT of t0 and t1, denoted by bwt01[1, n0+n1], is defined by

    bwt01[i] = t0[n0]                 if sa01[i] = 1,
    bwt01[i] = t0[sa01[i] − 1]        if 1 < sa01[i] ≤ n0,
    bwt01[i] = t1[n1]                 if sa01[i] = n0 + 1,
    bwt01[i] = t1[sa01[i] − n0 − 1]   if sa01[i] > n0 + 1.

Note that bwt01[i] is always a character of the string (t0 or t1) containing the i-th largest suffix (see again Figure 1). The above notion of multi-string BWT can be immediately generalized to define bwt1···k for a family of distinct strings t1, t2, . . . , tk. Essentially bwt1···k is a permutation of the symbols in t1, . . . , tk such that the position in bwt1···k of ti[j] is given by the lexicographic rank of its context ti[j+1, ni] (or ti[1, ni] if j = ni).
Given the concatenation t 0 t 1 and its suffix array sa 01 [1, n 0 + n 1 ], we consider the corresponding LCP array lcp 01 [1, n 0 + n 1 + 1] defined as in (1) (see again Fig. 1). Note that, for i = 2, . . . , n 0 + n 1 , lcp 01 [i] gives the length of the longest common prefix between the contexts of bwt 01 [i] and bwt 01 [i − 1]. This definition can be immediately generalized to a family of k strings to define the LCP array lcp 12···k associated to the multi-string BWT bwt 12···k .
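The definitions above are easy to turn into (very inefficient) working code, which can be handy for checking small examples such as the one in Figure 1. The following Python sketch uses naive quadratic constructions and our own helper names; it is meant only as an executable restatement of the definitions, not as a practical construction algorithm.

def suffix_array(t):
    # 1-based starting positions of t's suffixes in lexicographic order
    return sorted(range(1, len(t) + 1), key=lambda i: t[i - 1:])

def lcp_len(a, b):
    k = 0
    while k < len(a) and k < len(b) and a[k] == b[k]:
        k += 1
    return k

def multi_bwt_lcp(t0, t1):
    # Multi-string BWT bwt01 and LCP array lcp01 of the end-marker terminated
    # strings t0 and t1, following the definitions given above.
    n0, n1, t = len(t0), len(t1), t0 + t1
    sa01 = suffix_array(t)
    def bwt_symbol(p):                        # p = sa01[i]
        if p == 1:      return t0[n0 - 1]     # whole t0: emit its end-marker
        if p <= n0:     return t0[p - 2]      # character preceding t0[p, n0]
        if p == n0 + 1: return t1[n1 - 1]     # whole t1: emit its end-marker
        return t1[p - n0 - 2]                 # character preceding t1's suffix
    bwt01 = "".join(bwt_symbol(p) for p in sa01)
    lcp01 = [lcp_len(t[sa01[k - 1] - 1:], t[sa01[k] - 1:])
             for k in range(1, n0 + n1)]      # lcp01[2 .. n0+n1] as in (1)
    return bwt01, lcp01

# Example of Figure 1, with \x00 and \x01 playing the roles of $0 < $1:
bwt01, lcp01 = multi_bwt_lcp("abcab\x00", "aabcabc\x01")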
The H&M Algorithm
In [16] Holt and McMillan introduced a simple and elegant algorithm, we call it the H&M algorithm, to merge multi-string BWTs 1 . Because it is the starting point for our results, we now briefly recall its main properties.
Given bwt 1···k and bwt k+1 k+2 ···h the H&M algorithm computes bwt 1···h . The computation does not explicitly need t 1 , . . . , t h but only the (multi-string) BWTs to be merged. For simplicity of notation we describe the algorithm assuming we are merging two single-string BWTs bwt 0 = bwt(t 0 ) and bwt 1 = bwt(t 1 ); the same algorithm works in the general case with multi-string BWTs in input. Note also that the algorithm can be easily adapted to merge more than two (multi-string) BWTs at the same time.
Computing bwt01 amounts to sorting the symbols of bwt0 and bwt1 according to the lexicographic order of their contexts, where the context of symbol bwt0[i] (resp. bwt1[i]) is t0[sa0[i], n0] (resp. t1[sa1[i], n1]). By construction, the symbols in bwt0 and bwt1 are already sorted by context, hence to compute bwt01 we only need to merge bwt0 and bwt1 without changing the relative order of the symbols within the two input sequences.
The H&M algorithm works in successive iterations. After the h-th iteration the entries of bwt 0 and bwt 1 are sorted on the basis of the first h symbols of their context. More formally, the output of the h-th iteration is a binary vector Z (h) containing n 0 = |t 0 | 0's and n 1 = |t 1 | 1's and such that the following property holds.
Property 1. For i = 1, . . . , n0 and j = 1, . . . , n1 the i-th 0 precedes the j-th 1 in Z(h) if and only if

    t0[sa0[i], sa0[i] + h − 1] ⪯ t1[sa1[j], sa1[j] + h − 1]    (3)

(recall that according to our notation if sa0[i] + h − 1 > n0 then t0[sa0[i], sa0[i] + h − 1] coincides with t0[sa0[i], n0], and similarly for t1).

Following Property 1 we identify the i-th 0 in Z(h) with bwt0[i] and the j-th 1 in Z(h) with bwt1[j] so that Z(h) encodes a permutation of bwt01. Property 1 is equivalent to stating that we can logically partition Z(h) into b(h) + 1 blocks

    Z(h)[1, ℓ1], Z(h)[ℓ1 + 1, ℓ2], . . . , Z(h)[ℓb(h) + 1, n0 + n1]    (4)

such that each block corresponds to a set of bwt01 symbols whose contexts are prefixed by the same length-h string (the symbols with a context shorter than h are contained in singleton blocks). Within each block the symbols of bwt0 precede those of bwt1, and the context of any symbol in block Z(h)[ℓj + 1, ℓj+1] is lexicographically smaller than the context of any symbol in block Z(h)[ℓk + 1, ℓk+1] with k > j.
The H&M algorithm initially sets Z (0) = 0 n0 1 n1 : since the context of every bwt 01 symbol is prefixed by the same length-0 string (the empty string), there is a single block containing all bwt 01 symbols. At iteration h the algorithm computes Z (h+1) from Z (h) using the procedure in Figure 2. The following lemma is a restatement of Lemma 3.2 in [16] using our notation (see [8] for a proof in our notation).
Lemma 2.
For h = 0, 1, 2, . . . the bit vector Z (h) satisfies Property 1.
Computing LCP values with the H&M algorithm
Our first result is to show that with a simple modification to the H&M algorithm it is possible to compute the LCP array lcp01, in addition to merging bwt0 and bwt1. Our strategy consists in keeping explicit track of the logical blocks we have defined for Z(h) and represented in (4). We maintain an integer array B[1, n0 + n1 + 1] such that at the end of iteration h it is B[i] ≠ 0 if and only if a block of Z(h) starts at position i. The use of such integer array is shown in Figure 3.

Figure 2: Main loop of the H&M algorithm, computing Z(h) from Z(h−1).
    1:  Initialize array F[1, σ]
    2:  k0 ← 1; k1 ← 1                  (init counters for bwt0 and bwt1)
    3:  for k ← 1 to n0 + n1 do
    4:      b ← Z(h−1)[k]               (read bit b from Z(h−1))
    5:      c ← bwtb[kb++]              (get symbol from bwt0 or bwt1 according to b)
    6:      if c ≠ $ then
    7:          j ← F[c]++              (get destination for b according to symbol c)
    8:      else
    9:          j ← b                   (symbol $b goes to position b)
    10:     end if
    11:     Z(h)[j] ← b                 (copy bit b to Z(h))
    12: end for

Figure 3: Modification of the iteration of Figure 2 that also maintains the array B marking block boundaries, using the auxiliary array Block_id.
    1:  Initialize arrays F[1, σ] and Block_id[1, σ]
    2:  k0 ← 1; k1 ← 1                  (init counters for bwt0 and bwt1)
    3:  for k ← 1 to n0 + n1 do
    4:      if B[k] ≠ 0 and B[k] ≠ h then
    5:          id ← k                  (a new block of Z(h−1) is starting)
    6:      end if
    7:      b ← Z(h−1)[k]               (read bit b from Z(h−1))
    8:      c ← bwtb[kb++]              (get symbol from bwt0 or bwt1 according to b)
    9:      if c ≠ $ then
    10:         j ← F[c]++              (get destination for b according to symbol c)
    11:     else
    12:         j ← b                   (symbol $b goes to position b)
    13:     end if
    14:     Z(h)[j] ← b                 (copy bit b to Z(h))
    15:     if Block_id[c] ≠ id then
    16:         Block_id[c] ← id
    17:         if B[j] = 0 then
    18:             B[j] ← h            (mark the start of a new block of Z(h))
    19:         end if
    20:     end if
    21: end for

Note that: (i) initially we set B = 1 0^{n0+n1−1} 1 and once an entry in B becomes nonzero it is never changed, (ii) during iteration h we only write to B the value h, (iii) because of the test at Line 4 the values written during iteration h influence the algorithm only in subsequent iterations. In order to identify new blocks, we maintain an array Block_id[1, σ] such that Block_id[c] is the id of the block of Z(h−1) to which the last seen occurrence of symbol c belonged.
Lemma 3.
For any h ≥ 0, let , m be such that 1 ≤ ≤ m ≤ n 0 + n 1 and
lcp 01 [ ] < h, min(lcp 01 [ + 1], . . . , lcp 01 [m]) ≥ h, lcp 01 [m + 1] < h.(5)
Then, at the end of iteration h the array B is such that
B[ ] = 0, B[ + 1] = · · · = B[m] = 0, B[m + 1] = 0 (6)
and
Z (h) [ , m] is, if ≤ v ≤ m and Z (h) [v] is the ith 0 (resp. jth 1) of Z (h) then the suffix starting at t 0 [sa 0 [i]] (resp. t 1 [sa 1 [j]]) is prefixed by s.
To prove (6)
b = Z (h−1) [k], e ≤ k ≤ f , such that the corresponding value in bwt b is c.
Note that by (7) The above corollary suggests the following algorithm to compute bwt 01 and lcp 01 : repeat the procedure of Figure
The Gap BWT/LCP merging Algorithm
The Gap algorithm, as well as its variants described in the following sections, is based on the notion of monochrome blocks, i.e. blocks containing entries coming from only one of the two input sequences. Since a monochrome block only contains suffixes from either t0 or t1, whose relative order is known, it does not need to be further modified. If, in addition, the LCP arrays of t0 and t1 are given in input, then also the LCP values inside monochrome blocks are known without further processing. Notice that this lazy strategy of not completely processing monochrome blocks makes it impossible to compute LCP values from scratch; in this case, in order to compute lcp01 it is necessary that the algorithm also takes lcp0 and lcp1 in input. This intuition is formalized by the following lemma: if the block Z(h)[ℓ, m] is monochrome at the end of iteration h, then its content does not change in any subsequent iteration and, moreover, processing it during iteration h+1 only creates monochrome blocks in Z(h+1).

Proof. The first part of the lemma follows from the observation that subsequent iterations of the algorithm will only reorder the values within a block (and possibly create new sub-blocks); but if a block is monochrome the reordering will not change its actual content. For the second part, we observe that during iteration h+1, as k goes from ℓ to m, the algorithm writes to Z(h+1) the same value which is in Z(h)[ℓ, m]. Hence, a new monochrome block will be created for each distinct symbol encountered (in bwt0 or bwt1) as k goes through the range [ℓ, m].
The lemma implies that, if block Z(h)[ℓ, m] is monochrome at the end of iteration h, starting from iteration g = h + 2 processing the range [ℓ, m] will not change Z(g) with respect to Z(g−1). Indeed, by the lemma the monochrome blocks created in iteration h + 1 do not change in subsequent iterations (in a subsequent iteration a monochrome block can be split in sub-blocks, but the actual content of the bit vector does not change). The above observation suggests that, after we have processed block Z(h+1)[ℓ, m] in iteration h + 1, we can mark it as irrelevant and avoid to process it again. As the computation goes on, more and more blocks become irrelevant. Hence, at the generic iteration h instead of processing the whole Z(h−1) we process only the blocks which are still "active" and skip irrelevant blocks. Adjacent irrelevant blocks are merged so that among two active blocks there is at most one irrelevant block (the gap after which the algorithm is named). The overall structure of a single iteration is shown in Figure 4. The algorithm terminates when there are no more active blocks since this implies that all blocks have become monochrome and by Lemma 7 we are able to compute bwt01 and lcp01.
We point out that at Line 2 of the Gap algorithm we cannot simply skip an irrelevant block ignoring its content. To keep the algorithm consistent we must correctly update the global variables of the main loop, i.e. the array F and the pointers k 0 and k 1 in Figure 3. To this end a simple approach is to store for each irrelevant block the number of occurrences o c of each symbol c ∈ Σ in it and the pair (r 0 , r 1 ) providing the number of 0's and 1's in the block (recall that an irrelevant block may consist of adjacent monochrome blocks coming from different strings). When the algorithm reaches an irrelevant block,
F , k 0 , k 1 are updated setting k 0 ← k 0 + r 0 , k 1 ← k 1 + r 1 and ∀c F [c] ← F [c] + o c .
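A minimal Python sketch of this bookkeeping is given below; the class and function names are ours, and in an actual implementation the per-symbol counts would be stored more compactly.

class IrrelevantBlock:
    # Summary of a skipped block: number of entries coming from bwt0 (r0)
    # and from bwt1 (r1), and number of occurrences of each symbol (occ).
    def __init__(self, r0, r1, occ):
        self.r0, self.r1, self.occ = r0, r1, occ

def skip_irrelevant(block, F, k0, k1):
    # Update the scan state exactly as if the block had been processed:
    # advance the two input cursors and the destination counters F.
    k0 += block.r0
    k1 += block.r1
    for c, oc in block.occ.items():
        F[c] += oc
    return k0, k1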
The above scheme for handling irrelevant blocks is simple and effective for most applications. However, for a large non-constant alphabet it would imply a multiplicative O(σ) slowdown. In [8,Sect. 4] we present a different scheme for large alphabets with a slowdown reduced to O(log σ).
We point out that our Gap algorithm is related to the H&M variant with O(n aveLcp) time complexity described in [15, Sect. 2.1]: indeed, the sorting operations are essentially the same in the two algorithms. The main difference is that Gap keeps explicit track of the irrelevant blocks while H&M keeps explicit track of the active blocks (called buckets in [15]): this difference makes the non-sorting operations completely different. An advantage of working with irrelevant blocks is that they can be easily merged, while this is not the case for the active blocks in H&M. Of course, the main difference is that Gap merges simultaneously BWT and LCP values. For the analysis of the working space we observe that for the array B we can use the space of the output LCP array, hence the working space consists only of 2n bits for two instances of the arrays Z(·) and a constant number of counters (the arrays F and Block_id).
It is unfortunately impossible to give a clean bound for the space needed for keeping track of irrelevant blocks. Our scheme uses O(1) words per block, but in the worst case we can have Θ(n) blocks. Although such worst case is rather unlikely, it is important to have some form of control on this additional space. We use the following simple heuristic: we choose a threshold τ and we keep track of an irrelevant block only if its size is at least τ . This strategy introduces a O(τ ) time slowdown but ensures that there are at most n/(τ + 1) irrelevant blocks simultaneously. The experiments in the next section show that in practice the space used to keep track of irrelevant blocks is less than 10% of the total.
Note that also in [15] the authors faced the problem of limiting the memory used to keep track of the active blocks. They suggested the heuristic of keeping track of active blocks only after the h-th iteration (h = 20 for their dataset).
Experimental Results
We have implemented the Gap algorithm in C and tested it on the collections shown in Table 1 which have documents of different size, LCP, and alphabet size. We represented LCP values with the minimum possible number of bytes for each collection: 1 byte for Illumina, 2 bytes for Pacbio and Proteins, and 4 bytes for Wiki-it. We always used 1 byte for each BWT value and n bytes to represent a pair of Z (h) arrays using 4 bits for each entry so that the tested implementation can merge simultaneously up to 16 BWTs.
Referring to Table 2, we split each collection into k subcollections of size less than 2GB and we computed the multi-string SA of each subcollection using gSACA-K [23]. From the SA we computed the multi-string BWT and LCP arrays using the Φ algorithm [19] (implemented in gSACA-K). This computation used 13 bytes per input symbol. Then, we merged the subcollections' BWTs and LCPs using Gap with different values of the parameter τ, which determines the size of the smallest irrelevant block we keep track of. Since skipping a block takes time proportional to σ + k, regardless of τ Gap never keeps track of blocks smaller than that threshold; therefore for Wiki-it we performed a single experiment where the smallest irrelevant block size was σ + k = 215.

Table 2: For each collection we report the number k of subcollections, the average running time of gSACA-K+Φ in µsecs per symbol, and the running time (µsecs) and space usage (bytes) per symbol for Gap for different values of the τ parameter. All tests were executed on a desktop with 32GB RAM and eight Intel-I7 3.40GHz CPUs, using a single CPU in each experiment.
As expected, the parameter τ offers a time-space tradeoff for the Gap algorithm. In the space reported in Table 2, the fractional part is the peak space usage for irrelevant blocks, while the integral value is the space used by the arrays bwt i , B and Z (h) . For example, for Wiki-it we use n bytes for the BWTs, 4n bytes for the LCP values (the B array), n bytes for Z (h) , and the remaining 0.55n bytes are mainly used for keeping track of irrelevant blocks. This is a relatively high value, about 9% of the total space, since in our current implementation the storage of a block grows linearly with the alphabet size. For DNA sequences and τ = 200 the cost of storing blocks is less than 3% of the total without a significant slowdown in the running time.
For completeness, we tested the H&M implementation from [15] on the Pacbio collection. The running time was 14.57 µsecs per symbol and the space usage 2.28 bytes per symbol. These values are only partially significant for several reasons: (i) H&M computes the BWT from scratch, hence doing also the work of gSACA-K, (ii) H&M doesn't compute the LCP array, hence the lower space usage, (iii) the algorithm is implemented in Cython which makes it easier to use in a Python environment but is not as fast and space efficient as C.
Merging only BWTs
If we are not interested in LCP values but we only need to merge BWTs, we can still use Gap instead of H&M to do the computation in O(n aveLcp) time. In that case however, the use of the integer array B recording LCP values is wasteful. We can save space replacing it with an array B 2 [1, n 0 + n 1 + 1] containing two bits per entry representing four possible states called {0 , 1 , 2 , 3 }. The rationale for this is that, if we are not interested in LCP values, the entries of B are only used in Line 4 of Fig. 3 where it is tested whether they are different from 0 or h. The reason for this apparently involved scheme is that during iteration h, an entry in B 2 can be modified either before or after we read it at Line 4. The resulting code is shown in Fig. 5. Using the array B 2 we can still define (and skip) monochrome blocks and therefore achieve the O(n aveLcp) complexity.
Notice that, by Corollary 4, the value in B 2 [i] changes from 0 to 2 or 1 during iteration h = lcp 01 [i] + 1. Hence, if every time we do such change we write to an external file the pair i, h − 1 , when the merging is complete the file contains all the information required to compute the LCP array lcp 01 even if we do not know lcp 0 and lcp 1 . This idea has been introduced and investigated in [7].
Merging compressed tries
Tries [21] are a fundamental data structure for representing a collection of k distinct strings. A trie consists of a rooted tree in which each edge is labeled with a symbol in the input alphabet, and each string is represented by a path from the root to one of the leaves. To simplify the algorithms, and ensure that no string is the prefix of another one, it is customary to add a special symbol # ∉ Σ at the end of each string. Tries for different sets of strings are shown in Figure 6. For any trie node u we write hgt(u) to denote its height, that is the length of the path from the root to u. We define the height of the trie T as the maximum node height hgt(T) = max_u hgt(u), and the average height avehgt(T) = ( Σ_u hgt(u) ) / |T|, where |T| denotes the number of trie nodes.

Figure 6: The trie T0 containing the strings aa#, ab#, aca#, bc# (left), the trie T1 containing aac#, ab#, ba# (center), and the trie T01 containing the union of the two sets of strings (right). Below each trie is shown the corresponding XBWT representation.
The eXtended Burrows-Wheeler Transform [10,25,32] is a generalization of the BWT designed to compactly represent any labeled tree T . To define xbwt(T ), to each internal node w we associate the string λ w obtained by concatenating the symbols in the edges in the upward path from w to the root of T . If T has n internal nodes we have n strings overall; let Π[1, n] denote the array containing such strings sorted lexicographically. Note that Π [1] is always the empty string corresponding to the root of T . For i = 1, . . . , n let L(i) denote the set of symbols labeling the edges exiting from the node corresponding to Π[i]. We define the array L as the concatenation of the arrays L(1), . . . , L(n). If T has m edges (and therefore m + 1 nodes), it is |L| = m and L contains n − 1 symbols from Σ and m + 1 − n occurrences of #. To keep an explicit representation of the intervals L(1), . . . , L(n) within L, we define a binary array Last [1, m] such that Last[i] = 1 iff L[i] is the last symbol of some interval L(j). See Figure 6 for a complete example.
In [9] it is shown that the two arrays xbwt(T ) = Last, L are sufficient to represent T , and that if they are enriched with data structures supporting constant time rank and select operations, xbwt(T ) can be used for efficient upward and downward navigation and for substring search in T . The fundamental property for efficient navigation and search is that there is an one-to-one correspondence between the symbols in L different from # and the strings in Π different from the empty string. The correspondence is order preserving in the sense that the i-th occurrence of symbol c corresponds to the i-th string in Π starting with c. For example, in Figure 6 (right) the third a in Last 01 corresponds to the third string in Π 01 starting with a, namely ab. Note that ab is the string associated to the node reached by following the edge associated to the third a in L 01 .
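The definition of the XBWT is easy to replay in code on small inputs such as those of Figure 6. The following Python sketch builds the trie of a set of #-terminated strings and emits (Last, L) together with the sorted upward paths Π; it is a naive quadratic construction with names of our choosing, useful only for checking examples by hand, not the efficient construction of [9].

def xbwt_from_strings(strings, terminator="#"):
    # Build the trie of the terminator-ended strings and return the arrays
    # Last and L of its XBWT, plus the sorted upward-path strings Pi.
    trie = {}                                   # node = dict: symbol -> child
    for s in strings:
        if not s.endswith(terminator):
            s += terminator
        node = trie
        for ch in s:
            node = node.setdefault(ch, {})
    internal = []                               # (upward path, node) pairs
    def visit(node, up):
        if node:                                # internal node (has children)
            internal.append((up, node))
            for ch, child in node.items():
                visit(child, ch + up)           # child's upward path
    visit(trie, "")
    internal.sort(key=lambda pair: pair[0])     # lexicographic order of Pi
    Pi, Last, L = [], [], []
    for up, node in internal:
        Pi.append(up)
        out = sorted(node.keys())               # labels of the outgoing edges
        L.extend(out)
        Last.extend([0] * (len(out) - 1) + [1]) # 1 marks the last label
    return Last, L, Pi

# Example: the trie T0 of Figure 6
Last0, L0, Pi0 = xbwt_from_strings(["aa", "ab", "aca", "bc"])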
In this section, we consider the problem of merging two distinct XBWTs. More formally, let T 0 (resp. T 1 ) denote the trie containing the set of strings t 1 , . . . , t k (resp. s 1 , . . . , s h ), and let T 01 denote the trie containing the strings in the union t 1 ,. . . , t k , s 1 , . . . , s h (see Figure 6). Note that T 01 might contain less than h + k strings: if the same string appears in both T 0 and T 1 it will be represented in T 01 only once. Given xbwt(T 0 ) = Last 0 , L 0 and xbwt(T 1 ) = Last 1 , L 1 we want to compute the XBWT representation of the trie T 01 .
We observe that if we had at our disposal the sorted string arrays Π0 and Π1, then the construction of xbwt(T01) could be done as follows. First, we merge lexicographically the strings in Π0 and Π1; then we scan the resulting sorted array of strings. During the scan, if we find a string appearing only once then it corresponds to an internal node belonging to either T0 or T1; the labels on the outgoing edges can be simply copied from the appropriate range of L0 or L1. If we find two consecutive equal strings they correspond respectively to an internal node in T0 and to one in T1. The corresponding node in T01 has a set of outgoing edges equal to the union of the edges of those nodes in T0 and T1: thus, the labels in the outgoing edges are the union of the symbols in the appropriate ranges of L0 and L1.
Although the arrays Π 0 and Π 1 are not available, by properly modifying the H&M algorithm we can compute how their elements would be interleaved by the merge operation. Let m 0 = |L 0 | = |Last 0 |, n 0 = |Π 0 |, and similarly m 1 = |L 1 | = |Last 1 |, n 1 = |Π 1 |. Fig. 7 shows the code for the generic h-th iteration of the H&M algorithm adapted for the XBWT. Iteration h computes a binary vector Z (h) containing n 0 = |t 0 | 0's and n 1 = |t 1 | 1's and such that the following property holds (compare with Property 1)
Property 10. For i = 2, . . . , n0 and j = 2, . . . , n1 the i-th 0 precedes the j-th 1 in Z(h) if and only if

    Π0[i][1, h] ⪯ Π1[j][1, h],    (8)

where Π0[i][1, h] (resp. Π1[j][1, h]) denotes the length-h prefix of Π0[i] (resp. Π1[j]). Note that Property 10 does not mention the first 0 and the first 1 in Z(h): by construction it is Π0[1] = Π1[1] = the empty string, so we know their lexicographic rank is the smallest possible. Note also that because of Step 3 in Fig. 7, the first 0 and the first 1 in Z(h) are always the first two elements of Z(h).
Apart from the first two entries, during iteration h the array Z(h) is logically partitioned into σ subarrays, one for each alphabet symbol different from #. If Occ(c) denotes the number of occurrences in L0 and L1 of the symbols smaller than c, then the subarray corresponding to c starts at position Occ(c) + 3. Hence, if c < c′ the subarray corresponding to c precedes the one corresponding to c′. Because of how the array F is initialized and updated, we see that every time we read a symbol c from L0 and L1 we write a value in the portion of Z(h) corresponding to c, and that each portion is filled sequentially.

Figure 7: Generic h-th iteration of the H&M algorithm adapted to the XBWT.
    1:  Initialize array F[1, σ]
    2:  k0 ← 1; k1 ← 1                  (init counters for L0 and L1)
    3:  Z(h) ← 01                       (first two entries correspond to Π0[1] = Π1[1] = the empty string)
    4:  for k ← 1 to n0 + n1 do
    5:      b ← Z(h−1)[k]               (read bit b from Z(h−1))
    6:      repeat
    7:          c ← Lb[kb]              (get symbol from L0 or L1 according to b)
    8:          if c ≠ # then           (# is ignored: it is not in Π0 or Π1)
    9:              j ← F[c]++          (get destination for b according to symbol c)
    10:             Z(h)[j] ← b         (copy bit b to Z(h))
    11:         end if
    12:         ℓ ← Lastb[kb++]         (check if c labels the last outgoing edge)
    13:     until ℓ = 1
    14: end for

Armed with these observations, we are ready to establish the correctness of the algorithm in Figure 7, namely that for every h ≥ 0 the vector Z(h) it computes satisfies Property 10; the proof is by induction on h. Suppose now h > 0. To prove the "if" part, let 3 ≤ v < w ≤ n0 + n1 denote two indexes such that Z(h)[v] is the i-th 0 and Z(h)[w] is the j-th 1 in Z(h) for some 2 ≤ i ≤ n0 and 2 ≤ j ≤ n1 (it is v ≥ 3 since i ≥ 2 and Z(h)[1, 2] = 01). We need to show that (8) holds. If Π0[i][1] ≠ Π1[j][1], then v < w implies Π0[i][1] < Π1[j][1], since the subarray of Z(h) associated to Π0[i][1] precedes the one associated to Π1[j][1], and (8) follows. Assume then that c = Π0[i][1] = Π1[j][1], and let v′ and w′ denote the positions of Z(h−1) that are being processed by the algorithm of Figure 7 when the entries Z(h)[v] and Z(h)[w] are written (hence, during the scanning of Z(h−1)). The hypotheses v < w and
Π0[i][1] = Π1[j][1] imply v′ < w′. By construction Z(h−1)[v′] = 0 and Z(h−1)[w′] = 1. Say v′ is the i′-th 0 in Z(h−1) and w′ is the j′-th 1 in Z(h−1). By the inductive hypothesis on Z(h−1) we have

    Π0[i′][1, h−1] ⪯ Π1[j′][1, h−1]    (9)

(we could have v′ = 1, which would imply i′ = 1; in that case we cannot apply the inductive hypothesis, but (9) still holds). By the properties of the XBWT we have

    Π0[i][1, h] = c Π0[i′][1, h−1]   and   Π1[j][1, h] = c Π1[j′][1, h−1],

which combined with (9) gives us (8).
For the "only if" part assume (8) holds for some i ≥ 2 and j ≥ 2. We need to prove that in Z(h) the i-th 0 precedes the j-th 1. If Π0[i][1] ≠ Π1[j][1] the proof is immediate. If c = Π0[i][1] = Π1[j][1] then Π0[i][2, h] ⪯ Π1[j][2, h]. Let i′ and j′ be such that Π0[i′][1, h−1] = Π0[i][2, h] and Π1[j′][1, h−1] = Π1[j][2, h]. By induction, in Z(h−1) the i′-th 0 precedes the j′-th 1 (again we could have i′ = 1, and in that case we cannot apply the inductive hypothesis, but the claim still holds). During iteration h, the i-th 0 in Z(h) is written to position v when processing the i′-th 0 of Z(h−1), and the j-th 1 in Z(h) is written to position w when processing the j′-th 1 of Z(h−1). Since in Z(h−1) the i′-th 0 precedes the j′-th 1, and since v and w both belong to the subarray of Z(h) corresponding to the symbol c, their relative order does not change and the i-th 0 precedes the j-th 1, as claimed.
As in the original H&M algorithm we stop the merge phase after the first iteration h such that Z(h) = Z(h−1). Since in subsequent iterations we would have Z(g) = Z(h) for any g > h, we get that, by Property 10, Z(h) gives the correct lexicographic merge of Π0 and Π1. Note however that the lexicographic order is not sufficient to establish whether two consecutive nodes, say Π0[i] and Π1[j], have the same upward path and therefore should be merged in a single node of T01. To this end, we consider the integer array B used in Section 2.1 to mark the starting point of each block. We have shown in Corollary 4 that at the end of the original H&M algorithm B contains the LCP values plus one. Indeed, at iteration h the algorithm sets B[k] = h since it "discovers" that the suffixes in sa01[k−1] and sa01[k] differ in the h-th symbol (hence lcp01[k] = h−1). If we maintain the array B in the XBWT merging algorithm, we get that at the end of the computation, if the strings associated to Π0[i] and Π1[j] are identical, then the entry in B corresponding to Π1[j] will be zero, since the two strings do not differ in any position. Hence, at the end of the modified H&M algorithm the array Z(h) provides the lexicographic order of the nodes, and the array B the positions of the nodes of T0 and T1 with the same upward path. We conclude that with a single scan of Z(h) and B we can merge all paths and compute xbwt(T01). Finally, we observe that instead of B we can use a two-bit array B2 as in Sect. 4.2, since we are only interested in determining whether a certain entry is zero, and not in its exact value.
For this reason we introduce an array C[1, n0 + n1] that, at the beginning of iteration h, keeps track of all the strings in Π0 and Π1 that have length less than h. More precisely, for i = 1, . . . , n0 (resp. j = 1, . . . , n1), if the i-th 0 (resp. the j-th 1) is in position k of Z(h), then C[k] = ℓ > 0 if and only if the length |Π0[i]| (resp. |Π1[j]|) is equal to ℓ − 1 with ℓ − 1 < h. As a consequence, if C[k] = 0 then the string corresponding to C[k] has length h or more. Note that by Property 10 at the beginning of iteration h the algorithm has already determined the lexicographic rank of all the strings in Π0 and Π1 of length smaller than h. Hence, the entry in Z(h)[k] will not change in successive iterations and will remain associated to the same string from Π0 or Π1.
The array C is initialized as 110 n0+n1−2 since at the beginning of iteration 1 it is Z (h) = 010 n0−1 1 n1−1 and indeed the only strings of length 0 are Π 0 [0] = Π 1 [0] = . During iteration h, we update C adding, immediately after Line 10 in Fig. 7, the line
if C[k] = h then C[j] ← h + 1
The rationale is that if, during iteration h − 1 we found out that the string α corresponding to Z (h−1) [k] has length h − 1 (so we set C[k] = h), then the string corresponding to Z (h) [j] is cα and has therefore length h.
By the above discussion we see that if at iteration h we write h + 1 to position C[j], then at iteration h + 1 we can possibly use C[j] to write h + 2 in some other position in C, but starting from iteration h + 2 it is no longer necessary to process neither C[j] nor Z (h+2) [j] since they will not affect neither C nor Z (h+3) . In other words, during iteration h we can skip all ranges Z (h) [ , m] such that C[ , m] contains only positive values smaller than h. These ranges grown larger and larger as the algorithm proceeds and are handled in the same way as the irrelevant blocks in Gap. Finally, we observe that, using the same techniques as in Section 4.2, we can replace the integer array C with an array C 2 containing only two bits per entry. Proof. The analysis is similar to the one in Theorem 9. Here the algorithm executes hgt(T 01 ) iterations; however, because of irrelevant blocks, iterations have decreasing costs. To bound the overall running time, observe that the cost of each iteration is dominated by the cost of processing the entries in L 0 and L 1 . The generic entry L 0 [i] corresponds to a trie node u i with upward path of length hgt(u i ). Entry L 0 [i] is processed when the Gap algorithm reaches the entry in Z (h) corresponding to the string Π 1 [i ] associated to u i 's parent. We know that Z (h) 's entry corresponding to Π 1 [i ] becomes irrelevant after iteration |Π 1 [i ]| + 1 = hgt(u i ). Hence, the overall cost of processing u i is O(hgt(u i )). Summing over all entries in L 0 and L 1 the total cost is O(|T 0 |avehgt(T 0 ) + |T 1 |avehgt(T 1 )). The thesis follows observing that |T 01 |avehgt(T 01 ) ≥ max |T 0 |avehgt(T 0 ), |T 1 |avehgt(T 1 ) .
Merging indices for circular patterns
Another well known variant of the BWT is the multistring circular BWT, which is defined by sorting the cyclic rotations of the input strings instead of their suffixes. However, to make the transformation reversible, the cyclic rotations have to be sorted according to an order relation, different from the lexicographic order, that we now quickly review. For any string t, we define the infinite form t∞ of t as the infinite length string obtained concatenating t to itself infinitely many times. Given two strings t and s we write t ⪯∞ s to denote that t∞ ⪯ s∞. For example, for t = abaa and s = aba, it is t∞ = abaaabaa··· and s∞ = abaabaaba···, so t ⪯∞ s. Notice that t∞ = s∞ does not necessarily imply that t = s. For example, for t = ababab and s = abab it is t∞ = s∞. The following lemma, which is a consequence of the Fine and Wilf Theorem [36] and a restatement of Proposition 5 in [24], provides an upper bound to the number of comparisons required to establish whether t∞ = s∞.
Lemma 14. If t ∞ ≠ s ∞ then there exists an index i ≤ |t| + |s| − gcd(|t|, |s|) such that t ∞ [i] ≠ s ∞ [i].
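The following Python sketch (ours, purely illustrative) uses the bound of Lemma 14 to compare two strings under the ⪯ ∞ order with a bounded number of symbol comparisons.

```python
from math import gcd

def inf_leq(t: str, s: str) -> bool:
    """Return True iff t precedes-or-equals s in the infinite order,
    i.e. iff t^inf <= s^inf lexicographically.  By Lemma 14 it suffices
    to look at the first |t| + |s| - gcd(|t|, |s|) symbols."""
    limit = len(t) + len(s) - gcd(len(t), len(s))
    for i in range(limit):
        a, b = t[i % len(t)], s[i % len(s)]
        if a != b:
            return a < b
    return True  # the two infinite forms coincide
```

For the example in the text, inf_leq("abaa", "aba") returns True while inf_leq("aba", "abaa") returns False.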
A string is primitive if all its cyclic rotations are distinct. The following Lemma is another well known consequence of the Fine and Wilf Theorem.
rot 01 (i) = t 0 [i, n 0 ] t 0 [1, i − 1]                 if 0 < i ≤ n 0
            t 1 [i − n 0 , n 1 ] t 1 [1, i − n 0 − 1]     if n 0 < i ≤ n 0 + n 1 .
For example, if t 0 = abc and t 1 = abbb, it is rot 01 (2) = bca and rot 01 (7) = babb. The above definition of rotations of substrings can obviously be generalized to a collection of k strings.
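A direct Python transcription of this definition (our own sketch, with the 1-based index i used in the paper) is:

```python
def rot01(t0: str, t1: str, i: int) -> str:
    """Rotation rot_01(i) of the two strings t0, t1 seen as substrings of t0 t1."""
    n0 = len(t0)
    if 0 < i <= n0:
        return t0[i - 1:] + t0[:i - 1]
    j = i - n0
    return t1[j - 1:] + t1[:j - 1]

# Example from the text:
# rot01("abc", "abbb", 2) == "bca" and rot01("abc", "abbb", 7) == "babb"
```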
In addition to assuming that t 0 and t 1 are primitive, we assume that t 0 is not a rotation of t 1 . We define the circular Suffix Array of t 0 and t 1 , csa 01 as the permutation of [1, n] such that:
rot 01 (csa 01 [i]) ⪯ ∞ rot 01 (csa 01 [i + 1]).    (10)
Note that because of our assumptions and Lemma 15, the inequality in (10) is always strict. Finally, the multistring circular Burrows-Wheeler Transform (cBWT) is defined as
cbwt 01 [i] = t 0 [n 0 ]                      if csa 01 [i] = 1
             t 0 [csa 01 [i] − 1]            if 1 < csa 01 [i] ≤ n 0
             t 1 [n 1 ]                      if csa 01 [i] = n 0 + 1
             t 1 [csa 01 [i] − n 0 − 1]      if csa 01 [i] > n 0 + 1.
The above definition given for t 0 and t 1 can be generalized to any number of strings. The ⪯ ∞ order and the above multistring circular BWT were introduced in [24]. In [11] the authors use a data structure equivalent to a circular BWT to design a compressed permuterm index for prefix/suffix queries. The crucial observation is that if we add a unique symbol # at the end of each string, the same symbol for every string, then searching β#α in a circular BWT returns all the strings prefixed by α and suffixed by β. In [17] Hon et al. use the circular BWT to design a succinct index for circular patterns. Note that, in addition to cbwt 01 , Hon et al. use an additional data structure length 01 such that length 01 (i) provides the length of the string t j to which the symbol cbwt 01 [i] belongs. Finally, a lightweight algorithm for the construction of the circular BWT has been described in [18]: for a string of length n the proposed algorithm takes O(n) time and uses O(n log σ) bits of space.
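For small inputs the multistring circular BWT can be obtained directly from the definition. The following naive Python sketch (ours, quadratic time, intended only as a reference against which a merge procedure can be checked) sorts all rotations with the ⪯ ∞ comparison bounded by Lemma 14.

```python
from functools import cmp_to_key
from math import gcd

def circular_bwt(strings):
    """Naive reference construction of the multistring circular BWT.

    Assumes every string is primitive and no string is a rotation of another."""
    rots = []
    for s in strings:
        for i in range(len(s)):
            # pair each rotation with the symbol that cyclically precedes it
            rots.append((s[i:] + s[:i], s[i - 1]))

    def cmp(a, b):
        t, u = a[0], b[0]
        limit = len(t) + len(u) - gcd(len(t), len(u))  # Lemma 14 bound
        for i in range(limit):
            x, y = t[i % len(t)], u[i % len(u)]
            if x != y:
                return -1 if x < y else 1
        return 0

    rots.sort(key=cmp_to_key(cmp))
    return "".join(prev for _, prev in rots)
```

For instance, circular_bwt(["abc", "abbb"]) returns the seven symbols of the cBWT of the example collection above.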
To simplify our analysis, we preliminarily extend the concept of longest common prefix to the ⪯ ∞ order. For any pair of strings t, s we define

cLCP(t, s) = LCP(t ∞ , s ∞ )                 if t ∞ ≠ s ∞
             |t| + |s| − gcd(|t|, |s|)        otherwise.    (11)

Because of Lemma 14, cLCP(t, s) generalizes the standard LCP in that it provides the number of comparisons that are necessary in order to establish the ⪯ ∞ ordering between t and s. It is then natural to define, for i = 2, . . . , n,
clcp 01 [i] = cLCP(rot 01 (csa 01 [i − 1]), rot 01 (csa 01 [i]))    (12)

and the values

maxcLcp = max i clcp 01 [i],    avecLcp = (Σ i clcp 01 [i])/n    (13)
that generalize the standard notions of maximum LCP and average LCP.

Let cbwt 0 (resp. cbwt 1 ) denote the circular BWT for the collection of strings t 1 , . . . , t k (resp. s 1 , . . . , s h ). In this section we consider the problem of computing the circular BWT cbwt 01 for the union collection t 1 , . . . , t k , s 1 , . . . , s h . As we previously observed, we assume that all strings are primitive and that within each input collection no string is the rotation of another. However, we cannot rule out the possibility that some t i is the rotation of some s j . The merging algorithm should therefore recognize this occurrence and eliminate from the union one of the two strings, say s j . In practice, this means that all symbols of cbwt 1 coming from s j must not be included in cbwt 01 .
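A small Python sketch of cLCP as defined in (11), again ours and for illustration only:

```python
from math import gcd

def c_lcp(t: str, s: str) -> int:
    """cLCP(t, s): longest common prefix of t^inf and s^inf, capped at
    |t| + |s| - gcd(|t|, |s|) when the two infinite forms coincide (eq. 11)."""
    limit = len(t) + len(s) - gcd(len(t), len(s))
    for i in range(limit):
        if t[i % len(t)] != s[i % len(s)]:
            return i
    return limit
```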
To merge cbwt 0 and cbwt 1 we need to merge their symbols according to their context. By construction, the context of cbwt 0 [i] (resp. cbwt 1 [j]) is rot 0 (csa 0 [i]) (resp. rot 1 (csa 1 [j])), where rot 0 (csa 0 [i]) is a cyclic rotation of the string t k to which the symbol cbwt 0 [i] belongs (and similarly for rot 1 (csa 1 [j])). Note however that contexts must be sorted according to the ⪯ ∞ order; hence cbwt 0 [i] should precede cbwt 1 [j] in cbwt 01 iff rot 0 (csa 0 [i]) ⪯ ∞ rot 1 (csa 1 [j]).
The good news is that the H&M algorithm, as described in Figure 2, when applied to cbwt 0 and cbwt 1 will sort each symbol according to the ⪯ ∞ order of its context. Notice that the ⪯ ∞ order induces a significant difference with respect to the merging of BWTs: indeed, since there are no $'s in cbwt 0 and cbwt 1 , Line 9 is never executed and the destination of each symbol is always determined by its predecessor in the cyclic rotation. More formally, reasoning as in Lemma 2, it is possible to prove the following property.
rot 0 (csa 0 [i]) ∞ [1, h] ⪯ rot 1 (csa 1 [j]) ∞ [1, h].    (14)
Property 16 states that after iteration h the infinite strings rot 0 (csa 0 [i]) ∞ and rot 1 (csa 1 [j]) ∞ have been sorted according to their length-h prefixes. As for the original H&M algorithm, as soon as Z (h+1) = Z (h) the Z (·) array will not change in any successive iteration and the merging is complete. By Lemma 14 this happens for some h ≤ maxcLcp.
Since we do not simply need to sort the contexts, but also to recognize if some string t i is a rotation of some s j , we make use of the algorithm in Figure 3 which, in addition to Z (h) , also computes the integer array B that marks the boundaries of the groups of all rotations whose infinite forms have a common prefix of length h. We can prove a result analogous to Lemma 3, replacing the LCP between suffixes (lcp 01 ) with the LCP between the infinite strings rot b (csa b [i]) ∞ , that is, with clcp 01 . In particular, after iteration h = maxcLcp an entry B[k] = 0 can only separate two rotations whose infinite forms coincide (Lemma 14) and which, by Lemma 15, are therefore identical; in that case the symbol cbwt 1 [j] should not be included in cbwt 01 .
Summing up, to merge cbwt 0 and cbwt 1 we execute the procedure of Figure 3 until neither Z (h) nor B changes. Then, we compute cbwt 01 by merging cbwt 0 and cbwt 1 according to Z (h) , discarding those symbols corresponding to zero entries in B. The number of iterations will be at most maxcLcp. In addition, since we are only interested in zero/nonzero entries, instead of B we can use a 2-bit array B 2 as in Section 4.2. Reasoning as for Lemma 5, and setting n = n 0 + n 1 , we get the following result.

As we have done in the previous sections, we now show how to reduce the running time of the merging algorithm by avoiding the re-processing of the blocks of Z (h−1) that have become irrelevant for the computation of the new bitarray Z (h) . Reasoning as in Section 4, we observe that monochrome blocks, i.e. blocks containing entries only from cbwt 0 or only from cbwt 1 , after having been processed once, become irrelevant and can be skipped in successive iterations. Note however that whenever rot 0 (csa 0 [i]) ∞ = rot 1 (csa 1 [j]) ∞ these two entries will always belong to the same block. To handle this case, we first assume cbwt 01 is to be used as a compressed index for circular patterns [17], and we later consider the case in which cbwt 01 is to be used for a compressed permuterm index.
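The outer structure of this merge can be summarized by the following Python sketch (ours; `one_pass` stands for a single execution of the Figure-3 procedure and is an assumed callback, not a function defined in the paper).

```python
def merge_until_stable(one_pass, z, b):
    """Repeat the Figure-3 pass until neither Z nor B changes.
    By Lemma 14 this takes at most maxcLcp iterations."""
    while True:
        z_new, b_new = one_pass(z, b)
        if z_new == z and b_new == b:
            # stable: Z says how to interleave cbwt_0 and cbwt_1,
            # zero entries of B mark symbols of duplicated rotations to discard
            return z, b
        z, b = z_new, b_new
```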
Compressed indices of circular patterns
In this setting, cbwt 01 is to be used as a compressed index for circular patterns and therefore we have access to the length 0 and length 1 data structures providing the length of each rotation. Under this assumption we modify the Gap algorithm described in Section 4 as follows: in addition to skipping monochrome blocks, every time there is a size-2 non-monochrome block containing, say, cbwt 0 [i] and cbwt 1 [j], we mark it as quasi-irrelevant and compute ℓ ij = length 0 (i) + length 1 (j) − gcd(length 0 (i), length 1 (j)). As soon as this block is split, or we reach iteration ℓ ij , the block becomes irrelevant and is skipped in successive iterations. As in the original Gap algorithm, the computation stops when all blocks have become irrelevant.
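In code, the bound attached to a quasi-irrelevant block is just the Fine–Wilf quantity of Lemma 14 (a one-line illustrative sketch of ours):

```python
from math import gcd

def quasi_irrelevant_bound(len0_i: int, len1_j: int) -> int:
    """Iteration l_ij after which an unsplit size-2 non-monochrome block
    {cbwt_0[i], cbwt_1[j]} can be declared irrelevant."""
    return len0_i + len1_j - gcd(len0_i, len1_j)
```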
For simplicity, in the next theorem we assume that the access to the data structures length 0 and length 1 takes constant time. If not, and random access to the individual lengths takes O(ρ) time, the overall cost of the algorithm is increased by O((n 0 + n 1 )ρ) since each length is computed at most once.
iterations it will be in a size-2 non-monochrome block together with its identical rotation. In either case, the block containing rot 01 (csa 01 [k]) will become irrelevant and it will be no longer processed in successive iterations. Hence, the overall cost of handling rot 01 (csa 01 [k]) over all iterations is proportional to (15), and the overall cost of handling all rotations is bounded by O(n avecLcp) as claimed. Note that the final bitarray Z (h) describes also how length 0 and length 1 must be interleaved to get length 01 .
Compressed permuterm indices
Finally, we consider the case in which cbwt 01 is to be used as the core of a compressed permuterm index [11]. In this case we do not have the length 0 and length 1 data structures, but each string in the collection is terminated by a unique # symbol. To recognize whether a size-2 non-monochrome block contains two identical rotations, we make use of the following lemma (Lemma 19).

Proof. Let δ denote the distance between the two occurrences of # in t ∞ [1, h]. Since t contains a single #, we have t = t ∞ [1, δ] = s ∞ [1, δ] = s.
The above lemma suggests designing a #Gap algorithm to merge compressed permuterm indices in which the arrays Z (·) are arrays of pairs, so that they also keep track of the number of #'s in each prefix. In the following, Z (h) [k] = ⟨b, m⟩ means that the k-th rotation belongs to csa b , and among the first h symbols of the infinite form of that rotation there are exactly m occurrences of #. Formally, for h = 0, 1, 2, . . . the array Z (h) satisfies the following property (Property 20).

In the practical implementation of the #Gap algorithm, instead of maintaining the pairs ⟨b, m⟩, we maintain two bit arrays Z (h−1) , Z (h) as in Gap, and an additional 2-bit array C containing the second component of the pairs. For such an array C two bits per entry are sufficient since the values stored in each entry C[k] never decrease and they are no longer updated once they reach the value 2.

Proof (of Theorem 21). We reason as in the proof of Theorem 18 except that if rot 01 (csa 01 [k]) = rot 01 (csa 01 [k + 1]) we are guaranteed that the corresponding size-2 block will become irrelevant only after iteration h = 2 |rot 01 (csa 01 [k])| = 2 clcp 01 [k + 1]. Hence, the cost of handling rot 01 (csa 01 [k]) is still proportional to (15) and the overall cost of the algorithm is O(n avecLcp) time. The space usage is the same as in Theorem 18, except for the 2n additional bits for the C array.
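As a purely illustrative Python sketch (ours) of the compact pair representation just described: the b components live in one bit array and the m components in a saturating 2-bit counter array.

```python
class PairArray:
    """Compact stand-in for the <b, m> pairs used by #Gap (illustrative sketch).

    The m counters saturate at 2 because, by Lemma 19, two '#' occurrences
    inside the compared prefix already identify the whole string."""

    def __init__(self, n):
        self.b = [0] * n   # one bit per entry (source index)
        self.m = [0] * n   # values 0, 1, 2 (two bits per entry)

    def set(self, j, b, m):
        self.b[j] = b
        self.m[j] = min(m, 2)      # saturate at 2

    def bump_hash(self, j):
        # called when the symbol just read for entry j is '#'
        if self.m[j] < 2:
            self.m[j] += 1

    def get(self, j):
        return self.b[j], self.m[j]
```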
Property 1. For i = 1, . . . , n 0 and j = 1, . . . , n 1 the i-th 0 precedes the j-th 1 in Z (h) if and only if
Figure 2: Main loop of algorithm H&M for computing Z (h) given Z (h−1) . Array F is initialized so that F [c] contains the number of occurrences of symbols smaller than c in bwt0 and bwt1 plus one. Note that the bits stored in Z (h) immediately after reading symbol c = $ are stored in positions from F [c] to F [c + 1] − 1 of Z (h) .
1: Initialize arrays F [1, σ] and Block_id[1, σ]
2: k 0 ← 1; k 1 ← 1        Init counters for bwt 0 and bwt 1
3: for k ← 1 to n 0 + n 1 do
4:     if B[k] ≠ 0 and B[k] ≠ h then
5:
Figure 3: Main loop of the H&M algorithm modified for the computation of the lcp values. At Line 1, for each symbol c we set Block_id[c] = −1 and F [c] as in Figure 2. At the beginning of the algorithm we initialize the array B[1, n0 + n1 + 1] as B = 1 0 n0+n1−1 1.
one of the blocks in (4). Proof. We prove the result by induction on h. For h = 0, hence before the execution of the first iteration, (5) is only valid for ℓ = 1 and m = n 0 + n 1 (recall that we defined lcp 01 [1] = lcp 01 [n 0 + n 1 + 1] = −1). Since initially B = 1 0 n0+n1−1 1 our claim holds. Suppose now that (5) holds for some h > 0. Let s = t 01 [sa 01 [ℓ], sa 01 [ℓ] + h − 1]; by (5) s is a common prefix of the suffixes starting at positions sa 01 [ℓ], sa 01 [ℓ + 1], . . . , sa 01 [m], and no other suffix of t 01 is prefixed by s. By Property 1 the 0s and 1s in Z (h) [ℓ, m] correspond to the same set of suffixes. That is

we start by showing that, if ℓ < m, then at the end of iteration h − 1 it is B[ℓ + 1] = · · · = B[m] = 0. To see this observe that the range sa 01 [ℓ, m] is part of a (possibly) larger range sa 01 [ℓ′, m′] containing all suffixes prefixed by the length h − 1 prefix of s. By inductive hypothesis, at the end of iteration h − 1 it is B[ℓ′ + 1] = · · · = B[m′] = 0, which proves our claim since ℓ′ ≤ ℓ and m ≤ m′. To complete the proof, we need to show that during iteration h: (i) we do not modify B[ℓ + 1, m] and (ii) we write a nonzero to B[ℓ] and B[m + 1] if they do not already contain a nonzero. Let c = s[0] and s′ = s[1, h − 1] so that s = cs′. Consider now the range sa 01 [e, f ] containing the suffixes prefixed by s′. By inductive hypothesis at the end of iteration h − 1 it is B[e] ≠ 0, B[e + 1] = · · · = B[f ] = 0, B[f + 1] ≠ 0. (7) During iteration h, the bits in Z (h) [ℓ, m] are possibly changed only when we are scanning the region Z (h−1) [e, f ] and we find an entry
The procedure of Figure 3 is repeated until the iteration h in which all entries in B become nonzero. At that point Z (h) describes how bwt 0 and bwt 1 should be merged to get bwt 01 and, for i = 2, . . . , n 0 + n 1 , lcp 01 [i] = B[i] − 1. The above strategy requires a number of iterations, each one taking O(n 0 + n 1 ) time, equal to the maximum of the lcp values, for an overall complexity of O((n 0 + n 1 ) maxlcp 01 ), where maxlcp 01 = max i lcp 01 [i]. Note that in addition to the space for the input and the output the algorithm only uses two bit arrays (one for the current and the next Z (·) ) and a constant number of counters (the arrays F and Block_id). Summing up we have the following result.

Lemma 5. Given bwt 0 and bwt 1 , the algorithm in Figure 3 computes bwt 01 and lcp 01 in O(n maxLcp) time and 2n + O(log n) bits of working space, where n = |t 01 | and maxLcp = max i lcp 01 [i] is the maximum LCP of t 01 .
Definition 6. If B[ℓ] ≠ 0, B[m + 1] ≠ 0 and B[ℓ + 1] = · · · = B[m] = 0, we say that block Z (h) [ℓ, m] is monochrome if it contains only 0's or only 1's.
Lemma 7. If at the end of iteration h bit vector Z (h) contains only monochrome blocks we can compute bwt 01 and lcp 01 in O(n 0 + n 1 ) time from bwt 0 , bwt 1 , lcp 0 and lcp 1 .
Proof. By Property 1, if we identify the i-th 0 in Z (h) with bwt 0 [i] and the j-th 1 with bwt 1 [j] the only elements which could be not correctly sorted by context are those within the same block. However, if the blocks are monochrome all elements belong to either bwt 0 or bwt 1 so their relative order is correct. To compute lcp 01 we observe that if B[i] ≠ 0 then by (the proof of) Corollary 4 it is lcp 01 [i] = B[i] − 1. If instead B[i] = 0 we are inside a block, hence sa 01 [i − 1] and sa 01 [i] belong to the same string t 0 or t 1 and their LCP is directly available in lcp 0 or lcp 1 .
Lemma 8. Suppose that, at the end of iteration h, Z (h) [ℓ, m] is a monochrome block. Then (i) for g > h, Z (g) [ℓ, m] = Z (h) [ℓ, m], and (ii) processing Z (h) [ℓ, m] during iteration h + 1 creates a set of monochrome blocks in Z (h+1) .
Figure 4: Main loop of the Gap algorithm. The processing of active blocks at Line 4 is done as in Lines 7-20 of Figure 3.
Theorem 9. Given bwt 0 , lcp 0 and bwt 1 , lcp 1 , let n = |bwt 0 | + |bwt 1 |. The Gap algorithm computes bwt 01 and lcp 01 in O(n aveLcp 01 ) time, where aveLcp 01 = (Σ i lcp 01 [i])/n is the average LCP of the string t 01 . The working space is 2n + O(log n) bits, plus the space used for handling irrelevant blocks.
Proof. For the running time we reason as in [15] and observe that the sum, over all iterations, of the length of all active blocks is bounded by O(Σ i lcp 01 [i]) = O(n aveLcp 01 ). The time bound follows observing that at any iteration the cost of processing an active block of length ℓ is bounded by O(ℓ) time.
Figure 5: Modification of the H&M algorithm to use a two-bit array B 2 instead of the integer array B. The code shows the case for h even; if h is odd, the value 2 is replaced by 1 and vice versa.
During iteration h, the values in B 2 are used instead of the ones in B as follows: an entry B 2 [i] = 0 corresponds to B[i] = 0, an entry B 2 [i] = 3 corresponds to 0 < B[i] < h − 1. If h is even, an entry B 2 [i] = 2 corresponds to B[i] = h and an entry B 2 [i] = 1 corresponds to B[i] = h − 1; while if h is odd the correspondence is 2 → h − 1, 1 → h. The array B 2 is initialized as 3 0 n0+n1−1 3, and it is updated appropriately in lines 13-14.
Property 10. At the end of iteration h, for i = 2, . . . , n 0 and j = 2, . . . , n 1 the i-th 0 precedes the j-th 1 in Z (h) if and only if
Figure 7: Main loop of algorithm H&M modified to merge XBWTs. Array F is initialized so that F [c] contains the number of occurrences of symbols smaller than c in L0 and L1 plus three, to account for Π0[1] = Π1[1] = ε which are smaller than any other string.
Lemma 11. Let Z (0) = 010 n0−1 1 n1−1 , and let Z (h) be obtained from Z (h−1) by the algorithm in Fig. 7. Then, for h = 0, 1, 2, . . ., the array Z (h) satisfies Property 10.
Proof. We prove the result by induction. For h = 0, Π 0 [i][1, 0] = Π 1 [j][1, 0] = ε so (8) is always true and Z (0) satisfies Property 10.
Lemma 12. The modified H&M algorithm computes xbwt(T 01 ) given xbwt(T 0 ) and xbwt(T 1 ) in O(|T 01 |hgt(T 01 )) time and 4n + O(log n) bits of working space, where n = n 0 + n 1 .
Proof. Each iteration of the merging algorithm takes O(m 0 + m 1 ) time since it consists of a scan of the arrays Z (h−1) , L 0 , L 1 , Last 0 and Last 1 . After at most hgt(T 01 ) iterations the strings in Π 0 and Π 1 are lexicographically sorted and Z (h) no longer changes. The final scan of Z (h) and B 2 to compute xbwt(T 01 ) takes O(m 0 + m 1 ) time. Since |T 01 | ≥ max(m 0 , m 1 ) the overall cost is O(|T 01 |hgt(T 01 )) time. The working space of the algorithm consists of B 2 and of two instances of the Z (h) array (for the current and the previous iteration), in addition to O(σ) counters (recall that σ is assumed to be constant).
Theorem 13. The modified Gap algorithm computes xbwt(T 01 ) given xbwt(T 0 ) and xbwt(T 1 ) in O(|T 01 |avehgt(T 01 )) time. The working space is 6n + O(log n) bits, where n = n 0 + n 1 , plus the space required for handling irrelevant blocks.
Lemma 15. If t and s are primitive, t ∞ = s ∞ implies t = s.
Let t 0 [1, n 0 ], t 1 [1, n 1 ] be two primitive strings and t 01 [1, n] their concatenation of length n = n 0 + n 1 . For i = 1, . . . , n, let rot 01 (i) define the rotation of substrings t 0 and t 1 within t 01 as follows:
Property 16. For i = 1, . . . , n 0 and j = 1, . . . , n 1 the i-th 0 precedes the j-th 1 in Z (h) if and only if
rot b (csa b [i]) ∞ (that is clcp 01 ). After iteration h = maxcLcp all distinct rotations have been sorted according to the ⪯ ∞ order; thus an entry B[k] = 0 denotes two rotations rot 0 (csa 0 [i]) ∞ and rot 1 (csa 1 [j]) ∞ which have a common prefix of length maxcLcp. By Lemma 14 it is rot 0 (csa 0 [i]) ∞ = rot 1 (csa 1 [j]) ∞ and by Lemma 15 rot 0 (csa 0 [i]) = rot 1 (csa 1 [j]).
Lemma 17. The modified H&M algorithm computes cbwt 01 given cbwt 0 and cbwt 1 in O(n maxcLcp) time and 4n + O(log n) bits of working space.
Theorem 18. The modified Gap algorithm computes cbwt 01 , length 01 given cbwt 0 , length 0 and cbwt 1 , length 1 in O(n avecLcp) time, where n = n 0 + n 1 . The working space is 2n + O(log n) bits plus the space required for handling (quasi-)irrelevant blocks.
Proof. If rot 01 (csa 01 [k]) is different from any other rotation, by definitions (11) and (12) after at most max(clcp 01 [k], clcp 01 [k + 1]) iterations it will be in a monochrome (possibly singleton) block. If instead rot 01 (csa 01 [k]) is identical to another rotation, which can only be either rot 01 (csa 01 [k − 1]) or rot 01 (csa 01 [k + 1]), then after at most max(clcp 01 [k − 1], clcp 01 [k], clcp 01 [k + 1], clcp 01 [k + 2])
Lemma 19. Let t and s denote two strings each one containing a single occurrence of the symbol #. If for some h > 0 it is t ∞ [1, h] = s ∞ [1, h] and t ∞ [1, h] contains two occurrences of #, then t = s.
Property 20. At the end of iteration h of #Gap, Property 16 holds and if Z (h) [k] = ⟨b, m⟩ is the i-th b in Z (h) then rot b (csa b [i]) ∞ [1, h] contains exactly m copies of symbol #.
Initially we set Z (0) = ⟨0, 0⟩ n0 ⟨1, 0⟩ n1 which clearly satisfies Property 20. At each iteration #Gap reads Z (h−1) and updates Z (h) using Lines 7-15 below instead of Lines 7-14 of Figure 3:
7:  ⟨b, m⟩ ← Z (h−1) [k]
8:  c ← bwt b [k b ++]            Get c according to b
    . . .
14: if c = # then m ← m + 1       Update number of #
15: Z (h) [j] ← ⟨b, m⟩
Reasoning as in the previous sections, one can prove by induction that with this modification the array Z (h) computed by #Gap satisfies Property 20. In the #Gap algorithm a block becomes irrelevant when it is monochrome or it is a size-2 non-monochrome block Z (h) [k, k + 1] such that Z (h) [k] = ⟨0, 2⟩ and Z (h) [k + 1] = ⟨1, 2⟩. By Lemma 19 such a block corresponds to two identical rotations rot 0 (csa 0 [i]) = rot 1 (csa 1 [j]) and after being processed a final time it can be ignored in successive iterations.
Theorem 21. The #Gap algorithm merges two compressed permuterm indices cbwt 0 and cbwt 1 in O(n avecLcp) time, where n = n 0 + n 1 . The working space is 6n + O(log n) bits plus the space required for handling irrelevant blocks.
In other words, bwt 01 [i] is the symbol preceding the i-th lexicographically larger suffix, with the exception that if sa 01 [i] = 1 then bwt 01 [i] = $ 0 and if sa 01 [i] = n 0 + 1 then bwt 01 [i] = $ 1 . Hence, bwt 01 [i]
as soon as k reaches e the variable id changes and becomes different from all values stored in Block_id. Hence, at the first occurrence of symbol c the value h will be stored in B[ℓ] (Line 18) unless a nonzero is already there. Again, because of (7), during the scanning of Z (h−1) [e, f ] the variable id does not change, so subsequent occurrences of c will not cause a nonzero value to be written to B[ℓ + 1, m]. Finally, as soon as we leave region Z (h−1) [e, f ] and k reaches f + 1, the variable id changes again and at the next occurrence of c a nonzero value will be stored in B[m + 1]. If there are no more occurrences of c after we leave region Z (h−1) [e, f ] then either sa 01 [m + 1] is the first suffix array entry prefixed by symbol c + 1 or m + 1 = n 0 + n 1 + 1. In the former case B[m + 1] gets a nonzero value at iteration 1, in the latter case B[m + 1] gets a nonzero value when we initialize array B.

Corollary 4. For i = 2, . . . , n 0 + n 1 , if lcp 01 [i] = ℓ, then starting from the end of iteration ℓ + 1 it is B[i] = ℓ + 1.

Proof. By Lemma 3 we know that B[i] becomes nonzero only starting from iteration ℓ + 1. Since at the end of iteration ℓ it is still B[i] = 0, during iteration ℓ + 1 B[i] gets the value ℓ + 1, which is never changed in successive iterations.
Table 1: Collections used in our experiments sorted by average LCP. Columns 4 and 5 refer to the lengths of the single documents. Pacbio are NGS reads from a D. melanogaster dataset. Illumina are NGS reads from Human ERA015743 dataset. Wiki-it are pages from Italian Wikipedia. Proteins are protein sequences from Uniprot. Collections and source files are available on https://people.unipmn.it/manzini/gap.

Name       Size GB   σ     Max Len   Ave Len   Max LCP   Ave LCP
Pacbio     6.24      5     40212     9567.43   1055      17.99
Illumina   7.60      6     103       102.00    102       27.53
Wiki-it    4.01      210   553975    4302.84   93537     61.02
Proteins   6.11      26    35991     410.22    25065     100.60
Name       k   gSACA-K+Φ   τ = 50           τ = 100          τ = 200
                           time    space    time    space    time    space
Pacbio     7   0.46        0.41    4.35     0.46    4.18     0.51    4.09
Illumina   4   0.48        0.93    3.31     1.02    3.16     1.09    3.08
Wiki-it    5   0.41        -       -        -       -        3.07    6.55
Proteins   4   0.59        3.90    4.55     5.18    4.29     7.05    4.15
4:  if B 2 [k] ≠ 0 and B 2 [k] ≠ 2 then
5:      id ← k                         A new block of Z (h−1) is starting
6:  end if
7:  if B 2 [k] = 1 then
8:      B 2 [k] ← 3                     Mark the block as old
9:  end if
    . . .
13: if B 2 [j] = 0 then                 Check if already marked
14:     B 2 [j] ← 2                     A new block of Z (h) will start here
15: end if
holds. Assume first Π 0 [i][1] ≠ Π 1 [j][1]. The hypothesis v < w implies Π 0 [i][1] < Π 1 [j][1], hence (3) certainly holds. Assume now Π 0 [i][1] = Π 1 [j][1] = c. Let v′, w′ denote respectively the values of the main loop variable k in the procedure of
Unless explicitly stated otherwise, in the following we use H&M to refer to the algorithm from[16], and not to its variant proposed in[15].
In this and in the following section we purposely use a special symbol # different from $. The reason is that $ is commonly used for sorting purposes, while # simply represents a symbol different from the ones in Σ.
[1] Djamal Belazzougui. Linear time construction of compressed text indices in compact space. In STOC, pages 148-193. ACM, 2014.
[2] Christina Boucher, Alexander Bowe, Travis Gagie, Simon J. Puglisi, and Kunihiko Sadakane. Variable-order de Bruijn graphs. In DCC, pages 383-392. IEEE, 2015.
[3] Alexander Bowe, Taku Onodera, Kunihiko Sadakane, and Tetsuo Shibuya. Succinct de Bruijn graphs. In WABI, volume 7534 of Lecture Notes in Computer Science, pages 225-235. Springer, 2012.
[4] M. Burrows and D. Wheeler. A block-sorting lossless data compression algorithm. Technical Report 124, Digital Equipment Corporation, 1994.
[5] Anthony J. Cox, Fabio Garofalo, Giovanna Rosone, and Marinella Sciortino. Lightweight LCP construction for very large collections of strings. J. Discrete Algorithms, 37:17-33, 2016.
[6] Lavinia Egidi, Felipe Alves Louza, and Giovanni Manzini. Space-efficient merging of succinct de Bruijn graphs. CoRR, 2019. URL: https://arxiv.org/abs/1902.02889.
[7] Lavinia Egidi, Felipe Alves Louza, Giovanni Manzini, and Guilherme P. Telles. External memory BWT and LCP computation for sequence collections with applications. In WABI, volume 113 of LIPIcs, pages 10:1-10:14. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2018.
[8] Lavinia Egidi and Giovanni Manzini. Lightweight BWT and LCP merging via the Gap algorithm. In SPIRE, volume 10508 of Lecture Notes in Computer Science, pages 176-190. Springer, 2017.
[9] P. Ferragina, F. Luccio, G. Manzini, and S. Muthukrishnan. Structuring labeled trees for optimal succinctness, and beyond. In Proc. 46th IEEE Symposium on Foundations of Computer Science (FOCS), pages 184-193, 2005.
[10] Paolo Ferragina, Fabrizio Luccio, Giovanni Manzini, and S. Muthukrishnan. Compressing and indexing labeled trees, with applications. J. ACM, 57(1):4:1-4:33, 2009.
[11] Paolo Ferragina and Rossano Venturini. The compressed permuterm index. ACM Trans. Algorithms, 7(1):10:1-10:21, 2010.
[12] J. Fuentes-Sepúlveda, G. Navarro, and Y. Nekrich. Space-efficient computation of the Burrows-Wheeler Transform. In Proc. 29th Data Compression Conference (DCC), 2019. To appear.
[13] Travis Gagie, Giovanni Manzini, and Jouni Sirén. Wheeler graphs: A framework for BWT-based data structures. Theor. Comput. Sci., 698:67-78, 2017.
[14] Simon Gog and Enno Ohlebusch. Compressed suffix trees: Efficient computation and storage of LCP-values. ACM Journal of Experimental Algorithmics, 18, 2013. doi:10.1145/2444016.2461327.
[15] James Holt and Leonard McMillan. Constructing Burrows-Wheeler transforms of large string collections via merging. In BCB, pages 464-471. ACM, 2014.
[16] James Holt and Leonard McMillan. Merging of multi-string BWTs with applications. Bioinformatics, 30(24):3524-3531, 2014.
[17] W.-K. Hon, C.-H. Lu, R. Shah, and S. V. Thankachan. Succinct indexes for circular patterns. In Algorithms and Computation - 22nd International Symposium, ISAAC 2011, Yokohama, Japan, December 5-8, 2011, Proceedings, pages 673-682, 2011. doi:10.1007/978-3-642-25591-5_69.
[18] Wing-Kai Hon, Tsung-Han Ku, Chen-Hua Lu, Rahul Shah, and Sharma V. Thankachan. Efficient algorithm for circular Burrows-Wheeler transform. In CPM, volume 7354 of Lecture Notes in Computer Science, pages 257-268. Springer, 2012.
[19] J. Kärkkäinen, G. Manzini, and S. Puglisi. Permuted longest-common-prefix array. In Proc. 20th Symposium on Combinatorial Pattern Matching (CPM), pages 181-192. Springer-Verlag, LNCS n. 5577, 2009.
[20] Juha Kärkkäinen and Dominik Kempa. LCP array construction in external memory. ACM Journal of Experimental Algorithmics, 21(1):1.7:1-1.7:22, 2016.
[21] D. E. Knuth. Sorting and Searching, volume 3 of The Art of Computer Programming. Addison-Wesley, Reading, MA, USA, second edition, 1998.
[22] Martine Léonard, Laurent Mouchard, and Mikaël Salson. On the number of elements to reorder when updating a suffix array. J. Discrete Algorithms, 11:87-99, 2012. doi:10.1016/j.jda.2011.01.002.
[23] Felipe Alves Louza, Simon Gog, and Guilherme P. Telles. Induced suffix sorting for string collections. In DCC, pages 43-52. IEEE, 2016.
[24] Sabrina Mantaci, Antonio Restivo, Giovanna Rosone, and Marinella Sciortino. An extension of the Burrows-Wheeler transform. Theor. Comput. Sci., 387(3):298-312, 2007.
[25] Giovanni Manzini. XBWT tricks. In SPIRE, volume 9954 of Lecture Notes in Computer Science, pages 80-92, 2016.
[26] Martin D. Muggli and Christina Boucher. Succinct de Bruijn graph construction for massive populations through space-efficient merging. bioRxiv, 2017. doi:10.1101/229641.
[27] Martin D. Muggli, Alexander Bowe, Noelle R. Noyes, Paul S. Morley, Keith E. Belk, Robert Raymond, Travis Gagie, Simon J. Puglisi, and Christina Boucher. Succinct colored de Bruijn graphs. Bioinformatics, 33(20):3181-3187, 2017.
[28] J. Ian Munro, Gonzalo Navarro, and Yakov Nekrich. Space-efficient construction of compressed indexes in deterministic linear time. In SODA, pages 408-424. SIAM, 2017.
[29] Joong Chae Na, Hyunjoon Kim, Seunghwan Min, Heejin Park, Thierry Lecroq, Martine Léonard, Laurent Mouchard, and Kunsoo Park. FM-index of alignment with gaps. Theoretical Computer Science, 710:148-157, 2018. doi:10.1016/j.tcs.2017.02.020.
[30] Joong Chae Na, Hyunjoon Kim, Heejin Park, Thierry Lecroq, Martine Léonard, Laurent Mouchard, and Kunsoo Park. FM-index of alignment: A compressed index for similar strings. Theoretical Computer Science, 638:159-170, 2016.
[31] G. Navarro and V. Mäkinen. Compressed full-text indexes. ACM Computing Surveys, 39(1), 2007.
[32] Enno Ohlebusch, Stefan Stauß, and Uwe Baier. Trickier XBWT tricks. In SPIRE, volume 11147 of Lecture Notes in Computer Science, pages 325-333. Springer, 2018.
[33] Jouni Sirén. Compressed suffix arrays for massive data. In Proc. 16th Int. Symp. on String Processing and Information Retrieval (SPIRE '09), pages 63-74. Springer Verlag, LNCS n. 5721, 2009.
[34] Jouni Sirén. Burrows-Wheeler transform for Terabases. In IEEE Data Compression Conference (DCC), pages 211-220, 2016.
[35] Jouni Sirén. Indexing variation graphs. In Proc. 19th Meeting on Algorithm Engineering and Experiments (ALENEX '17), pages 13-27. SIAM, 2017.
[36] H. S. Wilf and N. J. Fine. Uniqueness theorem for periodic functions. Proc. Amer. Math. Soc., 16:109-114, 1965.
|
[] |
[
"XMMFITCAT: The XMM-Newton spectral-fit database ⋆",
"XMMFITCAT: The XMM-Newton spectral-fit database ⋆"
] |
[
"A Corral [email protected] \nIAASARS\nNational Observatory of Athens\nGR-15236PenteliGreece\n",
"I Georgantopoulos \nIAASARS\nNational Observatory of Athens\nGR-15236PenteliGreece\n",
"M G Watson \nDepartment of Physics & Astronomy\nUniversity of Leicester\nLE1 7RHLeicesterUK\n",
"S R Rosen \nDepartment of Physics & Astronomy\nUniversity of Leicester\nLE1 7RHLeicesterUK\n",
"K L Page \nDepartment of Physics & Astronomy\nUniversity of Leicester\nLE1 7RHLeicesterUK\n",
"N A Webb \nInstitut de Recherche en Astrophysique et Planétologie (IRAP)\n9 Avenue du Colonel Roche31028, Cedex 4ToulouseFrance\n"
] |
[
"IAASARS\nNational Observatory of Athens\nGR-15236PenteliGreece",
"IAASARS\nNational Observatory of Athens\nGR-15236PenteliGreece",
"Department of Physics & Astronomy\nUniversity of Leicester\nLE1 7RHLeicesterUK",
"Department of Physics & Astronomy\nUniversity of Leicester\nLE1 7RHLeicesterUK",
"Department of Physics & Astronomy\nUniversity of Leicester\nLE1 7RHLeicesterUK",
"Institut de Recherche en Astrophysique et Planétologie (IRAP)\n9 Avenue du Colonel Roche31028, Cedex 4ToulouseFrance"
] |
[] |
The XMM-Newton spectral-fit database (XMMFITCAT) is a catalogue of spectral fitting results for the source detections within the XMM-Newton Serendipitous source catalogue with more than 50 net (background-subtracted) counts per detector in the 0.5-10 keV energy band. Its most recent version, constructed from the latest version of the XMM-Newton catalogue, the 3XMM Data Release 4 (3XMM-DR4), contains spectral-fitting results for 114,000 detections, corresponding to ≃ 78,000 unique sources. Three energy bands are defined and used in the construction of XMMFITCAT: Soft (0.5-2 keV), Hard (2-10 keV), and Full (0.5-10 keV) bands. Six spectral models, three simple and three more complex models, were implemented and applied to the spectral data. Simple models are applied to all sources, whereas complex models are applied to observations with more than 500 counts (30%). XMMFITCAT includes best-fit parameters and errors, fluxes, and goodness of fit estimates for all fitted models. XMMFITCAT has been conceived to provide the astronomical community with a tool to construct large and representative samples of X-ray sources by allowing source selection according to spectral properties, as well as characterise the X-ray properties of samples selected in different wavelengths. We present in this paper the main details of the construction of this database, and summarise its main characteristics.
|
10.1051/0004-6361/201425124
|
[
"https://arxiv.org/pdf/1411.7678v1.pdf"
] | 73,575,270 |
1411.7678
|
b8e6a3cdd26164944fe472ca2d97dc68e34e28f0
|
XMMFITCAT: The XMM-Newton spectral-fit database ⋆
27 Nov 2014 December 1, 2014
A Corral [email protected]
IAASARS
National Observatory of Athens
GR-15236PenteliGreece
I Georgantopoulos
IAASARS
National Observatory of Athens
GR-15236PenteliGreece
M G Watson
Department of Physics & Astronomy
University of Leicester
LE1 7RHLeicesterUK
S R Rosen
Department of Physics & Astronomy
University of Leicester
LE1 7RHLeicesterUK
K L Page
Department of Physics & Astronomy
University of Leicester
LE1 7RHLeicesterUK
N A Webb
Institut de Recherche en Astrophysique et Planétologie (IRAP)
9 Avenue du Colonel Roche31028, Cedex 4ToulouseFrance
XMMFITCAT: The XMM-Newton spectral-fit database ⋆
27 Nov 2014 December 1, 2014Received , ; accepted ,arXiv:1411.7678v1 [astro-ph.HE] Astronomy & Astrophysics manuscript no. XMMFITCAT_ACorral c ESO 2014X-rays: general -Catalogues -Surveys
The XMM-Newton spectral-fit database (XMMFITCAT) is a catalogue of spectral fitting results for the source detections within the XMM-Newton Serendipitous source catalogue with more than 50 net (background-subtracted) counts per detector in the 0.5-10 keV energy band. Its most recent version, constructed from the latest version of the XMM-Newton catalogue, the 3XMM Data Release 4 (3XMM-DR4), contains spectral-fitting results for 114,000 detections, corresponding to ≃ 78,000 unique sources. Three energy bands are defined and used in the construction of XMMFITCAT: Soft (0.5-2 keV), Hard (2-10 keV), and Full (0.5-10 keV) bands. Six spectral models, three simple and three more complex models, were implemented and applied to the spectral data. Simple models are applied to all sources, whereas complex models are applied to observations with more than 500 counts (30%). XMMFITCAT includes best-fit parameters and errors, fluxes, and goodness of fit estimates for all fitted models. XMMFITCAT has been conceived to provide the astronomical community with a tool to construct large and representative samples of X-ray sources by allowing source selection according to spectral properties, as well as characterise the X-ray properties of samples selected in different wavelengths. We present in this paper the main details of the construction of this database, and summarise its main characteristics.
Introduction
X-ray observations have expanded our knowledge of the most energetic phenomena in the Universe, and of the astronomical sources in which they take place, such as stars, galaxies, clusters, and active galactic nuclei (AGN). Besides, and thanks to their penetrating ability, X-ray observations detect sources hidden behind large amounts of gas, and up to high distances. Serendipitous X-ray surveys conducted by the XMM-Newton 1 and Chandra 2 observatories have almost completely resolved the X-ray Cosmic Background (CXB) below 10 keV, showing that the X-ray sky is dominated by AGN emission (Gilli et al. 2007; Treister et al. 2009).
Thanks to its large collecting area and its large field of view, XMM-Newton has proven to be an extraordinary instrument to perform X-ray surveys. The European Photon Imaging Camera (EPIC 3 ) on-board XMM-Newton works in a photon-counting mode. This characteristic, along with its large effective area, allows us, from a single observation, to extract images, light curves, and spectral data not only for the proposed target, but also for many of the detected sources within the field of view. Data from this camera have been used to construct the largest catalogue of X-ray sources ever built, the XMM-Newton serendipitous source catalogue. The latest version of the XMM-Newton catalogue is 3XMM Data Release 4 (3XMM-DR4). The EPIC instruments operate in photon-counting mode such that, from a single observation, spectra and time series can be extracted from every source detected in the observed field of view. During routine pipeline processing of the XMM-Newton EPIC data, photometric measurements are obtained for every detected source in a number of distinct energy bands over the 0.2-12 keV range (see Watson et al. 2009). In parallel, for brighter sources (> 100 counts, summed over all 3 instruments), spectra and time series are extracted.
The 3XMM-DR4 catalogue reflects both an increased number of detections, due to the increased (3.2 yr) observation baseline since its predecessor, 2XMMi-DR3, and improvements to the processing system and calibration information. The number of public observations contained in 3XMM-DR4 increased by ∼ 50% relative to 2XMMi-DR3. At the same time, the science data processing has taken advantage of significant improvements within the XMM-Newton Science Analysis Software (SAS) and calibration data. The key science-driven gains include:
-Improved source characterisation and reduced spurious source detections.
-Improved astrometric precision of sources.
-Greater net sensitivity for source detection.
-Extraction of spectra and time series for fainter sources, with improved signal-to-noise ratio (SNR).
The resulting catalogue contains spectra for more than 120 000 detections corresponding to ∼ 80 000 unique sources.
Spectral extraction
The pipeline processing automatically extracts spectra and time series (source-specific products, SSPs), from suitable exposures, for detections that meet certain brightness criteria.
In previous versions of the processing pipeline, extractions were attempted for any source which had at least 500 EPIC counts. In such cases, source data were extracted from a circular aperture of fixed radius (28 arcseconds), centred on the detection position, while background data were accumulated from a co-centred annular region with inner and outer radii of 60 and 180 arcseconds, respectively. Other sources that lay within or overlapped the background region were masked during the processing. In most cases this process worked well. However, in some cases, especially when extracting SSPs from sources within the small central window of MOS Small-Window mode observations, the background region could comprise very little usable background, with the bulk of the region lying in the gap between the central CCD and the peripheral ones. This resulted in very small (or even zero) areas for background rate scaling during background subtraction, often leading to incorrect background subtraction during the analysis of spectra in Xspec 6 , the standard package for X-ray spectral analysis.
For the bulk reprocessing leading to 3XMM-DR4, two new approaches have been adopted and implemented in the pipeline.
1. The extraction of data for the source takes place from an aperture whose radius is chosen to maximise the signal-to-noise ratio (SNR) of the source data. This is achieved by a curve-of-growth analysis, performed by the SAS task eregionanalyse. This is especially useful for fainter sources where the relative background level is high.
2. To address the problem of locating an adequately filled background region for each source, the centre of a circular background aperture of radius rb = 168 arcseconds (comparable area to the previously used annulus) is stepped around the source along a circle centred on the source position. Up to 40 uniformly spaced azimuthal trials are tested along each circle. A suitable background region is found if, after masking out other contaminating sources and allowing for empty regions, a filling factor of at least 70% usable area remains. If no background trial along a given circle yields sufficient residual background area, the aperture is moved out to a circle of larger radius and the azimuthal trials are repeated. The smallest trial circle has a radius of rc = rb + 60 arcseconds so that the inner edge of the background region is at least 60 arcseconds from the source centre; for the case of MOS Small-Window mode, the smallest test circle for a source in the central CCD is set to a radius that already lies on the peripheral CCDs. Other than for the MOS Small-Window cases, a further constraint is that, ideally, the background region should lie on the same instrument CCD as the source.
6 http://heasarc.gsfc.nasa.gov/xanadu/xspec/
If no solution is found with at least a 70% filling factor, the background trial with the largest filling factor is adopted.
For the vast majority of detections where SSP extraction is attempted, this process obtains a solution in the first radial step, and a strong bias to early azimuthal steps, i.e. in most cases an acceptable solution is found very rapidly. For detections in the MOS instruments, about 1.7% lie in the central window in Small-Window mode and have a background region located on the peripheral CCDs. Importantly, in contrast to earlier pipelines, this process always yields a usable background spectrum for objects in the central window of MOS Small-Window mode observations.
In addition, the current pipeline permits extraction of SSPs for fainter sources. Extraction is considered for any detection with at least 100 EPIC source counts in the full catalogue band (0.2-12 keV), instead of the 500 EPIC counts limit used in the previous pipeline. Where this condition is met, spectra and time series are extracted. For a more detailed description of the 3XMM-DR4 catalogue production see http://xmmssc-www.star.le.ac.uk/Catalogue/3XMM-DR4/UserGuide_xm Catalogue spectra and time series can be retrieved from the XMM-Newton Science Archive (XSA 7 ), as well as from the web services listed at the end of Sect. 4.
The XMM-Newton spectral-fit database
The XMM-Newton spectral-fit database is a project aimed to take advantage of the great wealth of data and information contained within the XMM-Newton serendipitous source catalogue to construct a database composed of spectral-fitting results. The possible applications of this database include: the construction of large and representative samples of X-ray sources according to spectral properties; the possibility of pinpoint sources with interesting spectral properties; and the cross-correlation with samples selected in other wavelengths so as to obtain a first-order description of the sources X-ray spectral-properties.
Automated spectral fitting
The XMM-Newton spectral-fit database (XMMFITCAT) is constructed by applying automated spectral fits to the pipeline-extracted spectra within the 3XMM-DR4 catalogue. The software used to perform the spectral fits was Xspec v12.7 (Arnaud 1996). In the case of multiple observations available for the same source, each observation is fitted separately. The fitting scripts were designed to jointly fit all the available spectra for the same source detection. As a result, the database contains spectral-fitting results for each source observation, not for each observed source.
The data used in the construction of the database are spectral data (source and background spectra), as well as ancillary matrices, retrieved from the 3XMM-DR4 catalogue products. Redistribution matrices are the canned matrices provided by the XMM-Newton SOC (Science Operations Centre).
The spectral-fitting pipeline is composed of tcl (Tool Command Language) and Perl scripts. The default fitting algorithm that Xspec uses to find the best-fit values for each model parameter is a modified Levenberg-Marquardt algorithm. This default algorithm is the one used in the construction of the database so as to optimise the fitting speed. However, this algorithm is local rather than global, so it is possible for the fitting process not to find the global best fit, but a local minimum. To prevent this, the scripts include an optimisation algorithm that tries to keep the fit from settling into a local minimum by computing the errors on all variable parameters at the 95% confidence level. If a better fit is found, the optimisation algorithm starts again. If non-monotonicity is detected during the error computation, the confidence level in the error computation is increased until a better minimum is found. Once the best fit is found, errors are computed and reported within the database at the 90% confidence level. The energy bands used in the automated fits are listed in Table 1. As a comparison, the energy bands used in the construction of the 3XMM-DR4 catalogue are also listed in the same table.
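The restart logic just described can be sketched as follows. This is a minimal Python illustration of the strategy, not the actual tcl/Perl pipeline; fit_once and compute_errors stand for calls into the underlying Xspec session and are assumptions of this sketch.

```python
def fit_with_restarts(fit_once, compute_errors, max_restarts=10):
    """Fit, then probe the minimum by computing parameter errors at the 95%
    level; if the error scan uncovers a better fit, restart from it."""
    best_stat = fit_once()                       # initial Levenberg-Marquardt fit
    for _ in range(max_restarts):
        new_stat, found_better = compute_errors(conf_level=0.95)
        if not found_better:
            return best_stat                     # stable minimum: 90% errors are then reported
        best_stat = new_stat                     # better minimum found: refit and repeat
    return best_stat
```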
Spectral data selection
Cash statistics, implemented as C-stat in Xspec, are used to fit the data. This statistic was selected, instead of the more commonly used χ 2 statistic, to optimise the spectral fitting in the case of low count spectra. The 3XMM-DR4 spectra are unbinned and then binned to 1 count/bin. The combined use of spectra binned to 1 count/bin plus C-stat fitting has been proven to work very well when fitting spectral data down to 40 counts (Krumpe et al. 2008).
During the spectral fits, all variable parameters for different instruments and exposures are tied together except for a relative normalisation, which accounts for the differences between different flux calibrations. Given that each additional instrument spectrum adds a new parameter to the fit, and to ensure a minimum quality on the spectral fits, a lower limit on the number of counts in each individual spectrum is imposed: only spectra corresponding to a single EPIC instrument, with more than 50 source counts in the Full band are included in the spectral fits. Note that this implies that not all available spectra within 3XMM-DR4 for a given observation are used in some cases. A table listing the spectra used for each source observation is also publicly available, along with the one containing the spectral fitting results, at the database's webpage (see Sect. 4).
Spectral models
Three simple and three more complex models have been implemented. All these models are applied to the spectral data if the following conditions are fulfilled:
-Simple models: total number of counts (all instruments added together) larger than 50 counts in the energy band under consideration.
-Complex models: total number of counts larger than 500 counts in the Full band.
The models are selected to represent the most commonly observed spectral shapes in astronomical sources in a phenomenological way. The preferred model (among the implemented ones) for each observation is selected according to the goodness of each fit (see Sect.3.5), but spectral-fitting results for all the models applied are included in the database. It is important to note that the automated procedure is only intended to obtain a good representation of the spectral shape so, given the limited number of spectral models applied, the preferred model should not be interpreted as a "best-fit model" in the way it is when carrying out manual fits.
The simple models (models 1 to 3), and the more complex models (models 4 to 6) are:
1. Absorbed power-law model (wabs*pow in Xspec notation): A power-law model modified by photoelectric absorption.
2. Absorbed thermal model (wabs*mekal): A thermal model modified by photoelectric absorption.
3. Absorbed black-body model (wabs*bb): A photoelectrically absorbed black-body model.
4. Absorbed power-law model plus thermal model (wabs(mekal+wabs*pow)): A thermal plus a power-law model in which both components are modified by absorption, and the power-law is additionally absorbed.
5. Double power-law model (wabs*(pow+wabs*pow)): A double power-law model, with different photon indices, modified by photoelectric absorption, and additional absorption only affecting one of the power-law components.
6. Black-body plus power-law model (wabs*(bb+pow)): A black-body plus a power-law component modified by photoelectric absorption.
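For reference, the six Xspec model expressions listed above can be collected in a small Python mapping; the expressions are taken verbatim from the list, while the dictionary name and keys are our own illustration rather than identifiers used by XMMFITCAT.

```python
# Xspec model expressions used by the automated fits (models 1-3 simple, 4-6 complex).
SPECTRAL_MODELS = {
    1: "wabs*pow",                # absorbed power law
    2: "wabs*mekal",              # absorbed thermal (mekal)
    3: "wabs*bb",                 # absorbed black body
    4: "wabs(mekal+wabs*pow)",    # thermal plus additionally absorbed power law
    5: "wabs*(pow+wabs*pow)",     # double power law with one extra absorber
    6: "wabs*(bb+pow)",           # black body plus power law
}
```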
A component is considered not significant in a complex-model fit if its corresponding normalisation is consistent with zero at the 90% confidence level. In those cases, spectral-fitting results for that model are not included in the database. Plots corresponding to each of the implemented models are shown in Fig. 1. Each model and its corresponding spectral-fitting process are described in Sects. 3.3.1 to 3.3.6. Parameters and allowed ranges for them that are not described in these sections are set to their Xspec default values. The initial value and allowed range of values for the column density for all absorption components (wabs) is the same for all simple models: initial value N H = 10 21 cm −2 ; allowed range: 10 20 to 10 24 cm −2 .
As part of the spectral fits, errors are computed at the 90% confidence level for every variable parameter. If Xspec cannot constrain the value of a certain parameter, i.e the error computation pegs at the lower and upper limits of the allowed range, the parameter is fixed. Fixed values are not the same for all observations, but they are computed during each spectral fit (see below), and they depend on the spectral model, energy band, and data quality.
Absorbed power-law model
A power-law component is the most common spectral shape displayed by X-ray sources, including active galactic nuclei (AGN, the most abundant X-ray sources), and X-ray binaries. This model is applied in the Full, Soft, and Hard bands to get a firstorder characterisation of the full spectral shape. Besides, a simple absorbed power-law model is found to be a good representation of most low to medium quality AGN spectra. The variable parameters of the model are the column density of the absorption component, and the photon index (initial value Γ = 2; range: 0-4), and normalisation of the power-law component.
The first step of the spectral fit for this model in the Full band is to compute the photon index in the Hard band without including absorption. The resulting value is the one used to fix the photon index of the model if it cannot be constrained. Sometimes the number of counts above 2 keV is < 20 counts, and this value cannot be computed. In those cases, the fixed value is the initial value, Γ = 2. If the parameter that cannot be constrained is the column density, its fixed value is the one obtained by carrying out the spectral fit in the Soft band with Γ fixed to the value obtained in the Hard band. If the number of counts below 2 keV is < 20 counts, the fixed value is N H = 10 22 cm −2 . This is a reasonable approximation since spectra with > 50 counts in the Full band but < 20 counts below 2 keV are most likely absorbed by column densities above that value.
The fixed values for the photon index and the column density when fitting in the Soft and Hard bands are Γ=2 and N H =10 22 cm −2 , respectively. We adopted these values because it is very difficult to constrain both the photon index and the column density in the Soft band in the case of low-count spectra, and the spectral fit is insensitive to column densities below that value in the Hard band. The spectral-fitting results of this model applied in these narrow bands are used as initial parameters and fixed values in the complex models.
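The cascade of fallback values for the Full-band power-law fit described in the two preceding paragraphs can be summarised as a decision function. This is a rough sketch under the stated rules; all names are illustrative.

```python
def powerlaw_fixed_values(counts_above_2keV, counts_below_2keV,
                          gamma_hard=None, nh_soft=None):
    """Fallback (fixed) values for the Full-band wabs*pow fit.

    gamma_hard: photon index from the Hard-band fit without absorption,
                if it could be computed (>= 20 counts above 2 keV).
    nh_soft:    column density from the Soft-band fit with the photon index
                frozen to gamma_hard, if available (>= 20 counts below 2 keV).
    """
    if counts_above_2keV >= 20 and gamma_hard is not None:
        gamma_fix = gamma_hard
    else:
        gamma_fix = 2.0          # default initial value
    if counts_below_2keV >= 20 and nh_soft is not None:
        nh_fix = nh_soft
    else:
        nh_fix = 1e22            # cm^-2, heavily absorbed assumption
    return {"Gamma": gamma_fix, "NH_cm2": nh_fix}


print(powerlaw_fixed_values(150, 10, gamma_hard=1.7))
# -> {'Gamma': 1.7, 'NH_cm2': 1e+22}
```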
Absorbed thermal model
The absorbed thermal model is applied in the Full and Soft bands. It is intended to model emission from stars, galaxies, and galaxy clusters. The Xspec mekal model is used, instead of the more up-to-date apec model, to maximise the fitting speed. The only variable parameters are the column density of the wabs component, the plasma temperature of the mekal component (initial value kT = 0.5 keV; range: 0.08-20 keV), and its normalisation. As a consequence, this model is not always an acceptable fit for these kinds of sources, but it is better, in terms of goodness (see Sect. 3.5), than a power-law model in most cases.
Similarly to the absorbed power-law model fit, the plasma temperature is computed by removing the absorption component and fitting this simple thermal model in the Soft band. The resulting temperature is the one used to fix this parameter in the case it cannot be constrained. In the case the number of counts in the Soft band is < 20 counts, a value of kT = 1 keV is used instead. The fixed value used in case the column density cannot be constrained is N H = 10 20 cm −2 in all cases.
The spectral-fitting results of this model applied in the Soft band are the ones used in the complex model: the mekal plus power-law model.
Absorbed black-body model
This model is only applied in the Soft band to model, for example, soft emission in AGN, X-ray binaries, and supersoft novae. The variable parameters are the column density of the wabs component, and the black-body temperature (initial value kT: 0.5 keV; range: 0.01-10 keV), and its normalisation.
The spectral fitting of this model is carried out in the same way as the absorbed thermal model. The only difference in this case is that the fixed value of the temperature in the case of low number of counts is kT = 0.1 keV.
The spectral-fitting results of this model are used as input parameters and fixed values in the black-body plus power-law model fit.
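The fixed fallback values quoted for the three simple models can be gathered in one table. A minimal sketch (labels as in the first listing); note that the N_H fallback for the black-body model is inferred from the statement that it is handled "in the same way" as the thermal model, so that entry is an assumption.

```python
# Fallback values used when a parameter of a simple model cannot be
# constrained and no narrow-band estimate is available.
SIMPLE_MODEL_FALLBACKS = {
    "WAPO":    {"Gamma": 2.0,  "NH_cm2": 1e22},   # absorbed power law
    "WAMEKAL": {"kT_keV": 1.0, "NH_cm2": 1e20},   # absorbed thermal (mekal)
    "WABB":    {"kT_keV": 0.1, "NH_cm2": 1e20},   # absorbed black body (NH assumed)
}
```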
Absorbed thermal plus power-law model
This model includes two absorption components: one affecting both the thermal, and power-law components; and a second one only affecting the power law component. In the case of AGN for example, this model could represent the host-galaxy soft emission (thermal component), and the more absorbed intrinsic AGN emission (power-law component), although it could also model appropriately emission from X-ray binaries.
The initial parameters are extracted from the absorbed thermal model fitting in the Soft band, and the absorbed power-law model fitting in the Hard band. If the number of counts in either of those bands is lower than 50 counts, and thus either of these models has not been fitted, this complex model is not fitted either. Taking into account the limit of 500 counts to apply complex models, it is reasonable to assume that spectra that lack enough counts in the Hard/Soft band do not need an additional power-law/thermal component to model the spectral shape.
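Which models are attempted for a given spectrum follows directly from the count thresholds quoted so far (50 counts per band for the simple fits, 500 Full-band counts plus both narrow-band fits for the complex models). A compact sketch; the labels distinguishing the narrow-band power-law fits are invented for illustration.

```python
def models_to_fit(counts_full, counts_soft, counts_hard):
    """Model labels attempted for one spectrum, following the count thresholds."""
    models = []
    if counts_full >= 50:
        models.append("WAPO")                       # Full-band absorbed power law
    if counts_soft >= 50:
        models += ["WAPO_SOFT", "WAMEKAL", "WABB"]  # Soft-band fits
    if counts_hard >= 50:
        models.append("WAPO_HARD")                  # Hard-band power law
    if counts_full >= 500 and counts_soft >= 50 and counts_hard >= 50:
        models += ["WAMEKALPO", "WAPOPO", "WABBPO"]  # complex models
    return models


print(models_to_fit(counts_full=800, counts_soft=300, counts_hard=450))
```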
Double power-law model
Like the previous model, this model also includes two absorption components. The photon indices of the two power-law components are not tied to each other, so this model could represent several physical scenarios: an absorbed power-law plus a scattered component; a partial covering absorber affecting intrinsic power-law emission; and even a hard power-law plus a soft thermal component if the data at low energies are of low quality. In this case, the initial values and fixed parameters are extracted from the absorbed power-law model fittings in the Soft and Hard bands. As for the previous model, if either of these fits has not been performed, this complex model fit is not carried out.
3.3.6. Absorbed black-body plus power-law model

In this case there is only one absorption component covering both the black-body and the power-law components. The black-body component is often used to phenomenologically represent soft emission in AGN, but it can also account for disk emission in X-ray binaries. The initial and fixed values for the parameters are extracted from the absorbed black-body model fit, and from the absorbed power-law model fit in the Hard band.
Fluxes
For each model applied, the observed flux and its errors (at the 90% confidence level) are computed in the band used in the spectral fit. In the case of multiple instrument spectra being jointly fitted, the flux included in the database corresponds to the average over all available instruments and exposures for that observation. Fluxes and errors are reported in erg cm^-2 s^-1. In a small number of cases (< 1%), Xspec fails to compute the errors on the fluxes; in those cases, only flux values are reported in the database. A direct comparison between fluxes reported in this database and in the 3XMM-DR4 catalogue is not possible for all energy bands (see Table 1), but only in the database Soft band. The observed fluxes in the Soft band obtained from the automated fits are plotted against the ones reported in 3XMM-DR4 in Fig. 2. Values in both catalogues agree in 70% of cases. Significant differences between the two values correspond to one of the following preferred models: black-body or thermal models, and power-law models with steep photon indices. The larger number of consistent fluxes below the one-to-one line is due to larger errors in the XMMFITCAT fluxes computed in the Soft band, which are sometimes consistent with zero at the 90% confidence level.
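The statement that fluxes "agree" in 70% of cases refers to consistency within the quoted 90% confidence intervals; the tiny helper below expresses that check. The exact comparison used in the paper is an assumption here, so this is only a sketch.

```python
def intervals_overlap(lo1, hi1, lo2, hi2):
    """True if two 90% confidence intervals [lo, hi] overlap."""
    return lo1 <= hi2 and lo2 <= hi1


# Fluxes in erg cm^-2 s^-1 from XMMFITCAT and 3XMM-DR4 for the same detection:
print(intervals_overlap(2.1e-14, 4.2e-14, 3.9e-14, 6.4e-14))  # True
```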
Goodness of fit
C-stat does not provide goodness of fit. As an estimate, the Xspec command goodness is used. This command performs a number of simulations, 1000 simulations in the case of this database, and returns the percentage of simulations that gives a lower value of the statistic. For large return values of this command, the model can be rejected at the goodness value confidence level.
As another proxy for the goodness of fit, the reduced χ² value after C-stat fitting is also computed and included in the database. There is no direct correspondence between goodness and reduced χ² values computed this way, although low goodness values (< 50%) correspond to low reduced χ² values (< 1.5) in 97% of cases. As implemented in Xspec, C-stat/d.o.f. values tend to the reduced χ² values for large numbers of counts; however, most observations (80%) within the 3XMM-DR4 catalogue have < 1000 counts. A conservative approach is adopted in the database to decide whether a spectral fit is acceptable or not: instead of using the χ² values from the C-stat fits, we use only the value of goodness, and consider a fit acceptable if the return value of goodness is lower than 50%. Although both estimates behave in a similar way, i.e. they are similarly "good" at separating acceptable from unacceptable fits, using goodness a smaller number of unacceptable fits is included within our acceptance criteria (see Sect. A.1).
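The goodness criterion can be reproduced schematically: simulate spectra from the best-fit model, compute the fit statistic for each, and report the percentage of simulations with a statistic lower than that of the data. The sketch below uses a plain Cash statistic and Poisson draws and, unlike Xspec, does not re-fit each simulated spectrum, so it only illustrates the idea.

```python
import numpy as np

def cash_stat(observed, predicted):
    """Cash (C) statistic for Poisson data, up to model-independent terms."""
    predicted = np.clip(predicted, 1e-12, None)
    return 2.0 * np.sum(predicted - observed * np.log(predicted))

def goodness(observed, model_counts, n_sims=1000, rng=None):
    """Percentage of simulated spectra with a lower statistic than the data."""
    rng = np.random.default_rng(0) if rng is None else rng
    stat_data = cash_stat(observed, model_counts)
    sims = rng.poisson(model_counts, size=(n_sims, len(model_counts)))
    stat_sims = np.array([cash_stat(s, model_counts) for s in sims])
    return 100.0 * np.mean(stat_sims < stat_data)

def acceptable_fit(goodness_percent, threshold=50.0):
    """Database criterion: a fit is acceptable if goodness < 50%."""
    return goodness_percent < threshold

# Toy example: data drawn from the model itself gives goodness near 50% on average.
model = np.full(100, 5.0)
data = np.random.default_rng(1).poisson(model)
g = goodness(data, model)
print(g, acceptable_fit(g))
```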
As a guide to the user, the simplest model applied in the Full band with the lowest value of goodness is considered as the preferred model for that observation. However, since both goodness and reduced χ 2 values are provided within the database, the user could decide between both estimates to select the best-fit model.
Database overview
The final XMM-Newton spectral-fit database contains spectral-fitting results for 114 166 observations, corresponding to 77 954 unique sources. Acceptable fits are found for 90% of the observations with fewer than 500 counts (70% of all the observations), and for 80% of the observations with more than 500 counts. This is a remarkable result given the limited number of spectral models used, and the difficulty often found in obtaining acceptable fits for high numbers of counts even in manual fits. The distribution of counts in XMMFITCAT is shown in Fig. 3; the vertical line corresponds to the limit of 500 counts above which complex models are applied. The distribution of photon indices (excluding detections for which the index had to be fixed) for the sources best-fitted by an absorbed power-law model is plotted in Fig. 4. As expected, since most X-ray sources are likely AGN, the distribution is very similar to the one usually obtained from X-ray analyses of AGN. We separated observations with fewer than 500 and more than 500 counts and found an average Γ = 1.8^{+0.4}_{-0.3} with a standard deviation of 0.64, and Γ = 1.9^{+0.2}_{-0.2} with a standard deviation of 0.54, respectively. Both distributions are very similar, the lower average Γ for detections with < 500 counts being likely due to the more pronounced tail towards lower photon index values. Hardness ratios (or X-colours) are used as a proxy of the spectral shape, and to estimate the column density. Following the band numbering in Table 1, hardness ratios are defined within 3XMM-DR4 as:
HR_n = [CR(band n+1) − CR(band n)] / [CR(band n+1) + CR(band n)]    (1)
where CR(band n) is the count rate in the band number n. Therefore, a correlation is expected between hardness ratio values and spectral parameters. To check the consistency between the results in 3XMM-DR4 and XMMFITCAT, we compared the HR2+HR3 values from 3XMM-DR4, against the best-fit parameters from the absorbed power-law fit in XMMFITCAT (see Fig. 5). As it can be seen in Fig. 5, we find a very good agreement between both catalogues.
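Equation (1) translates directly into code; a one-line helper (band numbering as in Table 1, which is not reproduced in this extraction):

```python
def hardness_ratio(cr_band_n, cr_band_n_plus_1):
    """HR_n = (CR(n+1) - CR(n)) / (CR(n+1) + CR(n)), Eq. (1)."""
    total = cr_band_n + cr_band_n_plus_1
    if total <= 0:
        raise ValueError("count rates must sum to a positive value")
    return (cr_band_n_plus_1 - cr_band_n) / total


print(hardness_ratio(0.012, 0.004))   # soft source -> negative hardness ratio
```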
Less than 1% of the observations in XMMFITCAT are affected by Xspec errors. This means that these observations lack spectral-fitting results for one or more, but not all, of the spectral models that were applied according to their spectral counts. These errors occur when the model is a very bad representation of the spectral shape and/or the allowed ranges for the parameters did not encompass the best-fit values; as a consequence, Xspec fails to find a minimum and/or falls into an infinite loop. More than 18% of the sources within XMMFITCAT have multiple observations with spectra available. This represents a rich source of information for spectral variability studies, which are usually observationally expensive. This database has already been successfully used to devise a selection technique for highly absorbed AGN (Corral et al. 2014). That work was envisioned as a test of the capabilities of XMMFITCAT in constructing representative samples of different X-ray sources. We used the automated spectral-fitting results as a starting point from which to pinpoint candidate sources, and then confirmed their obscured nature by using manual fits. We derived an efficiency of our automated method of ∼80% in selecting highly absorbed AGN.
The database, and the list of spectra used in the spectral fits, can be retrieved in FITS format from the database project webpage: http://xraygroup.astro.noa.gr/Webpage-prodec/index.html. The spectral-fitting results can also be queried through LEDAS 8 (the LEicester Database and Archive Service) and the XCAT-DB 9, which also includes a data visualisation tool.
The verification tests carried out, as well as a description of the database columns can be found in the Appendix.
Appendix A: Verification procedures

In the following sections of the appendix we describe some of the quality verification tests that have been carried out during the construction of the database.

Appendix A.1: Goodness of fit: C-stat versus χ² fitting

χ² fitting has the advantage of providing a goodness of fit, reduced values (χ²/d.o.f.) ∼ 1 indicating that the model is a good representation of the data. However, a relatively high number of counts is needed in order to use this statistic. C-stat fitting works very well down to 40 spectral counts, but it does not provide a goodness of fit. Two possible solutions to this problem are: to use the command goodness in Xspec; and to use C-stat as the fitting statistic but χ² as the test statistic (see Sect. 3.5).
To compare the two goodness estimates used to define acceptable fits in the database, a random sample (∼ 2000 detections with more than 200 net counts) was selected from the 3XMM-DR4 catalogue. The automated fitting procedure was applied both using C-stat fitting on spectra binned to 1 count/bin, and χ² fitting on spectra binned to 20 counts/bin. In the second case, the χ² values are true representations of the goodness of fit. In Fig. A.1, the values of goodness are plotted against the reduced χ² values derived from the C-stat fitting (χ²_C). Different symbols represent acceptable (filled circles) and unacceptable (crosses) fits from the χ² fitting of the same observations; we define as acceptable a χ² fit with a reduced χ² value < 1.5. The arrows on the plot indicate that 50% of the unacceptable fits according to χ² fitting lie above χ²_C > 3 and goodness > 80. We find that there are no limits on goodness or χ²_C that allow us to distinguish unambiguously between acceptable and unacceptable fits by using these estimates. The compromise adopted in the database is to define an acceptable fit as one with a goodness value < 50%, which roughly corresponds to reduced χ²_C < 1.5 but classifies fewer probably-bad fits as acceptable.
It is important to note that both estimates, goodness < 50% and χ²_C < 1.5, distinguish between acceptable and unacceptable fits (as defined by χ² < 1.5 or > 1.5, respectively) similarly well: both criteria classify 90% of the spectral fits correctly. The disadvantage of each criterion is that χ²_C includes more bad fits among the acceptable fits, while goodness classifies more good fits as unacceptable. For lower numbers of counts the comparison is more difficult, since χ² fitting is less reliable, so to compare the reliability of both estimates we made use of simulations. We simulated ∼ 4000 observations with fewer than 200 counts, using an absorbed power-law and an absorbed thermal model, and considering a wide range of values for the photon index, the thermal component temperature, and the column density. Assuming that power-law simulated models should be well fitted by a power law, we find again that using goodness we miss a larger number of good fits than using χ²_C (17% of fits are considered unacceptable according to the goodness values, whereas only 2% are according to the χ²_C values). However, simulated thermal models are classified as acceptably fitted by a power-law model according to χ²_C in 70% of the cases, whereas they are classified as acceptable according to goodness in only 50% of them. Both criteria seem to classify correctly the same fraction of spectral fits (∼ 80%, lower than the 90% found for more than 200 counts), but again using goodness we miss more good fits, and using χ²_C we misclassify more bad fits as good fits.
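The comparison of the two acceptability criteria amounts to a small confusion-matrix computation. A sketch of how such fractions could be tabulated; the array names and the example numbers are hypothetical.

```python
import numpy as np

def classification_summary(goodness, chi2c, truly_acceptable,
                           goodness_cut=50.0, chi2c_cut=1.5):
    """Fraction of fits classified correctly by each criterion.

    goodness, chi2c:   per-fit goodness (%) and reduced chi^2 from C-stat fits
    truly_acceptable:  boolean reference classification (e.g. from chi^2 fitting)
    """
    goodness = np.asarray(goodness, dtype=float)
    chi2c = np.asarray(chi2c, dtype=float)
    truth = np.asarray(truly_acceptable, dtype=bool)
    by_goodness = goodness < goodness_cut
    by_chi2c = chi2c < chi2c_cut
    return {
        "goodness_correct": float(np.mean(by_goodness == truth)),
        "chi2c_correct": float(np.mean(by_chi2c == truth)),
        "good_fits_missed_by_goodness": float(np.mean(~by_goodness & truth)),
        "bad_fits_accepted_by_chi2c": float(np.mean(by_chi2c & ~truth)),
    }


print(classification_summary([20, 80, 95, 10], [1.1, 1.3, 2.4, 0.9],
                             [True, True, False, True]))
```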
Although it is not the purpose of this database to decide between models but to provide as much information as possible about the spectral shape, the goodness values are used, as a guide to the user, to both distinguish between acceptable and unacceptable fits, and to select the best-fit model within the database. However, goodness and χ 2 C values are both included in the database so the users may consider either or both values to define their own acceptable/unacceptable classification or to select the best-fit model.
Appendix A.2: Dependence on source type
To study if the automated preferred models and best-fit parameters are in agreement with what is often found from manual spectral analyses, we constructed a sample of ∼ 500 sources including stars (2XMM/Tycho sample, Pye et al. 2008), Low Mass X-ray binaries, (LMXB, from the catalogue in Liu et al. 2007), High Mass X-ray Binaries (HMXB, from the catalogue in Liu et al. 2006), normal galaxies, and Active Galactic Nuclei (AGN from the XMM-Newton Bright Sample, XBS, Della Ceca et al. 2004). These kinds of sources are representative of the most common types expected to be found within the XMM-Newton catalogue. This test sample also includes a great variety in spectral quality, including bright targeted sources as well as serendipitous much more fainter sources.
We then applied the automated spectral-fitting process to the 500 sources. In 6% of cases an acceptable fit was not found (goodness of fit larger than 50% for all models). All these cases correspond to X-ray binaries and stars with large numbers of net counts (> 15000), and a very complex spectral shape, and for which a much more detailed spectral analysis has already been published in the literature. Most sources with less than 1000 counts are well-fitted by using simple models. But it is also important to note that a good fit is also found for 20% of the sources with more than 15000 counts.
The large majority of AGN (95%) are well fitted by a simple absorbed power-law model. The photon index distribution is in good agreement with published results from manual spectral analyses of X-ray selected AGN. A manual spectral-fitting analysis of the same sample of AGN was presented in Corral et al. (2011), which has been used to compare our results. The individual values of the photon index from the automated fits are systematically lower than the ones presented in Corral et al. (2011) (see Fig. A.2), but consistent within errors in most cases. The same applies to the column density values (see Fig. A.3), the values in Corral et al. (2011) being higher, likely because of the also higher values of the photon indices and the effect of redshift, but consistent with the automated ones within errors. These small differences are likely caused by one or more of the following intrinsic differences between the two analyses:
- Different energy channels taken into account: the fitting statistic used in Corral et al. (2011) was χ² instead of C-stat. One of the main differences between the two statistics is that spectral channels were added together to contain at least a minimum number of counts in Corral et al. (2011). This can lead to the removal of high-energy spectral channels (usually less populated) and, as a consequence, the loss of counts at high energies, whereas using C-stat almost all detected photons in the energy range in use are taken into account. These high-energy channels usually contain only a small number of counts, so they are often removed if only added channels with a minimum number of counts are considered, which can result in a softer spectrum being used during the spectral fit, and a resulting higher value for the photon index.
- Different fitting statistics: the systematic differences can also be produced by the use of C-stat instead of χ², and could simply be due to the fact that C-stat fitting performs better in the low-count regime. In fact, the higher the number of counts, the closer the parameter values become between C-stat and χ² fitting.
- Different spectral-fitting methods: it is also important to note that, unlike previously reported manual analyses of these samples, the automated fits do not make use of any assumptions regarding the type of source under consideration, nor do they include information about the source redshifts. Besides, very hard photon indices (Γ < 1.4) were not allowed in Corral et al. (2011); if a very hard photon index was found, either it was fixed to 1.9, or additional spectral components were added to the fit.
The results for the rest of the source types are also in agreement with published results. Most stars, normal galaxies, and LMXBs are better fitted by soft models, i.e. by power-law models with a steep photon index (most of them > 2), or by thermal models. HMXBs are better fitted by hard models, i.e. by models including a power-law component with a flat photon index (most of them < 2).
Appendix A.2.1: Manual testing
We constructed another randomly selected sample of 500 sources, extracted from 3XMM-DR4, in order to manually check the spectral results. The strategy was to apply the automated spectral fitting procedure to these sources and then, to fit them also manually by using the same set of models and compare the results. The selected sample spans a wide range in spectral quality very similar to the one spanned by the full XMMFITCAT. As a first step, we compared the values of the resulting spectral parameters, such as the power-law photon index, the temperature of the thermal component, or the inferred flux. We find an excellent agreement between these values in almost all cases. Significant deviations between values derived from different methods only occur if the model is not an acceptable fit. The values obtained from the automated fits against the ones obtained manually, for the photon index and the absorbing column density, are plotted in Fig. A.4. Note that the most significant differences, although consistent within errors, for the column density values occur only for low values of this parameter, i.e., when the absorption component does not affect significantly the spectral shape.
We also checked if the model considered as our best-fit model was the same for the manual and automated fits. For the one-component models, the model selected as our best-fit model by the automated process was the same as for the manual process in 95% of the cases. In the case of two-components models, it is extremely difficult to decide between two acceptable models even if we could take into consideration the source type. Nevertheless, the manually derived spectral parameters are in agreement with the ones obtained from the automated fits in almost all cases (see, for example, the computed fluxes plotted in Fig. A.5).
Appendix A.2.2: Simulated data

Finally, we tested the ability of the automated procedure to distinguish between spectral models, and its accuracy at retrieving the intrinsic spectral shape. To this end, we selected yet another random sample from 3XMM-DR4 of 1000 detections with the same count distribution as the full XMMFITCAT. Then, we used the preferred model and parameters to simulate each source and background spectrum 10 times, and applied the automated procedure to the simulated data.
We find that the simulated model and the preferred model after the automated fits agree in ∼ 87% of the cases. Nevertheless, the preferred model is given as a guide to the user, and it is not the aim of this database to distinguish between models, but to provide a good representation of the spectral shape. We find that the vast majority of the best-fit parameters from the automated fits are consistent within errors with the input parameters of the simulations. Therefore, the automated procedure is very successful in recovering the simulated spectral shape. As an example, the simulated photon indices are plotted against the ones obtained after the automated fits in Fig. A.6.
Fig. 1. Examples of spectral-fitting results for the models (from top to bottom and left to right): absorbed power-law model; absorbed thermal model; absorbed black-body model; thermal plus power-law model; double power-law model; and black-body plus power-law model.
Fig. 2. Observed fluxes in the Soft band (in erg cm^-2 s^-1) from the automated fits against the ones reported in the 3XMM-DR4 catalogue. Filled circles correspond to consistent fluxes between both catalogues, whereas small points correspond to non-consistent ones. Square points with error bars represent average error sizes at each flux interval.
Fig. 3. Distribution of net source counts per detection in XMMFITCAT.
Fig. 4. Photon index distribution for the XMMFITCAT detections for which an absorbed power-law model is the preferred model. Empty histogram corresponds to detections with < 500 counts, and line-shaded histogram to detections with > 500 counts.

Fig. 5. Hardness ratios from 3XMM-DR4 against best-fit parameters (photon index and column density) from XMMFITCAT. Left panel: high Galactic latitude sample (|b| > 20). Right panel: low Galactic latitude sample (|b| < 20).
Fig. A.1. Goodness values versus reduced χ² values for the same spectral fits. Circles and crosses correspond to acceptable and unacceptable fits, respectively, according to χ² fitting.
Fig. A.2. Photon index distribution for XBS AGN obtained from the automated spectral fit (line-shaded histogram) and from the manual fit in Corral et al. (2011) (empty histogram).
Fig. A.3. Column density distribution for XBS AGN obtained from the automated spectral fit (line-shaded histogram) and from the manual fit in Corral et al. (2011) (empty histogram).
Fig. A.4. Best-fit parameters (left: photon index, right: absorbing column density) from the manual fits compared to the ones obtained from the automated fits. Empty symbols and arrows on the right panel correspond to upper limits. Square points with errors represent average error sizes.

Fig. A.5. Fluxes (in erg cm^-2 s^-1) computed by using manual fits versus the ones obtained by the automated fits. Square points with errors correspond to the average error at each flux.
Table 1. Energy bands.
http://xmm.esac.esa.int/xsa/
8 http://www.ledas.ac.uk/arnie5/arnie5.php?action=advanced&catname=3xmmspectral
9 http://xcatdb.unistra.fr/3xmm/
Acknowledgements. We thank the anonymous referee for providing us with constructive comments and suggestions. A. Corral acknowledges financial support by the European Space Agency (ESA) under the PRODEX program.

Appendix B: Database columns

The catalogue contains 214 columns. A description of each column is given in the following sections. The name is given in capital letters, the FITS data format in brackets, and the unit in square brackets. For easier reference the columns are grouped into five sections. Non-available data are represented by a -99 value within the FITS table.

Appendix B.1: Identification of the detection

The first eight columns of the database are also contained within the 3XMM-DR4 catalogue, and their purpose is to identify each source detection and spectral products; to this end, they share the same column name, format, and values as in 3XMM-DR4.
IAUNAME (21A): the IAU name assigned to the unique SRCID.
DETID (J): a consecutive number which identifies each entry (detection) in the catalogue. The DETID numbering assignments in 3XMM-DR4 bear no relation to those in 2XMMi-DR3, but the DETID of the nearest matching detection from the 2XMMi-DR3 catalogue to the 3XMM-DR4 detection is provided via the DR3DETID column (not included in the XMMFITCAT table) within 3XMM-DR4.
SRCID (J): a unique number assigned to a group of catalogue entries which are assumed to be the same source. The process of grouping detections into unique sources has changed since the 2XMM catalogue series. The SRCID assignments in 3XMM-DR4 bear no relation to those in 2XMMi-DR3, but the nearest unique source from the 2XMMi-DR3 catalogue to the 3XMM-DR4 unique source is provided via the DR3SRCID column (not included in XMMFITCAT).
The possible values for this column correspond to the different situations that may occur during the spectral fits, and they are described as follows:
0. The spectral fit was performed, and the model is considered an acceptable fit, i.e. the value returned by the command goodness is lower than 50%.
1. The spectral fit was performed, but the value returned by the command goodness is greater than 50%.
2. The spectral fit was not performed because the number of counts in the Soft band is lower than 50 counts.
3. The spectral fit was not performed because the number of counts in the Hard band is lower than 50 counts.
4. Complex-model (WAMEKALPO, WABBPO, or WAPOPO) spectral fit was not performed because the number of counts in the full band is lower than 500 counts.
5. Complex-model (WAMEKALPO, WABBPO, or WAPOPO) fit results not reported because the soft model component (thermal or black-body) is not significant, i.e. its normalisation is consistent with 0 at the 90% confidence level.
6. Complex-model (WAMEKALPO, WABBPO, or WAPOPO) fit results not reported because the hard model component (power-law) is not significant, i.e. its normalisation is consistent with 0 at the 90% confidence level.
7. No best-fit parameters found.
This may occur if the allowed ranges for the parameters to vary and/or the spectral model are not a good representation of the data, and Xspec falls into an infinite loop or fails to find a minimum.
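The integer status values just listed can be decoded with a simple mapping; the sketch below merely paraphrases the list above (the column carrying the flag is not named in this extraction).

```python
FIT_STATUS = {
    0: "acceptable fit (goodness < 50%)",
    1: "fit performed, but goodness > 50%",
    2: "not fitted: < 50 counts in the Soft band",
    3: "not fitted: < 50 counts in the Hard band",
    4: "complex model not fitted: < 500 counts in the Full band",
    5: "complex-model results dropped: soft component not significant",
    6: "complex-model results dropped: hard (power-law) component not significant",
    7: "no best-fit parameters found (fit failed)",
}

def describe_status(flag):
    return FIT_STATUS.get(flag, "unknown status value")

print(describe_status(4))
```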
References

Arnaud, K. A. 1996, in Astronomical Society of the Pacific Conference Series, Vol. 101, Astronomical Data Analysis Software and Systems V, ed. G. H. Jacoby & J. Barnes, 17
Corral, A., Della Ceca, R., Caccianiga, A., et al. 2011, A&A, 530, A42
Corral, A., Georgantopoulos, I., Watson, M. G., et al. 2014, A&A, 569, A71
Della Ceca, R., Maccacaro, T., Caccianiga, A., et al. 2004, A&A, 428, 383
Gilli, R., Comastri, A., & Hasinger, G. 2007, A&A, 463, 79
Kalberla, P. M. W., Burton, W. B., Hartmann, D., et al. 2005, A&A, 440, 775
Krumpe, M., Lamer, G., Corral, A., et al. 2008, A&A, 483, 415
Liu, Q. Z., van Paradijs, J., & van den Heuvel, E. P. J. 2006, A&A, 455, 1165
Liu, Q. Z., van Paradijs, J., & van den Heuvel, E. P. J. 2007, A&A, 469, 807
Pye, J., Fyfe, D., Rosen, S., & Schröder, A. 2008, in The X-ray Universe 2008
Treister, E., Urry, C. M., & Virani, S. 2009, ApJ, 696, 110
Watson, M. G., Auguères, J.-L., Ballet, J., et al. 2001, A&A, 365, L51
Watson, M. G., Schröder, A. C., Fyfe, D., et al. 2009, A&A, 493, 339
arXiv:1502.03183 · DOI: 10.4171/jst/171 · https://arxiv.org/pdf/1502.03183v1.pdf
RESONANCE EXPANSIONS FOR TENSOR-VALUED WAVES ON ASYMPTOTICALLY KERR-DE SITTER SPACES
Peter Hintz
In recent joint work with András Vasy[19], we analyze the low energy behavior of differential form-valued waves on black hole spacetimes. In order to deduce asymptotics and decay from this, one in addition needs high energy estimates for the wave operator acting on sections of the form bundle. The present paper provides these on perturbations of Schwarzschild-de Sitter spaces in all spacetime dimensions n ≥ 4. In fact, we prove exponential decay, up to a finite-dimensional space of resonances, of waves valued in any finite rank subbundle of the tensor bundle, which in particular includes differential forms and symmetric tensors. As the main technical tool for working on vector bundles that do not have a natural positive definite inner product, we introduce pseudodifferential inner products, which are inner products depending on the position in phase space.
Introduction
We continue the analysis of (linear) aspects of the black hole stability problem in the spirit of earlier works by Dyatlov, Vasy and the author [10,17,18,27] by studying linear tensor-valued wave equations on perturbations of Schwarzschild-de Sitter spaces with spacetime dimension n ≥ 4; in particular, this includes wave equations for differential forms and symmetric 2-tensors. In our main result, we establish exponential decay up to a finite-dimensional space of resonances: Theorem 1. Let (M, g) denote a Kerr-de Sitter spacetime in n ≥ 4 spacetime dimensions, with small angular momentum. Let E ⊂ T k be a subbundle of the bundle T k of (covariant) rank k tensors on M , so that the tensor wave operator g = − tr ∇ 2 acts on sections of E; for instance, one can take E to be equal to T k , symmetric rank k-tensors or differential forms of degree k. Let Ω denote a small neighborhood of the domain of outer communications, bounded beyond but close to the cosmological and the black hole horizons by spacelike boundaries, and let t * be a smooth time coordinate on Ω. See Figure 1 for the setup.
Then for any f ∈ C^∞_c(Ω, E), the wave equation □_g u = f has a unique global forward solution (supported in the causal future of supp f) u ∈ C^∞(Ω, E), and u has an asymptotic expansion

u = Σ_{j=1}^{N} Σ_{m=0}^{m_j−1} Σ_{ℓ=1}^{d_j} u_{jmℓ} e^{−iσ_j t_*} t_*^m a_{jmℓ} + u′,

where u_{jmℓ} ∈ C, the resonant states a_{jmℓ}, depending only on □_g, are smooth functions of the spatial coordinates, and σ_j ∈ C are resonances with Im σ_j > −δ (whose multiplicity is m_j ≥ 1 and for which the space of resonant states has dimension d_j), while u′ ∈ e^{−δ t_*} L^∞(Ω, E) is exponentially decaying, for δ > 0 small; we measure the size of sections of E by means of a t_*-independent positive definite inner product.
The same result holds true if we add any stationary 0-th order term to □_g, and one can also add stationary first order terms which are either small or subject to a natural, but somewhat technical, condition, which we explain in Remark 4.9. In fact, we can even work on spacetimes which merely approach a stationary perturbation of Schwarzschild-de Sitter space exponentially fast. See Section 2 for the form of the Schwarzschild-de Sitter metric and the precise assumptions on regularity and asymptotics of perturbations, for details on the setup, and Theorem 2.1 for the full statement of Theorem 1. In Figure 1, the 'point at future infinity' of the usual Penrose diagrammatic representation is shown blown up, since the wave operator is well-behaved (namely, a b-operator in the sense of Melrose [23]) on the blown-up space, and the asymptotic information is encoded on the front face ff of the blow-up.
The resonances and resonant states depend strongly on the precise form of the operator and which bundle one is working on. In the case of the trivial bundle, thus considering scalar waves, they were computed in the Kerr-de Sitter setting by Dyatlov [10], following work by Sá Barreto and Zworski [3] as well as Bony and Häfner [5]. In recent work with Vasy [19], we compute the resonances for the Hodge d'Alembertian on differential forms, which equals the tensor wave operator plus a zeroth order curvature term: We show that there is only one resonance σ 1 = 0 in Im σ ≥ 0, with multiplicity m 1 = 1, and we canonically identify the 0-resonant states with cohomological information of the underlying spacetime. Note however that [19] deals with a very general class of warped product type spacetimes with asymptotically hyperbolic ends, while the present paper is only concerned with (perturbations of) Schwarzschild-de Sitter spacetimes. We remark that in general one expects that g = − tr ∇ 2 on a bundle E as in Theorem 1 has resonances in Im σ > 0, thus causing linear waves to grow exponentially in time.
We point out that if there are no resonances for g (plus lower order terms) in Im σ ≥ 0, thus solutions decay exponentially, we can combine Theorem 1 with the framework for quasilinear wave-type equations developed by the author [16] and in collaboration with Vasy [18] and immediately obtain the global solvability of quasilinear equations. This also works if there is merely a simple resonance at σ = 0 which is annihilated by the nonlinearity.
The point of view from which we approach the proof of Theorem 1 was originally developed by Vasy [27] and extended by Vasy and the author [17]. In the context of scalar waves, more general and precise versions of Theorem 1 are known, see the references below. Thus, the main novelty is that we give a conceptually transparent framework that allows us to deal with tensor-valued waves on black hole spacetimes, where the natural inner product on the tensor bundle induced by the spacetime metric is not positive definite. The central motivation for the study of such waves is the black hole stability problem, see the lecture notes by Dafermos and Rodnianski [8] for details. Notice that in order to obtain energy estimates for waves, one needs to work with positive inner products on the tensor bundle, relative to which however is in general not well-behaved: Most severely, it is in general far from being symmetric at the trapped set, which prevents the use of estimates at normally hyperbolic trapping. In the context of black hole spacetimes, such estimates were pioneered by Wunsch and Zworski [29] and Dyatlov [13,14]. On a pragmatic level, we show that one can conjugate by a suitable 0-th order pseudodifferential operator so as to make the conjugated operator (almost) symmetric at the trapped set with respect to a positive definite inner product, and one can then directly apply Dyatlov's results [14]. 1 The conceptually correct point of view to accomplish this conjugation is that of pseudodifferential inner products, which we introduce in this paper.
Roughly speaking, pseudodifferential inner products replace ordinary inner products B 0 (u), v |dg|, where B 0 is an inner product on the fibers of E, mapping E into its anti-dual E * , by 'inner products' of the form B(x, D)u, v |dg|, where B ∈ Ψ 0 is a zeroth order pseudodifferential operator mapping sections of E into sections of E * . Thus, we gain a significant amount of flexibility, since we can allow the inner product to depend on the position in phase space, rather than merely on the position in the base: Indeed, the principal symbol b = σ 0 (B) is an inner product on the vector bundle π * E over T * M \ 0, where π : T * M \ 0 → M is the projection. One can define adjoints of operators P ∈ Ψ m (M, E) (e.g. P = g ), acting on sections of E, relative to a pseudodifferential inner product B, denoted P * B , which are well-defined modulo smoothing operators. Moreover, there is an invariant symbolic calculus involving the subprincipal operator S sub (P ), which is a first order differential operator on T * M \ 0 acting on sections of π * E that invariantly encodes the subprincipal part of P , for computing principal symbols of commutators and imaginary parts of such operators. In the case that P is principally scalar and real, the principal symbol of P − P * B ∈ Ψ m−1 (M, E) then vanishes in some conic subset of phase space T * M \ 0 if and only if S sub (P ) − S sub (P ) * b does, which in turn can be reinterpreted as saying that the principal symbol of QP Q −1 − (QP Q −1 ) * B0 vanishes there, where B 0 is an ordinary inner product on E, and Q ∈ Ψ 0 (M, E) is a suitably chosen elliptic operator. In the case considered in Theorem 1 then, it 1 In other words, we reduce the high frequency analysis of tensor-valued waves to an essentially scalar problem. turns out that the subprincipal operator of g on tensors, decomposed into parts acting on tangential and normal tensors according to the product decompositions M = R t × X x and X = (r − , r + ) × S n−2 , at the trapped set equals the derivative along the Hamilton vector field H G , G the dual metric function, plus a nilpotent zeroth order term. This then enables one to choose a positive definite inner product b on π * E relative to which S sub ( g ) is arbitrarily close to being symmetric at the trapped set; thus with B = b(x, D), the operator g is arbitrarily close to being symmetric with respect to the pseudodifferential inner product B. Hence, one can indeed appeal to Dyatlov's results on spectral gaps by considering a conjugate of g , which is the central ingredient in the proof of Theorem 1.
1.1. Related work. The study of non-scalar waves on black hole backgrounds has focused primarily on Maxwell's equations: Sterbenz and Tataru [25] showed local energy decay for Maxwell's equations on a class of spherically symmetric asymptotically flat spacetimes including Schwarzschild. Blue [4] established conformal energy and pointwise decay estimates in the exterior of the Schwarzschild black hole; Andersson and Blue [1] proved similar estimates on slowly rotating Kerr spacetimes. These followed earlier results for Schwarzschild by Inglese and Nicolo [22] on energy and pointwise bounds for integer spin fields in the far exterior of the Schwarzschild black hole, and by Bachelot [2], who proved scattering for electromagnetic perturbations. Finster, Kamran, Smoller and Yau [15] proved local pointwise decay for Dirac waves on Kerr. There are further works which in particular establish bounds for certain components of the Maxwell field, see Donniger, Schlag and Soffer [9] and Whiting [28]. Dafermos [6], [7] studied the non-linear Einstein-Maxwell-scalar field system under the assumption of spherical symmetry.
The framework in which we describe resonances was introduced by Vasy [27]. In the scalar setting, this can directly be combined with estimates at normally hyperbolic trapping [13,14,29] to obtain resonance expansions for scalar waves. On exact Kerr-de Sitter space, Dyatlov proved a significant strengthening of this in [11], obtaining a full resonance expansion for scalar waves, improving on the result of Bony and Häfner [5] in the Schwarzschild-de Sitter setting, which in turn followed Sá Barreto and Zworski [3]. Vasy [26] proved the meromorphic continuation of the resolvent of the Laplacian on differential forms on asymptotically hyperbolic spaces, and the fact that the underlying analysis of [27] works on sections of vector bundles just as it does on functions is fundamental for the present paper.
1.2. Structure of the paper. In Section 2, we recall the Schwarzschild-de Sitter metric and its extension past the horizons, put it into the framework of [17,27] for the study of asymptotics of waves, and establish the normally hyperbolic nature of its trapping. We proceed to sketch the proof of Theorem 1, leaving the discussion of high energy estimates at the trapped set to the subsequent sections, which comprise the central part of the paper: We introduce pseudodifferential inner products on vector bundles in full generality in Section 3, and we use the theory developed there in Section 4 to study pseudodifferential inner products for wave operators on tensor bundles, uncovering the nilpotent nature of the subprincipal operator of on Schwarzschild-de Sitter space at the trapping in Section 4.2 and thereby finishing the proof of Theorem 1.
Detailed setup and proof of the main theorem
We recall the form of the n-dimensional Schwarzschild-de Sitter metric, n ≥ 4: We equip M = R t × X, X = (r − , r + ) r × S n−2 ω , with r ± defined below, with the metric g 0 = µ dt 2 − (µ −1 dr 2 + r 2 dω 2 ), (2.1) where dω 2 is the round metric on the sphere S n−2 , and µ = 1 − 2M• r n−3 − λr 2 , λ = 2Λ (n−2)(n−1) , with M • > 0 the black hole mass and Λ > 0 the cosmological constant Λ. The assumption
M_•^2 λ^{n−3} < (n−3)^{n−3} / (n−1)^{n−1}    (2.2)
guarantees that µ has two unique positive roots 0 < r_− < r_+. Indeed, let µ̃ = r^{−2} µ = r^{−2} − 2M_• r^{1−n} − λ. Then µ̃′ = −2 r^{−n} (r^{n−3} − (n−1) M_•) has a unique positive root r_p = [(n−1) M_•]^{1/(n−3)}, with µ̃′(r) > 0 for r ∈ (0, r_p) and µ̃′(r) < 0 for r > r_p; moreover, µ̃(r) < 0 for r > 0 small and µ̃(r) → −λ < 0 as r → ∞. Thus the existence of the roots 0 < r_− < r_+ of µ is equivalent to the requirement µ̃(r_p) = (n−3)/(n−1) r_p^{−2} − λ > 0, which is equivalent to (2.2).
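The algebra behind the last equivalence is not spelled out in the text; the short verification below (our own, using r_p = [(n−1)M_•]^{1/(n−3)} from above and raising both positive sides to the power n−3 in the third step) checks it.

```latex
\tilde\mu(r_p) = \frac{n-3}{n-1}\,r_p^{-2} - \lambda > 0
  \;\iff\; \lambda\, r_p^{2} < \frac{n-3}{n-1}
  \;\iff\; \lambda\,\bigl((n-1)M_\bullet\bigr)^{2/(n-3)} < \frac{n-3}{n-1}
  \;\iff\; (n-1)^{2} M_\bullet^{2}\,\lambda^{\,n-3} < \frac{(n-3)^{n-3}}{(n-1)^{n-3}}
  \;\iff\; M_\bullet^{2}\,\lambda^{\,n-3} < \frac{(n-3)^{n-3}}{(n-1)^{n-1}},
```

which is exactly condition (2.2).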
Define α = µ^{1/2}, so that dα = (1/2) µ′ α^{−1} dr, and let
β_±(r) := ∓ 2/µ′(r)    (2.3)
near r ± , so β ± (r ± ) > 0 there. Then the metric g 0 can be written as
g_0 = α^2 dt^2 − h,    h = α^{−2} dr^2 + r^2 dω^2 = β_±^2 dα^2 + r^2 dω^2.
We introduce a new time variable t_* = t − F(α), with ∂_α F = −α^{−1} β_± near r = r_±.
Then g 0 = µ dt 2 * − β ± dt * dµ − r 2 dω 2 near r = r ± , which extends as a non-degenerate Lorentzian metric to a neighborhood M = R t * × X of M , where X = (r − − 2δ, r + + 2δ) × S n−2 . We will consider the Cauchy problem for the tensor wave equation in the domain Ω ⊂ M ,
Ω = [0, ∞) t * × [r − − δ, r + + δ] r × S n−2 .
Thus, Ω bounded by the Cauchy surface H 1 = {t * = 0}, which is spacelike, and by the hypersurface H 2 = ± {r = r ± ± δ}, which has two spacelike components, one lying beyond the black hole (r − ) and the other beyond the cosmological (r + ) horizon; see Figure 1.
For the purpose of analysis on spacetimes close to (but not necessarily asymptotically equal to!) Schwarzschild-de Sitter space, we encode the uniform (asymptotically stationary) structure of the spacetime by working on a compactified model, which puts the problem into the setting of Melrose's b-analysis, see [23]: Define τ := e −t * , and bordify M to a manifold M with boundary by adding τ = 0 as the boundary at future infinity and declaring τ to be a smooth boundary defining function. The metric g 0 becomes a smooth Lorentzian b-metric on M : If dx i denotes coordinate differentials on X, then g 0 is a linear combination of dτ 2 τ 2 , dτ τ ⊗ dx i + dx i ⊗ dτ τ and dx i ⊗ dx j with coefficients which are smooth on M , and g 0 , written in such coordinates, is a non-degenerate matrix (with Lorentzian signature) up to and including τ = 0. Invariantly, we have the Lie algebra V b (M ) of b-vector fields, which are the vector fields tangent to the boundary, spanned by τ ∂ τ = −∂ t * and ∂ xi ; elements of V b (M ) are sections of a natural vector bundle b T M , the b-tangent bundle, and we have the dual bundle b T * M , spanned by dτ τ and dx i . Thus, g is a smooth non-degenerate section of the symmetric second tensor power S 2b T * M . Now, given a complex vector bundle E → M of finite rank, equip it with an arbitrary Hermitian inner product and any smooth b-connection, which gives a notion of differentiating sections of E along b-vector fields; over Ω (which has compact closure in M ), all choices of inner products are equivalent. We can then define the b-Sobolev space H s b (Ω, E) for s ∈ Z ≥0 to consist of all sections of E over Ω which are square integrable (with respect to the volume density |dg| induced by the metric g) together with all of its b-derivatives up to order s, and extend this to all s ∈ R by duality and interpolation, or via the use of b-pseudodifferential operators. For the forward problem for the wave equation, we work on spaces of functions which vanish in the past of H 1 and which extend across H 2 . Thus, we work with the space
H s b (Ω, E) •,− of distributions u ∈ H s b (Ω, E) whichb (Ω, E) = τ r H s b (Ω, E)
, likewise for spaces of supported/extendible distributions. Note that the b-Sobolev spaces H s b are independent of the choice of boundary defining function τ in that the choice τ = aτ γ , a = a(x) smooth, γ > 0, while it changes the smooth structure of M , yields the same spaces H s b with equivalent norms. The asymptotic behavior of waves will be encoded on the boundary ∂ ∞ Ω at future infinity of Ω, that is, on
∂ ∞ Ω = {τ = 0} × [r − − δ, r + + δ] r × S n−2 ,
which is a smooth manifold with boundary. Similarly to the above definitions, we can define Sobolev spaces (including semiclassical versions of these) with supported/extendible character at the boundary.
Suppose g is a Lorentzian b-metric such that for some smooth Lorentzian bmetric g , we have g − g ∈ H ∞,r b (Ω, S 2b T * M ) for some r > 0. 2 Changing g so as to make it invariant under time translations does not affect this condition, so let us assume g is t * -invariant. We will consider the wave operator g acting on sections of the bundle T k of covariant tensors of rank k over Ω. We assume that g and g 0 are close (in the C k sense for sufficiently high k), 3 so that the dynamical and geometric structure of g is close to that of g 0 : 4 Most importantly, the nature of the trapping for g (and thus for g) is still normally hyperbolic, 5 and the subprincipal operator (see Section 3.3) of g at the trapped set, while not necessarily having the nilpotent structure alluded to in the introduction and explained in Section 4.2, has 2 By the discussion of b-Sobolev spaces above, this condition on g is invariant, i.e. independent of the specific choice of the boundary defining function e −t * of the spacetime at future infinity. 3 In other words, the metric g is exponentially approaching a stationary metric close to the 5 We will show the r-normal hyperbolicity for every r of the trapping for Schwarzschild-de
Sitter in all spacetime dimensions below, and r-normal hyperbolicity (for large, but finite r) is structurally stable under perturbations of the metric, see Dyatlov [13] and Hirsch, Shub and Pugh [20].
small imaginary part relative to (the symbol of) a pseudodifferential inner product on T k . We then have:
Theorem 2.1. In the above notation, if g is sufficiently close to the Schwarzschildde Sitter metric g 0 , then there exist s 0 ∈ R and δ > 0 as well as a finite set
{σ_j : j = 1, …, N} ⊂ C, Im σ_j > −δ, integers m_j ≥ 1 and d_j ≥ 1, and smooth functions a_{jmℓ} ∈ C^∞(∂_∞Ω), 1 ≤ j ≤ N, 0 ≤ m ≤ m_j − 1, 1 ≤ ℓ ≤ d_j,
such that the following holds: The equation
□_g u = f,   f ∈ H^{s,δ}_b(Ω, T^k)^{•,−},   s ≥ s_0,    (2.4)
has a unique solution u ∈ H^{−∞,−∞}_b
(Ω, T k ) •,− , which has an asymptotic expansion
u = χ(τ) Σ_{j=1}^{N} Σ_{m=0}^{m_j−1} Σ_{ℓ=1}^{d_j} τ^{iσ_j} |log τ|^m u_{jmℓ} a_{jmℓ} + u′,
where χ is a cutoff function, i.e. χ(τ ) ≡ 1 near τ = 0 and χ(τ ) ≡ 0 near the Cauchy surface H 1 , and u jm ∈ C, while the remainder term is u ∈ H s,δ b (Ω, T k ) •,− . The same result holds true if we restrict to a subbundle of T k which is preserved by the action of , for instance the degree k form bundle, or the symmetric rank k tensor bundle.
If
V ∈ C ∞ (M , End(T k )) + H ∞,r b
(Ω, End(T k )), r > 0, is a smooth (conormal) End(T k )-valued potential (without restriction on its size), the analogous result holds for g replaced by g + V . We may even change g by adding a first order bdifferential operator L acting on T k with coefficients which are elements of C ∞ + H ∞,r b , provided either the coefficients of L are small, or the subprincipal operator of g +L is sufficiently close to being symmetric with respect to a pseudodifferential inner product on T k , see Remark 4.9.
The numbers σ j are called resonances or quasinormal modes, and the functions a jm resonant states. They have been computed in various special cases; see the discussion in the introduction for references. The threshold regularity s 0 is related to the dynamics of the flow of the Hamiltonian vector field H G of the dual metric function G (i.e. G(x, ξ) = |ξ| 2 G(x) , with G the dual metric of g) near the horizons which are generalized radial sets, see [17, Proposition 2.1]. Thus, s 0 can easily be made explicit, but this is not the point of the present paper.
The proof of Theorem 2.1 proceeds in the same way as the proof of [17, Theorem 2.20] in the scalar setting, so we shall be brief: Denote by N ( g ) the normal operator of g : We freeze the coefficients of g ∈ Diff 2 b (Ω, T k ) at ∂ ∞ Ω and thus obtain a dilation-invariant operator N ( g ), with g − N ( g ) being an operator whose coefficients decay exponentially (in t * ) by assumption on the structure of g. Denote by g (σ) ∈ Diff 2 (∂ ∞ Ω, T k ) the Mellin transformed normal operator family, depending holomorphically on σ ∈ C, which we obtain from N ( g ) by replacing D t * by −σ. 6 Once we show high energy estimates for (σ) −1 , which are polynomial bounds on its operator norm between suitable Sobolev spaces 7 as | Re σ| → ∞ in Im σ > −δ, we can use a contour shifting argument to iteratively improve on the 6 Changing the boundary defining function τ to a(x)τ γ , we can express the normal operator with respect to the new defining function in terms of the normal operator with respect to τ , namely it equals a(x) −1 g (γσ)a(x). 7 These are semiclassical Sobolev spaces with extendible character at the boundary of ∂∞Ω, see in particular [27] and the proof of [17,Theorem 2.20].
decay of u, picking up contributions of the poles of (σ) −1 which give rise to the resonance expansion. 8 The fact that the remainder term u has the same regularity as the forcing term f , thus u loses 2 derivatives relative to the elliptic gain of 2 derivatives, comes from the high energy estimate losing a power of 2, see [18,Theorem 5.5], which in turn is caused by the same loss for high energy estimates at normally hyperbolic trapping, see [
|σ|^{−1} σ_{b,1}( (1/2i)(□ − □^*) ) < ν_min/2    (2.5)
at the trapped set Γ, 9 where ν min is the minimal normal expansion rate of the Hamilton flow at the trapping, see [14] and the computation below. Here, the adjoint is taken with respect to a positive definite inner product on T k ; note that the inner product induced by g, with respect to which is of course symmetric, is not positive definite, except when k = 0, i.e. for the scalar wave equation. Since g is close to the Schwarzschild-de Sitter metric, it suffices (by the dynamical stability of the trapping) to obtain such a bound for the Schwarzschild-de Sitter metric g 0 . While this bound is impossible to obtain directly for the full range of Schwarzschildde Sitter spacetimes, we show in Section 4.2 how it can be obtained if we use pseudodifferential products. Prosaically, this means that we consider a conjugated operator P :
= Q Q − , where Q ∈ Ψ 0 b (M , T k )
is elliptic with parametrix Q − , and for any > 0, we can arrange |σ| −1 σ b,1 ( 1 2i (P − P * )) < (with the adjoint taken relative to an ordinary positive definite inner product on T k ), thus (2.5) holds for replaced by P ; we will prove this in Theorem 4.8. Hence [14, Theorem 1] applies to P , establishing a spectral gap; indeed, by the remark following [14, Theorem 1], Dyatlov's result applies for operators on bundles as well, as soon as one establishes (2.5). Arranging (2.5) in a natural fashion lies at the heart of Sections 3 and 4.
It remains to establish the r-normal hyperbolicity for all r for the Schwarzschildde Sitter metric. The dynamics at the trapping only depend on properties of the (scalar!) principal symbol g 0 of . For easier comparison with [12,27,29], we consider the operator P = −r 2 instead. We take the Fourier transform in −t and rescale to a semiclassical operator on X (this amounts to multiplying P by h 2 , giving a second order semiclassical differential operator P h , with h = |σ| −1 , and we then define z = hσ). Introducing coordinates on T * X by writing 1-forms as ξ dr + η dω, and letting 8 As shown by Vasy [27, §7], these estimates in Im σ 0 are automatic if the boundary defining function of future infinity is timelike; our choice does not satisfy this, but changing t * by a smooth function of the spatial variables, this can easily be arranged, see [27, §6], and in fact we can arrange t * = t away from the black hole and cosmological horizons. 9 We work in the b-setting, which via the Mellin transform is equivalent (on the normal operator level, which is all that matters) to the semiclassical setting considered in Dyatlov's work, see the discussion in [18, §5]. P h has semiclassical principal symbol
p = ∆_r ξ^2 − (r^4/∆_r) z^2 + |η|^2,   where ∆_r = r^2 µ = r^2(1 − λ r^2) − 2 M_• r^{5−n},
and correspondingly the Hamilton vector field is
H_p = 2 ∆_r ξ ∂_r − ( ∂_r∆_r ξ^2 − ∂_r(r^4/∆_r) z^2 ) ∂_ξ + H_{|η|^2}.
We work with real z, hence z = ±1. First, we locate the trapped set:
If H p r = 2∆ r ξ = 0, then ξ = 0, in which case H 2 p r = 2∆ r H p ξ = 2∆ r ∂ r (r 4 /∆ r )z 2 .
Recall the definition of the function µ = µ/r 2 = ∆ r /r 4 , then we can rewrite this as
H 2 p r = −2∆ r µ −2 (∂ r µ)z 2 .
We have already seen that ∂ r µ has a single root r p ∈ (r − , r + ), and (r − r p )∂ r µ < 0 for r = r p . Therefore, H 2 p r = 0 implies (still assuming H p r = 0) r = r p . We rephrase this to show that the only trapping occurs in the cotangent bundle over r = r p :
Let F (r) = (r − r p ) 2 , then H p F = 2(r − r p )H p r and H 2 p F = 2(H p r) 2 + 2(r − r p )H 2 p r. Thus, if H p F = 0, then either r = r p , in which case H 2 p F = 2(H p r) 2 > 0 unless H p r = 0, or H p r = 0, in which case H 2 p F = 2(r − r p )H 2 p r > 0 unless r = r p . So H p F = 0, p = 0 implies either H 2 p F > 0 or r = r p , H p r = 0, i.e. (r, ω; ξ, η) ∈ Γ := (r p , ω; 0, η) : r 4 ∆ r z 2 = |η| 2 ,
so Γ is the only trapping in T * X, and F is an escape function. We compute the linearization of the H p -flow at Γ in the normal coordinates r − r p and ξ, to wit
H p r − r p ξ = 0 2r 4 p µ| r=rp 2(n − 3)r −4 p ( µ| r=rp ) −2 z 2 0 r − r p ξ + O(|r − r p | 2 + |ξ| 2 ), where we used ∂ rr µ| r=rp = −2(n − 3)r −4 p , which gives ∂ r µ = −2(n − 3)r −4 p (r − r p ) + O(|r − r p | 2 )
. The eigenvalues of the linearization are therefore
±2r p n − 1 1 − n−1 n−3 r 2 p λ 1/2
, which reduces to the expression given in [27, p. 85] in the case n = 4, where r p = 3M • = 3 2 r s with r s = 2M • , and λ = Λ/3. In particular, the minimal expansion rate for the semiclassical rescaling of at the trapping Γ is
ν min = 2r −1 p n − 1 1 − n−1 n−3 r 2 p λ 1/2 > 0.
The expansion rate of the flow within the trapped set is 0 by spherical symmetry; note that integral curves of H p on Γ are simply unit speed geodesics of the round unit sphere S n−2 . This shows the normal hyperbolicity (in fact, r-normal hyperbolicity for every r) of the trapping and finishes the proof of Theorem 2.1.
For later reference, we note that the spacetime trapped set, i.e. the set of points in phase space that never escape through either horizon along the Hamilton flow, is given by Γ = {(t, r = r p , ω; σ, ξ = 0, η) : σ 2 = Ψ 2 |η| 2 }, (2.6) where Ψ = αr −1 , Ψ (r p ) = 0.
Pseudodifferential inner products
We now develop a general theory of pseudodifferential inner products, which we apply to the setting of Theorem 2.1 in Section 4.
We work on a complex rank N vector bundle E over the smooth compact ndimensional manifold X without boundary. We will define pseudodifferential inner products on E, which are inner products depending on the position in phase space T * X, rather than merely the position in the base X. As indicated in the introduction, we achieve this by replacing ordinary inner products by pseudodifferential operators whose symbols are inner products on the bundle π * E → T * X \ 0, where π : T * X \ 0 → X is the projection.
3.1. Notation. Let V be a complex N -dimensional vector space. We denote by V the complex conjugate of V, i.e. V = V as sets, and the identity map ι : V → V is antilinear, so ι(λv) = λι(v) for v ∈ V, λ ∈ C, which defines the linear structure on V. 10 A Hermitian inner product H on V is thus a linear map H : V ⊗ V → C such that H(u, ι(v)) = H(v, ι(u)) for u, v ∈ V, and H(u, ι(u)) > 0 for all nonzero u ∈ V. This can be rephrased this in terms of the linear map B : V → V * defined by B(u) = H(u, ·) and the natural dual pairing of V * with V, namely
Bu, ι(v) = Bv, ι(u) , and Bu, ι(u) > 0 for u ∈ V non-zero. A map A : V → V * has a transpose A T : V → V * , which satisfies Au, ι(v) = u, A T ι(v) for all u, v ∈ V, and an adjoint A * : V → V * satisfying Au, ι(v) = A * v, ι(u)
. Concretely, defining the antilinear map
j : V * → V * , j( ), ι(v) = , v ,
we have A * = jA T ι. The symmetry of a Hermitian inner product B as above is simply expressed by B = B * . Similarly, a map P : V → V has a transpose P T : V * → V * and an adjoint P * : V * → V * defined by , ι(P v) = P * , ι(v)
for ∈ V * and v ∈ V, and one easily finds P * = jP T j −1 . We point out that the definitions of adjoints of maps A : V → V * and P : V → V are compatible in the sense that (AP ) * = P * A * . Furthermore, if B : V → V * is a Hermitian inner product and Q : V → V is invertible, then B 1 = Q * BQ defines another Hermitian inner product, B 1 u, ι(v) = BQu, ι(Qv) . Now, given an inner product B on V and any map P : V → V, the adjoint P * B of P with respect to B is the unique map P * B : V → V such that BP u, ι(v) = Bu, ι(P * B v) for all u, v ∈ V. We find a formula for P * B by computing
BP u, ι(v) = B * (B * ) −1 P * B * v, ι(u) = Bu, ι((BP B −1 ) * v) , i.e. P * B = (BP B −1 ) * = B −1 P * B.
The self-adjointness of P with respect to B is thus expressed by the equality P = B −1 P * B.
If E is a complex rank N vector bundle, we can similarly define the complex conjugate bundle E as well as adjoints of vector bundle maps E → E and E → E * . We can also define adjoints of pseudodifferential operators mapping between these bundles: For convenience, we remove the dependence of adjoints on a volume density on X by tensoring all bundles with the half-density bundle Ω 1 2 over X, and 10 We prefer to write ι(v) rather than v to prevent possible confusion with taking complex conjugates in complexifications of real vector spaces.
we have a natural pairing
(E * ⊗ Ω 1 2 ) x × (E ⊗ Ω 1 2 ) x ( , ι(v)) → , ι(v) ∈ Ω 1
x , x ∈ X, likewise for the complex conjugate of E. Thus, an operator A ∈ Ψ m (X, E ⊗Ω
1 2 , E * ⊗ Ω 1 2 ) has an adjoint A * ∈ Ψ m (X, E ⊗ Ω 1 2 , E * ⊗ Ω 1 2 ) defined by X A * u, ι(v) = X Av, ι(u) ,
with principal symbol σ m (A * ) = σ m (A) * ∈ S m (T * X \ 0, π * Hom(E, E * )), and like-
wise P ∈ Ψ m (X, E ⊗ Ω 1 2 ) has an adjoint P * ∈ Ψ m (X, E * ⊗ Ω 1 2 ) with σ m (P * ) = σ m (P ) * .
3.2.
Definition of pseudodifferential inner products; adjoints. We work with classical, i.e. one-step polyhomogeneous, symbols and operators, and denote by S m hom (T * X \ 0) symbols which are homogeneous of degree m with respect to dilations in the fibers of T * X \ 0.
Definition 3.1. A pseudodifferential inner product (or Ψ-inner product) on the vector bundle E → X is a pseudodifferential operator B ∈ Ψ 0 (X; E ⊗ Ω
(B) = b ∈ S 0 hom (T * X \ 0; π * Hom(E, E * )) of B satisfies b(x, ξ)u, ι(u) > 0 (3.1)
for all non-zero u ∈ E x , where x ∈ X, ξ ∈ T * x X \ 0. If the context is clear, we will also call the sesquilinear pairing
C ∞ (X, E ⊗ Ω 1 2 ) × C ∞ (X, E ⊗ Ω 1 2 ) (u, v) → X B(x, D)u, ι(v)
the pseudodifferential inner product associated with B.
In particular, the principal symbol b of B is a Hermitian inner product on π * E. Conversely, for any b ∈ S 0 hom (T * X \ 0; π * Hom(E, E * )) satisfying b = b * and (3.1), there exists a Ψ-inner product B with σ 0 (B) = b; indeed, simply take B to be any quantization of b and put B = 1 2 ( B + B * ). Remark 3.2. While we will develop the theory of Ψ-inner products only in the standard calculus on a closed manifold, everything works mutatis mutandis in other settings as well. Thus, in the b-calculus of Melrose [23], Ψ b -inner products on a manifold with boundary are defined similarly to Ψ-inner products, except that adjoints are defined on the spaceĊ ∞ of functions vanishing to infinite order at the boundary, and the space of 'trivial', smoothing operators is now Ψ −∞ b , likewise for the scattering calculus [24], replacing 'b' by 'sc'. In the semiclassical calculus on a closed manifold, adjoints are again defined on C ∞ , but the space of 'trivial' operators is now h ∞ Ψ −∞ , and suitable factors of h need to be put in for computations involving subprincipal symbols.
We next discuss adjoints of ΨDOs relative to Ψ-inner products.
operator R ∈ Ψ −∞ (X, E ⊗ Ω 1 2 , E * ⊗ Ω 1 2 ) such that BP u, ι(v) = Bu, ι(P * B v) + Ru, ι(v) (3.2) for all u, v ∈ C ∞ (X, E ⊗ Ω 1 2 ).
Remark 3.4. This definition and the following lemma have straightforward generalizations to the case that P maps section of E into sections of another vector bundle F, provided a (Ψ-)inner product on F is given.
Lemma 3.5. In the notation of Definition 3.3, the adjoint of P with respect to B exists and is uniquely determined modulo Ψ −∞ (X, E ⊗Ω
= I − B − B ∈ Ψ −∞ (X, E ⊗ Ω 1 2 ). Then BP u, ι(v) = BP B − Bu, ι(v) + BP R L u, ι(v) ,
hence (3.2) holds with P * B = (BP B − ) * and R = BP R L . To show the uniqueness of P * B modulo smoothing operators, suppose that P is another adjoint of P with respect to B, with error term R (i.e. (3.2) holds with P * B and R replaced by P and R). Then
B(P * B − P )v, ι(u) = Bu, ι((P * B − P )v) = ( R − R)u, ι(v) = ( R − R) * v, ι(u) for u, v ∈ C ∞ (X, E ⊗ Ω 1 2 ), so B(P * B − P ) = ( R − R) * ∈ Ψ −∞ (X, E ⊗ Ω 1 2 , E * ⊗ Ω 1
2 ), and the ellipticity of B implies P * B − P ∈ Ψ −∞ (X, E ⊗ Ω 1 2 ), as claimed. Since B is self-adjoint, we can assume that B − is self-adjoint by replacing it by
b = σ 0 (B), i.e. b(x, ξ)p(x, ξ)u, ι(v) = b(x, ξ)u, ι(p(x, ξ)v) , x ∈ X, ξ ∈ T x X, u, v ∈ E x .
Proof. The hypothesis on P means (BP B − ) * = P modulo Ψ −∞ , thus on the level of principal symbols, p = b −1 p * b = p * b , which proves the claim.
We now specialize to the case that P ∈ Ψ m (X, E ⊗Ω 1 2 ) has a real, scalar principal symbol. Fix a coordinate system of X and a local trivialization of E, then the full symbol of P is a sum of homogeneous symbols p ∼ p m + p m−1 + . . ., with p j homogeneous of degree j and valued in complex N × N matrices. Recall from [21, §18] that the subprincipal symbol
σ sub (P ) = p m−1 (x, ξ) − 1 2i j ∂ xj ξj p m (x, ξ) ∈ S m−1 hom (T * X \ 0, C N ×N ) (3.3)
is well-defined under changes of coordinates; however, it does depend on the choice of local trivialization of E. We compute the principal symbol of
Im B P := 1 2i (P − P * B )
for such P in a local trivialization of E; we will give an invariant formulation in Proposition 3.10 below.
Lemma 3.7. Let P ∈ Ψ m (X, E ⊗ Ω 1 2 ) be a principally real and scalar, and let B = b(x, D) be a Ψ-inner product on E. Then Im B P ∈ Ψ m−1 (X, E ⊗ Ω 1 2 ) has the principal symbol
σ m−1 (Im B P ) = Im b σ sub (P ) + 1 2 b −1 H p (b),(3.
4)
where Im b σ sub (P ) = 1 2i σ sub (P ) − σ sub (P ) * b . Here, we interpret b and σ sub (P ) as N × N matrices of scalar-valued symbols using a local frame of E and the corresponding dual frame of E * , and the action of H p is component-wise.
Proof. We compute in a local coordinate system over which E and E are trivialized by a choice of N linearly independent sections e 1 , . . . , e N , and E * and E * are trivialized by the dual sections e * 1 , . . . , e * N ∈ E * satisfying e * i (e j ) = δ ij , extended linearly as linear functionals on E, resp. on E, in the case of E * , resp. E * .
We trivialize Ω 1 2 using the section |dx|
b(x, ξ)u, ι(v) = ij b ij (x, ξ)u j · v i |dx|, thus Bu, ι(v) = ij (b ij (x, D)u j ) · v j dx.
Note that b(x, ξ) is a Hermitian matrix, i.e. b ij (x, ξ) = b ji (x, ξ), and in fact B = b(x, D) is self-adjoint (with respect to the standard Hermitian inner product on C N ). The adjoint of P = p(x, D), which in local coordinates is simply an N × N matrix of scalar ΨDOs, with respect to B is the operator P = p(x, D) such that by Lemma 3.5. Write p(x, ξ) = p m (x, ξ) + p m−1 (x, ξ) + . . ., then the full symbol of P − P = B − (BP −P * B) (where P * is the adjoint of P with respect to the standard Hermitian inner product on C N ) is given, modulo S m−2 , by
b(x, D)p(x, D)u · v dx = b(x, D)u · p(x, D)v dx + Ru · v dx, R ∈ Ψ −∞ .b −1 bp m + 1 i j ∂ ξj b∂ xj p m + bp m−1 − p * m b − 1 i j (∂ xj ξj p * m )b − 1 i j ∂ ξj p * m ∂ xj b − p * m−1 b = p m−1 − 1 2i j ∂ xj ξj p m − b −1 p m−1 − 1 2i j ∂ xj ξj p m * b + ib −1 H pm (b),
where we used that p m is scalar and real. The claim follows.
3.3.
Invariant formalism for subprincipal symbols of operators acting on bundles. We continue to denote by P ∈ Ψ m (X, E ⊗ Ω 1 2 ) a principally scalar ΨDO acting on the vector bundle E, with principal symbol p. 11 We will show how to modify the definition (3.3) of the subprincipal symbol of P , expressed in terms of a local trivialization of E, in an invariant fashion, i.e. in a way that is both independent of the choice of local trivialization and of local coordinates on X. This provides a completely invariant formulation of Lemma 3.7.
Let U ⊂ X be an open subset over which E is trivial, and pick a frame e(x) = {e 1 (x), . . . , e N (x)} trivializing E over U . Let us write P e for P in the frame e, i.e. P e = (P e jk ) j,k=1,...,N is the N × N matrix of operators P e jk ∈ Ψ m (U, Ω 1 2 ) defined by P ( k u k (x)e k (x)) = jk P e jk (u k )e j (x), u k ∈ C ∞ (U, Ω 1 2 ). Then σ e sub (P ) as defined in (3.3), with the superscript making the choice of frame explicit, is simply an N × N matrix of scalar symbols: σ e sub (P ) = (σ sub (P e jk )) j,k=1,...,N . We will consider the effect of a change of frame on the subprincipal symbol (3.3). Thus, let C ∈ C ∞ (U, End(E)) be a change of frame, i.e. C(x) is invertible for all x ∈ X. Then e j (x) = C(x)e j (x) defines another frame e (x) = {e 1 (x), . . . , e N (x)} of E over U . One easily computes σ e sub (C −1 P C) = (C e ) −1 σ e sub (P )C e − i(C e ) −1 H p (C e ), with H p interpreted as the diagonal N ×N matrix 1 N ×N H p of first order differential operators, and C e is the matrix of C in the frame e . Now note that (C −1 P C) e = P e and (C e ) −1 H p (C e ) = (C e ) −1 H p C e − H p ; thus, we obtain
σ e sub (P ) − iH p = (C e ) −1 σ e sub (P ) − iH p C e (3.5)
Thus, viewing σ e sub (P ) − iH p as the N × N matrix (in the frame e ) of a differential operator acting on C ∞ (T * X \ 0, π * E), the right hand side of (3.5) is the matrix of the same differential operator, but expressed in the frame e. Notice that the principal symbol p of P as a scalar, i.e. diagonal, N × N matrix of symbols, is well-defined independently of the choice of frame. To summarize: Definition 3.8. For P ∈ Ψ m (X, E ⊗ Ω 1 2 ) with scalar principal symbol p, there is a well-defined subprincipal operator S sub (P ) ∈ Diff 1 (T * X \ 0, π * E), homogeneous of degree m − 1 with respect to dilations in the fibers of T * X \ 0, defined as follows: If {e 1 (x), . . . , e N (x)} is a local frame of E, define the operators P jk ∈ Ψ m (X, Ω
1 2 ) by P ( k u k (x)e k (x)) = jk P jk (u k )e j (x), u k ∈ C ∞ (X, Ω 1 2 ). Then S sub (P ) k q k (x, ξ)e k (x) := jk (σ sub (P jk )q k )e j − i k (H p q k )e k .
In shorthand notation, S sub (P ) = σ sub (P ) − iH p , understood in a local frame as a matrix of first order differential operators. We emphasize the dependence on the order of the operator by writing S sub,m (P ), so that for P ∈ Ψ m (X, E ⊗ Ω 1 2 ), we have S sub,m+1 (P ) = σ m (P ).
We shall compute the subprincipal operator of the Laplace-Beltrami operator acting on sections of the tensor bundle in Section 4.
P ∈ Ψ m b (X, E ⊗ Ω 1 2 b ) acting on E-valued b-half-densities is an element of Diff 1 b ( b T * X \ 0, π * b E), where π b : b T * X \ 0 → X is the projection. In the semiclassical setting, P ∈ Ψ m (X, E ⊗ Ω 1 2 ), we have S sub (P ) ∈ Diff 1 (T * X, π * E).
We can now express the symbols of commutators and imaginary parts in a completely invariant fashion:
Proposition 3.10. Let P ∈ Ψ m (X, E ⊗ Ω 1 2
) be a ΨDO with scalar principal symbol p.
(1) Suppose Q ∈ Ψ m (X, E ⊗Ω If Q is elliptic with parametrix Q − , then S sub (QP Q − ) = qS sub (P )q −1 .
(3.6)
(2) Suppose in addition that p is real. Let B be a Ψ-inner product on E with principal symbol b, then
σ m−1 (Im B P ) = Im b S sub (P ), (3.7)
where Im b S sub (P ) = 1 2i S sub (P ) − S sub (P ) * b ; we take the adjoint of the differential operator S sub (P ) with respect to the inner product b on π * E and the symplectic volume density on T * X.
Proof. We verify this in a local frame e(x) = {e 1 (x), . . . , e N (x)} of E. We compute
S sub (P ) jk q jk (x, ξ)u k (x, ξ)e j (x) = j k σ sub (P ) jk q k − iH p (q j ) u e j − iq j H p (u )e j − iq j u e j H p , while qS sub (P ) u (x, ξ)e (x) = j k q jk σ sub (P ) k u e j − iq j H p (u )e j − iq j u e j H p ,
hence S sub (P )q − qS sub (P ) = [σ sub (P ), q] − iH p (q) as an endomorphism (a zeroth order differential operator acting on sections of E) of E in the frame e, which equals = qS sub,m (P )q −1 ,
noting that Q[P, Q − ] is of order m − 1. For the second part, we have S sub (P ) * b = σ sub (P ) * b − (iH p ) * b = b −1 σ sub (P ) * b + ib −1 (H p ) * b,
where (H p ) * is the adjoint of H p as an operator acting on C ∞ c (T * X \ 0), and we equip T * X with the natural symplectic volume density |dx dξ|. We have (H p ) * = −Hp = −H p since p is real. Therefore,
S sub (P ) − S sub (P ) * b = σ sub (P ) − σ sub (P ) * b − iH p + ib −1 H p b = σ sub (P ) − σ sub (P ) * b + ib −1 H p (b),
which indeed gives (3.4) upon division by 2i.
In particular, (3.7) provides a very elegant point of view for understanding the imaginary part of a principally scalar and real (pseudo)differential operator with respect to a Ψ-inner product B, as already indicated in the introduction: For instance, the principal symbol of the imaginary part Im B P vanishes (or is small relative to b = σ 0 (B)) in a subset of phase space if and only if the imaginary part of the first order differential operator S sub (P ) on T * X \ 0 has vanishing (or small with respect to the fiber inner product b of π * E) coefficients in this subset.
3.4.
Interpretation of pseudodifferential inner products in traditional terms. We now show how to interpret the imaginary part Im B P of an operator P with respect to a Ψ-inner product B in terms of the imaginary part of a conjugated version of P with respect to a standard inner product: Proposition 3.11. Let B be a Ψ-inner product on E. Then for any positive definite Hermitian inner product B 0 ∈ C ∞ (X, Hom(E ⊗ Ω 1 2 , E * ⊗ Ω 1 2 )) on E, there exists an elliptic operator Q ∈ Ψ 0 (X, End(E ⊗Ω
1 2 )) such that B−Q * B 0 Q ∈ Ψ −∞ (X, Hom(E ⊗ Ω 1 2 , E * ⊗ Ω 1
2 )). In particular, denoting by Q − ∈ Ψ 0 (X, End(E ⊗Ω 1 2 )) a parametrix of Q, we have for any P ∈ Ψ m (X, E ⊗ Ω 1 2 ) with real and scalar principal symbol: 12 Q(Im B P )Q − = Im B0 (QP Q − ), (3.8) and σ m−1 (Im B P ) and σ m−1 (Im B0 (QP Q − )) (which are self-adjoint with respect to σ 0 (B) and B 0 , respectively, hence diagonalizable) have the same eigenvalues.
Proof. In order to shorten the notation, fix a global trivialization of Ω 1 2 over X and use it to identify E ⊗ Ω 0, π * Hom(E, E * )). We similarly put b 0 := B 0 , which is an inner product on π * E that only depends on the base point. We start with on the symbolic level by constructing an elliptic symbol q 1 ∈ S 0 hom (T * X \ 0, π * End(E)) such that b = q * 1 b 0 q 1 ; recall that q * 1 ∈ S 0 hom (T * X \ 0, π * End(E * )). For t ∈ [0, 1], define the Hermitian inner product b t := (1−t)b 0 +tb.
We will construct a differentiable family q t of symbols such that b t = q * t b 0 q t for t ∈ [0, 1]. Observe that for any such family, we have
∂ t b t = b − b 0 = (∂ t q t ) * b 0 q t + q * t b 0 ∂ t q t , which suggests requiring ∂ t q t = 1 2 b −1 0 (q * t ) −1 (b − b 0 )
, which we can write as a linear expression in q t by noting that (q * t ) −1 = b 0 q t b −1 t . Moreover, q 0 = id is a valid choice for q t at t = 0. Thus, we are led to define q t , t ∈ [0, 1], as the solution of the ODE
∂ t q t = 1 2 q t b −1 t (b − b 0 ), q 0 = id .
Reversing these arguments, for the solution q t we then have q * t b 0 q t = b t for t = 0, and both q * t b 0 q t and b t are solutions of the same ODE, namely
∂ t b t = 1 2 (b − b 0 )b −1 t b t + b t b −1 t (b − b 0 ) , b 0 = b 0 , hence q * t b 0 q t = b t for all t ∈ [0, 1]. Let Q 1 ∈ Ψ 0 (X, End(E)) be a quantization of q 1 , then we conclude that B − Q * 1 B 0 Q 1 ∈ Ψ −1 .
We iteratively remove this error to obtain a smoothing error:
Suppose Q k ∈ Ψ 0 (X, End(E)) is such that B − Q * k B 0 Q k ∈ Ψ −k for some k ≥ 1. We will find D k ∈ Ψ −k , a quantization of d k ∈ S −k hom (T * X \ 0, π * E), such that Q k+1 := Q k + D k satisfies B − Q * k+1 B 0 Q k+1 ∈ Ψ −k−1 .
This is equivalent to the equality of symbols
r k := σ −k (B − Q * k B 0 Q k ) = σ −k (D * k B 0 Q k + Q * k B 0 D k ) = d * k b 0 q 1 + (b 0 q 1 ) * d k , which in view of r * k = r k is satisfied for d k = 1 2 ((b 0 q 1 ) * ) −1 r k .
We define Q ∈ Ψ 0 (X, End(E)) to be the asymptotic limit of the Q k as k → ∞, i.e. Q ∼ Q 1 + ∞ k=1 D k , which thus satisfies B − Q * B 0 Q ∈ Ψ −∞ . This proves the first part of the proposition.
For the second part, denote parametrices of B and Q by B − and Q − , respectively. Then, modulo operators in Ψ −∞ , we have
P * B = (BP B − ) * = (Q * B 0 QP Q − B −1 0 (Q − ) * ) * = Q − (QP Q − ) * B0 Q, hence Q(P − P * B )Q − = (QP Q − ) − (QP Q − ) * B0 modulo Ψ −∞ . 3.5. A simple example. On R n x = R x1 × R n−1 x , we consider the operator P = D x1 + A ∈ Ψ 1 (R n , C N ), where A = A(x, D) ∈ Ψ 0 (R n , C N ) is independent of x 1 .
Trivializing the half-density bundle over R n via |dx| 1 2 , we can consider P as an operator in Ψ 1 (R n , C N ⊗ Ω 1 2 ). Its principal symbol is σ 1 (P )(x, ξ) = ξ 1 , where we use the standard coordinates on T * R n , i.e. writing covectors as ξ dx, so the Hamilton vector field is H σ1(P ) = ∂ x1 ; moreover, in the trivialization of C N by means of its standard basis, σ sub (P )(x, ξ) = A(x, ξ). Thus, the subprincipal operator of P is
S sub (P )(x, ξ) = A(x, ξ) − i∂ x1 ∈ Diff 1 (T * R n \ 0, π * C N ),
with A homogeneous of degree 0 in the fiber variables. Suppose we are interested in bounding 1 2i (P − P * ) on Z := T * {x =0} R n \ 0 relative to a suitably chosen inner product. Let us assume that A(0, ξ) is nilpotent for all |ξ| = 1, and that in fact at x = 0 and |ξ| = 1, we can choose a smooth frame e 1 (ξ), . . . , e N (ξ) of the bundle π * C N → T * R n \ 0 so that A(0, ξ), written in the basis e 1 (ξ), . . . , e N (ξ), is a single Jordan block with zeros on the diagonal and ones directly above. Extend the e j by homogeneity (of degree 0) in the fiber variables, and define them to be constant in the x 1 -direction along Z, i.e. e j (x 1 , 0; ξ) = e j (0, 0; ξ), and extend them in an arbitrary manner to a neighborhood of Z. Now, on Z we have Ae j = e j−1 , writing e 0 := 0. Introduce a new frame e j := j e j with > 0 fixed, then Ae j = e j−1 . Define the inner product b on π * C N by b(x, ξ)(e i (x, ξ)), ι(e j (x, ξ)) = δ ij , that is, {e 1 , . . . , e N } is an orthonormal frame for b. Then on Z, we find that Im b S sub (P ) (which is of order 0) in the frame {e 1 , . . . , e N } is given by the matrix which is zero apart from entries /2i directly above and − /2i directly below the diagonal. Thus, defining the Ψ-inner product B = b(x, D), we have arranged that σ 0 (Im B P )(x, ξ) b ≤ on Z. Since σ 0 (Im B P ) is self-adjoint with respect to b, this is really the statement that its eigenvalues are bounded from above and below by and − , respectively.
Using Proposition 3.11, we can rephrase this as follows: If v j denotes the standard basis of C N and B 0 (v i ), ι(v j ) = δ ij the standard inner product on C N (the particular choice of an ordinary inner product being irrelevant, see the statement of Proposition 3.11), define the map q(x, ξ) ∈ S 0 hom (T * R n \ 0, π * C N ) by q(x, ξ)e j (x, ξ) = v j . Let Q = q(x, D) and denote by Q − a parametrix of Q, then we find that QP Q − ∈ Ψ 1 (R n , C N ) satisfies σ 0 (Im B0 QP Q − ) B0 ≤ .
If A has several Jordan blocks not all of which are nilpotent, one can (under the assumption of the existence of a smooth family of Jordan bases) similarly construct a Ψ-inner product so that the imaginary part of A relative to it is bounded by the maximal imaginary part of the eigenvalues of A (plus ) from above, and by the minimal imaginary part (minus ) from below.
Subprincipal operators of tensor Laplacians
Let (M, g) be a smooth manifold equipped with a metric tensor g of arbitrary signature. Denote by T k M = k T * M , k ≥ 1, the bundle of (covariant) tensors of rank k on M . The metric g induces a metric (which we also call g) on T k M . We study the symbolic properties of ∆ k = − tr ∇ 2 ∈ Diff 2 (M, T k M ), the Laplace-Beltrami operator on M acting on the bundle T k M . Denote by G ∈ C ∞ (T * M ) the metric function, i.e. G(x, ξ) = |ξ| 2 G(x) , where G is the dual metric of g.
Proposition 4.1. The subprincipal operator of ∆ k is S sub (∆ k )(x, ξ) = −i∇ π * T k M H G ∈ Diff 1 (T * M \ 0, π * T k M ), (4.1)
where ∇ π * T k M is the pullback connection, with π : T * M \ 0 → M being the projection.
Proof. Since both sides of (4.1) are invariantly defined, it suffices to prove the equality in an arbitrary local coordinate system. At a fixed point x 0 ∈ M , introduce normal coordinates so that ∂ k g ij = 0 at x 0 . Then we schematically have
(∆ k u) i1...i k = −g jk u i1...i k ,jk = −g jk (∂ k u i1...i k ,j + Γ · ∂u) = −g jk ∂ jk u i1...i k + ∂(Γ · u) + Γ · ∂u = −g jk ∂ jk u i1...i k + Γ · ∂u + ∂Γ · u,
with Γ denoting Christoffel symbols. This suffices to see that the full symbol of ∆ k in the local coordinate system is given by
σ(∆ k )(x, ξ) = g jk (x)ξ j ξ k + (x j − x j 0 ) j (x, ξ) + e(x), where j (x, ξ) is a linear map in ξ with values in End((T k M ) x ), and e(x) is an endomorphism of (T k M ) x . Therefore, σ sub (∆ k )(x 0 , ξ) = 0, since ∂ i g jk (x 0 ) = 0. Thus, S sub (∆ k )(x 0 , ξ) = −iH |ξ| 2 g = −2ig jk ξ k ∂ x j . (4.2)
We now compute the right hand side of (4.1). First, writing dx I = dx i1 ⊗ · · · ⊗ dx i k for multiindices I = (i 1 , . . . , i k ), we note that sections of π * T k M are of the form u I (x, ξ) dx I , while pullbacks (under π) of sections of T k M are of the form u I (x) dx I . By definition, the pullback connection ∇ π * T k M is given by
∇ π * T k M ∂ x j (u I (x) dx I ) = ∇ T k M ∂ x j (u I (x) dx I ), ∇ π * T k M ∂ ξ k (u I (x) dx I ) = 0
on pulled back sections and extended to sections of the pullback bundle using the Leibniz rule; thus,
∇ π * T k M ∂ x j (u I (x, ξ) dx I ) = ∇ T k M ∂ x j (u I (·, ξ) dx I )(x), ∇ π * T k M ∂ ξ k (u I (x, ξ) dx I ) = ∂ ξ k u I (x, ξ) dx I . Thus, in normal coordinates at x 0 ∈ M , we simply have ∇ π * T k M ∂ x j = ∂ x j and ∇ π * T k M ∂ ξ k = ∂ ξ k , therefore ∇ π * T k M H |ξ| 2 g = 2g jk ξ k ∂ x j
at x 0 , which verifies (4.1) in view of (4.2).
To simplify the study of the pullback connection on π * T k M for general k, we observe that there is a canonical bundle isomorphism π * T k M ∼ = k π * T * M ; hence the connection ∇ π * T k M is simply the product connection on k π * T * M . Therefore, if we understand certain properties of S sub (∆ 1 ), we can easily deduce them for S sub (∆ k ) for any k. In our application, we will need to choose a positive definite pseudodifferential inner product B k = b k (x, D) on the bundle T k M with respect to which ∆ k is arbitrarily close to being symmetric in certain subsets of phase space. Concretely, this means that we want the operator S sub (∆ k ) to be (almost) symmetric with respect to the inner product b k on π * T k M . The following lemma shows that it suffices to accomplish this for k = 1:
Lemma 4.2. Let U ⊂ T * M \ 0
be open, and let f ∈ C ∞ (U ) be real-valued. Fix a Hermitian inner product b (antilinear in the second slot) on π * T * M , and define R ∈ End(π * T * M ) by requiring that
U i∇ π * T * M H f u, v b dσ − U u, i∇ π * T * M H f v b dσ = U u, Rv b dσ for all u, v ∈ C ∞ c (U, π * T * M ),
where dσ is the natural symplectic volume density on T * M . There exists a constant C k > 0, independent of U, f and b, such that the following holds: If sup U R b ≤ (using b to measure the operator norm of R acting on each fiber) for some > 0, then the inner product
b k = k b induced by b on k π * T * M ∼ = π * T k M satisfies U i∇ π * T k M H f u, v b k dσ − U u, i∇ π * T k M H f v b k dσ = U u, R k v b k dσ, u, v ∈ C ∞ c (U, π * T k M ), for R k ∈ End(π * T k M ) satisfying sup U R k b k ≤ k . Proof.
We show this for k = 2, the proof for general k being entirely analogous.
Denote S = i∇ π * T * M H f , then S 2 = i∇ π * T2M
H f acts by S 2 (u 1 ⊗u 2 ) = Su 1 ⊗u 2 +u 1 ⊗Su 2 . Hence using S(au) = aSu + iH f (a)u for sections u of π * T * M and functions a on U , we calculate
U S 2 (u 1 ⊗ u 2 ), v 1 ⊗ v 2 b2 dσ = U Su 1 , v 1 b u 2 , v 2 b + u 1 , v 1 b Su 2 , v 2 b dσ = U u 1 , S v 1 u 2 , v 2 b b + U u 2 , S v 2 u 1 , v 1 b b dσ + U u 1 ⊗ u 2 , (R ⊗ id + id ⊗R)(v 1 ⊗ v 2 ) b2 dσ = U u 1 ⊗ u 2 , S 2 (v 1 ⊗ v 2 ) b2 dσ − i U H f ( u 1 , v 1 b u 2 , v 2 b ) dσ + U u 1 ⊗ u 2 , R 2 (v 1 ⊗ v 2 ) b2 dσ = U u 1 ⊗ u 2 , S 2 (v 1 ⊗ v 2 ) b2 dσ + U u 1 ⊗ u 2 , R 2 (v 1 ⊗ v 2 ) b2 dσ with R 2 = R ⊗ id + id ⊗R, where we used that U H f u dσ = − U uH f 1 dσ = 0 for u ∈ C ∞ c (U )
. From the explicit form of R 2 , we see that R 2 b2 ≤ 2 indeed. 4.1. Warped product spacetimes. Let X be an (n − 1)-dimensional manifold equipped with a smooth Riemannian metric h = h(x, dx), and let α ∈ C ∞ (X) be a positive function. We consider the manifold M = R t × X, equipped with the Lorentzian metric g = α 2 dt 2 − h. (4.3) On such a spacetime, we have a natural splitting of 1-forms into their tangential and normal part relative to α dt, i.e.
u = u T + u N α dt. (4.4)
In this section, we will compute the form of ∇ π * T * M H G as a 2 × 2 matrix of differential operators with respect to this decomposition. For brevity, we will use the notation ∇ M := ∇ π * T * M , similarly ∇ X := ∇ π * T * X , and we will moreover use the abstract index notation, fixing x 0 = t, and x = (x 1 , . . . , x n−1 ) are coordinates on X (independent of t). We let Greek indices µ, ν, λ, . . . run from 0 to n − 1, Latin indices i, j, k, . . . from 1 to n − 1. Moreover, the canonical dual variables 13 ξ 0 =: σ and ξ = (ξ 1 , . . . , ξ n−1 ) on the fibers of T * M are indexed by decorated Greek indices µ (running from 0 to n − 1) and Latin indices i, j, . . . (running from 1 to n − 1).
If an index appears both with and without tilde in one expression, it is summed accordingly, for instance a j b j = n j=1 a j b j . Thus, for a section u of π * T * M , we have
∇ M µ u ν = ∇ M µ u ν , ∇ M µ u ν = ∂ µ u ν , where we interpret ∇ M
µ as acting on u for fixed values of the fiber variables, i.e. viewing u as a family of sections of T * M depending on the fiber variables. As before, we denote by G the metric function on T * M , and we let H denote the metric function on T * X, interpreted as a (t, σ)-independent function on T * M . Lastly, we denote the Christoffel symbols of (M, g) by M Γ κ µν , and those of (X, h) by X Γ k ij . Lemma 4.3. The Christoffel symbols of M are given by:
M Γ 0 00 = 0, M Γ 0 i0 = α −1 α i , M Γ 0 ij = 0, M Γ k 00 = αh k α , M Γ k i0 = 0, M Γ k ij = X Γ k ij .
(4.5)
Proof. We have g 00 = α 2 , g 0i = g i0 = 0 and g ij = −h ij , and g is t-independent, thus ∂ 0 g µν = 0. Using M Γ κµν = 1 2 (∂ µ g κν + ∂ ν g κµ − ∂ κ g µν ), we then compute M Γ 000 = 0, M Γ 0i0 = αα i , M Γ 0ij = 0, while on normal forms as above,
∇ M H f u 0 = αf j ∂ j v − αf j ∂ j v = αH f v, ∇ M H f u i = 0. Thus, ∇ M H f = ∇ X H f 0 0 H f .
The claim follows.
4.2.
Schwarzschild-de Sitter space. We stay in the setting of the previous section, and now the spatial metric h has a decomposition
h = α −2 dr 2 + r 2 dω 2 ,
where dω 2 is the round metric on the unit sphere Y = S n−2 , with dual metric denoted Ω; see (2.1). Thus, writing ξ, resp. η, for the dual variables of r, resp. ω ∈ S n−2 , we have H = α 2 ξ 2 + r −2 |η| 2 Ω . Write 1-forms on X as u = u T + u N α −1 dr.
(4.6)
Abbreviate the derivative of a function f with respect to r by f . Since dα = α dr and ∇ X α = α 2 α ∂ r , we have, in the decomposition (4.6),
dα = 0 αα , i ∇ X α = 0 αα .
We will need the Christoffel symbols of h. We continue using the notation to the previous section, except now x 1 = r and ξ 1 = ξ, while x 2 , . . . , x n are r-independent coordinates on S n−2 , and moreover the lower bound for Greek indices is 1, and 2 for Latin indices.
Lemma 4.5. The Christoffel symbols of X are given by:
X Γ 1 11 = −α −1 α , X Γ 1 i1 = 0, X Γ 1 ij = −rα 2 (dω 2 ) ij , X Γ k 11 = 0, X Γ k i1 = r −1 δ k i , X Γ k ij = Y Γ k ij .
(4.7)
Proof. We have h 11 = α −2 , h 1i = h i1 = 0 and h ij = r 2 (dω 2 ) ij , and (dω 2 ) ij is r-independent. We then compute
X Γ 111 = −α −3 α , X Γ 1i1 = 0, X Γ 1ij = −r(dω 2 ) ij , X Γ k11 = 0, X Γ ki1 = r(dω 2 ) ki , X Γ kij = r 2Y Γ kij ,
which immediately gives (4.7).
We are only interested in the subprincipal operator of 1 at the trapped set, which we recall from (2.6) to be the set
Γ = {r = r p , ξ = 0, σ 2 = Ψ 2 |η| 2 }, where Ψ = αr −1 , Ψ (r p ) = 0. (4.8)
Thus, at Γ, we have
H H = 2α 2 ξ∂ r − 2αα ξ 2 ∂ ξ + 2r −3 |η| 2 ∂ ξ + r −2 H |η| 2 = 2r −3 |η| 2 ∂ ξ + r −2 H |η| 2 ,
while σ 2 H α −2 = 2σ 2 α −3 α ∂ ξ . Now α −1 α = (rΨ) −1 (rΨ) = r −1 at r = r p , therefore σ 2 α −3 α = r −3 |η| 2 , and we thus obtain
σ 2 H α −2 − H H = −r −2 H |η| 2 at Γ. (4.9)
Notice that |η| 2 ∈ C ∞ (T * Y ) is independent of (r, ξ).
of 0-th order terms of S sub ( 1 ) is nilpotent, which suggests in analogy to the discussion in Section 3.5 that the imaginary part of S sub ( 1 ) with respect to a Riemannian fiber inner product can be made arbitrarily small. Indeed, for any fixed > 0, define the 'change of basis matrix'
q = id 0 0 0 −1 Ψr 2 0 − −2 |η| −1 Ψ 2 r 2 i η 0 −2 |η| −1 Ψr 3 σ , then qsq −1 = 0 η 0 0 0 |η| 0 0 0 .
In order to compute qS sub ( 1 )q −1 , we note that the diagonal matrix of t-derivatives in (4.11) commutes with q, and it remains to study the derivatives along H |η| 2 ; more specifically, q has a block structure, with the columns and rows 1, 3 being the first block and the (2, 2) entry the second, and the (2, 2) block is an η-independent multiple of the identity, hence commutes with the relevant (2, 2) entry ir −2 H |η| 2 of S sub ( 1 ). For the 1, 3 block, we compute where denotes the Hodge d'Alembertian on the form bundle and δ is the codifferential. Thus, (4.12) in fact vanishes, and therefore
∇ Y H |η| 2 0 0 H |η| 2 , id 0 − −2 |η| −1 Ψ 2 r 2 i η −2 |η| −1 Ψr 3 σ = −2 Ψ 2 r 2 |η| −1 0 0 i η ∇ Y H |η| 2 − H |η| 2 i η 0 .qS sub ( 1 )q −1 = −i 2α −2 σ∂ t − r −2 ∇ Y H |η| 2 −2r 2 η 0 0 2α −2 σ∂ t − r −2 H |η| 2 −2r 2 |η| 0 0 2α −2 σ∂ t − r −2 H |η| 2 .
Equip the 1-form bundle over M in the decomposition (4.10) with the Hermitian inner product B 0 = Ω ⊕ 1 ⊕ 1, (4.13)
then qS sub ( 1 )q −1 has imaginary part (with respect to B 0 ) of size O( ). Put differently, S sub ( 1 ) has imaginary part of size O( ) relative to the Hermitian inner product b := B 0 (q·, q·), which is the symbol of a pseudodifferential inner product on π * T * M . We can now invoke Lemma 4.2 on a neighborhood of Γ ∩ {|σ| = 1} and use the homogeneity of q, b and S sub ( 1 ) to obtain:
Theorem 4.8. For any > 0, there exists a (positive definite) t * -independent pseudodifferential inner product B = b(x, D) on T k M (thus, b is an inner product on π * T k M , homogeneous of degree 0 with respect to dilations in the base T * M \ 0), such that
sup Γ |σ| −1 1 2i (S sub ( k ) − S sub ( k ) * b ) b ≤ ,
where Γ is the trapped set (4.8). Put differently, there is an elliptic ΨDO Q, invariant under t * -translations, acting on sections of T k M , with parametrix Q − , such that relative to the ordinary positive definite inner product (4.13), we have
sup Γ |σ| −1 σ 1 1 2i (Q k Q − − (Q k Q − ) * B0 ) B0 ≤ .
By restriction, the analogous statements are true for acting on subbundles of the tensor bundle on M , for instance differential forms of all degrees and symmetric 2-tensors.
By the t * -translation invariance of the involved symbols, inner products and operators, this is really a statement about Ψ b -inner products, and Q is a b-pseudodifferential operator; see the discussion preceding Theorem 2.1 for the relationship of the stationary and the b-picture.
Remark 4.9. Adding a 0-th order term to does not change or its imaginary part at the principal symbol level, thus does not affect the subprincipal operator of either; therefore, Theorem 4.8 holds in this case as well. Adding a first order operator L (acting on sections of T k M ), which we assume to be t-independent for simplicity, does affect the subprincipal operator, more specifically its 0-th order part, since S sub ( + L) = S sub ( ) + σ 1 (L). Thus, if σ 1 (L) is small at Γ, we can use the same Ψ-inner product as for and obtain a bound on Im b S sub ( + L) which is small, but no longer arbitrarily small. However, the bound merely needs to be smaller than ν min /2, see (2.5), which does hold for small L.
If we do not restrict the size of L, we can still obtain a spectral gap, provided one can choose a Ψ-inner product as in Theorem 4.8, again with > 0 sufficiently (but not necessarily arbitrarily) small. This is the case if the 0-th order part of S sub ( + L) is nilpotent (or has small eigenvalues) and can be conjugated in a tindependent manner to an operator which is sufficiently close to being symmetric, in the sense that it satisfies the bound (2.5) with replaced by + L.
We remark that the subprincipal operator iS sub ( ) = H G + iσ sub (G) induces a notion of parallel transport on π * T k M along the Hamilton flow of H G . As a consequence of the nilpotent structure of S sub ( ) at the trapped set, parallel sections along the trapped set grow only polynomially in size (with respect to a fixed t-invariant positive definite inner product), rather than exponentially. Parallel sections as induced by S sub ( + L), with L as in Remark 4.9, may grow exponentially, with their size bounded by Ce κ|σ|t for some constants C > 0 and κ, where the additional factor of |σ| in the exponent accounts for the homogeneity of the parallel transport. If such a bound does not hold for any κ < ν min /2, the dispersion of waves concentrated at the trapped set caused by the normally hyperbolic nature of the trapping is expected to be too weak to counteract the exponential growth caused by the subprincipal part of + L, and correspondingly one does not expect a spectral gap. Notice that the growth of parallel sections is an averaged condition in that it involves the behavior of the parallel transport for large times, while the choice of Ψ-inner products as explained above is a local condition and depends on the pointwise structure of S sub ( ); thus, establishing spectral gaps only using averaged data is an interesting open problem, even in the scalar setting.
e
−it * σj t m * u jm a jm (x) + u , Date: February 12, 2015. 2010 Mathematics Subject Classification. Primary: 35L05; Secondary: 58J40, 35P25, 83C57. The author was supported by a Gerhard Casper Stanford Graduate Fellowship and in part by András Vasy's National Science Foundation grants DMS-1068742 and DMS-1361432.
Figure 1 .
1Setup for Theorem 1 and Theorem 2.1 below. Shown are the black hole horizon H + and cosmological horizon H + , beyond which we put an artificial spacelike hypersurface H 2 with two connected components. The hypersurface H 1 plays the role of a Cauchy hypersurface, and the forcing as well as the solution to the wave equation are supported in its causal future. The domain Ω is bounded by the hypersurfaces H 1 and H 2 .
satisfying B = B * , and such that moreover the principal symbol σ 0
Definition 3. 3 .
3Let B be a Ψ-inner product, and let P ∈ Ψ m (X, E ⊗ Ω1 2 ), then P * B ∈ Ψ m (X, E ⊗ Ω 1 2 ) is called an adjoint of P with respect to B if there exists an
.
In fact, P = (BP B − ) * , where B − is a parametrix for B. Moreover, (P * B ) * B = P modulo Ψ −∞ (X, E ⊗Ω 1 2 ).In particular, Im B P = 1 2i (P − P * B ) is self-adjoint with respect to B (i.e. its own adjoint modulo Ψ −∞ ).Proof. Let B − be a parametrix of B and put R L
1 2
1(B − + (B − ) * ) (which changes B − by an operator in Ψ −∞ ). Then the second claim follows from (P * B ) * B = (BP * B B − ) * = B − BP B − B = P modulo Ψ −∞ (X,
Lemma 3. 6 .
6Suppose P ∈ Ψ m (X, E ⊗ Ω 1 2 )is self-adjoint with respect to B. Then its principal symbol p is self-adjoint with respect to
2 .
2Let b ij (x, ξ) = b(x, ξ)e j , ι(e i ) , then b(x, ξ) = (b ij (x, ξ)) i,j=1,...,N , a linear map from the fibers of E to the fibers of E * , is the symbol of B in local coordinates: If u = j u j e j |dx| 1 2 and v = j v j e
Let B − := b − (x, D) be a parametrix for b(x, D), in particular b − (x, ξ) = b(x, ξ)−1 modulo S −1 ; we may assume B − (x, D) * = B − (x, D). We then have p(x, D) = b − (x, D)p(x, D) * b(x, D)
Remark 3 . 9 .
39For Ψ b -inner products, the subprincipal operator of an operator
1 2
1) is an operator acting on E-valued half-densities, with principal symbol q. (We do not assume Q is principally scalar.) Thenσ m+m −1 ([P, Q]) = [S sub (P ), q].
σ m+m −1 ([P, Q]) according to the usual (full) symbolic calculus. Furthermore, S sub,m (QP Q − ) = S sub,m (P ) + S sub,m (Q[P, Q − ]) = S sub,m (P ) + qσ m+m −1 ([P, Q − ]) = S sub,m (P ) + q[S sub,m (P ), q −1 ]
1 2
1with E, likewise for all other half-density bundles appearing in the statement. Denote the principal symbol of B by b ∈ S 0 hom (T * X \ 12 On a symbolic level, this is the same as equation(3.6).
Y H |η| 2 and H |η| 2 are the restrictions of the pullback connection ∇ π * ΛS n−2 H |η| 2 of the full form bundle to 1-forms and functions, respectively, and the latter commutes with i η , since by Proposition 3.10,0 = S sub ([ , δ]) = −i[S sub ( ), i η ] = − ∇ π * ΛS n−2 H |η| 2 , i η ,
are extendible distributions at H 2 and supported distributions at H 1 , i.e. they are restrictions to Ω of distributions on M which are supported in t * ≥ 0. See Hörmander [21, Appendix B] for details. We also have weighted b-Sobolev spaces H s,r
discussion preceding[18, Theorem 5.5]) shows that a sufficient condition for these to hold is14, Theorem 1], or [18, Theorem 4.5] for a
microlocalized version of Dyatlov's estimate.
Thus, the crucial point is to obtain high energy estimates at the trapped set for
the operator
acting on T k in Im σ > −δ. Dyatlov's result [14, Theorem 1] (see
also the
The discussion until Proposition 3.8 works for principally non-scalar operators as well with mostly notational changes.
Thus, once we discuss Schwarzschild-de Sitter space in the next section, in the region where t * = t (which we can in particular arrange near the trapped set), σ in the present notation is equal to −σ in the notation of Section 2.
We use that π * T * X can be canonically identified with the horizontal subbundle of T * (T * X).
Acknowledgments. I am very grateful to Semyon Dyatlov, András Vasy and Alexandr Zamorzaev for many useful discussions.which immediately gives (4.5).Proposition 4.4. For the metric g as in(4.3), the subprincipal operator of 1 (the tensor wave operator acting on 1-forms on M ) in the decomposition (4.4) of 1-forms is given byProof. We start by computing the form of ∇ M µ u ν and ∇ M µ u ν for tangential and normal 1-forms. For tangential forms u = u µ dx µ with u 0 = 0, we haveMoreover, for any f ∈ C ∞ (T * X) (we will take f = α −2 and f = H), viewed as a (t, σ)-independent function on T * M , we haveFor a function f ∈ C ∞ (T * Y ), viewed as an (r, ξ)-independent function on X, we havein the decomposition (4.6) of 1-forms on X.Proof. On tangential forms u, i.e. u 1 = 0, we haveCombining Proposition 4.4 and Lemma 4.6, we can thus compute the subprincipal operator of 1 acting on 1-forms (sections of the pullback of T * M to T * M \ 0) decomposed as u = u T T + u T N α −1 dr + u N α dt. (4.10) In view of (4.9), we merely need to apply Lemma 4.6 to f = |η| 2 , in which case H f = 2Ω jk η j ∂ k − ∂ Ω jk η j η k ∂ , so i H f = 2i η on 1-forms (identifying the 1-form η with a tangent vector using the metric dω 2 ), while i H f dω 2 = 2η. Thus, we obtain: Proposition 4.7. In the decomposition (4.10), the subprincipal operator of 1 on Schwarzschild-de Sitter space at the trapped set Γ is given by(4.11)Since 1 is symmetric with respect to the natural inner product G on the 1form bundle, which in the decomposition (4.10) is an orthogonal direct sum of inner products, G = (−r −2 Ω) ⊕ (−1) ⊕ 1, the operator S sub ( 1 ) is a symmetric operator acting on sections of π * T * M over T * M \ 0 if we equip π * T * M with the fiber inner product G and use the symplectic volume density on T * M \ 0.The matrix −2r −2 s, with
Uniform energy bound and asymptotics for the maxwell field on a slowly rotating kerr black hole exterior. Lars Andersson, Pieter Blue, PreprintLars Andersson and Pieter Blue. Uniform energy bound and asymptotics for the maxwell field on a slowly rotating kerr black hole exterior. Preprint, 2013.
Gravitational scattering of electromagnetic field by Schwarzschild black-hole. Alain Bachelot, Annales de l'IHP Physique théorique. Elsevier54Alain Bachelot. Gravitational scattering of electromagnetic field by Schwarzschild black-hole. In Annales de l'IHP Physique théorique, volume 54, pages 261-320. Elsevier, 1991.
Distribution of resonances for spherical black holes. Sá Antônio, Maciej Barreto, Zworski, Mathematical Research Letters. 4Antônio Sá Barreto and Maciej Zworski. Distribution of resonances for spherical black holes. Mathematical Research Letters, 4:103-122, 1997.
Decay of the Maxwell field on the Schwarzschild manifold. Pieter Blue, J. Hyperbolic Differ. Equ. 54Pieter Blue. Decay of the Maxwell field on the Schwarzschild manifold. J. Hyperbolic Differ. Equ., 5(4):807-856, 2008.
Decay and non-decay of the local energy for the wave equation on the de sitter-schwarzschild metric. Jean-François Bony, Dietrich Häfner, Communications in Mathematical Physics. 2823Jean-François Bony and Dietrich Häfner. Decay and non-decay of the local energy for the wave equation on the de sitter-schwarzschild metric. Communications in Mathematical Physics, 282(3):697-719, 2008.
Stability and instability of the Cauchy horizon for the spherically symmetric Einstein-Maxwell-scalar field equations. Mihalis Dafermos, Ann. of Math. 1582Mihalis Dafermos. Stability and instability of the Cauchy horizon for the spherically sym- metric Einstein-Maxwell-scalar field equations. Ann. of Math. (2), 158(3):875-928, 2003.
Black holes without spacelike singularities. Mihalis Dafermos, Comm. Math. Phys. 3322Mihalis Dafermos. Black holes without spacelike singularities. Comm. Math. Phys., 332(2):729-757, 2014.
Mihalis Dafermos, Igor Rodnianski, Lectures on black holes and linear waves. Evolution equations, Clay Mathematics Proceedings. 17Mihalis Dafermos and Igor Rodnianski. Lectures on black holes and linear waves. Evolution equations, Clay Mathematics Proceedings, 17:97-205, 2008.
On pointwise decay of linear waves on a Schwarzschild black hole background. Roland Donninger, Wilhelm Schlag, Avy Soffer, Communications in Mathematical Physics. 3091Roland Donninger, Wilhelm Schlag, and Avy Soffer. On pointwise decay of linear waves on a Schwarzschild black hole background. Communications in Mathematical Physics, 309(1):51- 86, 2012.
Quasi-normal modes and exponential energy decay for the Kerr-de Sitter black hole. Semyon Dyatlov, Comm. Math. Phys. 3061Semyon Dyatlov. Quasi-normal modes and exponential energy decay for the Kerr-de Sitter black hole. Comm. Math. Phys., 306(1):119-163, 2011.
Asymptotic distribution of quasi-normal modes for kerr-de sitter black holes. Semyon Dyatlov, Annales Henri Poincaré. Springer13Semyon Dyatlov. Asymptotic distribution of quasi-normal modes for kerr-de sitter black holes. In Annales Henri Poincaré, volume 13, pages 1101-1166. Springer, 2012.
Semyon Dyatlov, arXiv:1305.1723Asymptotics of linear waves and resonances with applications to black holes. PreprintSemyon Dyatlov. Asymptotics of linear waves and resonances with applications to black holes. Preprint, arXiv:1305.1723, 2013.
Resonance projectors and asymptotics for r-normally hyperbolic trapped sets. Semyon Dyatlov, arXiv:1301.5633PreprintSemyon Dyatlov. Resonance projectors and asymptotics for r-normally hyperbolic trapped sets. Preprint, arXiv:1301.5633, 2013.
Spectral gaps for normally hyperbolic trapping. Semyon Dyatlov, arXiv:1403.6401PreprintSemyon Dyatlov. Spectral gaps for normally hyperbolic trapping. Preprint, arXiv:1403.6401, 2014.
The long-time dynamics of Dirac particles in the Kerr-Newman black hole geometry. Felix Finster, Niky Kamran, Joel Smoller, Shing-Tung Yau, Advances in Theoretical and Mathematical Physics. 71Felix Finster, Niky Kamran, Joel Smoller, and Shing-Tung Yau. The long-time dynamics of Dirac particles in the Kerr-Newman black hole geometry. Advances in Theoretical and Mathematical Physics, 7(1):25-52, 2003.
Global well-posedness of quasilinear wave equations on asymptotically de Sitter spaces. Peter Hintz, arXiv:1311.6859PreprintPeter Hintz. Global well-posedness of quasilinear wave equations on asymptotically de Sitter spaces. Preprint, arXiv:1311.6859, 2013.
Semilinear wave equations on asymptotically de Sitter, Kerr-de Sitter and Minkowski spacetimes. Peter Hintz, András Vasy, arXiv:1306.4705PreprintPeter Hintz and András Vasy. Semilinear wave equations on asymptotically de Sitter, Kerr-de Sitter and Minkowski spacetimes. Preprint, arXiv:1306.4705, 2013.
Global analysis of quasilinear wave equations on asymptotically Kerr-de Sitter spaces. Peter Hintz, András Vasy, arXiv:1404.1348PreprintPeter Hintz and András Vasy. Global analysis of quasilinear wave equations on asymptotically Kerr-de Sitter spaces. Preprint, arXiv:1404.1348, 2014.
Asymptotics for the wave equation on differential forms on Kerr-de Sitter space. Peter Hintz, András Vasy, PreprintPeter Hintz and András Vasy. Asymptotics for the wave equation on differential forms on Kerr-de Sitter space. Preprint, 2015.
Invariant manifolds. Morris W Hirsch, Michael Shub, Charles C Pugh, SpringerMorris W. Hirsch, Michael Shub, and Charles C. Pugh. Invariant manifolds. Springer, 1977.
The analysis of linear partial differential operators. I-IV. Classics in Mathematics. Lars Hörmander, SpringerBerlinLars Hörmander. The analysis of linear partial differential operators. I-IV. Classics in Math- ematics. Springer, Berlin, 2007.
Asymptotic properties of the electromagnetic field in the external Schwarzschild spacetime. Walter Inglese, Francesco Nicolo, Annales Henri Poincaré. Springer1Walter Inglese and Francesco Nicolo. Asymptotic properties of the electromagnetic field in the external Schwarzschild spacetime. In Annales Henri Poincaré, volume 1, pages 895-944. Springer, 2000.
The Atiyah-Patodi-Singer Index Theorem. Richard B Melrose, Research Notes in Mathematics. PetersRichard B. Melrose. The Atiyah-Patodi-Singer Index Theorem. Research Notes in Mathe- matics, Vol 4. Peters, 1993.
Geometric scattering theory. Richard B Melrose, Cambridge University Press1Richard B. Melrose. Geometric scattering theory, volume 1. Cambridge University Press, 1995.
Local energy decay for maxwell fields part i: Spherically symmetric black-hole backgrounds. Jacob Sterbenz, Daniel Tataru, arXiv:1305.5261PreprintJacob Sterbenz and Daniel Tataru. Local energy decay for maxwell fields part i: Spherically symmetric black-hole backgrounds. Preprint, arXiv:1305.5261, 2013.
Analytic continuation and high energy estimates for the resolvent of the Laplacian on forms on asymptotically hyperbolic spaces. András Vasy, arXiv:1206.5454PreprintAndrás Vasy. Analytic continuation and high energy estimates for the resolvent of the Lapla- cian on forms on asymptotically hyperbolic spaces. Preprint, arXiv:1206.5454, 2012.
Microlocal analysis of asymptotically hyperbolic and Kerr-de Sitter spaces. András Vasy, Inventiones mathematicae. with an appendix by Semyon DyatlovAndrás Vasy. Microlocal analysis of asymptotically hyperbolic and Kerr-de Sitter spaces (with an appendix by Semyon Dyatlov). Inventiones mathematicae, pages 1-133, 2013.
Mode stability of the Kerr black hole. F Bernard, Whiting, Journal of Mathematical Physics. 306Bernard F. Whiting. Mode stability of the Kerr black hole. Journal of Mathematical Physics, 30(6):1301-1305, 1989.
Resolvent estimates for normally hyperbolic trapped sets. Jared Wunsch, Maciej Zworski, Annales Henri Poincaré. Springer12Department of Mathematics, Stanford UniversityCA 94305-2125, USA E-mail address: [email protected] Wunsch and Maciej Zworski. Resolvent estimates for normally hyperbolic trapped sets. In Annales Henri Poincaré, volume 12, pages 1349-1385. Springer, 2011. Department of Mathematics, Stanford University, CA 94305-2125, USA E-mail address: [email protected]
|
[] |
[
"No-go theorem for static scalar field dark matter halos with no Noether charges",
"No-go theorem for static scalar field dark matter halos with no Noether charges"
] |
[
"Alberto Diez-Tejedor \nDepartamento de Física\nDivisión de Ciencias e Ingenierías\nUniversidad de Guanajuato\nCampus León37150LeónMéxico\n",
"Alma X Gonzalez-Morales \nDepartamento de Física\nDivisión de Ciencias e Ingenierías\nUniversidad de Guanajuato\nCampus León37150LeónMéxico\n\nInstituto de Ciencias Nucleares\nUniversidad Nacional Autónoma de México\nCircuito Exterior\n\nC.U\nA.P. 70-543D.F. 04510México, México\n"
] |
[
"Departamento de Física\nDivisión de Ciencias e Ingenierías\nUniversidad de Guanajuato\nCampus León37150LeónMéxico",
"Departamento de Física\nDivisión de Ciencias e Ingenierías\nUniversidad de Guanajuato\nCampus León37150LeónMéxico",
"Instituto de Ciencias Nucleares\nUniversidad Nacional Autónoma de México\nCircuito Exterior",
"C.U\nA.P. 70-543D.F. 04510México, México"
] |
[] |
Classical scalar fields have been considered as a possible effective description of dark matter. We show that, for any metric theory of gravity, no static, spherically symmetric, regular, spatially localized, attractive, stable spacetime configuration can be sourced by the coherent excitation of a scalar field with positive definite energy density and no Noether charges. In the weak-field regime the result also applies for configurations with a repulsive gravitational potential. This extends Derrick's theorem to the case of a general (non-canonical) scalar field, including the self-gravitational effects. Some possible ways out are briefly discussed. PACS numbers: 95.35.+d,
|
10.1103/physrevd.88.067302
|
[
"https://arxiv.org/pdf/1306.4400v2.pdf"
] | 119,287,000 |
1306.4400
|
d2780a2f8e42e786a975a5b569313d72eaf4bb77
|
No-go theorem for static scalar field dark matter halos with no Noether charges
26 Sep 2013
Alberto Diez-Tejedor
Departamento de Física
División de Ciencias e Ingenierías
Universidad de Guanajuato
Campus León37150LeónMéxico
Alma X Gonzalez-Morales
Departamento de Física
División de Ciencias e Ingenierías
Universidad de Guanajuato
Campus León37150LeónMéxico
Instituto de Ciencias Nucleares
Universidad Nacional Autónoma de México
Circuito Exterior
C.U
A.P. 70-543D.F. 04510México, México
No-go theorem for static scalar field dark matter halos with no Noether charges
26 Sep 2013(Dated: May 7, 2014)
Classical scalar fields have been considered as a possible effective description of dark matter. We show that, for any metric theory of gravity, no static, spherically symmetric, regular, spatially localized, attractive, stable spacetime configuration can be sourced by the coherent excitation of a scalar field with positive definite energy density and no Noether charges. In the weak-field regime the result also applies for configurations with a repulsive gravitational potential. This extends Derrick's theorem to the case of a general (non-canonical) scalar field, including the self-gravitational effects. Some possible ways out are briefly discussed. PACS numbers: 95.35.+d,
I. INTRODUCTION
There is not yet a definite answer to the dark matter (DM) problem. At the fundamental level, DM should probably be described in terms of a quantum field theory. There has been much progress in this direction within the last few decades [1], although direct [2] and indirect [3] detection methods are still inconclusive. From a different perspective, it would be possible that, at the effective level, DM admits a classical description, e.g. if the DM particles develop a Bose-Einstein condensate [4], or reach a hydrodynamic regime [5]. In this paper we will consider that a metric theory (not necessarily Einstein) describes the gravitational interaction, and restrict our attention to classical scalar field theories.
As candidates to describe the DM, scalar fields are expected to develop static, spherically symmetric, regular, spatially localized, attractive, stable, self-gravitating spacetime configurations, that can be identified with galactic halos. We show that, with no global symmetries in the action, these configurations are only possible at the expense of having negative energy densities.
In flat spacetime and for the case of a canonical scalar field this is a consequence of Derrick's theorem [6]: in three spatial dimensions there are no regular, static, localized scalar field configurations with positive definite energy density (today we know several means, either topological [7] or non-topological [8], to evade this theorem). Here we extend this result to the case of a general scalar field with an arbitrary kinetic term, including the self-gravitational effects.
This is not only a purely academic exercise. In flat spacetime a static, spherically symmetric, spatially localized perfect fluid distribution is necessarily trivial, but non-trivial self-gravitating solutions do exist [9]. These solutions have been used thoroughly for the study of stellar structure [10]. Nothing prevents something similar from happening for scalar field configurations; it is then natural to look for a version of Derrick's theorem in the presence of gravity. Real galaxies may not match all the conditions in the theorem; however, large deviations are not expected: presumably DM halos have a small angular momentum [11] and triaxiality [12]; see Ref. [13] for a debate. The existence of a smooth transformation from idealized halos to actual ones gives some (astro)physical support to the results in this paper.
To proceed we will consider the most general action that can be constructed from a real, minimally coupled scalar field and its first derivatives,
S = ∫ d^4x √(−g) M^4 L(φ/M, X/M^4) .    (1)
We assume that the theory is local, and Lorentz invariant. Here L is the Lagrangian density, and X ≡ − 1 2 ∂ µ φ∂ µ φ the kinetic scalar. The coupling of this field to the standard model of particle physics is highly constrained by observations, and in this paper it is considered to be negligible. We are adopting the mostly plus signature (−, +, +, +) for the spacetime metric, and taking units with 4πG = c = = 1. The characteristic scale M and the scalar field φ are measured in units of energy. A theory of the form in Eq. (1) is appropriate for the description of a single scalar degree of freedom, and is usually dubbed k-essence [14]. We will discuss the case with more than one field at the end of the paper.
With the notation in Eq. (1), the Lagrangian density for a canonical scalar field takes the form L_can = X − M^4 V(φ/M), with M^4 V(φ/M) a potential term [15]. If the Lagrangian density depends only on the kinetic scalar the resulting theory is called purely-kinetic [16].
In order to have a sensible theory, the Hamiltonian should be bounded from below. In particular, we will adopt the weak energy condition; that is, for every future-pointing timelike vector field t^µ, the energy density measured by the corresponding observers should always be non-negative, ρ_t ≡ T_µν t^µ t^ν ≥ 0. Otherwise, a vacuum energy scale would appear in the theory, bringing back fine-tuning issues usually associated with the cosmological constant problem.
We also neglect higher-derivative terms in Eq. (1): On the one hand, they source extra dynamical degrees of freedom, most of which are, generically, not well behaved; see however Ref. [17]. Additionally, these new degrees of freedom couple gravitationally to the standard matter, introducing departures from general relativity.
(If dark matter exists, general relativity would probably describe the gravitational interaction at galactic scales.)
II. STATIC SCALAR FIELD CONFIGURATIONS
The behavior of a scalar field depends crucially on the character of the derivative terms. If they are timelike, X > 0, the energy-momentum tensor can be formally identified with that of a perfect fluid, i.e. p = p ⊥ in Eq. (4) below. On the contrary, if the derivative terms are space-like, X < 0, the energy-momentum tensor of the scalar field takes the form of a relativistic anisotropic fluid with
p_⊥ = −ρ = L ,    p_∥ = L − 2X ∂L/∂X .    (2)
Here ρ is an energy density, and p_∥ and p_⊥ are the longitudinal and transverse pressures, respectively. From now on, and in order to simplify the notation, we will omit the characteristic scale M. In cosmology, the homogeneous and isotropic background guarantees a perfect fluid description. However, static spacetime configurations restrict the possible scalar distributions in a different way. In the case of spherical symmetry, although the perfect fluid analogy is still allowed (we will discuss that point later in Section III), a static, radial-dependent scalar field φ = φ(r) is required in most physical situations. For a static field the derivative terms are space-like, X < 0, and the anisotropic description is necessary. This suggests the following attractive picture: DM could mimic a perfect fluid "dust" in cosmology, but a(n anisotropic) relativistic one in galaxies. This is not possible for a standard perfect fluid, where the observed non-relativistic rotation curves guarantee a Newtonian description, p ≪ ρ; see the Appendix A. (Do not confuse the Newtonian with the weak-field regime.) This route has been followed by many authors before [18][19][20]; however, as we find next, there are some crucial aspects of these configurations that have been overlooked until now.
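The anisotropic-fluid identification in Eq. (2) can be checked symbolically. The sketch below uses SymPy with an arbitrary sample Lagrangian (the specific L is only an illustration, not one advocated here) and verifies that, for space-like gradients, ρ + p_∥ and p_∥ − p_⊥ coincide and both equal −2X ∂L/∂X, the combination that enters the hydrostatic equilibrium argument below.

```python
# SymPy sketch of the identifications in Eq. (2) for a space-like static field:
# p_perp = -rho = L and p_par = L - 2*X*dL/dX, so rho + p_par = p_par - p_perp
# = -2*X*dL/dX.  The sample Lagrangian below is an arbitrary illustration.
import sympy as sp

phi, X = sp.symbols('phi X')
L = X - sp.Rational(1, 2)*phi**2 + sp.Rational(1, 10)*X**2

rho    = -L                        # energy density
p_perp =  L                        # transverse pressure
p_par  =  L - 2*X*sp.diff(L, X)    # longitudinal pressure

print(sp.simplify(rho + p_par - (p_par - p_perp)))    # 0
print(sp.simplify(rho + p_par + 2*X*sp.diff(L, X)))   # 0
```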
For a static, spherically symmetric configuration, the most general expression for the spacetime metric (in polar-areal coordinates, such that spheres of constant r have area 4πr 2 ) takes the form
ds² = −exp(2ψ(r)) dt² + h(r) dr² + r² dΩ .    (3)
The effective gravitational potential ψ(r) and the metric function h(r) > 0 are dimensionless, and dΩ = dθ² + sin²θ dϕ² is the standard solid angle element in three dimensions, with r ∈ [0, ∞). A regular spacetime metric demands ψ(r = 0) = const., h(r = 0) = 1; attractive spacetime configurations require dψ/dr ≥ 0. Note that Eq. (3) is only a possible parametrization for the spacetime metric, and it does not contain any physical content beyond the underlying symmetries. The most general expression for the energy-momentum tensor compatible with the spacetime symmetries is given by
T_µν = (ρ + p_⊥) u_µ u_ν + p_⊥ g_µν + (p_∥ − p_⊥) n_µ n_ν .    (4)
Here ρ is the energy density, p the pressure in the direction parallel to n µ , and p ⊥ the pressure in an orthogonal direction, all measured by an observer at rest with respect to the four-velocity u µ . For static, spherically symmetric configurations u µ = (− exp(ψ), 0, 0, 0), and n µ = (0, h 1/2 , 0, 0). Regularity at the origin demands ρ(r = 0) = ρ 0 , p (r = 0) = p 0 , and p ⊥ (r = 0) = p ⊥0 , with ρ 0 , p 0 and p ⊥0 all finite. By localized matter distribution we shall mean one where ρ(r → ∞) = p (r → ∞) = p ⊥ (r → ∞) = 0.
A. A first proof of the no-go theorem
Now we can prove the main result of this paper: that a static scalar field φ = φ(r) can source no static, spherically symmetric, regular, spatially localized, attractive, stable spacetime configuration with positive definite energy density. We use the Appendix B to show that, for the canonical and the purely-kinetic scalar fields, these configurations are not possible even at the expense of having negative energy densities.
The argument is simple, and it relies on the impossibility of fulfilling all the previous conditions at the same time. From the energy-momentum conservation, ∇ µ T µν = 0, we obtain the equation for hydrostatic equilibrium,
dp_∥/dr = −(ρ + p_∥) dψ/dr − 2(p_∥ − p_⊥)/r .    (5)
For the case of a static scalar field, X < 0, the identities in Eq. (2) lead to ρ + p = p − p ⊥ = −2X∂L/∂X. In order to avoid tachyons and ghosts, we should satisfy ∂L/∂X > 0; see the Appendix C for details. Then a static, spherically symmetric, stable, attractive spacetime sourced by a static scalar field requires dp /dr < 0 for hydrostatic equilibrium. This condition, together with that for a localized matter distribution, p (r → ∞) = 0, guarantees a positive definite radial pressure, p (r) > 0. A regular spacetime metric demands ∂ r φ(r = 0) = 0, i.e. X(r = 0) = 0, and then, ρ 0 = −p 0 = −p ⊥0 . In particular, since p is positive definite, that implies ρ 0 < 0, i.e. the energy density should be negative, at least close to the center of the configuration.
In general relativity not only the energy density but the combination ρ + p ⊥ + 2p sources gravity [21]. For static scalar fields ρ 0 +p ⊥0 +2p 0 = −2ρ 0 , and then it is natural to understand why attractive spacetime configurations with positive energy density are not possible in general relativity. Note, however, that Eq. (5) and the paragraph below are generic, and apply for any metric theory, i.e. gravity is described by the metric tensor of a spacetime manifold, with test particles following timelike geodesics. In the weak-field regime, r|dψ/dr| ≪ 1, the last term in Eq. (5) dominates, and it is not necessary to assume an attractive gravitational potential.
It is pertinent to mention a couple of examples where the theorem holds. Negative energy densities are present in the analytic solution reported in Ref. [18], where the authors demand halos with flat rotation curves, and also in the numerical solutions obtained in Ref. [20], where the condition on the rotational curves is relaxed. (See also Ref. [22] for a previous discussion of static scalar field configurations in the strong-field regime.) If the scalar fields are non-canonical, see Refs. [19]. Here we show that negative energy densities are generic, and they are not restricted to the particular solutions in Refs. [18][19][20][22].
As applied to the galaxies in the Universe, this no-go theorem assumes a very simple model for the galactic halos. One could probably argue that the presence of baryons might play an important role in a more realistic model, particularly close to the center of these configurations, where the negative energy densities were identified. We do not expect to recover all the physical properties of the halo without taking into account the existence of other matter sources in galaxies, but we consider that baryonic matter cannot be an essential ingredient for the very existence of these configurations. After all, according to the standard cosmological picture, DM sourced the primordial wells for the subsequent development of cosmic structure. Furthermore, we know of the existence of dwarf galaxies which are DM dominated [23]. Even so, and for the more skeptical of our readers, we use some lines to show that the presence of additional matter sources cannot avoid the appearance of negative energy densities.
B. A second proof of the no-go theorem
As was noted in Refs. [18,24], we can always write the effective gravitational potential in the form

ψ(r) = −∫_r^∞ [v_c²(s)/s] ds ,    (6)

where v_c(r) can be identified with the circular velocity of test particles in the halo. A regular spacetime metric satisfies v_c²(r = 0) = 0; an attractive gravitational potential requires v_c²(r) ≥ 0. Introducing Eqs. (3) and (4) into the Einstein equations, and using the expression in Eq. (6), we get

(1/(h r²)) [ (h′/h) r + h − 1 ] = 2ρ ,    (7a)
(1/(h r²)) [ (1 + ℓ) − h ] = 2p_∥ ,    (7b)
−(1/(4 h r²)) [ (2 + ℓ)(h′/h) r − (ℓ² + 2rℓ′) ] = 2p_⊥ .    (7c)
The prime here denotes the derivative with respect to the radial coordinate, and we have introduced ℓ(r) = 2v 2 c (r). Eqs. (7a) and (7c) can be combined to obtain
(ℓ + 2)ρ + 4p_⊥ = [ (2 + ℓ) m/r + (ℓ² + 2rℓ′)/(2h) ] (1/r²) .    (8)
As usual, the effective gravitational mass m is defined from h = 1/(1 − 2m/r), with m(r) = ∫_0^r ρ(s) s² ds. Eq. (8) is valid for all values of the radial coordinate, and for all the static, spherically symmetric configurations. In galaxies, baryons and DM contribute to ρ and p_⊥. However, for those regions dominated by a static scalar field, if any, p_⊥ = −ρ, and Eq. (8) simplifies to
ρ = −[1/(2 − ℓ)] [ (2 + ℓ) m/r + (ℓ² + 2rℓ′)/(2h) ] (1/r²) .    (9)
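The algebra leading from Eqs. (7a) and (7c) to Eqs. (8) and (9) can be verified with a computer algebra system. The following SymPy sketch treats h, ℓ and m as generic radial functions related by h = 1/(1 − 2m/r) and only checks the internal consistency of the expressions quoted above.

```python
# SymPy consistency check (a sketch) that Eqs. (7a) and (7c) combine into Eq. (8),
# and that p_perp = -rho then gives Eq. (9).  h, ell and m are generic radial
# functions related by h = 1/(1 - 2m/r); nothing is assumed beyond the text above.
import sympy as sp

r = sp.symbols('r', positive=True)
m = sp.Function('m')(r)
ell = sp.Function('ell')(r)
h = 1/(1 - 2*m/r)
hp = sp.diff(h, r)
dell = sp.diff(ell, r)

rho = ((hp/h)*r + h - 1)/(2*h*r**2)                                  # Eq. (7a)
p_perp = -((2 + ell)*(hp/h)*r - (ell**2 + 2*r*dell))/(8*h*r**2)      # Eq. (7c)
rhs8 = ((2 + ell)*m/r + (ell**2 + 2*r*dell)/(2*h))/r**2              # Eq. (8), r.h.s.

print(sp.simplify((ell + 2)*rho + 4*p_perp - rhs8))                  # -> 0
rho9 = -rhs8/(2 - ell)                                               # Eq. (9)
print(sp.simplify((ell + 2)*rho9 - 4*rho9 - rhs8))                   # -> 0
```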
As long as ℓ 2 + 2rℓ ′ > 0, the only possible way to have a positive energy density is with a negative effective gravitational mass; and the opposite, a positive effective gravitational mass requires a negative energy density. A static scalar field with positive energy density still seems possible, but with a negative effective gravitational mass. However, both conditions are not compatible with a regular spacetime metric: in order to have m < 0 for some value of the radial coordinate, say r = r 0 , we should demand ρ < 0, at least for some region in the interval 0 < r < r 0 . For regular, attractive spacetime configurations, ℓ(r = 0) = 0, ℓ(r > 0) > 0, the condition ℓ 2 + 2rℓ ′ > 0 is guaranteed for some region in the distribution. If we restrict our attention to idealized galaxies without baryons, Eq. (9) is satisfied for all values of the radial coordinate, recovering the negative energy densities we identified close to the center of the configuration by means of the previous argument in Section II A.
In a more realistic model, one should consider the presence of other matter sources. In spiral galaxies, for instance, baryons and DM can contribute equally to the mass within the optical radius [25]. However, the external regions of spiral galaxies where (nearly) flat rotation curves are observed are dominated by DM. There are several examples of galaxies for which the relation ℓ 2 + 2rℓ ′ > 0 is still valid in the outer regions, see for instance NGC 2403 and NGC 3621 in Ref. [26]. Again, the negative energy densities emerge, no matter what you could have in the core of the galaxy. Note that, contrary to the first proof in Section II A, we used Einstein equations, but this time it was not necessary to assume a stable theory.
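For a concrete feeling of the condition ℓ² + 2rℓ′ > 0, the toy rotation curve below (a rising curve that flattens at large radii; the parameter values are purely illustrative and not fitted to any galaxy) satisfies it at every radius, so Eq. (9) would force ρ < 0 wherever a static scalar field dominates such a halo.

```python
# Toy illustration (not galaxy data): a rotation curve that rises and flattens,
# v_c(r) = v0 * r / sqrt(r^2 + rc^2), keeps ell^2 + 2*r*ell' positive at all radii,
# so Eq. (9) would force rho < 0 wherever such a halo is dominated by a static
# scalar field.  v0 is in units of c; the numbers are purely illustrative.
import numpy as np

v0, rc = 7e-4, 5.0                          # ~210 km/s plateau, 5 kpc core radius
r = np.linspace(0.1, 50.0, 500)             # kpc
ell = 2*(v0*r/np.sqrt(r**2 + rc**2))**2     # ell(r) = 2 v_c^2(r), here ~1e-6 << 1
dell = np.gradient(ell, r)
print(np.all(ell**2 + 2*r*dell > 0))        # True
```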
III. DISCUSSION
All these results extend trivially to the case with more than one field in the action. For a generic theory a set of static scalars φ i = φ i (r) is necessary in order to recover a static spacetime background. Here i = 1, . . . , n labels the different fields. However, if there is an internal continuous global symmetry, φ i → φ ′ i = T (α, φ j ), the staticity of the spacetime metric can be recovered in another way: by proposing a solution of the form φ i (t, r) = T (ωt, ϕ j (r)).
Here α is a continuous parameter for the transformation, and ω a constant with dimensions of the inverse of time. The fields are now dynamical, but the spacetime metric is static.
A case of particular interest is that of a canonical scalar field with an internal U(1) symmetry, φ → e^{iα} φ, leading to boson stars [27]: scalar field configurations of the form φ(t, r) = e^{iωt} ϕ(r). Now, for specific radial functions ϕ(r) and particular values of the constant ω, there are static, spherically symmetric, regular, spatially localized, attractive, stable, self-gravitating configurations, but at the expense of a non-zero Noether charge in the system: the difference between the number of particles and antiparticles. This charge is associated with the time-dependence of the scalar field, and then the arguments in Section II do not apply.
Another example of interest is provided by perfect fluids. The action principle describing a perfect fluid in general relativity can be written in terms of the velocity potentials [28]. Spherical symmetry guarantees no vorticity, and then u_µ = ∂_µ ϕ, with u^µ the four-velocity of the fluid. The action for a perfect fluid is invariant under shift transformations in the velocity potential, ϕ → ϕ + const., and it is this symmetry that makes possible the existence of static, spherically symmetric, regular, spatially localized, attractive, stable perfect fluid configurations with positive definite energy density [9]; see also Ref. [29]. In this case the conserved Noether charge associated to the shift-invariance is the total entropy in the system [30].
As with any no-go theorem, the results in this paper can be circumvented by relaxing some of the initial assumptions: possible ways out involve dynamical spacetimes [31], galactic halos made of smaller mini-halos [32], thermal distributions for the scalar field [33], and dark sectors with more fields and (gauge) symmetries [34].
Appendix B: The canonical and the purely-kinetic scalar fields

Localized, regular canonical scalar field configurations satisfy ∂_r φ(r = 0) = 0, φ(r → ∞) = const. Together with the Klein-Gordon equation, ✷φ − ∂L/∂φ = 0, this implies ∂L/∂φ(r = 0) = ∂L/∂φ(r → ∞) = 0. That is, for two different values of the scalar field, φ(r = 0) = φ_0 and φ(r → ∞) = φ_∞, we need to satisfy ∂L/∂φ|_{φ0} = ∂L/∂φ|_{φ∞} = 0. This is possible only if ∂²L/∂φ² changes sign between φ_0 and φ_∞, signaling the appearance of tachyons in the low-energy spectra; see Eq. (C3) in the Appendix C. Here it has not been necessary to assume an attractive effective gravitational potential.
For the case of a purely-kinetic scalar field, deriving Eq. (2) for p_∥ with respect to the radial coordinate, we obtain

dp_∥/dr = −[ ∂L/∂X + 2X ∂²L/∂X² ] ∂X/∂r .    (B1)
Regular, static scalar field configurations satisfy X(r = 0) = 0, X(r > 0) ≤ 0. That is, the sign of ∂X/∂r is negative, at least for some values of the radial coordinate. Since dp /dr < 0 for hydrostatic equilibrium, the sign of ∂L/∂X + 2X∂ 2 L/∂X 2 should be negative also, at least for this same interval with negative gradients of the kinetic scalar, signaling the appearance of tachyons in the low-energy spectra; see again Eq. (C3) in the Appendix C.
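Equation (B1) follows directly from the expression for p_∥ in Eq. (2) when L depends on X only. A one-line SymPy check of the chain rule behind it (a sketch; L is left as an arbitrary function):

```python
# One-line SymPy check (sketch) of the chain rule behind Eq. (B1): with
# p_par = L(X) - 2*X*dL/dX for a purely-kinetic theory,
# dp_par/dX = -(dL/dX + 2*X*d2L/dX2), so dp_par/dr = -(dL/dX + 2*X*d2L/dX2) dX/dr.
import sympy as sp

X = sp.symbols('X')
L = sp.Function('L')(X)                 # arbitrary purely-kinetic Lagrangian
p_par = L - 2*X*sp.diff(L, X)
print(sp.simplify(sp.diff(p_par, X) + sp.diff(L, X) + 2*X*sp.diff(L, X, 2)))  # 0
```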
Appendix C: Absence of tachyons and ghosts in the low-energy spectra
In order to have a sensible theory, at least at the effective level, we should avoid the appearance of classical and quantum instabilities in the spectrum of low-energy perturbations.
Let us consider the behavior of the small perturbations around a static, spherically symmetric scalar field configuration. Two comments are in order here. First, we will consider only perturbations in the scalar field, neglecting any possible backreaction on the metric tensor. Second, since any regular spacetime metric is locally Minkowski, we can restrict our analysis to flat spacetime. We can then propose a solution of the form φ(t, x) = φ 0 (z) + δφ(t, x), with φ 0 (z) the background solution and z signaling the direction of the field gradients, x = (x, y, z). Expanding Eq. (1) to the quadratic order in field perturbations, δφ(t, x), we obtain
L ∼ c_1 (∂_0 δφ)² − c_1 (∂_⊥ δφ)² − c_2 (∂_z δφ)² − 2c_3 (∂_z δφ) δφ − c_4 (δφ)² ,    (C1a)

with (∂_⊥ δφ)² = (∂_x δφ)² + (∂_y δφ)², and

c_1 = ∂L/∂X ,   c_2 = ∂L/∂X + 2X ∂²L/∂X² ,    (C1b)
c_3 = (1/2) (∂²L/∂X∂φ) ∂_z φ ,   c_4 = −∂²L/∂φ² .    (C1c)
All these quantities are evaluated at φ 0 , 2X 0 = −(∂ z φ 0 ) 2 . In order to have a positive definite Hamiltonian density, we should satisfy
c_1 > 0 ,   c_+ ± δ ≥ 0 ,    (C2)

where c_± = (c_2 ± c_4)/2, and δ² = c_−² + c_3². All the conditions in Eq. (C2) are necessary in order to avoid tachyons. The first condition, ∂L/∂X > 0, guarantees the absence of ghosts. [Here we are only proving the local (in)stability of the configurations, but global considerations could make them stable, e.g. global monopoles [36].]
For the particular case in which c 3 = 0 (a canonical scalar field, for instance, or a purely-kinetic theory), the conditions
∂L/∂X > 0 ,   ∂L/∂X + 2X ∂²L/∂X² ≥ 0 ,   ∂²L/∂φ² ≤ 0 ,    (C3)
guarantee the absence of classical and quantum instabilities. Notice that these conditions coincide with those obtained for the stability of a homogeneous and isotropic background.
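The conditions in Eq. (C3) are straightforward to evaluate for a given Lagrangian density. The helper below is a sketch for the c_3 = 0 case; the two sample Lagrangians and the background values are arbitrary choices made only for illustration.

```python
# A small helper (sketch) evaluating the stability coefficients of Eqs. (C1b)-(C1c)
# in the c3 = 0 case, and hence the conditions of Eq. (C3), for a given Lagrangian
# density L(phi, X).  Both sample Lagrangians and background values are illustrative.
import sympy as sp

phi, X = sp.symbols('phi X', real=True)

def stability_coefficients(L):
    c1 = sp.diff(L, X)                          # no-ghost coefficient
    c2 = sp.diff(L, X) + 2*X*sp.diff(L, X, 2)   # longitudinal gradient coefficient
    c4 = -sp.diff(L, phi, 2)                    # mass-type coefficient
    return c1, c2, c4

L_can = X - sp.Rational(1, 2)*phi**2            # canonical field, V = phi^2/2
print([c.subs({phi: 1, X: -sp.Rational(1, 10)}) for c in stability_coefficients(L_can)])
# -> [1, 1, 1]: c1 > 0, c2 >= 0 and d2L/dphi2 <= 0 are all satisfied here

L_kin = X + X**2                                # purely-kinetic example
print([c.subs({phi: 1, X: -sp.Rational(1, 4)}) for c in stability_coefficients(L_kin)])
# -> [1/2, -1/2, 0]: ghost-free, but it violates the second condition of Eq. (C3)
```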
Acknowledgments

We are grateful to Jarah Evslin, Alex Feinstein, Robert Scherrer, and Luis Ureña-Lopez for useful comments and discussions. This work was partially supported by PIFI, PROMEP, DAIP-UG, CAIP-UG, the "Instituto

Appendix A: Perfect fluid halos are Newtonian

The three conditions below, when satisfied simultaneously, guarantee the viability of a Newtonian description: i) Weak gravitational fields, g_µν = η_µν + γ_µν, ii) Negligible stresses when compared to the mass-energy density, p_ij ≪ ρ, and, iii) Relative motions much smaller than the speed of light, u^i ≪ 1. (See for instance Ref. [35] for details.) Here η_µν = diag(−1, 1, 1, 1) is the Minkowski spacetime metric, γ_µν ≪ 1 a measure for the deviations with respect to the Minkowski metric, p_ij the spatial stresses, and u^µ = (u^0, u^i) the four-velocity for the particles in the configuration.

Perfect fluids satisfy p_∥ = p_⊥. Combining this identity with Eqs. (7b) and (7c), we obtain a differential equation for the metric function h(r), Eq. (A1). Astrophysical observations provide ℓ(r) ≲ 10^{−5}. To the first order in ℓ, the solution to Eq. (A1) that is regular at the origin takes the form h(r) = 1 + ℓ(r). Introducing this expression into Eqs. (7), we can read u^i_test ∼ O(ℓ^{1/2}), γ, ρ ∼ O(ℓ), p ∼ O(ℓ²), and u^i_halo = 0, i.e. perfect fluid halos are Newtonian objects. This is no longer true for the static scalar field configurations, where p_⊥ = −ρ; see Eqs. (2) above.
Particle dark matter: Evidence, candidates and constraints. G Bertone, D Hooper, J Silk, hep-ph/0404175Phys. Rept. 405G. Bertone, D. Hooper and J. Silk, "Particle dark matter: Evidence, candidates and constraints," Phys. Rept. 405 279-390 (2005) [hep-ph/0404175];
Dark matter evidence, particle physics candidates and detection methods. L Bergstrom, arXiv:1205.4882Annalen Phys. 524L. Bergstrom, "Dark matter evidence, particle physics candidates and detection methods," Annalen Phys. 524 479-496 (2012) [arXiv:1205.4882]
Toward a consistent picture for CRESST, CoGeNT and DAMA. C Kelso, D Hooper, M R Buckley, arXiv:1110.5338Phys. Rev. D. 8543515C. Kelso, D. Hooper and M.R. Buckley, "Toward a consistent picture for CRESST, CoGeNT and DAMA," Phys. Rev. D 85 043515 (2012) [arXiv:1110.5338];
Chasing a consistent picture for dark matter direct searches. C Arina, arXiv:1210.4011Phys. Rev. D. 86123527C. Arina, "Chasing a consistent picture for dark matter direct searches," Phys. Rev. D 86 123527 (2012) [arXiv:1210.4011]
An evidence for indirect detection of dark matter from galaxy clusters in Fermi-LAT data. A Hektor, M Raidal, E Tempel, arXiv:1207.4466Astrophys. J. 76222A. Hektor, M. Raidal and E. Tempel, "An evidence for indirect detection of dark matter from galaxy clusters in Fermi-LAT data, " Astrophys. J. 762 L22 (2013) [arXiv:1207.4466];
Dissecting cosmic-ray electron-positron data with Occam's Razor: the role of known Pulsars. S Profumo, arXiv:0812.4457Central Eur. J. Phys. 10S. Profumo, "Dissecting cosmic-ray electron-positron data with Occam's Razor: the role of known Pulsars," Central Eur. J. Phys. 10 1-31 (2012) [arXiv:0812.4457]
Fuzzy cold dark matter: the wave properties of ultralight particles. W Hu, R Barkana, A Gruzinov, astro-ph/0003365Phys. Rev. Lett. 8511581161W. Hu, R. Barkana and A. Gruzinov, "Fuzzy cold dark matter: the wave properties of ultralight particles," Phys. Rev. Lett. 85 11581161 (2000) [astro-ph/0003365];
Bose-Einstein condensation of dark matter axions. P Sikivie, Q Yang, arXiv:0901.1106Phys. Rev. Lett. 103111301P. Sikivie and Q. Yang, "Bose-Einstein condensation of dark matter axions," Phys. Rev. Lett. 103 111301 (2009) [arXiv:0901.1106]
Seeding supermassive black holes with a non-vortical dark-matter subcomponent. I Sawicki, V Marra, W Valkenburg, arXiv:1307.6150I. Sawicki, V. Marra and W. Valkenburg, "Seeding supermassive black holes with a non-vortical dark-matter subcomponent," (2013) [arXiv:1307.6150]
The interested reader can look also at the pseudovirial theorem of Rosen. G H Derrick, ; G Rosen, ibid. 7J. Math. Phys. 52071J. Math. Phys.G.H. Derrick, "Comments on nonlinear wave equations as models for elementary particles," J. Math. Phys. 5 1252 (1964); The interested reader can look also at the pseudovirial theorem of Rosen, G. Rosen, "Existence of particlelike solutions to nonlinear field theories," J. Math. Phys. 7 2066 (1966); ibid. 7 2071 (1966)
Cosmic strings and other topological defects. A Vilenkin, E P S Shellard, Cambridge University Press580A. Vilenkin and E.P.S. Shellard, "Cosmic strings and other topological defects," Cambridge University Press, 580 pages, (2000)
Nontopological solitons. T D Lee, Y Pang, Phys. Rep. 221251350T.D. Lee and Y. Pang , "Nontopological solitons," Phys. Rep. 221, 251350 (1992)
Static solutions of Einstein's field equations for spheres of fluid. R Tolman, Phys. Rev. 55R.C Tolman, "Static solutions of Einstein's field equa- tions for spheres of fluid," Phys. Rev. 55 364-373 (1939)
Neutron star observations: Prognosis for equation of state constraints. J M Lattimer, P Maddapa, astro-ph/0612440Phys. Rept. 442J.M. Lattimer and P. Maddapa, "Neutron star observations: Prognosis for equation of state constraints," Phys. Rept. 442 109-165 (2007) [astro-ph/0612440]
A universal angular momentum profile for galactic halos. J S Bullock, astro-ph/0011001Astrophys. J. 555J.S. Bullock et al., "A universal angular momentum profile for galactic halos," Astrophys. J. 555 240-257 (2001) [astro-ph/0011001]
Great circle tidal streams: Evidence for a nearly spherical massive dark halo around the Milky Way. R Ibata, astro-ph/0004011Astrophys. J. 551R. Ibata et al., "Great circle tidal streams: Evidence for a nearly spherical massive dark halo around the Milky Way," Astrophys. J. 551 294-311 (2001) [astro-ph/0004011]
Does the Sagittarius Stream constrain the Milky Way halo to be triaxial?. R Ibata, arXiv:1212.4958Astrophys. J. 76515R. Ibata et al., "Does the Sagittarius Stream constrain the Milky Way halo to be triaxial?," Astrophys. J. 765 L15 (2013) [arXiv:1212.4958]
k-inflation. C Armendariz-Picon, T Damour, M Mukhanov, hep-th/9904075Phys. Lett. B. 458C. Armendariz-Picon, T. Damour and M. Mukhanov, "k-inflation," Phys. Lett. B 458 209-218 (1999) [hep-th/9904075];
Essentials of k-essence. C Armendariz-Picon, V F Mukhanov, P J Steinhardt, astro-ph/0006373Phys. Rev. D. 63103510C. Armendariz-Picon, V.F. Mukhanov and P.J. Steinhardt, "Essentials of k-essence," Phys. Rev. D 63 103510 (2001) [astro-ph/0006373];
Dynamics of k-essence. A D , gr-qc/0511158Class. Quant. Grav. 23A.D. Rendall, "Dynamics of k-essence," Class. Quant. Grav. 23 1557- 1570 (2006) [gr-qc/0511158]
Coherent scalar field oscillations in an expanding universe. M S Turner, Phys. Rev. D. 281243M.S. Turner, "Coherent scalar field oscillations in an expanding universe," Phys. Rev. D 28 1243 (1983)
Purely kinetic k-essence as unified dark matter. R J Scherrer, astro-ph/0402316Phys. Rev. Lett. 9311301R.J. Scherrer, "Purely kinetic k-essence as unified dark matter," Phys. Rev. Lett. 93 011301 (2004) [astro-ph/0402316]
The Galileon as a local modification of gravity. A Nicolis, R Rattazzi, E Trincherini, arXiv:0811.2197Phys. Rev. D. 7964036A. Nicolis, R. Rattazzi and E. Trincherini, "The Galileon as a local modification of gravity," Phys. Rev. D 79 064036 (2009) [arXiv:0811.2197]
Spherical scalar field halo in galaxies. T Matos, F S Guzman, D Nuñez, astro-ph/0003398Phys. Rev. D. 6261301T. Matos, F.S. Guzman and D. Nuñez, "Spherical scalar field halo in galaxies," Phys. Rev. D 62 061301 (2000) [astro-ph/0003398]
Haloes of k-essence. C Armendariz-Picon, E A Lim, astro-ph/0505207JCAP. 05087C. Armendariz-Picon and E.A. Lim, "Haloes of k-essence," JCAP 0508 007 (2005) [astro-ph/0505207];
Halos of unified dark matter scalar field. D Bertacca, N Bartolo, S Matarrese, arXiv:0712.0486JCAP. 08055D. Bertacca, N. Bartolo and S. Matarrese, "Halos of unified dark matter scalar field," JCAP 0805 005 (2008) [arXiv:0712.0486]
Testing DM halos using rotation curves and lensing: warning on the determination of the halo mass. D Nuñez, A X Gonzalez-Morales, J L Cervantes-Cota, T Matos, arXiv:1006.4875Phys. Rev. D. 8224025D. Nuñez, A.X. Gonzalez-Morales, J.L. Cervantes-Cota and T. Matos, "Testing DM halos using rotation curves and lensing: warning on the determination of the halo mass," Phys. Rev. D 82 024025 (2010) [arXiv:1006.4875]
Pressure as a source of gravity. J Ehlers, I Ozsvath, E L Schucking, Y Shang, gr-qc/0510041Phys. Rev. D. 72124003J. Ehlers, I. Ozsvath, E.L. Schucking, Y. Shang, "Pressure as a source of gravity," Phys. Rev. D 72 124003 (2005) [gr-qc/0510041]
Bound states of nonlinear scalar field. T Kodama, K C Chung, A F Da, F Teixeira, Il Nuovo Cimento. 46206T. Kodama, K.C. Chung and A.F. da F. Teixeira, "Bound states of nonlinear scalar field," Il Nuovo Cimento 46 206 (1978)
The various kinematics of dwarf irregular galaxies in nearby groups and their dark matter distributions. S Côté, C Carignan, K C Freeman, Astrophys. J. 120S. Côté, C. Carignan, and K.C. Freeman, "The various kinematics of dwarf irregular galaxies in nearby groups and their dark matter distributions," Astrophys. J. 120 3027-3059 (2000);
The observed properties of dark matter on small spatial scales. G Gilmore, astro-ph/0703308Astrophys. J. 663G. Gilmore et al., "The observed properties of dark matter on small spatial scales," Astrophys. J. 663 948-959 (2007) [astro-ph/0703308]
An alternative approach to the galactic dark matter problem. U Nucamendi, M Salgado, D Sudarsky, gr-qc/0011049Phys. Rev. D. 63125016U. Nucamendi, M. Salgado and D. Sudarsky, "An alternative approach to the galactic dark matter problem," Phys. Rev. D 63 125016 (2001) [gr-qc/0011049]
Kinematic constraints on the stellar and dark matter content of spiral and S0 galaxies. M J Williams, M Bureau, M Cappellari, arXiv:0909.0680Mon. Not. Roy. Astron. Soc. 400M.J. Williams, M. Bureau and M. Cappellari, "Kinematic constraints on the stellar and dark matter content of spiral and S0 galaxies," Mon. Not. Roy. Astron. Soc. 400 1665-1689 (2009) [arXiv:0909.0680]
High-Resolution Rotation Curves and Galaxy Mass Models from THINGS. W J G De Blok Et Al, arXiv:0810.2100Astron. J. 136W.J.G. de Blok et al., "High-Resolution Rotation Curves and Galaxy Mass Models from THINGS," Astron. J. 136 2648-2719 (2008) [arXiv:0810.2100]
Systems of selfgravitating particles in general relativity and the concept of an equation of state. R Ruffini, S Bonazzola, Phys. Rev. 187R. Ruffini and S. Bonazzola, "Systems of selfgravitating particles in general relativity and the concept of an equation of state," Phys. Rev. 187 1767-1783 (1969);
Dynamical Boson Stars. S L Liebling, C Palenzuela, arXiv:1202.5809Living Rev. Rel. 156S.L. Liebling and C. Palenzuela, "Dynamical Boson Stars," Living Rev. Rel. 15 6 (2012) [arXiv:1202.5809]
Perfect fluids in general relativity: velocity potentials and a variational principle. B F Schutz, Phys. Rev. D. 2B.F. Schutz, "Perfect fluids in general relativity: velocity potentials and a variational principle," Phys. Rev. D 2 2762-2773 (1970);
Variational aspects of relativistic field theories, with applications to perfect fluids. B F Schutz, R Sorkin, Ann. Phys. 107B.F. Schutz and R. Sorkin, "Variational aspects of relativistic field theories, with applications to perfect fluids," Ann. Phys. 107 1-43 (1977);
Action functionals for relativistic perfect fluids. J D Brown, gr-qc/9304026Class. Quant. Grav. 10J.D. Brown, "Action functionals for relativistic perfect fluids," Class. Quant. Grav. 10 1579-1606 (1993) [gr-qc/9304026]
The Homogeneous scalar field and the wet dark sides of the universe. A Diez-Tejedor, A Feinstein, gr-qc/0604031Phys. Rev. D. 7423530A. Diez-Tejedor and A. Feinstein, "The Homogeneous scalar field and the wet dark sides of the universe," Phys. Rev. D 74 023530 (2006) [gr-qc/0604031]
Note on scalars, perfect fluids, constrained field theories, and all that. A Diez-Tejedor, arXiv:1309.4756A. Diez-Tejedor, "Note on scalars, perfect fluids, constrained field theories, and all that," (2013) [arXiv:1309.4756]
Oscillating soliton stars. E Seidel, W M Suen, Phys. Rev. Lett. 66E. Seidel and W.M. Suen, "Oscillating soliton stars," Phys. Rev. Lett. 66 1659-1662 (1991)
Self-gravitating system made of axions. J Barranco, A Bernal, arXiv:1001.1769Phys. Rev. D. 8343525J. Barranco and A. Bernal, "Self-gravitating system made of axions," Phys. Rev. D 83 043525 (2011) [arXiv:1001.1769]
Selfgravitating bosons at nonzero temperature. N Bilic, H Nikolic, gr-qc/0006065Nucl. Phys. B. 590N. Bilic and H. Nikolic, "Selfgravitating bosons at nonzero temperature," Nucl. Phys. B 590 575-595 (2000) [gr-qc/0006065]
Dwarf galaxy sized monopoles as dark matter?. J Evslin, S B Gudnason, arXiv:1202.0560J. Evslin and S.B. Gudnason, "Dwarf galaxy sized monopoles as dark matter?," (2012) [arXiv:1202.0560]
General relativity. R M Wald, University of Chicago Press506Sec. 4.4R.M. Wald, "General relativity," University of Chicago Press (June 1, 1984), 506pp, Sec. 4.4
Gravitational field of a global monopole. M Barriola, A Vilenkin, Phys. Rev. Lett. 63341M. Barriola and A. Vilenkin, "Gravitational field of a global monopole," Phys. Rev. Lett. 63 341 (1989);
Repulsive gravitational effects of global monopoles. D Harari, C Lousto, Phys. Rev. D. 42D. Harari and C. Lousto, "Repulsive gravitational effects of global monopoles," Phys. Rev. D 42 2626-2631 (1990);
The (in)stability of global monopoles revisited. A Achucarro, J Urrestilla, hep-ph/0003145Phys. Rev. Lett. 85A. Achucarro and J. Urrestilla, "The (in)stability of global monopoles revisited," Phys. Rev. Lett. 85 3091-3094 (2000) [hep-ph/0003145]
|
[] |
[
"Cameron-Liebler sets in Hamming graphs",
"Cameron-Liebler sets in Hamming graphs"
] |
[
"Jun Guo \nDepartment of Mathematics\nLangfang Normal University\n065000LangfangChina\n",
"Lingyu Wan \nDepartment of Mathematics\nLangfang Normal University\n065000LangfangChina\n"
] |
[
"Department of Mathematics\nLangfang Normal University\n065000LangfangChina",
"Department of Mathematics\nLangfang Normal University\n065000LangfangChina"
] |
[] |
In this paper, we discuss Cameron-Liebler sets in Hamming graphs, obtain several equivalent definitions and present all classification results.
| null |
[
"https://arxiv.org/pdf/2005.02227v2.pdf"
] | 218,502,215 |
2005.02227
|
db0b9ff959f91268ecc50b44a9181dd663d557c3
|
Cameron-Liebler sets in Hamming graphs
Jun Guo
Department of Mathematics
Langfang Normal University
065000LangfangChina
Lingyu Wan
Department of Mathematics
Langfang Normal University
065000LangfangChina
Cameron-Liebler sets in Hamming graphs
arXiv:2005.02227v1 [math.CO] 3 May 2020. AMS classification: 05E30, 51E30. Key words: Cameron-Liebler set, Hamming graph, Word
In this paper, we discuss Cameron-Liebler sets in Hamming graphs, obtain several equivalent definitions and present all classification results.
Introduction
Cameron-Liebler sets of lines were first introduced by Cameron and Liebler [3] in their study of collineation groups of PG (3, q). There have been many results for Cameron-Liebler sets of lines in the projective space PG (3, q). See [14,15,16] for classification results, and [2,4,8,10] for the constructions of two non-trivial examples. Over the years, there have been many interesting extensions of this result. See [1,11,13,17] for Cameron-Liebler sets of k-spaces in PG(n, q), [5] for Cameron-Liebler sets of generators in polar spaces, and [6] for Cameron-Liebler classes in finite sets.
One of the main reasons for studying Cameron-Liebler sets is that there are many connections to other geometric and combinatorial objects, such as blocking sets, intersecting families, linear codes, and association schemes. Filmus and Ihringer [9] investigated recently Cameron-Liebler sets for several classical distance-regular graphs, including Johnson graphs, Grassmann graphs, dual polar graphs, and bilinear forms graphs. Their research stimulates us to consider Cameron-Liebler sets in Hamming graphs.
For positive integers n and q with q ≥ 2, let [n] = {1, 2, . . . , n} and Q be an alphabet of q symbols. For i = 1, 2, . . . , n, a pair (A, f ) is called an i-word if A is an i-element subset of [n] and f is a function from A to Q. In particular, n-words are called words. Let M(i; n, q) be the collection of all i-words. For (A, f ) ∈ M(i; n, q) and (B, g) ∈ M(j; n, q), we say that
(B, g) contains (A, f ), denoted by (A, f ) (B, g), if A ⊆ B and g| A = f , where g| A is the restriction of g on A.
For convenience, we write M(i; n, q) as M i for i = 1, 2, . . . , n. Let 1 ≤ i ≤ j ≤ n. For a fixed (B, g) ∈ M j , let M i (B, g) be the collection of all i-words contained in (B, g). For : f (a) = g(a)}| = 1. The Hamming graph H(n, q) is a distancetransitive graph with q n vertices and diameter n. Note that the distance between two vertices ([n], f ) and ([n], g) is n − |{a ∈ [n] : f (a) = g(a)}|.
a fixed (A, f ) ∈ M i , let M ′ j (A, f )
We always assume that vectors are regarded as column vectors. For any vector α whose positions correspond to elements in a set, we denote its value on the position corresponding to an element a by (α) a . The characteristic vector χ S of a subset S of M n is the vector whose positions correspond to the elements of M n , such that (χ S ) ( In this paper, we consider Cameron-Liebler sets in the Hamming graph H(n, q). The rest of this paper is structured as follows. In Section 2, we give several equivalent definitions for these Cameron-Liebler sets in H(n, q). In Section 3, we obtain several properties of these Cameron-Liebler sets in H(n, q). By using the properties, we give the following classification result: If q ≥ 2, then any non-empty Cameron-Liebler set in H(n, q) is trivial. [7].) The distinct eigenvalues of G are q(n − 1), 0 and −q, and the corresponding multiplicities are 1, (q − 1)n and n − 1, respectively.
Several equivalent definitions
Lemma 2.2 Let 1 ≤ i ≤ j ≤ n.
Then the following hold:
(i) The size of M_i is q^i \binom{n}{i}.
(ii) For a fixed (B, g) ∈ M_j, the size of M_i(B, g) is \binom{j}{i}.
(iii) For a fixed (A, f) ∈ M_i, the size of M′_j(A, f) is q^{j−i} \binom{n−i}{j−i}.
(iv) For a fixed ([n], f) ∈ M_n, the size of M_n([n], f) is (q − 1)^n.
(v) For fixed ({a}, f) ∈ M_1 and ([n], g) ∈ M_n, the size of M′_n({a}, f) ∩ M_n([n], g) is 0 if ({a}, f) is contained in ([n], g), and (q − 1)^{n−1} otherwise.
Lemma 2.3
The rank of the incidence matrix M is (q − 1)n + 1 over the real field R.
N_{({a},f),({b},g)} = q^{n−1} if ({a}, f) = ({b}, g); q^{n−2} if ({a}, f) ∨ ({b}, g) ∈ M_2; and 0 otherwise. This implies that

N = q^{n−1} I + q^{n−2} A ,    (1)
where I is the identity matrix of order qn, and A is the adjacency matrix of the graph G. By Lemma 2.1 and (1), we obtain that the distinct eigenvalues of N are q n−1 n, q n−1 and 0, and the corresponding multiplicities are 1, (q − 1)n and n − 1, respectively. It follows that the rank of N is (q − 1)n + 1. , g) are disjoint. Let K be the adjacent matrix of HK(n, q). The eigenvalues and the dimensions of the eigenspaces of HK(n, q) is described in [12]. [12].) The distinct eigenvalues of HK(n, q) are λ j = (−1) j (q − 1) n−j , j = 0, 1, . . . , n, and the eigenspace V j corresponding to λ j has dimension (q − 1) j n j .
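Lemma 2.2(i) and Lemma 2.3 can be confirmed by brute force for small parameters. The sketch below (plain Python/NumPy; the choice n = q = 3 is arbitrary) builds the incidence matrix M between one-words and words and compares its rank with (q − 1)n + 1.

```python
# Brute-force sanity check (a sketch, small parameters only) of Lemma 2.2(i) and
# Lemma 2.3: enumerate M_1 and M_n, build the incidence matrix M, and compare its
# rank over R with (q-1)*n + 1.
import itertools
import numpy as np

n, q = 3, 3
words = list(itertools.product(range(q), repeat=n))        # M_n, size q^n
one_words = [(a, s) for a in range(n) for s in range(q)]   # M_1, size q*n
assert len(words) == q**n and len(one_words) == q*n        # Lemma 2.2(i) for i = n, 1

# M_{(a,s),w} = 1 iff the word w contains the one-word ({a}, s)
M = np.array([[1 if w[a] == s else 0 for w in words] for (a, s) in one_words])
print(np.linalg.matrix_rank(M), (q - 1)*n + 1)             # both are 7 here
```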
Lemma 2.5 Im(M^t) = V_0 ⊕ V_1, where V_0 = ⟨j⟩ is the span of the all-one vector j.
Proof. By Lemma 2.3, the matrix M has qn rows with rank(M ) = (q − 1)n + 1. By Lemma 2.4, dim V 1 = (q − 1)n, and therefore dim(
V 0 ⊕ V 1 ) = dim Im(M t ). From Lemma 2.2 (iv) and (v), we deduce that Kχ M ′ n ({a},f ) = (q − 1) n−1 (j − χ M ′ n ({a},f ) ) and Kj = (q − 1) n j, which imply that K(χ M ′ n ({a},f ) − q −1 j) = −(q − 1) n−1 (χ M ′ n ({a},f ) − q −1 j). It follows that χ M ′ n ({a},f ) − q −1 j ∈ V 1 . Therefore, we have χ M ′ n ({a},f ) ∈ V 0 ⊕ V 1 . Since χ M ′ n ({a},f ) is the column of M t corresponding to the element ({a}, f ), we have Im(M t ) ⊆ V 0 ⊕ V 1 . From dim(V 0 ⊕ V 1 ) = dim Im(M t ), we deduce that Im(M t ) = V 0 ⊕ V 1 . ✷
The incidence vector v S of a subset S of M 1 is the vector whose positions correspond to the elements of M 1 , such that (v S ) τ = 1 if τ ∈ S and 0 otherwise.
Lemma 2.6 Let ([n], f ) ∈ M n . Then χ Mn([n],f ) − (q − 1) n−1 (q −(n−1) j − χ {([n],f )} ) ∈ ker(M ).χ Mn([n],f ) − (q − 1) n−1 (q −(n−1) j − χ {([n],f )} ) ∈ ker(M ), as desired. ✷
An n-partition of M 1 is a set P of words in M n such that any one-word is contained exactly in one word of P. Let P be an n-partition of M 1 . Every subset of P is called a partial n-partition of M 1 . A pair of conjugate switching sets is a pair of disjoint partial n-partitions of M 1 that cover the same subset of M 1 . Now, we give several equivalent definitions for a Cameron-Liebler set in M n .
Theorem 2.7 Let L be a non-empty set in M n with |L| = xq n−1 . Then the following properties are equivalent.
(i) χ_L ∈ Im(M^t).
(ii) χ_L ∈ ker(M)^⊥.
(iii) For every ([n], f) ∈ M_n, the number of elements in L disjoint to ([n], f) is (q − 1)^{n−1}(x − (χ_L)_{([n],f)}).
(iv) The vector v = χ_L − xq^{−1} j is a vector in V_1.
(v) χ_L ∈ V_0 ⊕ V_1.
(vi) |L ∩ P| = x for every n-partition P of M 1 . Since χ L ∈ ker(M ) ⊥ , we have
χ Mn([n],f ) · χ L − (q − 1) n−1 (q −(n−1) j · χ L − χ {([n],f )} · χ L ) = 0 ⇔ |M n ([n], f ) ∩ L| − (q − 1) n−1 (q −(n−1) |L| − (χ L ) ([n],f ) ) = 0 ⇔ |M n ([n], f ) ∩ L| = (q − 1) n−1 (x − (χ L ) ([n],f ) ).
The last equality shows that the desired result follows.
(iii) ⇒ (iv): From (iii), we deduce that
Kχ L = (q − 1) n−1 (xj − χ L ) = −λ 1 (xj − χ L ).
By Lemma 2.6, we have Kj = −λ 1 (q − 1)j, and therefore
Kv = K(χ L − xq −1 j) = −λ 1 (xj − χ L ) + λ 1 (q − 1)xq −1 j = λ 1 (χ L + x((q − 1)q −1 − 1)j) = λ 1 (χ L − xq −1 j) = λ 1 v.
By Lemma 2.4, we obtain v ∈ V 1 .
(iv) ⇒ (v): From V 0 = j , we deduce that the desired result follows.
(v) ⇒ (i): By Lemma 2.5, the desired result follows. Now we show that the property (vi) is also equivalent to the other properties.
(ii) ⇒ (vi): Let P be an n-partition of M 1 . Since M χ P = j, by Lemma 2.2 (iii), χ P − q −(n−1) j ∈ ker(M ). Since χ L ∈ ker(M ) ⊥ , we have
0 = χ L · (χ P − q −(n−1) j) = |L ∩ P| − q −(n−1) |L|, which implies that |L ∩ P| = q −(n−1) |L| = x.(x − (χ L ) ([n],f ) ) ℓ 1 ℓ 2 = (x − (χ L ) ([n],f ) )(q − 1) n−1 .
Next, we show that property (vii) is equivalent with the other properties.
(ii) ⇒ (vii): Since R and R ′ cover the same subset of M 1 , we have χ R − χ R ′ ∈ ker(M ), which implies that
χ L · (χ R − χ R ′ ) = χ L · χ R − χ L · χ R ′ = 0. It follows that |L ∩ R| = χ L · χ R = χ L · χ R ′ = |L ∩ R ′ |.
(vii) ⇒ (vi): For any two n-partitions P 1 and P 2 of M 1 , the the sets P 1 \ P 2 and P 2 \ P 1 form a pair of conjugate switching sets. So |L ∩ (P 1 \ P 2 )| = |L ∩ (P 2 \ P 1 )|, which implies that |L ∩ P 1 | = |L ∩ P 2 )| = c. Now we prove c = x = |L|q −(n−1) . Let ℓ i , for i = 0, 1, be the number of n-partitions of M 1 that contain i fixed pairwise disjoint elements in M n . We count the number of couples (([n], f ), P), where ([n], f ) ∈ M n and P is an n-partition of M 1 containing ([n], f ), then ℓ 0 q = ℓ 1 q n , which implies that ℓ 0 /ℓ 1 = q n−1 .
By counting the number of couples (([n], f), P), where ([n], f) ∈ L and P is an n-partition of M_1 containing ([n], f), the number of elements in L ∩ P equals |L| ℓ_1/ℓ_0 = |L| q^{−(n−1)} = x. ✷
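Property (vi) of Theorem 2.7 is easy to test exhaustively for small n and q. The following brute-force sketch enumerates all n-partitions of M_1 (for q = 2 these are simply the pairs of complementary words) and checks that each meets the trivial Cameron-Liebler set M′_n({a}, f) in exactly x = 1 word; the parameter values are arbitrary.

```python
# Exhaustive check (a sketch, q = 2 only here) of Theorem 2.7(vi): every n-partition
# of M_1 meets the trivial Cameron-Liebler set L = M'_n({a}, f) in exactly x = 1 word.
# An n-partition is modelled as a set of q words covering every one-word exactly once.
import itertools

n, q, a, s = 3, 2, 0, 1
words = list(itertools.product(range(q), repeat=n))
L = {w for w in words if w[a] == s}

partitions = [P for P in itertools.combinations(words, q)
              if all(sorted(w[i] for w in P) == list(range(q)) for i in range(n))]
print(len(partitions), all(len(L.intersection(P)) == 1 for P in partitions))  # 4 True
```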
Classification results
In this section, we give some examples and list all classification results for Cameron-Liebler sets in the Hamming graph H(n, q). We begin with a simple lemma.
Lemma 3.1 Let L and L ′ be two Cameron-Liebler sets in H(n, q) with parameters x and x ′ , respectively. Then the following hold:
(i) 0 ≤ x ≤ q.
(ii) The set of all elements in M n not in L is a Cameron-Liebler set in H(n, q) with parameter q − x.
(iii) If L ∩ L ′ = ∅, then L ∪ L ′ is a Cameron-Liebler set in H(n, q) with parameter x + x ′ .
(iv) If L ′ ⊆ L, then L \ L ′ is a Cameron-Liebler set in H(n, q) with parameter x − x ′ . H(n, q). Next, we give all classification results for Cameron-Liebler sets in H(n, q). We need the following definition.
Now, we give some examples of Cameron-Liebler sets in
Let ω S be a vector over R whose positions correspond to the elements of a finite set S. The spectrum of ω S is the set {(ω S ) s : s ∈ S} := spec(ω S ), and the spectrum volume of ω S is the size of the set spec(ω S ). Note that the spectrum volume of the characteristic vector of each Cameron-Liebler set in H(n, q) is at most 2. Since q i=1 χ M ′ n ({a},fai) = j, we have spec(ω − kj) = {0} ∪ i∈Ka {k ai − k a }, which implies that there exists some d such that d = k ai − k a for each i ∈ K a since |spec(ω − kj)| = 2. Then w = kj + d i∈Ka χ M ′ n ({a},fai) . Therefore, the desired result follows. If spec(ω) = {0, 1}, by the conclusion above, we have spec(ω) = {k, k + d} = {0, 1}, then either k = 0 and d = 1, which imply that ω = i∈Ka χ M ′ n ({a},fai) ; or k = 1 and d = −1, which imply that w = j − i∈Ka χ M ′ n ({a},fai) = i∈[q]\Ka χ M ′ n ({a},fai) . ✷ Since |L| = xq n−1 = tq n−1 , we have x = t, and thus L is trivial. ✷ By Lemma 3.2 (ii), the Cameron-Liebler set in H(n, q) with parameter x = q is a union of q intersecting families, and therefore is trivial. By Theorem 3.4, we obtain the following classification result.
ω = kj + d Σ_{i=1}^t χ_{M′_n({a},f_i)} . Moreover, if spec(ω) = {0, 1}, then either ω = Σ_{i=1}^t χ_{M′_n({a},f_i)} or ω = Σ_{i=t+1}^q χ_{M′_n({a},f_i)}. ω − kj = Σ_{a∈[n]} Σ_{i=1}^q (k_ai − k_a) χ_{M′_n({a},f_ai)} = Σ_{a∈A} Σ_{i∈K_a} (k_ai − k_a) χ_{M′_n({a},f_ai)} . Next, we show |A| = 1. Suppose A ⊇ {b, c}. Since K_a ≠ ∅ for each a ∈ A, there exist some i_b, i_c ∈ [q] such that i_b ∈ K_b and i_c ∈ K_c. Let
Then
(ω − kj) ([n],f ) = 0, (ω − kj) ([n],g1) = k bi b − k b , (ω − kj) ([n],g2) = k cic − k c , (ω − kj) ([n],g) = k bi b − k b + k cic − k c , which imply that |spec(ω − kj)| ≥ 3. Since spec(ω − kj) = {b − k : b ∈ spec(w)},
Theorem 3.5 Let q ≥ 2. Then any non-empty Cameron-Liebler set in H(n, q) is trivial.
[n],f ) = 1 if ([n], f ) ∈ S and 0 otherwise. The all-one vector will be denoted by j. Let M be the incidence matrix with rows indexed with M 1 and columns indexed with M n such that entry M (A,f ),([n],g) = 1 if and only if ([n], g) contains (A, f ). A subset L of M n is called a Cameron-Liebler set in the Hamming graph H(n, q) with parameter x = q −(n−1) |L| if χ L ∈ Im(M t ), where M t is the transpose matrix of M . A family F ⊆ M n is called intersecting if ([n], f ) and ([n], g) are intersecting for all ([n], f ), ([n], g) ∈ F . A Cameron-Liebler set L in H(n, q) with parameter x is called trivial if L is a union of x intersecting families.
For
({a}, f ), ({b}, g) ∈ M 1 with a = b, let ({a}, f ) ∨ ({b}, g) = ({a, b}, h), where h(a) = f (a) and h(b) = g(b). Let G be the graph with the vertex set M 1 , and two vertices ({a}, f ) and ({b}, g) are adjacent if ({a}, f ) ∨ ({b}, g) ∈ M 2 . Then G is a complete n-partite graph.
Proof. Let N = M M t . Then both N and M have the same rank over R. Note that N is a qn × qn matrix with rows and columns indexed with the elements in M 1 . For any ({a}, f ), ({b}, g) ∈ M 1 , the entry N ({a},f ),({b},g) is the number of elements in M n containing both ({a}, f ) and ({b}, g). By Lemma 2.2 (iii), we have
✷
The Hamming Kneser graph HK(n, q) has the vertex set M n , and two vertices ([n], f ) and ([n], g) are adjacent if ([n], f ) and ([n]
Proof. By Lemma 2.2 (v), we haveM χ Mn([n],f ) = (q − 1) n−1 (j − v M1([n],f ) ). By M j = q n−1 j and M χ {([n],f )} = v M1([n],f ) , we obtain M χ Mn([n],f ) = (q − 1) n−1 (q −(n−1) M j − M χ {([n],f )} ), which implies that
(vii) For every pair of conjugate switching sets R and R ′ , we have |L ∩ R| = |L ∩ R ′ |. Proof. (i) ⇔ (ii): Since Im(M t ) = ker(M ) ⊥ , the desired result follows. (ii) ⇒ (iii): Let ([n], f ) ∈ M n . By Lemma 2.6, we obtain χ Mn([n],f ) − (q − 1) n−1 (q −(n−1) j − χ {([n],f )} ) ∈ ker(M ).
(vi) ⇒ (iii): Let ℓ i , for i = 1, 2, be the number of n-partitions of M 1 that contain i fixed pairwise disjoint elements in M n . Since the Hamming graph H(n, q) is distance-transitive, this number only depends on i, and not on the chosen elements. For a fixed ([n], f ) ∈ M n , if we count the number of couples (([n], g), P), where ([n], g) ∈ M n such that ([n], f ) and ([n], g) are disjoint and P is an n-partition of M 1 containing ([n], f ) and ([n], g), by Lemma 2.2 (iv), we have ℓ 1 (q − 1) = ℓ 2 (q − 1) n , which implies that ℓ 1 /ℓ 2 = (q − 1) n−1 . For a fixed ([n], f ) ∈ M n , if we count the number of couples (([n], g), P), where ([n], g) ∈ L such that ([n], f ) and ([n], g) are disjoint and P is an n-partition of M 1 containing ([n], f ) and ([n], g), then the number of subspaces in L disjoint to ([n], f ) is
Lemma 3.2 (i) For a fixed element ({a}, f ) ∈ M 1 , the set M ′ n ({a}, f ) is a Cameron-Liebler set in H(n, q) with parameter 1. (ii) For each integer x with 0 ≤ x ≤ q, there exists a Cameron-Liebler set in H(n, q) with parameter x. Proof. (i). Since the characteristic vector χ M ′ n ({a},f ) is the row of M corresponding to the element ({a}, f ), by Theorem 2.7 (i), the set M ′ n ({a}, f ) is a Cameron-Liebler set in H(n, q) with parameter 1. (ii). First note that a Cameron-Liebler set in H(n, q) with parameter 0 is the empty set. Let ({a}, f 1 ), ({a}, f 2 ), . . . , ({a}, f q ) be q pairwise different elements in M 1 . Then M ′ n ({a}, f i ) ∩ M ′ n ({a}, f j ) = ∅ for all i ≠ j, and the set M ′ n ({a}, f i ) is a Cameron-Liebler set in H(n, q) with parameter 1 for each i. By Lemma 3.1 (iii), the set ∪_{i=1}^x M ′ n ({a}, f i ) is a Cameron-Liebler set in H(n, q) with parameter x. ✷
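Lemma 3.2(i), together with property (iii) of Theorem 2.7, can also be verified numerically for small parameters. In the sketch below L is the trivial example M′_n({a}, f) with x = 1, and for every word w the number of words of L disjoint from w is compared with (q − 1)^{n−1}(x − (χ_L)_w); the parameters n, q, a, s are arbitrary.

```python
# Numerical check (a sketch) of Lemma 3.2(i) combined with Theorem 2.7(iii):
# for the trivial example L = M'_n({a}, f) (all words with symbol s at position a,
# so x = 1) and every word w, the number of words of L disjoint from w equals
# (q-1)^(n-1) * (x - chi_L(w)).
import itertools

n, q, a, s = 3, 3, 0, 1
words = list(itertools.product(range(q), repeat=n))
L = [w for w in words if w[a] == s]
assert len(L) == 1 * q**(n - 1)                            # |L| = x * q^(n-1)

def disjoint(u, w):                                        # share no one-word
    return all(ui != wi for ui, wi in zip(u, w))

print(all(sum(disjoint(u, w) for u in L) == (q - 1)**(n - 1) * (1 - (w in L))
          for w in words))                                 # True
```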
Lemma 3.3 Let ω ∈ Im(M t ) with |spec(ω)| = 2. Then there exist d, k ∈ R with d ≠ 0, and t (< q) pairwise different elements ({a}, f 1 ), . . . , ({a}, f t ) in M 1 such that
f (a) = î a for each a ∈ [n], g 1 (b) = i b and g 1 (a) = î a for each a ∈ [n] \ {b}, g 2 (c) = i c and g 2 (a) = î a for each a ∈ [n] \ {c}, g(b) = i b , g(c) = i c and g(a) = î a for each a ∈ [n] \ {b, c}.
we have |spec(ω − kj)| = |spec(w)| = 2, a contradiction. So, we complete the proof of |A| = 1. Let A = {a}. Then ω − kj = i∈Ka (k ai − k a )χ M ′ n ({a},fai) .
Theorem 3.4 Let 1 ≤ x ≤ q − 1. Then any Cameron-Liebler set in H(n, q) with parameter x is trivial. Proof. Let L be a Cameron-Liebler set in H(n, q) with parameter x. Then spec(χ_L) = {0, 1}. By Lemma 3.3, there exist t (< q) pairwise different elements ({a}, f_1), . . . , ({a}, f_t) in M_1 such that ω = Σ_{i=1}^t χ_{M′_n({a},f_i)}. Then L = ∪_{i=1}^t M′_n({a}, f_i).
be the collection of all j-words containing (A, f ). Two words ([n], f ) and ([n], g) are called intersecting if there exists some ({a}, h) ∈ M 1 such that ({a}, h) ([n], f ) and ({a}, h) (B, g). For a fixed ([n], f ) ∈ M n , let M n ([n], f ) be the collection of all words disjoint to ([n], f ). The Hamming graph H(n, q) has the vertex set M n , and two vertices ([n], f ) and ([n], g) are adjacent if |{a ∈ [n]
Proof. For each a ∈ [n], let ({a}, f_a1), ({a}, f_a2), . . . , ({a}, f_aq) be q pairwise different elements in M_1. Then there exist k_ai ∈ R for all a ∈ [n] and 1 ≤ i ≤ q, such that ω = Σ_{a∈[n]} Σ_{i=1}^q k_ai χ_{M′_n({a},f_ai)}. Let

k = Σ_{a∈[n]} k_a ,   where k_a = min{k_ai : 1 ≤ i ≤ q} if Σ_{i=1}^q k_ai ≠ 0, and k_a = 0 otherwise.

For each a ∈ [n], the set K_a = {i : k_ai − k_a ≠ 0} is a proper subset of [q] = {1, 2, . . . , q}, which implies that there exists some î_a ∈ [q] such that î_a ∉ K_a. Let A = {a ∈ [n] : K_a ≠ ∅}. Since Σ_{i=1}^q χ_{M′_n({a},f_ai)} = j for each a ∈ [n], we have the expression for ω − kj displayed above.
Cameron-Liebler sets of k-spaces in PG(n, q). A Blokhuis, M Boeck, J , Des. Codes Cryptogr. 87A. Blokhuis, M. De Boeck, J. D'haeseleer, Cameron-Liebler sets of k-spaces in PG(n, q), Des. Codes Cryptogr. 87 (2019) 1839-1856.
The construction of Cameron-Liebler line classes in PG(3, q). A A Bruen, K Drudge, Finite Fields Appl. 5A.A. Bruen, K. Drudge, The construction of Cameron-Liebler line classes in PG(3, q), Finite Fields Appl. 5 (1999) 35-45.
Tactical decompositions and orbits of projective groups. P J Cameron, R A Liebler, Linear Algebra Appl. 46P.J. Cameron, R.A. Liebler, Tactical decompositions and orbits of projective groups, Linear Algebra Appl. 46 (1982) 91-102.
A new family of tight sets in Q + (5, q). J De Beule, J Demeyer, K Metsch, M Rodgers, Des. Codes Cryptogr. 78J. De Beule, J. Demeyer, K. Metsch, M. Rodgers, A new family of tight sets in Q + (5, q), Des. Codes Cryptogr. 78 (2016) 655-678.
Cameron-Liebler sets of generators in finite classical polar spaces. M De Boeck, M Rodgers, L Storme, A Švob, J. Combin. Theory Ser. A. 167M. De Boeck, M. Rodgers, L. Storme, A.Švob, Cameron-Liebler sets of generators in finite classical polar spaces, J. Combin. Theory Ser. A 167 (2019) 340-388.
The Cameron-Liebler problem for sets. M De Boeck, L Storme, A Švob, Discrete Math. 339M. De Boeck, L. Storme, A.Švob, The Cameron-Liebler problem for sets, Discrete Math. 339 (2016) 470-474.
On the spectrum of a complete multipartite graph. F Esser, F Harary, European J. Combin. 1F. Esser, F. Harary, On the spectrum of a complete multipartite graph, European J. Combin. 1 (1980) 211-218.
Cameron-Liebler line classes with parameter x = q 2 −1 2. T Feng, K Momihara, Q Xiang, J. Combin. Theory Ser. A. 133T. Feng, K. Momihara, Q. Xiang, Cameron-Liebler line classes with parameter x = q 2 −1 2 , J. Combin. Theory Ser. A 133 (2015) 307-338.
Boolean degree 1 functions on some classical association schemes. Y Filmus, F Ihringer, J. Combin. Theory Ser. A. 162Y. Filmus, F. Ihringer, Boolean degree 1 functions on some classical association schemes, J. Combin. Theory Ser. A 162 (2019) 241-270.
Derivation of Cameron-Liebler line classes. A L Gavrilyuk, I Matkin, T Pentilla, Des. Codes Cryptogr. 86A.L. Gavrilyuk, I. Matkin, T. Pentilla, Derivation of Cameron-Liebler line classes, Des. Codes Cryptogr. 86 (2018) 231-236.
A L Gavrilyuk, I Y Mogilnykh, Cameron-Liebler line classes in PG(n, 4). 73A.L. Gavrilyuk, I.Y. Mogilnykh, Cameron-Liebler line classes in PG(n, 4), Des. Codes Cryptogr. 73 (2014) 969-982.
C Godsil, K Meagher, Erdős-Ko-Rado Theorem, Algebraic Approaches. CambridgeCambridge University PressC. Godsil, K. Meagher, Erdős-Ko-Rado Theorem: Algebraic Approaches, Cambridge University Press, Cambridge, 2016.
A gap result for Cameron-Liebler k-classes. K Metsch, Discrete Math. 340K. Metsch, A gap result for Cameron-Liebler k-classes, Discrete Math. 340 (2017) 1311- 1318.
The non-existence of Cameron-Liebler line classes with parameter 2 < x < q. K Metsch, Bull. Lond. Math. Soc. 42K. Metsch, The non-existence of Cameron-Liebler line classes with parameter 2 < x < q, Bull. Lond. Math. Soc. 42 (2010) 991-996.
An improved bound on the existende of Cameron-Liebler line classes. K Metsch, J. Combin. Theory Ser. A. 121K. Metsch, An improved bound on the existende of Cameron-Liebler line classes, J. Combin. Theory Ser. A 121 (2014) 89-93.
Cameron-Liebler line classes. M Rodgers, Des. Codes Cryptogr. 68M. Rodgers, Cameron-Liebler line classes, Des. Codes Cryptogr. 68 (2013) 33-37.
Cameron-Liebler k-classes in PG(2k + 1, q). M Rodgers, L Storme, A Vansweevelt, Combinatorica. 38M. Rodgers, L. Storme, A. Vansweevelt, Cameron-Liebler k-classes in PG(2k + 1, q), Combinatorica 38 (2018) 739-757.
|
[] |
[
"Cavity Molecular Dynamics Simulations of Liquid Water under Vibrational Ultrastrong Coupling",
"Cavity Molecular Dynamics Simulations of Liquid Water under Vibrational Ultrastrong Coupling"
] |
[
"Tao E Li [email protected] ",
"Abraham Nitzan [email protected] ",
"Joseph E Subotnik [email protected] ",
"\nof Chemistry\n‡School of Chemistry\nUniversity of Pennsylvania\n19104PhiladelphiaPennsylvaniaUSA\n",
"\nTel Aviv University\n69978Tel AvivIsrael\n"
] |
[
"of Chemistry\n‡School of Chemistry\nUniversity of Pennsylvania\n19104PhiladelphiaPennsylvaniaUSA",
"Tel Aviv University\n69978Tel AvivIsrael"
] |
[] |
We simulate vibrational strong (VSC) and ultrastrong coupling (V-USC) for liquid water with classical molecular dynamics simulations. When the cavity modes are resonantly coupled to the O-H stretch mode of liquid water, the infrared spectrum shows asymmetric Rabi splitting. The lower polariton (LP) may be suppressed or enhanced relative to the upper polariton (UP) depending on the frequency of the cavity mode. Moreover, although the static properties and the translational diffusion of water are not changed under VSC or V-USC, we do find a modification of the orientational autocorrelation function of H2O molecules, especially under V-USC, which could play a role in ground-state chemistry.
|
10.1073/pnas.2009272117
|
[
"https://arxiv.org/pdf/2004.04888v1.pdf"
] | 215,737,225 |
2004.04888
|
d3c6ae54d03099098d9bdf0242a6f745783801b3
|
Cavity Molecular Dynamics Simulations of Liquid Water under Vibrational Ultrastrong Coupling
Tao E Li [email protected]
Abraham Nitzan [email protected]
Joseph E Subotnik [email protected]
of Chemistry
‡School of Chemistry
University of Pennsylvania
19104PhiladelphiaPennsylvaniaUSA
Tel Aviv University
69978Tel AvivIsrael
Cavity Molecular Dynamics Simulations of Liquid Water under Vibrational Ultrastrong Coupling
We simulate vibrational strong (VSC) and ultrastrong coupling (V-USC) for liquid water with classical molecular dynamics simulations. When the cavity modes are resonantly coupled to the O-H stretch mode of liquid water, the infrared spectrum shows asymmetric Rabi splitting. The lower polariton (LP) may be suppressed or enhanced relative to the upper polariton (UP) depending on the frequency of the cavity mode. Moreover, although the static properties and the translational diffusion of water are not changed under VSC or V-USC, we do find a modification of the orientational autocorrelation function of H2O molecules, especially under V-USC, which could play a role in ground-state chemistry.
Introduction
Strong light-matter interactions between a vibrational mode of molecules and a cavity mode have attracted great attention of late. 1 The signature of strong interactions is the formation of lower (LP) and upper (UP) polaritons, which are manifested in the Rabi splitting of a vibrational peak in the molecular infrared (IR) spectrum. According to the normalized ratio (η) between the Rabi splitting frequency (Ω_N) and the original vibrational frequency (ω_0), or η = Ω_N/2ω_0, one often classifies 0 < η < 0.1 as vibrational strong coupling (VSC) and η > 0.1 as vibrational ultrastrong coupling (V-USC). 2 The investigation of VSC or V-USC in the liquid phase was initially suggested by Ebbesen et al. [3][4][5] and it was later found experimentally that VSC or V-USC can modify the ground-state chemical reaction rates of molecules even without external pumping. 6 This exotic catalytic effect provides a brand new way to control chemical reactions remotely. As such, there has been a recent push to understand the origins and implications of VSC and V-USC.
While the experimental side has focused on the search for large catalytic effects 7-10 as well as understanding polariton relaxation dynamics through two-dimensional IR (2D-IR) spectroscopy, 11,12 on the theoretical side, the nature of VSC and V-USC remains obscured.
On the one hand, Rabi splitting can be easily modeled by, e.g., diagonalizing a model Hamiltonian in the singly excited manifold [13][14][15] or solving equations of motion classically for a set of one-dimensional (1D) harmonic oscillators. 16,17 On the other hand, a robust explanation of the catalytic effect of VSC or V-USC remains elusive. [18][19][20][21] For example, as recently shown by us and others, 21-23 the potential of mean force along a reaction pathway is not changed by usual VSC or V-USC setups for standard experiments of interest. Moreover, as demonstrated below, no static equilibrium property of a molecule is changed under VSC or V-USC.
These findings, unfortunately, show that one cannot explain the observed effect under VSC or V-USC from a static point of view. From such a conclusion, one must hypothesize that the manifestations of the VSC or V-USC effect on chemical rates should arise from the modification of non-equilibrium, or dynamical, properties of molecules under VSC or V-USC.
The first step towards proving the above hypothesis is to ascertain whether or not any dynamical property of molecules is actually changed for a realistic experiment, a goal which forms the central objective of this manuscript. In order to investigate whether such modification occurs, below we will model VSC and V-USC using cavity molecular dynamics (MD) simulation, where the nuclei are evolved under a realistic electronic ground-state potential surface. Such an approach is an extension of the usual simplified 1D models where the matter side is evolved as two-level systems [24][25][26] or coupled harmonic oscillators. 16,17,27,28 Although such simplified models are adequate for studying Rabi splitting qualitatively by fitting experimental parameters, these models usually ignore translation, rotation, collision, as well as the intricate structure of molecular motion, all of which are crucial for determining the dynamic properties of molecules. Therefore, explicit cavity MD simulations become a more appropriate approach for studying all dynamic properties. Moreover, even though one can find a Rabi splitting from 1D models, performing cavity MD simulations is also very helpful for providing more details about the IR spectrum, and this approach can be used to benchmark the validity of 1D models under various conditions.
There have been a few flavors of cavity MD schemes for electronic strong coupling. [29][30][31] For example, Luk et al applied multiscale quantum mechanics/molecular mechanics (QM/MM) simulation for studying the dynamics of electronic polaritons for Rhodamine molecules. 30 By contrast, MD simulations for vibrational strong coupling (VSC and V-USC), to the best of our knowledge, have not been extensively studied before. Therefore, below we will first establish a framework for cavity MD simulation including implementation details, and second we will investigate the Rabi splitting and the dynamical properties of liquid water.
The motivation for studying liquid water is two-fold: (i) Among common liquids, water shows strong Rabi splitting and strong catalytic effects under VSC or V-USC. 8,10,32 More interestingly, when the cavity mode is resonantly coupled to the O-H stretch mode, experiments 10 have observed that the intensity of the vibrational LP peak is much smaller than the UP peak in the IR spectrum, an observation that cannot be accounted for by standard strong coupling models. (ii) MD simulations of water outside the cavity have been extensively studied and good agreement with experiments can be achieved. [33][34][35] Extending such simulations to include coupling to cavity modes is expected to show the cavity-induced spectral changes and provides numbers that are directly comparable to experimental results. This manuscript is organized as follows. Sec. 2 provides theoretical background of VSC and V-USC. Sec. 3 describes how to perform cavity MD simulations for liquid water. Sec. 4 shows simulation results of liquid water. Sec. 5 concludes our numerical findings and provides future directions for research into VSC and V-USC.
General Theory of V-USC
The full-quantum Hamiltonian for light-matter interactions reads: 21,36
$$\hat{H}_{\rm QED} = \hat{H}_M + \hat{H}_F \qquad (1a)$$
Here, $\hat{H}_M$ denotes the conventional (kinetic + potential) Hamiltonian for the molecular system,
$$\hat{H}_M = \sum_i \frac{\hat{p}_i^2}{2m_i} + \hat{V}_{\rm Coul}(\{\hat{r}_i\}) \qquad (1b)$$
where $m_i$, $\hat{p}_i$, $\hat{r}_i$ denote the mass, momentum operator, and position operator for the $i$-th particle (nucleus or electron), respectively, and $\hat{V}_{\rm Coul}(\{\hat{r}_i\})$ denotes the Coulombic interaction operator between all nuclei and electrons. Under the long-wave approximation, the field-related Hamiltonian $\hat{H}_F$ reads
$$\hat{H}_F = \sum_{k,\lambda}\left[\frac{1}{2}\omega_{k,\lambda}^2\hat{q}_{k,\lambda}^2 + \frac{1}{2}\left(\hat{p}_{k,\lambda} - \frac{1}{\sqrt{\Omega\epsilon_0}}\,\hat{\mu}_S\cdot\boldsymbol{\xi}_\lambda\right)^2\right] \qquad (1c)$$
where $\omega_{k,\lambda}$, $\hat{q}_{k,\lambda}$, $\hat{p}_{k,\lambda}$ denote the frequency, position operator, and momentum operator for a photon with wave vector $k$ and polarization direction $\boldsymbol{\xi}_\lambda$, and the index $\lambda = 1, 2$ labels the two polarization directions, which satisfy $k\cdot\boldsymbol{\xi}_\lambda = 0$. In free space, the dispersion relation gives $\omega_{k,\lambda} = c|k| = ck$. $\epsilon_0$ and $\Omega$ denote the vacuum permittivity and the cavity volume. $\hat{\mu}_S$ denotes the dipole operator for the whole molecular system:
$$\hat{\mu}_S = \sum_i Z_i e\,\hat{r}_i \qquad (2)$$
where $e$ denotes the electron charge and $Z_i e$ denotes the charge of the $i$-th particle (nucleus or electron). $\hat{\mu}_S$ can also be grouped into a summation of molecular dipole moments (indexed by $n$):
$$\hat{\mu}_S = \sum_{n=1}^{N}\hat{\mu}_n; \qquad \hat{\mu}_n = \sum_{j\in n} Z_j e\,\hat{r}_j \qquad (3)$$
Note that the self-dipole term in Eq. (1c) (i.e., the $\hat{\mu}_S^2$ term in the expanded square) is of vital importance in describing USC and is needed to render the nuclear motion stable; see Refs. [37][38][39] for details. Because we will not neglect $\hat{\mu}_S^2$ below, our simulation is valid for both VSC and V-USC.
When the cavity mode frequency lies within the frequency range of the nuclear dynamics, the Born-Oppenheimer approximation implies that the electrons stay in the ground state. Therefore, we will project the quantum Hamiltonian (1) onto the electronic ground state, $\hat{H}^G_{\rm QED} = \langle\Psi_G|\hat{H}_{\rm QED}|\Psi_G\rangle$, where $|\Psi_G\rangle$ denotes the electronic ground state for the whole molecular system. Furthermore, under the Hartree approximation, $|\Psi_G\rangle$ can be approximated as a product of the electronic ground states of the individual molecules: $|\Psi_G\rangle = \prod_{n=1}^{N}|\psi_{ng}\rangle$. After such a projection onto the electronic ground state, the Hamiltonian (1) reduces to
$$\hat{H}^G_{\rm QED} = \hat{H}^G_M + \hat{H}^G_F. \qquad (4a)$$
Here, the ground-state molecular Hamiltonian $\hat{H}^G_M = \langle\Psi_G|\hat{H}_M|\Psi_G\rangle$ depends on the nuclear degrees of freedom only, and can be expressed as
$$\hat{H}^G_M = \sum_{n=1}^{N}\left[\sum_{j\in n}\frac{\hat{P}_{nj}^2}{2M_{nj}} + \hat{V}^{(n)}_g(\{\hat{R}_{nj}\})\right] + \sum_{n=1}^{N}\sum_{l>n}\hat{V}^{(nl)}_{\rm inter} \qquad (4b)$$
where the capital letters $\hat{P}_{nj}$, $\hat{R}_{nj}$, and $M_{nj}$ denote the momentum operator, position operator, and mass of the $j$-th nucleus in molecule $n$, and $\hat{V}^{(nl)}_{\rm inter}$ denotes the intermolecular interaction between molecules $n$ and $l$.
The field-related Hamiltonian becomes 21
$$\hat{H}^G_F = \sum_{k,\lambda}\left[\frac{1}{2}\omega_{k,\lambda}^2\hat{q}_{k,\lambda}^2 + \frac{1}{2}\left(\hat{p}_{k,\lambda} - \sum_{n=1}^{N}\frac{1}{\sqrt{\Omega\epsilon_0}}\hat{d}_{ng,\lambda}\right)^2\right] + \sum_{\lambda}\sum_{n=1}^{N}\frac{1}{2\Omega\epsilon_0}\langle\psi_{ng}|\delta\hat{d}^2_{ng,\lambda}|\psi_{ng}\rangle \qquad (4c)$$
where we defined
$$\hat{d}_{ng,\lambda} \equiv \langle\psi_{ng}|\hat{\mu}_n|\psi_{ng}\rangle\cdot\boldsymbol{\xi}_\lambda \qquad (5a)$$
$$\delta\hat{d}_{ng,\lambda} \equiv \hat{\mu}_n\cdot\boldsymbol{\xi}_\lambda - \hat{d}_{ng,\lambda} \qquad (5b)$$
Note that, since Coulombic interactions are modified by proximity to dielectric boundaries, in the cavity the intermolecular interactions $\hat{V}^{(nl)}_{\rm inter}$ in Eq. (4b) may differ from the free-space form. 40,41 However, as we have argued before, 21 for standard VSC setups with a cavity length on the order of microns, $\hat{V}^{(nl)}_{\rm inter}$ should be nearly identical to its free-space form. 42 Similarly, on the last line in Eq. (4c), the self-dipole fluctuation term $\frac{1}{2\Omega\epsilon_0}\langle\psi_{ng}|\delta\hat{d}^2_{ng,\lambda}|\psi_{ng}\rangle$, which denotes the cavity modification of the single-molecule potential, should also be very small for standard VSC setups where micron-length cavities are used. Therefore, in what follows, we will assume that $\hat{V}^{(nl)}_{\rm inter}$ takes the free-space form and also neglect the self-dipole fluctuation term. However, we emphasize that, for smaller cavities, both the change of intermolecular interactions and the self-dipole fluctuation may play an important role in ground-state chemistry, as already discussed in different contexts, 18,29,38 a fact which needs further investigation.
In MD simulations, a standard potential is a function of positions only. In Eq. (4c), however, the momenta of photons are coupled directly to the molecular dipole moments (which are a function of the nuclear positions of the molecules). Since photons are harmonic oscillators, we may exchange the momentum and position of each photon, so that Eq. (4c) can be rewritten as
$$\hat{H}^G_F = \sum_{k,\lambda}\frac{\hat{\tilde{p}}_{k,\lambda}^2}{2m_{k,\lambda}} + \frac{1}{2}m_{k,\lambda}\omega_{k,\lambda}^2\left(\hat{\tilde{q}}_{k,\lambda} + \sum_{n=1}^{N}\frac{\hat{d}_{ng,\lambda}}{\omega_{k,\lambda}\sqrt{\Omega\epsilon_0 m_{k,\lambda}}}\right)^2 \qquad (6)$$
Here, to be compatible with standard MD simulations (which require mass information for all particles), an auxiliary mass $m_{k,\lambda}$ for each photon is also introduced:
$$\hat{p}_{k,\lambda} = \hat{\tilde{p}}_{k,\lambda}/\sqrt{m_{k,\lambda}}, \qquad \hat{q}_{k,\lambda} = \sqrt{m_{k,\lambda}}\,\hat{\tilde{q}}_{k,\lambda} \qquad (7)$$
Note that the auxiliary mass of photon does not alter any dynamics and serves only as a convenient notation for further MD treatment.
Classical Molecular Dynamics
The above quantum Hamiltonian, although depending only on the nuclear and photonic degrees of freedom, is still too expensive to evolve exactly. The simplest approximation we can make is the classical approximation, i.e., all quantum operators are mapped to the corresponding classical observables, which leads to the following classical Hamiltonian:
$$H^G_{\rm QED} = H^G_M + H^G_F \qquad (8a)$$
$$H^G_M = \sum_{n=1}^{N}\left[\sum_{j\in n}\frac{P_{nj}^2}{2M_{nj}} + V^{(n)}_g(\{R_{nj}\})\right] + \sum_{n=1}^{N}\sum_{l>n}V^{(nl)}_{\rm inter} \qquad (8b)$$
$$H^G_F = \sum_{k,\lambda}\frac{\tilde{p}_{k,\lambda}^2}{2m_{k,\lambda}} + \frac{1}{2}m_{k,\lambda}\omega_{k,\lambda}^2\left(\tilde{q}_{k,\lambda} + \sum_{n=1}^{N}\frac{d_{ng,\lambda}}{\omega_{k,\lambda}\sqrt{\Omega\epsilon_0 m_{k,\lambda}}}\right)^2 \qquad (8c)$$
Eq. (8) serves as the starting point of this work. We note that one can go beyond the treatment here by propagating the quantum Hamiltonian (4) using the path-integral technique 43,44 and evolve the ring polymer Hamiltonian with n copies of coupled classical trajectories (aka n beads). In the present manuscript, we focus on the classical system, deferring the path-integral calculation to a later study.
In our classical MD simulations, the simulated system is represented by particles that obey the Newtonian equations of motion:
$$M_{nj}\ddot{R}_{nj} = F^{(0)}_{nj} - \sum_{k,\lambda}\left(\varepsilon_{k,\lambda}\tilde{q}_{k,\lambda} + \frac{\varepsilon_{k,\lambda}^2}{m_{k,\lambda}\omega_{k,\lambda}^2}\sum_{l=1}^{N}d_{lg,\lambda}\right)\frac{\partial d_{ng,\lambda}}{\partial R_{nj}} \qquad (9a)$$
$$m_{k,\lambda}\ddot{\tilde{q}}_{k,\lambda} = -m_{k,\lambda}\omega_{k,\lambda}^2\tilde{q}_{k,\lambda} - \varepsilon_{k,\lambda}\sum_{n=1}^{N}d_{ng,\lambda} \qquad (9b)$$
where the cavity-free force is calculated by $F^{(0)}_{nj} = -\partial V^{(n)}_g/\partial R_{nj} - \sum_{l\neq n}\partial V^{(nl)}_{\rm inter}/\partial R_{nj}$, and the coupling between the particles representing photons and the nuclear degrees of freedom is given by $\varepsilon_{k,\lambda} \equiv \sqrt{m_{k,\lambda}\omega_{k,\lambda}^2/\Omega\epsilon_0}$.
Periodic Boundary Condition
In order to perform a realistic simulation for VSC or V-USC, we need a macroscopic number (say, $10^9 \sim 10^{11}$) of molecules, 6-9 which is far beyond our computational power if we simulate Eq. (9) directly. To proceed, we assume that the whole molecular ensemble can be divided into $N_{\rm cell}$ periodic cells, in which the molecules evolve identically, i.e., we can approximate the second term on the right of Eq. (9b) by $\sum_{n=1}^{N}d_{ng,\lambda} = N_{\rm cell}\sum_{n=1}^{N_{\rm sub}}d_{ng,\lambda}$, where $N_{\rm sub} = N/N_{\rm cell}$ denotes the number of molecules in a single cell. By further denoting $\tilde{\tilde{q}}_{k,\lambda} = \tilde{q}_{k,\lambda}/\sqrt{N_{\rm cell}}$ and $\tilde{\varepsilon}_{k,\lambda} = \sqrt{N_{\rm cell}}\,\varepsilon_{k,\lambda}$,
we can rewrite the equations of motion in Eq. (9) in a symmetric form:
$$M_{nj}\ddot{R}_{nj} = F^{(0)}_{nj} - \sum_{k,\lambda}\left(\tilde{\varepsilon}_{k,\lambda}\tilde{\tilde{q}}_{k,\lambda} + \frac{\tilde{\varepsilon}_{k,\lambda}^2}{m_{k,\lambda}\omega_{k,\lambda}^2}\sum_{l=1}^{N_{\rm sub}}d_{lg,\lambda}\right)\frac{\partial d_{ng,\lambda}}{\partial R_{nj}} \qquad (10a)$$
$$m_{k,\lambda}\ddot{\tilde{\tilde{q}}}_{k,\lambda} = -m_{k,\lambda}\omega_{k,\lambda}^2\tilde{\tilde{q}}_{k,\lambda} - \tilde{\varepsilon}_{k,\lambda}\sum_{n=1}^{N_{\rm sub}}d_{ng,\lambda} \qquad (10b)$$
The form of Eq. (10) has several advantages. First, we simulate the VSC of a macroscopic number of molecules by evolving the molecules in a single cell plus the few photon modes that we are interested in. Second, when considering the dependence of the Rabi splitting on the number of molecules, we can fix the number of molecules in a single cell ($N_{\rm sub}$) and vary only the effective coupling constant $\tilde{\varepsilon}_{k,\lambda} = \sqrt{N_{\rm cell}}\,\varepsilon_{k,\lambda}$. Such a change is very easy to implement in practice and has the physical interpretation of increasing the number of cells while leaving the number of molecules per cell and the size of the simulation cell fixed.
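To make the working equations concrete, the sketch below evaluates the right-hand side of Eq. (10) for a single cavity mode with plain NumPy. It is only an illustration of the force expressions, not the authors' implementation; the array shapes, the toy dipole callables, and the numerical values in the demo are assumptions made for the example.

```python
import numpy as np

def cavity_forces(R, q_tilde, d_of_R, grad_d_of_R, eps_tilde, m_c, omega_c):
    """Cavity contributions to the forces of Eq. (10) for one photon mode.

    R           : (N_sub, n_atoms, 3) nuclear positions
    q_tilde     : float, the (doubly rescaled) photon coordinate
    d_of_R      : callable returning d_{ng,lambda}, shape (N_sub,)
    grad_d_of_R : callable returning d d_{ng,lambda}/dR, shape (N_sub, n_atoms, 3)
    eps_tilde   : effective coupling sqrt(N_cell) * eps
    """
    d = d_of_R(R)
    dd_dR = grad_d_of_R(R)
    d_sum = d.sum()
    # Bracketed prefactor of Eq. (10a); it is the same for every molecule n.
    prefactor = eps_tilde * q_tilde + eps_tilde**2 / (m_c * omega_c**2) * d_sum
    f_nuclei = -prefactor * dd_dR                                  # cavity part of Eq. (10a)
    f_photon = -m_c * omega_c**2 * q_tilde - eps_tilde * d_sum     # right-hand side of Eq. (10b)
    return f_nuclei, f_photon

# Toy demo: 3 "molecules" whose dipole projection is the x-coordinate of atom 0.
R = np.random.rand(3, 3, 3)
d_fn = lambda R: R[:, 0, 0]
grad_fn = lambda R: np.tile(np.array([[1.0, 0.0, 0.0], [0, 0, 0], [0, 0, 0]]), (3, 1, 1))
print(cavity_forces(R, 0.1, d_fn, grad_fn, eps_tilde=5e-4, m_c=1.0, omega_c=0.016))
```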
q-TIP4P/F Water Force Field
The question remains as to exactly how we will calculate the ground-state quantities $F^{(0)}_{nj}$, $d_{ng,\lambda}$, and $\partial d_{ng,\lambda}/\partial R_{nj}$. In general, these properties can be calculated by a classical empirical force field or by ab initio electronic structure theory. For this initial publication on liquid water, we will use a classical force field, the q-TIP4P/F water model, 34 which provides the simplest description of both the equilibrium and dynamic properties of liquid water.
In the q-TIP4P/F model, the pairwise intermolecular potential is characterized by the Lennard-Jones potential between oxygen atoms plus the Coulombic interactions between partial charges:
$$V^{(nl)}_{\rm inter} = 4\epsilon\left[\left(\frac{\sigma}{R^{OO}_{nl}}\right)^{12} - \left(\frac{\sigma}{R^{OO}_{nl}}\right)^{6}\right] + \sum_{i\in n}\sum_{j\in l}\frac{Q_iQ_j}{R_{ij}} \qquad (11)$$
where $R^{OO}_{nl}$ denotes the distance between the oxygen atoms and $R_{ij}$ denotes the distance between the partial charge sites in molecules $n$ and $l$. Within a single H2O molecule, two positive partial charges with magnitude $Q_M/2$ are assigned to the hydrogen atoms, and a negative charge site with magnitude $-Q_M$ is placed at $R_M$:
$$R_M = \gamma R_O + \frac{1-\gamma}{2}\left(R_{H_1} + R_{H_2}\right) \qquad (12)$$
For the parameters, $\epsilon = 0.1852$ kcal mol$^{-1}$, $\sigma = 3.1589$ Å, $Q_M = 1.1128\,|e|$ (where $e$ denotes the charge of the electron), and $\gamma = 0.73612$.
The intramolecular interaction is characterized by
$$V^{(n)}_g = V_{OH}(R_{n1}) + V_{OH}(R_{n2}) + \frac{1}{2}k_\theta(\theta_n - \theta_{\rm eq})^2 \qquad (13)$$
where
$$V_{OH}(r) = D_r\left[\alpha_r^2(r - r_{\rm eq})^2 - \alpha_r^3(r - r_{\rm eq})^3 + \frac{7}{12}\alpha_r^4(r - r_{\rm eq})^4\right] \qquad (14)$$
Here, $R_{n1}$ and $R_{n2}$ denote the lengths of the two O-H bonds, and $\theta_n$ and $\theta_{\rm eq}$ denote the H-O-H angle and the equilibrium angle. For the parameters, $D_r = 116.09$ kcal mol$^{-1}$, $\alpha_r = 2.287$ Å$^{-1}$, $r_{\rm eq} = 0.9419$ Å, $k_\theta = 87.85$ kcal mol$^{-1}$ rad$^{-2}$, and $\theta_{\rm eq} = 107.4$ deg.
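As a quick sanity check of Eqs. (13)-(14), the snippet below evaluates the quartic O-H stretch potential and the intramolecular energy of one water monomer. The unit handling (kcal/mol, Å, radians) and the example geometry are assumptions made only for illustration.

```python
import numpy as np

# q-TIP4P/F intramolecular parameters from Eqs. (13)-(14)
D_R, ALPHA_R, R_EQ = 116.09, 2.287, 0.9419            # kcal/mol, 1/Angstrom, Angstrom
K_THETA, THETA_EQ = 87.85, np.deg2rad(107.4)           # kcal/mol/rad^2, rad

def v_oh(r):
    """Quartic expansion of the O-H stretch, Eq. (14)."""
    dr = r - R_EQ
    return D_R * (ALPHA_R**2 * dr**2 - ALPHA_R**3 * dr**3 + 7.0 / 12.0 * ALPHA_R**4 * dr**4)

def v_intra(r_oh1, r_oh2, theta):
    """Intramolecular energy of one H2O monomer, Eq. (13)."""
    return v_oh(r_oh1) + v_oh(r_oh2) + 0.5 * K_THETA * (theta - THETA_EQ)**2

# Example: a slightly distorted monomer (kcal/mol)
print(v_intra(0.96, 0.95, np.deg2rad(105.0)))
```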
Given the q-TIP4P/F force field, one can easily calculate the cavity-free force $F^{(0)}_{nj}$ as a function of the nuclear configuration with standard molecular dynamics packages. The dipole moment can be calculated by
$$d_{ng,\lambda} = \left[\frac{Q_M}{2}\left(R_{nH_1} + R_{nH_2}\right) - Q_M R_{nM}\right]\cdot\boldsymbol{\xi}_\lambda = \left[\frac{\gamma Q_M}{2}\left(R_{nH_1} + R_{nH_2}\right) - \gamma Q_M R_{nO}\right]\cdot\boldsymbol{\xi}_\lambda \qquad (15)$$
and the derivative $\partial d_{ng,\lambda}/\partial R_{nj}$ follows trivially, since Eq. (15) is linear in the positions. Our modification of the MD loop is illustrated as the green region in Fig. 1.
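The projected dipole of Eq. (15) and its (constant) gradient are simple enough to write out explicitly. The sketch below assumes positions ordered (O, H1, H2) for each molecule; it is illustrative only and not taken from the authors' code.

```python
import numpy as np

Q_M, GAMMA = 1.1128, 0.73612    # q-TIP4P/F charge (|e|) and M-site parameter

def dipole_projection(R, xi):
    """d_{ng,lambda} of Eq. (15) for every molecule.

    R  : (N, 3, 3) positions ordered (O, H1, H2)
    xi : (3,) cavity polarization unit vector
    """
    mu = GAMMA * Q_M * (0.5 * (R[:, 1] + R[:, 2]) - R[:, 0])   # (N, 3)
    return mu @ xi

def dipole_gradient(n_molecules, xi):
    """d d_{ng,lambda} / d R_{nj}: constant because Eq. (15) is linear in R."""
    g = np.zeros((n_molecules, 3, 3))
    g[:, 0] = -GAMMA * Q_M * xi          # oxygen
    g[:, 1] = 0.5 * GAMMA * Q_M * xi     # hydrogen 1
    g[:, 2] = 0.5 * GAMMA * Q_M * xi     # hydrogen 2
    return g

R = np.random.rand(4, 3, 3)
print(dipole_projection(R, np.array([1.0, 0.0, 0.0])))
```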
Implementation Details
We store both the nuclear and photonic degrees of freedom in I-PI. At every time step, we first truncate the nuclear position array $\{R_{nj}\}$ from the total (nuclear + photonic) position array, and then use the interface of I-PI to calculate the cavity-free forces $\{F^{(0)}_{nj}\}$. We also calculate the dipole moments and their derivatives from $\{R_{nj}\}$. With the cavity-free forces and the dipole moments, we calculate the overall forces on all nuclei and photons $\{F_{nj}, F_{k,\lambda}\}$ (the right-hand side of Eq. (10)).
After calculating the forces, we use the interface of I-PI to update momenta and positions.
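The control flow just described (truncate the combined array, obtain cavity-free forces from an external engine, then add the cavity terms) can be summarized schematically in a few lines. Everything below is a hypothetical sketch: the helper callables stand in for the external force engine and for the cavity-force evaluation of Eq. (10), and are not the actual I-PI patch.

```python
import numpy as np

def split_state(x_total, n_nuclear_dof):
    """Split the combined I-PI-style position array into nuclear and photonic parts."""
    return x_total[:n_nuclear_dof], x_total[n_nuclear_dof:]

def total_forces(x_total, n_nuclear_dof, cavity_free_forces, cavity_forces):
    """One force evaluation of the modified loop (forces only, no integrator).

    cavity_free_forces : callable R -> F^(0), normally supplied by LAMMPS/CP2K via I-PI
    cavity_forces      : callable (R, q) -> (F_cavity_on_nuclei, F_on_photons), Eq. (10)
    """
    R, q = split_state(x_total, n_nuclear_dof)
    f0 = cavity_free_forces(R)
    f_cav_nuc, f_photon = cavity_forces(R, q)
    return np.concatenate([f0 + f_cav_nuc, np.atleast_1d(f_photon)])
```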
Due to the user-friendly structure of I-PI, the current cavity MD code should be easily generalized to the cases of ab initio calculations and path-integral cavity MD simulations, results which will be reported in a separate publication.
Simulation Details

We consider the following scenario for simulation. As shown in Fig. 2, the cavity is placed along the z-axis. A pair of thick SiO2 layers are placed between the cavity mirrors so that the water molecules can move freely only in a small region (but still on the order of microns) near the cavity center. Such additional SiO2 layers are used (i) to ensure that the intermolecular interactions between H2O molecules are the same as those in free space, and (ii) to validate the long-wave approximation that we have taken from the very beginning. We consider only two cavity modes with polarization directions $\boldsymbol{\xi}_\lambda$ along the x and y directions, both of which are resonant with the O-H stretch mode. We set the auxiliary mass for the two photons as $m_{k,\lambda} = 1$ a.u. (atomic units). We simulate 216 H2O molecules in a cubic cell with length 35.233 a.u., so that the water density is 0.997 g cm$^{-3}$. At 300 K, we first run the simulation for 150 ps to guarantee thermal equilibrium under an NVT ensemble where a Langevin thermostat is applied to the momenta of all particles (nuclei + photons). The resulting equilibrium configurations are used as starting points for 80 consecutive NVE trajectories of length 20 ps. At the beginning of each trajectory the velocities are resampled from a Maxwell-Boltzmann distribution at 300 K. The intermolecular Coulombic interactions are calculated by an Ewald summation. The simulation step is set as 0.5 fs and we store snapshots of the trajectories every 2 fs.
Results
Asymmetric Rabi Splitting
The signature of VSC is the collective Rabi splitting in the IR spectrum. In our MD simulations, the IR spectrum is calculated by linear response theory. For isotropic liquids, the absorption coefficient α(ω) is expressed as the Fourier transform of the autocorrelation function of the total dipole moment µ S : 48-52
$$n(\omega)\alpha(\omega) = \frac{\pi\beta\omega^2}{3\epsilon_0 V c}\,\frac{1}{2\pi}\int_{-\infty}^{+\infty} e^{-i\omega t}\langle\mu_S(0)\mu_S(t)\rangle\, dt \qquad (16)$$
Here, $n(\omega)$ denotes the refractive index and $V$ denotes the volume of the system (i.e., the simulation cell). The factor $\omega^2$ arises from the photon energy absorbed by the liquid. For VSC and V-USC experiments, however, because the experimental setups usually detect an IR spectrum by sending light along the cavity direction (which means the $k$ direction of the light is along the z-axis), 32 we need to modify the above equation to
$$n(\omega)\alpha(\omega) = \frac{\pi\beta\omega^2}{2\epsilon_0 V c}\,\frac{1}{2\pi}\int_{-\infty}^{+\infty} e^{-i\omega t}\sum_{i=x,y}\left\langle\big(\mu_S(0)\cdot\mathbf{e}_i\big)\big(\mu_S(t)\cdot\mathbf{e}_i\big)\right\rangle\, dt \qquad (17)$$
where $\mathbf{e}_i$ denotes the unit vector along direction $i = x, y$. Eq. (17) states that the average is performed only along the polarization directions of the detecting signal (i.e., the x and y directions here). When the incident light is unpolarized, these two directions are of course equivalent.
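In practice Eq. (17) is evaluated from the stored dipole time series by fast Fourier transform. The snippet below is one straightforward (not performance-tuned) way to do this; the trajectory array, the time step, and the dropped unit prefactors are placeholders, not the authors' analysis script.

```python
import numpy as np

def ir_spectrum(mu_xy, dt, beta):
    """Absorption n(w)*alpha(w) (arbitrary units) following Eq. (17).

    mu_xy : (n_steps, 2) total dipole projected on the x and y directions
    dt    : sampling interval of the stored trajectory
    beta  : 1/(kB*T)
    """
    n_steps = mu_xy.shape[0]
    # Dipole autocorrelation per in-plane direction via FFT (Wiener-Khinchin, zero padded).
    padded = np.fft.fft(mu_xy - mu_xy.mean(axis=0), n=2 * n_steps, axis=0)
    corr = np.fft.ifft(padded * np.conj(padded), axis=0).real[:n_steps]
    corr = corr.sum(axis=1) / n_steps              # sum over i = x, y (crude normalization)
    freq = 2.0 * np.pi * np.fft.rfftfreq(n_steps, d=dt)
    spec = np.fft.rfft(corr).real
    return freq, beta * freq**2 * spec             # constant prefactors of Eq. (17) omitted

freq, spec = ir_spectrum(np.random.rand(4096, 2), dt=2e-15, beta=1.0 / (1.380649e-23 * 300))
```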
Fig. 3a plots the simulated IR spectrum of liquid water outside the cavity. The O-H stretch peaks around ∼3550 cm$^{-1}$, which is slightly different from experiment (∼3400 cm$^{-1}$). As noted above, a more accurate O-H stretch peak can be simulated by performing path-integral calculations instead of a classical simulation. 34 For the case that the frequencies of the two photon modes (with polarization directions perpendicular to the cavity direction) are both set to be at resonance with the O-H stretch (3550 cm$^{-1}$), Figs. 3(b)-(e) plot the simulated IR spectrum; the effective coupling strength $\tilde\varepsilon$ is set as $2\times 10^{-4}$, $4\times 10^{-4}$, $6\times 10^{-4}$, and $8\times 10^{-4}$ a.u., respectively. Clearly, when the cavity modes are coupled to the H2O molecules, the O-H stretch peak is split into a pair of LP and UP peaks. More interestingly, our simulation results also suggest that the UP and LP peaks can be largely asymmetric, especially when $\tilde\varepsilon$ is large, which agrees with experimental findings at least qualitatively. 10 In Fig. 4a we plot the Rabi splitting frequency (the difference between the UP and LP frequencies, or $\omega_+ - \omega_-$) as a function of $\tilde\varepsilon$. The simulation data (black triangles) can be fit with a linear ansatz (gray line) very well. As mentioned above, because $\tilde\varepsilon = \sqrt{N_{\rm cell}}\,\varepsilon \propto \sqrt{N}$, Fig. 4a demonstrates that the Rabi splitting is proportional to the square root of the total number of molecules, which agrees with theoretical expectation and experimental observation: 32,53
$$\omega_+ - \omega_- = \Omega_N \equiv 2g_0\sqrt{N} \qquad (18)$$
where $g_0$ denotes the coupling constant between a single molecule and the photon mode.
Of particular interest is the asymmetric nature of the LP and UP: this asymmetry is manifest in two aspects. As shown in Fig. 4b-c, both the polariton frequencies and the integrated peak areas of the LP (blue stars) and UP (red circles) show asymmetric scalings as a function of the normalized Rabi frequency (Ω N /2ω 0 , where Ω N is taken from Fig. 4a), especially in the V-USC limit (the red-shadowed region). Note that the standard treatment of collective Rabi splitting does not account for this asymmetry and the observation of the suppression (or enhancement) of the LP (or the UP) in Ref. 10 was explained by the higher absorption of water and gold cavity mirrors in the LP region. Some insight into the origin of this asymmetry can be obtained from a simple 1D model where N independent harmonic oscillators interact with a single photon mode. As calculated in the Appendix, by taking the self-dipole term into account (to describe V-USC), we obtain
$$\omega_\pm^2 = \frac{1}{2}\left[\omega_0^2 + \Omega_N^2 + \omega_c^2 \pm \sqrt{(\omega_0^2 + \Omega_N^2 + \omega_c^2)^2 - 4\omega_0^2\omega_c^2}\right] \qquad (19)$$
where ω 0 and ω c denote the frequencies of the harmonic oscillators and the photon mode.
Given $\omega_0 = \omega_c = 3550$ cm$^{-1}$ and the $\Omega_N$ values in Fig. 4a, Eq. (19) is plotted as the black dashed lines in Fig. 4b. We see that this analytical result already shows some asymmetry in the positions of the polariton peaks when plotted versus $\Omega_N$. While Eq. (19) agrees with our simulation data very well in the VSC limit (the green-shadowed region), the simulation data seem to be more asymmetric than Eq. (19) in the V-USC limit. Such disagreement may arise from the strong intermolecular interactions between H2O molecules, which are completely ignored in the simplified 1D model of the Appendix.
Likewise, the simplified 1D model in the Appendix also suggests that the integrated peak areas of the LP and UP are
$$I_{LP} \propto \omega_-^2\sin^2\frac{\theta}{2} \qquad (20a)$$
$$I_{UP} \propto \omega_+^2\cos^2\frac{\theta}{2} \qquad (20b)$$
where $\tan\theta = 2\omega_c\Omega_N/(\omega_0^2 + \Omega_N^2 - \omega_c^2)$. Again, as shown in Fig. 4c, Eq. (20) (black dashed lines) matches the simulation data roughly but not quantitatively, which may come from ignoring all the intermolecular interactions in the 1D model. Nevertheless, from Eq.
(20), we find that the asymmetry in the IR spectrum comes from two factors: (i) the factor $\omega_\pm^2$ and (ii) the angular part $\sin^2\frac{\theta}{2}$ or $\cos^2\frac{\theta}{2}$. While the first part originates from the absorbed photon energies associated with the vibrational modes and is universal for all IR spectra (so that it is trivial), the second factor is quite nontrivial: at resonance ($\omega_0 = \omega_c$) one would naively assume that $\sin^2\frac{\theta}{2} = \cos^2\frac{\theta}{2}$, and this is true if one ignores the self-dipole term (which means ignoring the $\Omega_N^2$ term in $\tan\theta$; see the Appendix for details). However, when the self-dipole term is considered, one finds $\sin^2\frac{\theta}{2} < \cos^2\frac{\theta}{2}$, which leads to an additional suppression of the LP and enhancement of the UP.
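The closed-form expressions in Eqs. (19) and (20) are easy to evaluate directly; the short function below does so and, at resonance, illustrates the LP suppression just discussed. It is a numerical illustration of the 1D model only, and the example numbers are arbitrary.

```python
import numpy as np

def polaritons(omega_0, omega_c, omega_N):
    """Polariton frequencies (Eq. (19)) and relative peak areas (Eq. (20))."""
    s = omega_0**2 + omega_N**2 + omega_c**2
    root = np.sqrt(s**2 - 4.0 * omega_0**2 * omega_c**2)
    w_plus, w_minus = np.sqrt(0.5 * (s + root)), np.sqrt(0.5 * (s - root))
    theta = np.arctan2(2.0 * omega_c * omega_N, omega_0**2 + omega_N**2 - omega_c**2)
    i_lp = w_minus**2 * np.sin(theta / 2.0)**2
    i_up = w_plus**2 * np.cos(theta / 2.0)**2
    return w_plus, w_minus, i_lp, i_up

# Resonant example (cm^-1): the LP/UP area ratio is below 1 because of the self-dipole term.
w_p, w_m, i_lp, i_up = polaritons(3550.0, 3550.0, 900.0)
print(w_p - w_m, i_lp / i_up)
```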
For liquid water in the cavity, in Fig. 5 we further investigate how (a) the polariton frequencies and (b) the integrated peak areas of the polaritons depend on the cavity mode frequency for $\tilde\varepsilon = 5\times 10^{-4}$ a.u., which is well in the USC regime. The simulation data (scatter points) agree well with the analytical results (dashed black lines) for the simplified 1D model (Eqs. (19) and (20)). As shown in Fig. 5a, the energy difference between the polaritons is minimal at resonance (∼3550 cm$^{-1}$), where the uncoupled O-H stretch mode frequency crosses the cavity mode frequency; see the gray solid lines. Such a crossing corresponds to the maximally hybridized light-matter state. By contrast, when the cavity mode frequency is larger (smaller) than the molecular frequency, the LP (UP) becomes increasingly dominated by the O-H stretch mode (as evident from the uncoupled case, for which this mode is represented by the gray horizontal line).
Our model implies that for the uncoupled molecule-cavity case, only the molecular optical transition is coupled to the far field. This suggests that, in contrast to the resonance case where the UP peak is larger than the LP peak, when the cavity mode frequency becomes sufficiently large (i.e., the LP is mostly constituted by the matter side), the LP should have a larger peak size than the UP. This finding is confirmed by Fig. 5b. More interestingly, Fig. 5b also shows that the symmetric peak size of the polaritons occurs when the cavity mode frequency is ∼4250 cm$^{-1}$, which is far beyond the O-H stretch frequency of liquid water (∼3550 cm$^{-1}$). Therefore, in principle, from this fact one would predict that one can engineer the relative strength of the polaritons by tuning the cavity mode frequency. Furthermore, the inset of Fig. 5b plots the cavity mode frequency (for which the polariton intensities become symmetric) as a function of $\Omega_N/2\omega_0$. Again, we find that for large $\Omega_N/2\omega_0$, detecting polaritons with symmetric intensities requires a very large off-resonant cavity mode frequency.
Static Equilibrium
Rabi splitting represents the collective optical response of liquid water. As shown above, although MD simulations can obtain the IR spectrum of the polaritons in a straightforward way, one can argue that, since the most important features of the IR spectrum can be qualitatively described by the 1D harmonic model (see the Appendix), there is little advantage to performing expensive MD simulations. As has been argued above, the real advantage of the MD simulations is that one can simultaneously obtain many other physical properties of the molecules alongside the IR spectrum. Below we will investigate whether any property of individual H2O molecules can be changed under VSC or V-USC. According to linear response theory, the translational diffusion of H2O can be described by the VACF ($C_{vv}(t)$) of the center of mass of each molecule:
$$C_{vv}(t) = \langle v(t)\cdot v(0)\rangle \qquad (22)$$
One can calculate the diffusion constant $D$ from $C_{vv}(t)$ by $D = \frac{1}{3}\int_0^{+\infty} C_{vv}(t)\,dt$. Fig. 8a plots $C_{vv}(t)$ as a function of time for the center of mass of H2O. The exact agreement between the result outside the cavity (black solid) and that inside the cavity (cyan dotted, with effective coupling strength $\tilde\varepsilon = 4\times 10^{-4}$ a.u.) suggests that $C_{vv}(t)$ is not changed by VSC or V-USC. This finding can also be confirmed by looking at the Fourier transform $C_{vv}(\omega)$, which is shown in Fig. 8b [...] Fig. 9b, so we do not report it here.
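Given the stored center-of-mass velocities, $C_{vv}(t)$ of Eq. (22) and the diffusion constant $D$ follow from an average over molecules and time origins. The snippet below is a direct, unoptimized implementation with placeholder input; the array layout and time step are assumptions for the sketch.

```python
import numpy as np

def vacf_and_diffusion(v, dt, max_lag):
    """VACF C_vv(t) (Eq. (22)) and D = (1/3) * integral of C_vv(t) dt.

    v : (n_steps, n_molecules, 3) center-of-mass velocities
    """
    n_steps = v.shape[0]
    c = np.empty(max_lag)
    for lag in range(max_lag):
        # Average over time origins, molecules, and the 3D dot product.
        c[lag] = np.mean(np.sum(v[: n_steps - lag] * v[lag:n_steps], axis=-1))
    D = np.trapz(c, dx=dt) / 3.0
    return c, D

c, D = vacf_and_diffusion(np.random.randn(2000, 50, 3), dt=2e-15, max_lag=500)
```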
As for the rotational behavior, according to linear response theory, one must compute the orientational autocorrelation function (OACF, denoted by $C_l(t)$), [54][55][56] which is defined as
$$C_l(t) = \langle P_l[u_n(0)\cdot u_n(t)]\rangle \qquad (23)$$
where u n (t) denotes the three principal inertial axes of molecule n at time t, and P l denotes the Legendre polynomial of index l. For simplicity, we will study only the first order of OACF, which means P 1 [u n (0) · u n (t)] = u n (0) · u n (t).
For H2O, the z axis of the principal axes coincides with the dipole moment direction. In Fig. 9a, we plot $C^z_1(t)$, the z-component of the first-order OACF, as a function of time. The inset zooms in on the initial rotational relaxation process for times $t < 0.1$ ps. The outside-cavity result (black dashed) largely agrees with the results inside the cavity [with the effective coupling strength $\tilde\varepsilon$ of $4\times 10^{-4}$ (cyan solid), $6\times 10^{-4}$ (red dashed), and $8\times 10^{-4}$ a.u. (blue dash-dotted), respectively]. Fig. 9b plots the corresponding spectrum $I^z_1(\omega)$, which is defined as
$$I^z_1(\omega) = \omega^2 C^z_1(\omega) \qquad (24)$$
$I^z_1(\omega)$ can be regarded as the single-molecule IR spectrum along the dipole-motion direction, which describes how a single molecule rotates in its environment. As clearly shown in the zoom-in inset, for large enough $\tilde\varepsilon$ (in the V-USC limit, or $\tilde\varepsilon \geq 4\times 10^{-4}$ a.u.), an additional small peak emerges with an intensity of 2%-8% of the peak from a bare molecule. Compared with the IR spectrum of the liquid water in Fig. 3, these additional small peaks have the same frequencies as the UP peaks, demonstrating the modification of single-molecule rotation under V-USC. Note that for smaller $\tilde\varepsilon$ (i.e., in the VSC limit), the additional peak is covered by the large bare-molecule peak and is hardly identifiable. The change of the rotational behavior of individual molecules may possibly change the ground-state chemistry in many scenarios, which should be studied extensively in the future. Lastly, we emphasize that, apart from these additional peaks, the width of the bare-molecule peaks is mostly unchanged.
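The first-order OACF along the dipole (z) axis, Eq. (23) with $l = 1$, and its spectrum from Eq. (24) can be computed in the same spirit as the VACF. The unit-vector input, discretization, and dropped prefactors below are assumptions made for this sketch.

```python
import numpy as np

def oacf_z(u, dt, max_lag):
    """C_1^z(t) = <u(0).u(t)> for the dipole axis and its spectrum I(w) = w^2 * C(w).

    u : (n_steps, n_molecules, 3) unit vectors along each molecule's dipole direction
    """
    n_steps = u.shape[0]
    c = np.empty(max_lag)
    for lag in range(max_lag):
        c[lag] = np.mean(np.sum(u[: n_steps - lag] * u[lag:n_steps], axis=-1))
    freq = 2.0 * np.pi * np.fft.rfftfreq(max_lag, d=dt)
    spectrum = freq**2 * np.fft.rfft(c).real       # Eq. (24), constant prefactors dropped
    return c, freq, spectrum

u = np.random.randn(1000, 20, 3)
u /= np.linalg.norm(u, axis=-1, keepdims=True)
c, freq, spectrum = oacf_z(u, dt=2e-15, max_lag=400)
```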
Conclusion
In conclusion, we have performed classical cavity MD simulations under VSC or V-USC.
With liquid water as an example, when the cavity modes are resonantly coupled to the O-H stretch mode, we have found asymmetric Rabi splitting of the O-H stretch peak in the IR spectrum, where the LP is suppressed and the UP is enhanced. Such asymmetry can be inverted (i.e., the LP is enhanced and the UP is suppressed) by increasing the cavity mode frequency. Moreover, while we have found no modification of the static equilibrium properties or of the translational diffusion of liquid water, we have observed that the OACF of H2O molecules is modified under V-USC. Such an observation may perhaps help in understanding the catalytic effect of VSC or V-USC. Based on the current framework of cavity MD, future directions should focus on (i) path-integral calculations to study quantum effects in the modification of the molecular dynamical properties; and (ii) ab initio cavity MD simulations of chemical reactions under VSC or V-USC. This cavity MD framework can also be used to simulate recently reported 2D-IR spectroscopy studies 11,12 on polariton relaxation dynamics. At the same time, obtaining analytical solutions for the cavity modification of the dynamical properties would also be very helpful. We hope such studies will help solve the mystery of the catalytic effects underlying VSC or V-USC in the near future.
Appendix A Simplified 1D Model for V-USC
Starting from the classical Hamiltonian for V-USC (Eq. (8)), let us assume that (i) the molecules are non-interacting 1D harmonic oscillators, (ii) the dipole moment for a single molecule is linear (i.e., d ng,λ = d 0 x n ), and (iii) only a single cavity mode is considered. With these simplifications, the Hamiltonian can be written as:
$$H^G_{\rm QED} = \sum_{n=1}^{N}\frac{p_n^2}{2} + \frac{p_c^2}{2} + V(\{x_n\}, x_c) \qquad (A1a)$$
where
$$V(\{x_n\}, x_c) = \sum_{n=1}^{N}\frac{1}{2}\omega_0^2 x_n^2 + \frac{1}{2}\omega_c^2\left(x_c + \frac{2g_0}{\omega_c}\sum_{n=1}^{N}x_n\right)^2 \qquad (A1b)$$
Here, $g_0 \equiv d_0/2\sqrt{\Omega\epsilon_0}$, and we have assumed all masses to be 1. Note that the self-dipole term (the $\left(\sum_{n=1}^{N}x_n\right)^2$ term in the expanded square above) is necessary for studying V-USC.
With the Hamiltonian in Eq. (A1), the equations of motion now read
$$\ddot{x}_n = -\omega_0^2 x_n - 2g_0\omega_c x_c - 4g_0^2\sum_{l=1}^{N}x_l \qquad (A2a)$$
$$\ddot{x}_c = -\omega_c^2 x_c - 2g_0\omega_c\sum_{n=1}^{N}x_n \qquad (A2b)$$
Let us define the bright mode as $x_B = \frac{1}{\sqrt{N}}\sum_{n=1}^{N}x_n$, so that the equations of motion for the bright mode and the cavity mode become
$$\ddot{x}_B = -\omega_0^2 x_B - \omega_c\Omega_N x_c - \Omega_N^2 x_B \qquad (A3a)$$
$$\ddot{x}_c = -\omega_c^2 x_c - \omega_c\Omega_N x_B \qquad (A3b)$$
where $\Omega_N = 2\sqrt{N}g_0$ is the usual Rabi frequency. In matrix form, the above equations can be written as
$$\ddot{\mathbf{x}} = -K\mathbf{x} \qquad (A4)$$
where $\mathbf{x} = (x_B, x_c)^T$ and
$$K = \begin{pmatrix} \omega_0^2 + \Omega_N^2 & \omega_c\Omega_N \\ \omega_c\Omega_N & \omega_c^2 \end{pmatrix} \qquad (A5)$$
Note that the Ω 2 N term above comes from the self-dipole term.
A.1 Polariton frequencies
The polariton frequencies ($\omega_\pm$) can be determined by solving for the eigenvalues of the matrix $K$:
$$\omega_\pm^2 = \frac{1}{2}\left[\omega_0^2 + \Omega_N^2 + \omega_c^2 \pm \sqrt{(\omega_0^2 + \Omega_N^2 + \omega_c^2)^2 - 4\omega_0^2\omega_c^2}\right] \qquad (A6)$$
At resonance ($\omega_c = \omega_0$), the polariton frequencies reduce to
$$\omega_\pm^2 = \omega_0^2 + \frac{\Omega_N^2}{2} \pm \Omega_N\sqrt{\omega_0^2 + \frac{\Omega_N^2}{4}} \qquad (A7)$$
In the VSC limit ($\Omega_N \ll \omega_0$), Eq. (A7) can be further simplified as
$$\omega_\pm \approx \sqrt{\omega_0^2 \pm \Omega_N\omega_0} \approx \omega_0 \pm \frac{\Omega_N}{2} \qquad (A8)$$
which is the usual strong-coupling result.
A.2 IR spectrum
The IR spectrum of molecules is calculated by Eq. (16). With our 1D model, the IR spectrum is expressed as
$$n(\omega)\alpha(\omega) \propto \omega^2\int_{-\infty}^{+\infty} e^{-i\omega t}\langle x_B(0)x_B(t)\rangle\, dt \qquad (A9)$$
where we have neglected all prefactors (including the temperature, as we take room temperature throughout this manuscript). According to Eq. (A3), the solution for $x_B(t)$ is
$$x_B(t) = \left[x_B(0)\cos^2\frac{\theta}{2} + x_c(0)\cos\frac{\theta}{2}\sin\frac{\theta}{2}\right]e^{i\omega_+t} + \left[x_B(0)\sin^2\frac{\theta}{2} - x_c(0)\cos\frac{\theta}{2}\sin\frac{\theta}{2}\right]e^{i\omega_-t} \qquad (A10a)$$
where
$$\tan\theta = \frac{2\omega_c\Omega_N}{\omega_0^2 + \Omega_N^2 - \omega_c^2} \qquad (A10b)$$
By substituting Eq. (A10) into Eq. (A9) and using $\langle x_B(0)x_c(0)\rangle = 0$, we obtain
$$n(\omega)\alpha(\omega) \propto \omega^2\left[\cos^2\frac{\theta}{2}\,\delta(\omega - \omega_+) + \sin^2\frac{\theta}{2}\,\delta(\omega - \omega_-)\right] \qquad (A11)$$
The integrated peak areas for the LP and UP are
$$I_{LP} \propto \omega_-^2\sin^2\frac{\theta}{2} \qquad (A12a)$$
$$I_{UP} \propto \omega_+^2\cos^2\frac{\theta}{2} \qquad (A12b)$$
From the above, we find that the asymmetric peaks come from two origins: (i) the prefactor $\omega_\pm^2$ and (ii) the self-dipole term in the dipole-gauge Hamiltonian. While the first origin is trivial, the second origin can be understood as follows. If we had neglected the self-dipole term, we would naively take $\omega_0^2 + \Omega_N^2 \to \omega_0^2$ in Eq. (A5) and obtain a different expression for $\theta$, namely $\tan\theta = 2\omega_c\Omega_N/(\omega_0^2 - \omega_c^2)$ (where the $\Omega_N^2$ term now vanishes compared to Eq. (A10b)). At resonance, we would obtain $\tan\theta = \infty$, i.e., $\theta = \pi/2$ and $\cos^2(\theta/2) = \sin^2(\theta/2) = 1/2$. However, because of the $\Omega_N^2$ term, $\theta < \pi/2$, which implies $\sin^2(\theta/2) < \cos^2(\theta/2)$. In other words, when $\omega_c = \omega_0$, the self-dipole term forces the LP to be further suppressed and the UP to be further enhanced.
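A quick numerical check of the appendix results is to diagonalize the matrix $K$ of Eq. (A5) and compare with the closed form in Eq. (A6). The few lines below do exactly that; the parameter values are arbitrary and chosen only for illustration.

```python
import numpy as np

omega_0, omega_c, Omega_N = 1.0, 1.0, 0.3   # arbitrary units, resonant case

K = np.array([[omega_0**2 + Omega_N**2, omega_c * Omega_N],
              [omega_c * Omega_N,       omega_c**2]])
w_minus, w_plus = np.sqrt(np.linalg.eigvalsh(K))       # numerical polariton frequencies

s = omega_0**2 + Omega_N**2 + omega_c**2
root = np.sqrt(s**2 - 4.0 * omega_0**2 * omega_c**2)
analytic = np.sqrt(0.5 * np.array([s - root, s + root]))  # Eq. (A6)

print(w_minus, w_plus)   # should match 'analytic' to machine precision
print(analytic)
```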
(42) In principle, when one considers the exact quantum Hamiltonian for systems with light-matter interactions, all (i) instantaneous interactions between molecules are canceled exactly by the presence of terms that involve (ii) the non-local (and also instantaneous) self-interaction of delocalized photon modes. This exact cancellation allows for causality to be enforced, such that all meaningful intermolecular interactions are carried exclusively by the transverse photon field at the speed of light. In the present paper, we do not worry about causality and so we have ignored the details of the cancellation alluded to above, i.e. we do not address how this cancellation is affected by the cavity and the presence of a finite number of cavity modes. In principle, the presence of a cavity leads to a dressed $\hat{V}^{(nl)}_{\rm inter}$, i.e. a dressed intermolecular interaction (with image charges), and such effects are well understood within QED. 40,41 However, there is a caveat to this last point: an exact expression for $\hat{V}^{(nl)}_{\rm inter}$ would require that we treat all EM cavity modes correctly, and the resulting $\hat{V}^{(nl)}_{\rm inter}$ will be complex and exceedingly difficult to implement computationally. In practice, we assume that one long-wavelength cavity mode that is resonant with the O-H mode can be treated explicitly, while all modes of higher frequencies are taken as part of the environment. For such a prescription, there is no simple means to address the correct $\hat{V}^{(nl)}_{\rm inter}$; however, given how long the length scales are (microns), the correctly dressed $\hat{V}^{(nl)}_{\rm inter}$ cannot be very different from the standard form of $\hat{V}^{(nl)}_{\rm inter}$. For all of these reasons, we have chosen in the present manuscript to work with the standard form of the intermolecular interactions ($\hat{V}^{(nl)}_{\rm inter}$), knowing full well that our Hamiltonian slightly double counts some light-matter interactions.
Figure 1: Illustration of the algorithm structures of (a) the original I-PI for MD simulations and (b) our modified I-PI structure for cavity MD simulations, where the modification is labeled in red.
We have implemented the above cavity MD scheme by modifying an open-source MD package, I-PI, 45 which was designed for both classical and path-integral MD simulations. The general structure of I-PI is illustrated as the gray region in Fig. 1: at every time step, given the molecular positions $\{R_{nj}\}$, the forces $\{F^{(0)}_{nj}\}$ are calculated by calling external packages such as LAMMPS (for classical MD) 46 or CP2K (for ab initio MD); 47 after calculating the forces, the momenta $\{P_{nj}\}$ and positions $\{R_{nj}\}$ are updated accordingly.
Figure 2: The structure of the cavity for our simulation. Water molecules are constrained at the center of the cavity by a pair of thick SiO2 layers.
Figure 3: Simulated IR spectrum of liquid water under VSC or V-USC. From top to bottom, we plot the results (a) outside the cavity, or inside the cavity with effective coupling strength $\tilde\varepsilon$ of (b) $2\times 10^{-4}$, (c) $4\times 10^{-4}$, (d) $6\times 10^{-4}$, and (e) $8\times 10^{-4}$ a.u. All other simulation details are listed in Sec. 3.4. Note that, as $\tilde\varepsilon$ increases, the LP peak is suppressed and the UP peak is enhanced.
Figure 4: (a) Rabi frequency ($\Omega_N$) as a function of the effective coupling strength $\tilde\varepsilon$ for liquid water. (b) Polariton frequency and (c) integrated peak area of polaritons as a function of the normalized Rabi frequency ($\Omega_N/2\omega_0$). All simulation details are the same as in Fig. 3. In Fig. 4a the simulation data (black triangles) are fit linearly (gray line). In Fig. 4b-c the simulation data (blue stars for LP and red circles for UP) are compared with the analytical expressions from a simplified 1D model (Eqs. (19) and (20)), where the parameters are given as $\omega_0 = \omega_c = 3550$ cm$^{-1}$ and $\Omega_N$ is taken as the values in Fig. 4a.
Figure 5: (a) Polariton frequency and (b) integrated peak area of polaritons as a function of the cavity mode frequency. Note that the energy splitting between polaritons is minimized when the cavity mode has frequency 3550 cm$^{-1}$, but the upper and lower polaritons become symmetric in intensity at a different cavity mode frequency (4250 cm$^{-1}$). The simulation data for liquid water (scattered points) are compared with the analytical expressions of the simplified 1D model (black lines, see Eqs. (19) and (20)). For simulation parameters, $\tilde\varepsilon = 5\times 10^{-4}$ a.u. and all other parameters are the same as in Fig. 4. For parameters of the analytical expressions, we take $\omega_0 = 3550$ cm$^{-1}$ and $\Omega_N = 937$ cm$^{-1}$, which corresponds to the resonant Rabi frequency when $\tilde\varepsilon = 5\times 10^{-4}$ a.u. (see Fig. 4a). The gray solid lines in Fig. 5a represent the uncoupled O-H stretch mode frequency and the cavity mode frequency. The inset in Fig. 5b plots the cavity mode frequency corresponding to the case of symmetric polaritons (i.e., the crossing point frequency in Fig. 5b) as a function of $\Omega_N/2\omega_0$.
Figure 6: Normalized bond length distribution of O-H in liquid water. The result outside the cavity (solid black) is compared with that inside the cavity (with effective coupling strength $\tilde\varepsilon = 4\times 10^{-4}$ a.u., cyan dashed). All other parameters are set the same as in Fig. 3. Note that the bond length distribution is not changed by VSC or V-USC.
Figure 7: Radial pair distribution function (g(r)) of oxygen atoms in liquid water. The result outside the cavity (solid black) is compared with that inside the cavity (with effective coupling strength $\tilde\varepsilon = 4\times 10^{-4}$ a.u., cyan dashed). All other parameters are set the same as in Fig. 3. Note that g(r) is not changed by VSC or V-USC.
First, let us consider the static equilibrium properties of H2O molecules. We recently argued that the potential of mean force for a single molecule is not changed by the cavity 21 under typical VSC or V-USC setups. In fact, with the same proof procedure, it is easy to show that no static thermodynamic quantity of the molecules is changed by the cavity. This can be illustrated as follows. Given an observable $O = O(\{P_{nj}\}, \{R_{nj}\})$ which is a function of the molecular degrees of freedom only, the thermodynamic average of this variable inside the cavity ($\langle O\rangle_{\rm QED}$) is calculated by
$$\langle O\rangle_{\rm QED} = \frac{\int d\{R_{nj}\}\,d\{P_{nj}\}\,d\{\tilde{q}_{k,\lambda}\}\,d\{\tilde{p}_{k,\lambda}\}\; O\, e^{-\beta H^G_{\rm QED}}}{\int d\{R_{nj}\}\,d\{P_{nj}\}\,d\{\tilde{q}_{k,\lambda}\}\,d\{\tilde{p}_{k,\lambda}\}\; e^{-\beta H^G_{\rm QED}}} \qquad (21a)$$
$$= \frac{\int d\{R_{nj}\}\,d\{P_{nj}\}\; O\, e^{-\beta H^G_M}}{\int d\{R_{nj}\}\,d\{P_{nj}\}\; e^{-\beta H^G_M}} = \langle O\rangle_M \qquad (21b)$$
which is identical to the average outside the cavity ($\langle O\rangle_M$) after the integration over the photon modes, where $H^G_{\rm QED}$ and $H^G_M$ are defined in Eq. (8). Even though the mathematical proof guarantees that the static thermodynamic properties are not changed inside the cavity, it is still very helpful to check some static properties in simulation, as this provides a tool for checking numerical convergence. Fig. 6 plots the normalized bond length distribution of the O-H bond. Fig. 7 plots the radial pair distribution function between the oxygen atoms. For these two static properties, the results outside the cavity (solid black) agree exactly with the results inside the cavity (with effective coupling strength $\tilde\varepsilon = 4\times 10^{-4}$ a.u.). We have checked the results under other coupling strengths and this conclusion is not changed. Hence, both analytical and numerical treatments suggest that the static thermodynamic properties are not changed inside the cavity.
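The argument in Eq. (21), that integrating out the harmonic photon coordinates cancels identically between numerator and denominator, can be illustrated numerically on a toy one-dimensional "molecule" coupled to a single photon mode. The Monte Carlo sketch below is purely illustrative; the toy potential and all parameters are invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
beta, omega_c, eps = 1.0, 1.0, 0.5

def v_mol(x):
    # Toy molecular potential
    return 0.5 * x**2 + 0.1 * x**4

def mean_x2(couple, n=100_000):
    """Metropolis sampling of exp(-beta*H) with H = v_mol(x) + 0.5*(omega_c*q + eps*x)**2
    (if couple=True) or of the bare molecule (if couple=False); returns <x^2>."""
    x, q, xs = 0.0, 0.0, []
    for _ in range(n):
        xn, qn = x + 0.3 * rng.standard_normal(), q + 0.3 * rng.standard_normal()
        e_old = v_mol(x) + (0.5 * (omega_c * q + eps * x)**2 if couple else 0.0)
        e_new = v_mol(xn) + (0.5 * (omega_c * qn + eps * xn)**2 if couple else 0.0)
        if rng.random() < np.exp(-beta * (e_new - e_old)):
            x, q = xn, qn
        xs.append(x)
    return np.mean(np.array(xs[n // 10:])**2)

# The two estimates should agree within sampling error, as Eq. (21) predicts.
print(mean_x2(couple=True), mean_x2(couple=False))
```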
Figure 8: Velocity autocorrelation function (VACF) of the center of mass of individual H2O molecules: (a) the time-domain results and (b) the corresponding Fourier transform. The results outside the cavity (black solid) are compared with those inside the cavity (with effective coupling strength $\tilde\varepsilon = 4\times 10^{-4}$ a.u., cyan dashed). All other parameters are set the same as in Fig. 3. Note that the VACF is not changed by VSC or V-USC.
Second, let us move to the dynamical properties of individual H2O molecules. In particular, we are interested in whether the translational or rotational motion of a single H2O molecule is changed under VSC.
Figure 9: z-component of the first-order orientational autocorrelation function (OACF) of individual H2O molecules. (a) plots the time-domain results ($C^z_1(t)$) and (b) plots the corresponding spectrum ($I^z_1(\omega) = \omega^2 C^z_1(\omega)$). A zoom-in inset is also plotted in each subplot. The results outside the cavity (black dashed) are compared with those inside the cavity (with effective coupling strength $\tilde\varepsilon$ of $4\times 10^{-4}$ (cyan solid), $6\times 10^{-4}$ (red dashed), and $8\times 10^{-4}$ a.u. (blue dash-dotted), respectively). All other parameters are set the same as in Fig. 3. Note that the OACF is changed by V-USC.
(43) Tuckerman, M. Statistical mechanics: theory and molecular simulation; Oxford University Press: New York, 2010.
(44) Markland, T. E.; Ceriotti, M. Nuclear quantum effects enter the mainstream. Nat. Rev. Chem. 2018, 2, 0109.
(45) Kapil, V. et al. i-PI 2.0: A universal force engine for advanced molecular simulations. Comput. Phys. Commun. 2019, 236, 214-223.
Acknowledgements
This material is based upon work supported by the U.S. Department of Energy, Office of
Molecular polaritons for controlling chemistry with quantum optics. F Herrera, J Owrutsky, J. Chem. Phys. 2020100902Herrera, F.; Owrutsky, J. Molecular polaritons for controlling chemistry with quantum optics. J. Chem. Phys. 2020, 152, 100902.
Ultrastrong coupling between light and matter. A Frisk Kockum, A Miranowicz, S De Liberato, S Savasta, F Nori, Nat. Rev. Phys. 1Frisk Kockum, A.; Miranowicz, A.; De Liberato, S.; Savasta, S.; Nori, F. Ultrastrong coupling between light and matter. Nat. Rev. Phys. 2019, 1, 19-40.
Coherent coupling of molecular resonators with a microcavity mode. A Shalabney, J George, J Hutchison, G Pupillo, C Genet, T W Ebbesen, Nat. Commun. 65981Shalabney, A.; George, J.; Hutchison, J.; Pupillo, G.; Genet, C.; Ebbesen, T. W. Co- herent coupling of molecular resonators with a microcavity mode. Nat. Commun. 2015, 6, 5981.
Liquid-Phase Vibrational Strong Coupling. J George, A Shalabney, J A Hutchison, C Genet, T W Ebbesen, J. Phys. Chem. Lett. 6George, J.; Shalabney, A.; Hutchison, J. A.; Genet, C.; Ebbesen, T. W. Liquid-Phase Vibrational Strong Coupling. J. Phys. Chem. Lett. 2015, 6, 1027-1031.
Multiple Rabi Splittings under Ultrastrong Vibrational Coupling. J George, T Chervy, A Shalabney, E Devaux, H Hiura, C Genet, T W Ebbesen, Phys. Rev. Lett. 153601George, J.; Chervy, T.; Shalabney, A.; Devaux, E.; Hiura, H.; Genet, C.; Ebbesen, T. W. Multiple Rabi Splittings under Ultrastrong Vibrational Coupling. Phys. Rev. Lett. 2016, 117, 153601.
. A Thomas, J George, A Shalabney, M Dryzhakov, S J Varma, J Moran, T Chervy, X Zhong, E Devaux, C Genet, J A Hutchison, T W Ebbesen, Thomas, A.; George, J.; Shalabney, A.; Dryzhakov, M.; Varma, S. J.; Moran, J.; Chervy, T.; Zhong, X.; Devaux, E.; Genet, C.; Hutchison, J. A.; Ebbesen, T. W.
Ground-State Chemical Reactivity under Vibrational Coupling to the Vacuum Electromagnetic Field. Angew. Chemie Int. 55Ground-State Chemical Reactivity under Vibrational Coupling to the Vacuum Electro- magnetic Field. Angew. Chemie Int. Ed. 2016, 55, 11462-11466.
Cavity Catalysis by Cooperative Vibrational Strong Coupling of Reactant and Solvent Molecules. J Lather, P Bhatt, A Thomas, T W Ebbesen, J George, AngewLather, J.; Bhatt, P.; Thomas, A.; Ebbesen, T. W.; George, J. Cavity Catalysis by Cooperative Vibrational Strong Coupling of Reactant and Solvent Molecules. Angew.
. Chemie Int. Ed. 58Chemie Int. Ed. 2019, 58, 10635-10638.
Cavity Catalysis -Accelerating Reactions under Vibrational Strong Coupling. H Hiura, A Shalabney, J George, Hiura, H.; Shalabney, A.; George, J. Cavity Catalysis -Accelerating Reactions under Vibrational Strong Coupling. 2018,
Tilting a ground-state reactivity landscape by vibrational strong coupling. A Thomas, L Lethuillier-Karl, K Nagarajan, R M A Vergauwe, J George, T Chervy, A Shalabney, E Devaux, C Genet, J Moran, T W Ebbesen, Science. 363Thomas, A.; Lethuillier-Karl, L.; Nagarajan, K.; Vergauwe, R. M. A.; George, J.; Chervy, T.; Shalabney, A.; Devaux, E.; Genet, C.; Moran, J.; Ebbesen, T. W. Tilt- ing a ground-state reactivity landscape by vibrational strong coupling. Science 2019, 363, 615-619.
Modification of Enzyme Activity by Vibrational Strong Coupling of Water. Angew. Chemie Int. R M A Vergauwe, A Thomas, K Nagarajan, A Shalabney, J George, T Chervy, M Seidel, E Devaux, V Torbeev, T W Ebbesen, 58Vergauwe, R. M. A.; Thomas, A.; Nagarajan, K.; Shalabney, A.; George, J.; Chervy, T.; Seidel, M.; Devaux, E.; Torbeev, V.; Ebbesen, T. W. Modification of Enzyme Activity by Vibrational Strong Coupling of Water. Angew. Chemie Int. Ed. 2019, 58, 15324- 15328.
Two-dimensional infrared spectroscopy of vibrational polaritons. B Xiang, R F Ribeiro, A D Dunkelberger, J Wang, Y Li, B S Simpkins, J C Owrutsky, J Yuen-Zhou, W Xiong, Proc. Natl. Acad. Sci. Natl. Acad. Sci115Xiang, B.; Ribeiro, R. F.; Dunkelberger, A. D.; Wang, J.; Li, Y.; Simpkins, B. S.; Owrutsky, J. C.; Yuen-Zhou, J.; Xiong, W. Two-dimensional infrared spectroscopy of vibrational polaritons. Proc. Natl. Acad. Sci. 2018, 115, 4845-4850.
State-Selective Polariton to Dark State Relaxation Dynamics. B Xiang, R F Ribeiro, L Chen, J Wang, M Du, J Yuen-Zhou, W Xiong, J. Phys. Chem. A. 123Xiang, B.; Ribeiro, R. F.; Chen, L.; Wang, J.; Du, M.; Yuen-Zhou, J.; Xiong, W. State- Selective Polariton to Dark State Relaxation Dynamics. J. Phys. Chem. A 2019, 123, 5918-5927.
Theory of the Contribution of Excitons to the Complex Dielectric Constant of Crystals. J J Hopfield, Phys. Rev. 112Hopfield, J. J. Theory of the Contribution of Excitons to the Complex Dielectric Con- stant of Crystals. Phys. Rev. 1958, 112, 1555-1567.
Multi-level quantum Rabi model for anharmonic vibrational polaritons. F J Hernández, F Herrera, J. Chem. Phys. 144116Hernández, F. J.; Herrera, F. Multi-level quantum Rabi model for anharmonic vibra- tional polaritons. J. Chem. Phys. 2019, 151, 144116.
Theory for Polariton-Assisted Remote Energy Transfer. M Du, L A Martínez-Martínez, R F Ribeiro, Z Hu, V M Menon, J Yuen-Zhou, Chem. Sci. 9Du, M.; Martínez-Martínez, L. A.; Ribeiro, R. F.; Hu, Z.; Menon, V. M.; Yuen-Zhou, J. Theory for Polariton-Assisted Remote Energy Transfer. Chem. Sci. 2018, 9, 6659-6669.
Oscillator model for vacuum Rabi splitting in microcavities. S Rudin, T L Reinecke, Phys. Rev. B. 59Rudin, S.; Reinecke, T. L. Oscillator model for vacuum Rabi splitting in microcavities. Phys. Rev. B 1999, 59, 10227-10233.
F Ribeiro, R Dunkelberger, A D Xiang, B Xiong, W Simpkins, B S Owrutsky, J C Yuen-Zhou, J , Theory for Nonlinear Spectroscopy of Vibrational Polaritons. F. Ribeiro, R.; Dunkelberger, A. D.; Xiang, B.; Xiong, W.; Simpkins, B. S.; Owrut- sky, J. C.; Yuen-Zhou, J. Theory for Nonlinear Spectroscopy of Vibrational Polaritons.
. J. Phys. Chem. Lett. 9J. Phys. Chem. Lett. 2018, 9, 3766-3771.
Cavity Casimir-Polder Forces and Their Effects in Ground-State Chemical Reactivity. J Galego, C Climent, F J Garcia-Vidal, J Feist, Phys. Rev. X. 921057Galego, J.; Climent, C.; Garcia-Vidal, F. J.; Feist, J. Cavity Casimir-Polder Forces and Their Effects in Ground-State Chemical Reactivity. Phys. Rev. X 2019, 9, 021057.
Resonant catalysis of thermally activated chemical reactions with vibrational polaritons. J A Campos-Gonzalez-Angulo, R F Ribeiro, J Yuen-Zhou, Nat. Commun. 104685Campos-Gonzalez-Angulo, J. A.; Ribeiro, R. F.; Yuen-Zhou, J. Resonant catalysis of thermally activated chemical reactions with vibrational polaritons. Nat. Commun. 2019, 10, 4685.
A Reaction Kinetic Model for Vacuum-Field Catalysis Based on Vibrational Light-Matter Coupling. H Hiura, A Shalabney, Hiura, H.; Shalabney, A. A Reaction Kinetic Model for Vacuum-Field Catalysis Based on Vibrational Light-Matter Coupling.
On the Origin of Ground-State Vacuum-Field Catalysis: Equilibrium Consideration. T E Li, A Nitzan, J E Subotnik, Li, T. E.; Nitzan, A.; Subotnik, J. E. On the Origin of Ground-State Vacuum-Field Catalysis: Equilibrium Consideration. 2020,
Polaritonic normal modes in Transition State Theory. J A Campos-Gonzalez-Angulo, J Yuen-Zhou, Campos-Gonzalez-Angulo, J. A.; Yuen-Zhou, J. Polaritonic normal modes in Transition State Theory.
Vacuum field in a cavity, light-mediated vibrational coupling, and chemical reactivity. V P Zhdanov, Chem. Phys. 2020110767Zhdanov, V. P. Vacuum field in a cavity, light-mediated vibrational coupling, and chem- ical reactivity. Chem. Phys. 2020, 535, 110767.
Quantum trajectory simulation of controlled phase-flip gates using the vacuum Rabi splitting. H Goto, K Ichimura, Phys. Rev. A. 54301Goto, H.; Ichimura, K. Quantum trajectory simulation of controlled phase-flip gates using the vacuum Rabi splitting. Phys. Rev. A 2005, 72, 054301.
Quasiclassical modeling of cavity quantum electrodynamics. T E Li, H.-T Chen, A Nitzan, J E Subotnik, Phys. Rev. 202033831Li, T. E.; Chen, H.-T.; Nitzan, A.; Subotnik, J. E. Quasiclassical modeling of cavity quantum electrodynamics. Phys. Rev. A 2020, 101, 033831.
Benchmarking semiclassical and perturbative methods for real-time simulations of cavity-bound emission and interference. N M Hoffmann, C Schäfer, N Säkkinen, A Rubio, H Appel, A Kelly, J. Chem. Phys. 244113Hoffmann, N. M.; Schäfer, C.; Säkkinen, N.; Rubio, A.; Appel, H.; Kelly, A. Benchmark- ing semiclassical and perturbative methods for real-time simulations of cavity-bound emission and interference. J. Chem. Phys. 2019, 151, 244113.
Vacuum Rabi splitting in a plasmonic cavity at the single quantum emitter limit. K Santhosh, O Bitton, L Chuntonov, G Haran, Nat. Commun. Santhosh, K.; Bitton, O.; Chuntonov, L.; Haran, G. Vacuum Rabi splitting in a plasmonic cavity at the single quantum emitter limit. Nat. Commun. 2016, 7, ncomms11823.
Effects of exciton-plasmon strong coupling on third harmonic generation by two-dimensional WS 2 at periodic plasmonic interfaces. M Sukharev, R Pachter, J. Chem. Phys. 94701Sukharev, M.; Pachter, R. Effects of exciton-plasmon strong coupling on third harmonic generation by two-dimensional WS 2 at periodic plasmonic interfaces. J. Chem. Phys. 2018, 148, 094701.
Atoms and Molecules in Cavities, from Weak to Strong Coupling in Quantum-Electrodynamics (QED) Chemistry. J Flick, M Ruggenthaler, H Appel, A Rubio, Proc. Natl. Acad. Sci. Natl. Acad. Sci114Flick, J.; Ruggenthaler, M.; Appel, H.; Rubio, A. Atoms and Molecules in Cavities, from Weak to Strong Coupling in Quantum-Electrodynamics (QED) Chemistry. Proc. Natl. Acad. Sci. 2017, 114, 3026-3034.
Multiscale Molecular Dynamics Simulations of Polaritonic Chemistry. H L Luk, J Feist, J J Toppari, G Groenhof, J. Chem. Theory Comput. 13Luk, H. L.; Feist, J.; Toppari, J. J.; Groenhof, G. Multiscale Molecular Dynamics Simulations of Polaritonic Chemistry. J. Chem. Theory Comput. 2017, 13, 4324-4335.
Tracking Polariton Relaxation with Multiscale Molecular Dynamics Simulations. G Groenhof, C Climent, J Feist, D Morozov, J J Toppari, J. Phys. Chem. Lett. 10Groenhof, G.; Climent, C.; Feist, J.; Morozov, D.; Toppari, J. J. Tracking Polariton Re- laxation with Multiscale Molecular Dynamics Simulations. J. Phys. Chem. Lett. 2019, 10, 5476-5483.
Vibrational Ultra Strong Coupling of Water and Ice. H Hiura, A Shalabney, J George, Hiura, H.; Shalabney, A.; George, J. Vibrational Ultra Strong Coupling of Water and Ice. 2019,
A general purpose model for the condensed phases of water: TIP4P. J L F Abascal, C Vega, J. Chem. Phys. 234505Abascal, J. L. F.; Vega, C. A general purpose model for the condensed phases of water: TIP4P/2005. J. Chem. Phys. 2005, 123, 234505.
Zero point energy leakage in condensed phase dynamics: An assessment of quantum simulation methods for liquid water. S Habershon, D E Manolopoulos, J. Chem. Phys. 244518Habershon, S.; Manolopoulos, D. E. Zero point energy leakage in condensed phase dynamics: An assessment of quantum simulation methods for liquid water. J. Chem. Phys. 2009, 131, 244518.
Combined electronic structure/molecular dynamics approach for ultrafast infrared spectroscopy of dilute HOD in liquid H2O and D2O. S A Corcelli, C P Lawrence, J L Skinner, J. Chem. Phys. 120Corcelli, S. A.; Lawrence, C. P.; Skinner, J. L. Combined electronic structure/molecular dynamics approach for ultrafast infrared spectroscopy of dilute HOD in liquid H2O and D2O. J. Chem. Phys. 2004, 120, 8107-8117.
Photons and Atoms: Introduction to Quantum Electrodynamics. C Cohen-Tannoudji, J Dupont-Roc, G Grynberg, WileyNew YorkCohen-Tannoudji, C.; Dupont-Roc, J.; Grynberg, G. Photons and Atoms: Introduction to Quantum Electrodynamics; Wiley: New York, 1997; pp 280-295.
LECTURES ON THE PARALLELS BETWEEN MODULI OF QUIVER REPRESENTATIONS AND VECTOR BUNDLES OVER CURVES
15 Sep 2018
Victoria Hoskins
These are lecture notes for a mini-course at the SCGP in March 2019 for the fourth workshop on the Geometry and Physics of Higgs bundles. In this course, we explore similarities between moduli of quiver representations and moduli of vector bundles over a smooth projective curve. After describing the basic properties of these moduli problems and constructions of their moduli spaces via geometric invariant theory and symplectic reduction, we introduce their hyperkähler analogues: moduli spaces of representations of a doubled quiver satisfying certain relations imposed by a moment map and moduli spaces of Higgs bundles. In the final lecture, we survey a surprising link between the counts of absolutely indecomposable objects over finite fields and the Betti cohomology of these (complex) hyperkähler moduli spaces due to work of Crawley-Boevey and Van den Bergh and Hausel, Letellier and Rodriguez-Villegas in the quiver setting, and work of Schiffmann in the bundle setting.
Introduction
These are notes for a mini-course to be held at the SCGP in March 2019 as part of the fourth workshop on the Geometry and Physics of Higgs bundles. The goal of the course is to describe several parallels between moduli of bundles and quiver representations. The course is divided into three lectures as follows:
• Lecture 1: Properties and constructions of moduli spaces.
• Lecture 2: Associated hyperkähler moduli spaces.
• Lecture 3: Cohomology of hyperkähler moduli spaces and counting indecomposable objects.
Each lecture contains several exercises, which will be discussed in the accompanying problem sessions.
We will focus on some of the most fundamental and striking similarities between these moduli problems. Due to a lack of time, it is not possible to properly survey several important results concerning these moduli problems with the care they deserve. We will largely omit the study of the associated moduli stacks and techniques for studying the cohomology of moduli spaces via Harder-Narasimhan recursions on the stack [2,17,49]. Moreover, we will not discuss Hall algebras associated to these moduli problems in any depth, or the relationship with Donaldson-Thomas theory; for a comprehensive introduction to Hall algebras see [51].
Moduli of quiver representations generalise many natural problems in linear algebra (for example, the classification of similar matrices via Jordan normal form). Despite their seemingly simple nature, quiver moduli spaces are ubiquitous in algebraic geometry (in fact, every projective variety arises as a quiver grassmannian [50]). Moreover, the study of such moduli spaces can shed light on related moduli problems and questions in representation theory.
The moduli problem most closely related to that of quiver representations is moduli of vector bundles (or coherent sheaves) on a smooth projective curve. Both moduli problems have associated abelian categories of homological dimension 1 and have associated moduli stacks which are smooth. In order to construct moduli spaces, one must restrict to a class of stable (or semistable) objects; one then obtains smooth moduli spaces of stable objects. These moduli spaces can be constructed as algebraic quotients using geometric invariant theory [41] (for bundles, the construction was given by Mumford, Seshadri and Newstead [41,55,45], and for quivers, this construction was given by King [33]), or, when over the complex numbers, via symplectic reduction. In fact, quiver moduli spaces have a finite dimensional symplectic construction, whereas moduli of vector bundles have an infinite dimensional gauge-theoretic symplectic construction [2]. For quivers, the algebraic and symplectic quotients are homeomorphic via the Kempf-Ness theorem [32,33]. The Kobayashi-Hitchin correspondence [13,43,58] gives the corresponding relationship for the gauge theoretic constructions of moduli spaces of vector bundles. Before proceeding, let us mention one important difference between moduli spaces of bundles and quiver representations: although moduli spaces of semistable vector bundles on a curve provide compactifications of moduli spaces of stable vector bundles, moduli spaces of semistable quiver representations are only projective over an associated affine quiver variety. The constructions and comparisons between these moduli spaces are the main topic of the first lecture.
In the second lecture, we turn to the study of associated (non-compact) hyperkähler moduli spaces. The symplectic constructions of moduli spaces M of quiver representations and vector bundles both arise by considering a smooth symplectic action of a Lie group on a complex vector space (in the case of vector bundles, the group and vector space both have infinite dimension). We can upgrade this to a hyperkähler setting by taking the cotangent lift of this action and then perform a hyperkähler reduction to construct a hyperkähler analogue H of M such that T * M ⊂ H. The hyperkähler reductions we obtain are moduli spaces of representations of a doubled quiver satisfying certain relations imposed by a moment map (closely related to Nakajima quiver varieties [42]) and moduli spaces of Higgs bundles [21,56].
In the final lecture, we will survey several surprising results relating the counts of absolutely indecomposable objects of these moduli problems over finite fields and the Betti cohomology of their associated hyperkähler moduli spaces. For quivers, this is due to work of Crawley-Boevey and Van den Bergh [12] and Hausel, Letellier and Rodriguez-Villegas [18], and was motivated by Kac's work in representation theory [28]. This work inspired Schiffmann [52] to formulate and prove an analogous statement for bundles, which lead to formulae for the Betti numbers of moduli spaces of Higgs bundles in the coprime setting.
The structure of these notes is as follows: §2 and §3 describe the basic properties and constructions of moduli spaces of quiver representations and vector bundles respectively, which forms the basis of the first lecture. In §4, we introduce the associated hyperkähler moduli spaces and survey some constructions of interesting submanifolds known as branes; this is the content of the second lecture. In §5, we provide the proof of Crawley-Boevey and Van den Bergh relating the counts of absolutely indecomposable quiver representations with the Betti numbers of the associated hyperkähler moduli spaces, and then sketch how Schiffmann extends this to bundles; this final section is the content of the third lecture.
Acknowledgements. The author's visit during the workshop is funded by NSF grant "NSF CAREER Award DMS 1749013" of L. Schaposnik and the Simons Center for Geometry and Physics. The author would also like to thank the participants of a seminar on the topic of the third lecture held at the Freie Universität Berlin for interesting discussions related to this topic.
2. Moduli spaces of quiver representations

2.1. Quiver representations over a field. A quiver Q = (V, A, h, t) is a finite connected directed graph consisting of finite sets of vertices V and arrows A with head and tail maps h, t : A → V giving the directions of the arrows. Throughout this section, we fix a field k.

Definition 2.1. A k-representation of Q is a tuple W = ((W_v)_{v∈V}, (φ_a)_{a∈A}) where
• W_v is a finite-dimensional k-vector space for all v ∈ V;
• φ_a : W_{t(a)} → W_{h(a)} is a k-linear map for all a ∈ A.
The dimension vector of W is the tuple dim W = (dim W_v)_{v∈V}. A morphism between two k-representations W := ((W_v)_{v∈V}, (φ_a)_{a∈A}) and W′ := ((W′_v)_{v∈V}, (φ′_a)_{a∈A}) is a tuple of linear maps (f_v : W_v → W′_v)_{v∈V} such that for all a ∈ A the following square commutes:
$$\begin{array}{ccc} W_{t(a)} & \xrightarrow{\;\varphi_a\;} & W_{h(a)} \\ f_{t(a)}\downarrow\ & & \ \downarrow f_{h(a)} \\ W'_{t(a)} & \xrightarrow{\;\varphi'_a\;} & W'_{h(a)}, \end{array}$$
that is, f_{h(a)} ∘ φ_a = φ′_a ∘ f_{t(a)}.
The category Rep(Q, k) of k-representations of Q is a k-linear abelian category. For two k-representations W and W ′ of Q, the set of morphisms between them is a k-vector space denoted Hom Q (W, W ′ ) and similarly one can consider the spaces of extensions between such representations.
Example 2.2 (The Jordan Quiver). Let Q be the one loop quiver. Then a k-representation of Q is a vector space W with an endomorphism φ : W → W . Two representations (W, ϕ) and (W ′ , ϕ ′ ) of Q are isomorphic if dim W = dim W ′ and there is an isomorphism f : W → W ′ such that f • ϕ = ϕ ′ • f . In particular, for any representation (W, ϕ) of Q of dimension n we can choose a basis of W to obtain an isomorphic representation (k n , M ), where M ∈ Mat n×n (k). A different choice of basis would replace M with a conjugate matrix SM S −1 . Thus the isomorphism classes of n-dimensional k-representations of Q are in bijection with conjugacy classes of n × n-matrices. For an algebraically closed field k, we can classify the latter using Jordan normal forms.
Exercise 2.3. For two k-representations W and W′ of Q, describe the vector spaces Hom_Q(W, W′) and Ext^1_Q(W, W′) explicitly and deduce that
$$\dim \operatorname{Hom}_Q(W, W') - \dim \operatorname{Ext}^1_Q(W, W') = \sum_{v\in V}\dim W_v \dim W'_v - \sum_{a\in A}\dim W_{t(a)}\dim W'_{h(a)}.$$
Following this observation, we define a bilinear form on the free abelian group on the set of vertices V .
Definition 2.4. The Euler form associated to Q is the bilinear form on Z^V given by
$$\langle d, d'\rangle_Q := \sum_{v\in V} d_v d'_v - \sum_{a\in A} d_{t(a)} d'_{h(a)},$$
where d = (d_v)_{v∈V} and d′ = (d′_v)_{v∈V}.

Remark 2.5. (1) The Euler form depends on the orientation of Q. The Euler form is symmetric if and only if Q is symmetric (that is, for any two vertices v and w, we have |a : v → w| = |a : w → v|).
(2) The quadratic form q_Q on Z^V associated to the Euler form only depends on the underlying graph of the quiver Q. In fact, properties of this quadratic form can be related to the properties of the underlying graph (for example, if Q is a Dynkin quiver, then q_Q is positive definite [10, §4]).
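For example, for the n-arrow Kronecker quiver of Exercise 2.23 below (two vertices v_1, v_2 and n arrows from v_1 to v_2), the Euler form is
$$\langle (d_1, d_2), (d'_1, d'_2)\rangle_Q = d_1 d'_1 + d_2 d'_2 - n\, d_1 d'_2,$$
which is not symmetric as soon as n ≥ 1, illustrating the dependence on the orientation of the arrows.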
Exercise 2.6 (The category of quiver representations has homological dimension 1). For U ∈ Rep(Q, k), we can apply Hom Q (−, U ) to any short exact sequence 0 → W ′ → W → W ′′ → 0 in Rep(Q, k) to obtain a long exact sequence
$$0 \to \operatorname{Hom}_Q(W'', U) \to \operatorname{Hom}_Q(W, U) \to \operatorname{Hom}_Q(W', U) \to \operatorname{Ext}^1_Q(W'', U) \to \operatorname{Ext}^1_Q(W, U) \to \operatorname{Ext}^1_Q(W', U) \to \operatorname{Ext}^2_Q(W'', U) \to \cdots$$
Using the description of Ext^1_Q in Exercise 2.3, prove that Ext^1_Q(W, U) → Ext^1_Q(W′, U) is surjective, from which it follows that Ext^2_Q(W′′, U) = 0 for all W′′ and U and thus Rep(Q, k) has homological dimension 1.

Remark 2.7. The path algebra k(Q) of Q over k is the k-vector space spanned by all paths in Q (including a trivial path e_v at each vertex v) with multiplication given by concatenation of paths. In general, this is a non-commutative algebra, which is generated by the paths of length 0 (the vertices V) and the paths of length 1 (the arrows A). We note that the path algebra is a finite dimensional k-algebra if and only if Q has no oriented cycles. Moreover, the category Rep(Q, k) is equivalent to the category of left k(Q)-modules (see [9, Proposition 1.2.2]). One can also calculate Ext groups of quiver representations by taking projective resolutions of the associated k(Q)-module.
Definition 2.8. A quiver representation W is
(1) simple if it has no proper non-zero subrepresentations,
(2) indecomposable if it cannot be written as a direct sum of proper subrepresentations.
Clearly every simple representation is indecomposable.

Exercise 2.9 (Schur's lemma and simple quiver representations). For a simple k-representation W of Q, prove that End_Q(W) is a division algebra using Schur's Lemma. Hence, if k is algebraically closed, deduce that End_Q(W) ≅ k and Aut_Q(W) ≅ k^×.

Exercise 2.11. Find a quiver Q such that
• there is a simple representation not of the form S(v) (where S(v) denotes the representation with S(v)_v = k, S(v)_w = 0 for w ≠ v, and all maps zero),
• there is an indecomposable representation which is not simple.
Remark 2.12. For a field extension k ⊂ K we have a natural functor
− ⊗ k K : Rep(Q, k) → Rep(Q, K)
given by extension of scalars.
Exercise 2.13. Show that extension of scalars does not in general preserve simple (respectively indecomposable) quiver representations. For example, consider k = R ⊂ K = C with a 2-dimensional representation of the Jordan quiver.
In fact, the category Rep(Q, k) of k-representations of Q is a Krull-Schmidt category, which means that the endomorphism ring of every indecomposable representation is local (see Lemma 5.17) and every representation is isomorphic to a finite direct sum of indecomposable representations (and up to permutation, the indecomposable representations in such a direct sum are uniquely determined up to isomorphism); for details, see [9, Theorem 1.3.4].
2.2. GIT construction of moduli spaces. In this section, we describe King's construction [33] of moduli spaces of representations of a quiver Q = (V, A, h, t) over a field k. We fix a dimension vector d = (d v ) v∈V ∈ N V . Then every k-representation of Q of fixed dimension vector d is isomorphic to a point of the following affine k-space
$$\operatorname{Rep}_d(Q) := \prod_{a\in A}\operatorname{Mat}_{d_{h(a)}\times d_{t(a)}}.$$
The reductive k-group GL_d := ∏_{v∈V} GL_{d_v} acts algebraically on Rep_d(Q) by conjugation: for g = (g_v)_{v∈V} ∈ GL_d and M = (M_a)_{a∈A} ∈ Rep_d(Q), we have
$$(2.1)\qquad g\cdot M := (g_{h(a)} M_a g_{t(a)}^{-1})_{a\in A}$$
and the orbits for this action are in bijection with the set of isomorphism classes of d-dimensional k-representations of Q by Exercise 2.14 below. There is a subgroup ∆ := {(t I_{d_v})_{v∈V} : t ∈ G_m} ⊂ GL_d acting trivially on Rep_d(Q) and therefore a quotient of the action of GL_d is equivalent to a quotient of the action of G_d := GL_d/∆. We have that
$$(2.2)\qquad \langle d, d\rangle_Q = \dim\operatorname{GL}_d - \dim\operatorname{Rep}_d(Q).$$
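Indeed, (2.2) is simply the dimension count
$$\dim\operatorname{GL}_d - \dim\operatorname{Rep}_d(Q) = \sum_{v\in V} d_v^2 - \sum_{a\in A} d_{h(a)} d_{t(a)} = \langle d, d\rangle_Q.$$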
Exercise 2.14. Show the orbit and stabiliser of M ∈ Rep d (Q)(k) have the following descriptions:
$$\operatorname{GL}_d(k)\cdot M = \{M'\in\operatorname{Rep}_d(Q)(k) : M'\cong M\}\quad\text{and}\quad \operatorname{Stab}_{\operatorname{GL}_d(k)}(M)\cong\operatorname{Aut}_Q(M).$$
Moreover, deduce from (2.2) and Exercise 2.3 that
$$\dim\operatorname{Ext}^1_Q(M, M) = \operatorname{codim}(\operatorname{GL}_d(k)\cdot M).$$
One would like to construct a moduli space for quiver representations as a quotient of this action using geometric invariant theory (GIT) [41]. Since Rep_d(Q) is an affine variety and GL_d is a reductive group, the ring of invariant functions
$$\mathcal{O}(\operatorname{Rep}_d(Q))^{\operatorname{GL}_d} = \{f : \operatorname{Rep}_d(Q)\to\mathbb{A}^1 : g\cdot f = f\ \ \forall\, g\in\operatorname{GL}_d\}$$
is a finitely generated k-algebra and the inclusion O(Rep_d(Q))^{GL_d} ↪ O(Rep_d(Q)) induces a GL_d-invariant morphism
$$\pi : \operatorname{Rep}_d(Q)\to\operatorname{Rep}_d(Q)/\!/\operatorname{GL}_d := \operatorname{Spec}\mathcal{O}(\operatorname{Rep}_d(Q))^{\operatorname{GL}_d}$$
of affine varieties, which is the affine GIT quotient. The double quotient notation indicates that this is not an orbit space in general, as π identifies orbits whose closures meet.
Exercise 2.15. Let k be an algebraically closed field. For the Jordan quiver with dimension vector n, we have that Rep_d(Q) = Mat_{n×n} ≅ A^{n^2} with GL_n acting by conjugation. Show that
$$\mathcal{O}(\operatorname{Rep}_d(Q))^{\operatorname{GL}_n} = k[\sigma_1,\ldots,\sigma_n]$$
where σ i ∈ O(Rep d (Q)) are the coefficients of the characteristic polynomial (viewed as functions on Rep d (Q) which are invariant under conjugation). In particular, deduce that π : Rep d (Q) → Rep d (Q)//GL n ∼ = A n identifies orbits when n ≥ 2 (for example, use the classification of orbits given by Jordan normal form). In fact, this is a special case of a result of Le Bruyn and Procesi described below, as the coefficients of the characteristic polynomial of a matrix can be computed by taking traces of powers of the matrix.
The affine GIT quotient π : Rep d (Q) → Rep d (Q)//GL d may identify all orbits; for example, if there are no non-constant invariant functions, which is the case if Q has no oriented cycles by the following theorem.
Theorem 2.16 (Le Bruyn and Procesi [36]). The ring of invariants O(Rep d (Q)) GL d is generated by taking traces along oriented cycles in Q.
Instead King constructs a GIT quotient of the GL d -action on an open subset of Rep d (Q) by linearising the action using a stability parameter θ = (θ v ) v∈V ∈ Z V . The stability parameter θ determines a character
$$(2.3)\qquad \chi_\theta : \operatorname{GL}_d\to\mathbb{G}_m, \qquad \chi_\theta((g_v)_{v\in V}) := \prod_{v\in V}(\det g_v)^{\theta_v},$$
which descends to a character of G_d if and only if χ_θ(∆) = 1 (that is, θ · d := Σ_{v∈V} θ_v d_v = 0). Let L_θ denote the GL_d-linearisation on the trivial line bundle Rep_d(Q) × A^1, where GL_d acts on A^1 via multiplication by χ_θ. Then we can use invariant sections of positive powers of L_θ to construct a GIT semistable set and a GIT quotient. As in [33], the invariant sections of positive tensor powers L_θ^n of this linearisation are χ_θ^n-semi-invariant functions; that is, f : Rep_d(Q) → A^1 satisfying f(g·X) = χ_θ(g)^n f(X), for all g ∈ GL_d and all X ∈ Rep_d(Q). We let O(Rep_d(Q))^{GL_d, χ_θ^n} denote the subset of χ_θ^n-semi-invariant functions; then
$$H^0(\operatorname{Rep}_d(Q), L_\theta^n)^{\operatorname{GL}_d} = \mathcal{O}(\operatorname{Rep}_d(Q))^{\operatorname{GL}_d,\chi_\theta^n}.$$
Since ∆ acts trivially on Rep d (Q), invariant sections of L n θ for n > 0 only exist if χ θ (∆) = 1 (i.e., θ · d = 0). Definition 2.17. For the GL d -linearisation on Rep d (Q) given by χ θ , we say a point X ∈ Rep d (Q) is GIT semistable if there exists n > 0 and an GL d -invariant section f of L n θ with f (X) = 0. We let Rep d (Q) χ θ −ss denote the subset of semistable points.
The semistable set is an open subset of Rep d (Q) and is non-empty only if θ · d = 0. Henceforth, we shall assume that θ · d = 0 in order to have a non-empty semistable set.
Mumford's linearised version of GIT gives us a GIT quotient
$$\operatorname{Rep}_d(Q)^{\chi_\theta-ss} \to \operatorname{Rep}_d(Q)/\!/_{\chi_\theta}\operatorname{GL}_d := \operatorname{Proj}\bigoplus_{n\ge 0}\mathcal{O}(\operatorname{Rep}_d(Q))^{\operatorname{GL}_d,\chi_\theta^n}.$$
Remark 2.18. The 0th graded piece O(Rep_d(Q))^{GL_d, χ_θ^0} = O(Rep_d(Q))^{GL_d} is the ring of invariant functions, and we have a projective (and thus proper) morphism
$$p : \operatorname{Rep}_d(Q)/\!/_{\chi_\theta}\operatorname{GL}_d = \operatorname{Proj}\bigoplus_{n\ge 0}\mathcal{O}(\operatorname{Rep}_d(Q))^{\operatorname{GL}_d,\chi_\theta^n} \to \operatorname{Rep}_d(Q)/\!/\operatorname{GL}_d = \operatorname{Spec}\mathcal{O}(\operatorname{Rep}_d(Q))^{\operatorname{GL}_d}$$
to an affine variety. If Q is a quiver without oriented cycles, then O(Rep_d(Q))^{GL_d} = k and Rep_d(Q)//_{χ_θ}GL_d is a projective variety.
The GL_d-invariant sections of positive powers of L_θ are also used to determine a GIT notion of stability with respect to χ_θ (see [33, Definition 2.1] for k = k̄, where we note that the notion of stability is modified to account for the presence of the global stabiliser ∆). This determines an open subset Rep_d(Q)^{χ_θ-s} of χ_θ-stable points and the GIT quotient restricts to a quotient
$$\pi|_{\operatorname{Rep}_d(Q)^{\chi_\theta-s}} : \operatorname{Rep}_d(Q)^{\chi_\theta-s} \to \operatorname{Rep}_d(Q)^{\chi_\theta-s}/\operatorname{GL}_d,$$
which is a geometric quotient (which in particular, is an orbit space) of the GIT stable set.
Using the Hilbert-Mumford criterion to relate GIT semistability of geometric points with stability for 1-parameter subgroups λ : G → GL d , King proves that the GIT notion of (semi)stability can be translated to a notion of (semi)stability for d-dimensional representations of Q. For points over a non-algebraically closed field, GIT stability is related to a notion of geometric stability for representations as described below.
Definition 2.19 (Semistability). Let θ · d = 0. We say a d-dimensional k-representation W of Q is:
(1) θ-semistable if θ · dim W′ ≥ 0 for all k-subrepresentations 0 ≠ W′ ⊊ W.
(2) θ-stable if θ · dim W′ > 0 for all k-subrepresentations 0 ≠ W′ ⊊ W.
(3) θ-polystable if it is isomorphic to a direct sum of θ-stable representations of equal slope.
(4) θ-geometrically stable if W ⊗ k K is θ-stable for all field extensions K/k. There are natural notions of Jordan-Hölder filtrations, and we say two θ-semistable k-representations of Q are S-equivalent if their associated graded objects for their Jordan-Hölder filtrations are isomorphic. Exercise 2.20 (Rephrasing of stability as a slope-type condition). For θ ∈ Z V , we define the slope of a k-representation W of Q by
$$\mu_\theta(W) := \frac{\sum_{v\in V}\theta_v\dim_k W_v}{\sum_{v\in V}\dim_k W_v}.$$
Let θ′_v := θ_v Σ_{w∈V} d_w − Σ_{w∈V} θ_w d_w for all v ∈ V; then show that Σ_{v∈V} θ′_v d_v = 0
and that slope semistability with respect to θ and θ ′ coincide (where slope semistability means all subrepresentations have slope less than or equal to the slope of the representation). Furthermore, show for d-dimensional representations of Q that (−θ ′ )-semistability (as in Definition 2.19) is equivalent to slope semistability with respect to θ ′ .
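The first claim follows directly from the definition of θ′:
$$\sum_{v\in V}\theta'_v d_v = \Big(\sum_{v\in V}\theta_v d_v\Big)\Big(\sum_{w\in V} d_w\Big) - \Big(\sum_{w\in V}\theta_w d_w\Big)\Big(\sum_{v\in V} d_v\Big) = 0.$$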
The slope version of (semi)stability enables one to easily define Harder-Narasimhan (HN) filtrations for quiver representations. In [49], Reineke used the HN stratification on Rep d (Q) (together with the HN system in an associated Hall algebra) to give formulae for the Poincaré polynomials of moduli spaces of semistable representations of quivers without oriented cycles, when semistability is taken with respect to a generic stability parameter in the following sense.
Definition 2.21. A stability parameter θ ∈ Z^V is called generic with respect to d if θ · d = 0 and for all non-zero d′ ∈ N^V with d′ < d, we have θ · d′ ≠ 0.
Remark 2.22. For a generic stability parameter θ with respect to d, every θ-semistable k-representation of Q is also θ-stable.
Using the Hilbert-Mumford criterion, which gives a criterion for GIT semistability using 1-parameter subgroups of GL d , King shows the open subsets Rep d (Q) θ−ss and Rep d (Q) θ−gs of θ-semistable and θgeometrically stable k-representations in Rep d (Q) coincide with the GIT semistable and stable locus respectively:
Rep d (Q) θ−ss = Rep d (Q) χ θ −ss and Rep d (Q) θ−gs = Rep d (Q) χ θ −s . Hence, the GIT quotient M θ−ss d (Q) := Rep d (Q)// χ θ G d is a k-variety that co-represents the moduli func- tor of θ-semistable k-representations of Q of dimension d (up to S-equivalence). Moreover, M θ−gs d (Q) := Rep d (Q) χ θ −s /G d is an open k-subvariety of M θ−ss d
(Q) that co-represents the moduli functor of θ-geometrically stable k-representations of Q of dimension d (up to isomorphism). We will refer to both these spaces as moduli spaces.
Exercise 2.23. Consider the n-arrow Kronecker quiver with
V = {v_1, v_2} and A = {a_i : v_1 → v_2}_{i=1,…,n}. For the dimension vector d = (1, 1), we have Rep_d(Q) ≅ A^n with the action of GL_d ≅ G_m × G_m given by (t_1, t_2) ↦ diag(t_2 t_1^{-1}, …, t_2 t_1^{-1}). If we naturally identify O(Rep_d(Q)) ≅ k[x_1, …, x_n] with each variable corresponding to an arrow, then O(Rep_d(Q))^{GL_d} = k. Moreover, show that for θ_+ = (1, −1) and θ_− = (−1, 1) we have the following semistable loci and GIT quotients:
$$\operatorname{Rep}_d(Q)^{\theta_+-ss} = \emptyset\quad\text{and}\quad \operatorname{Rep}_d(Q)^{\theta_--ss} = \mathbb{A}^n\setminus\{0\}\to\operatorname{Rep}_d(Q)/\!/_{\theta_-}\operatorname{GL}_d = \operatorname{Proj}(k[x_1,\ldots,x_n])\cong\mathbb{P}^{n-1}.$$
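To see where this comes from, note that each coordinate x_i is a χ_{θ_-}-semi-invariant of weight one: for g = (t_1, t_2) we have χ_{θ_-}(g) = t_1^{-1} t_2 and
$$x_i(g\cdot X) = t_2 t_1^{-1}\, x_i(X) = \chi_{\theta_-}(g)\, x_i(X),$$
so a monomial of degree m in the x_i is a χ_{θ_-}^m-semi-invariant and the θ_--semistable locus is the complement of their common zero locus {0}; for θ_+ = (1, −1) every monomial transforms with a negative power of χ_{θ_+}, so there are no non-zero χ_{θ_+}^n-semi-invariants for n > 0 and the θ_+-semistable locus is empty.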
Exercise 2.24 (Stable representations are simple). Prove that a θ-stable k-representation of Q is simple. If k is algebraically closed, deduce that the automorphism group of a θ-stable representation of Q is isomorphic to the multiplicative group k × .
In fact, for an arbitrary field k, the stabiliser group of a θ-geometrically stable k-representation is the subgroup ∆ ⊂ GL_d (for example, see [25, Corollary 2.14]). The action of G_d := GL_d/∆ on Rep_d(Q)^{θ-gs} is free and the geometric quotient
$$\operatorname{Rep}_d(Q)^{\theta-gs} \to M^{\theta-gs}_d(Q)$$
is a principal G_d-bundle; thus M^{θ-gs}_d(Q) is smooth. Provided Rep_d(Q)^{θ-gs} ≠ ∅, we have
$$\dim M^{\theta-gs}_d(Q) = \dim\operatorname{Rep}_d(Q) - \dim G_d = \dim\operatorname{Rep}_d(Q) - \dim\operatorname{GL}_d + 1 = 1 - \langle d, d\rangle_Q.$$
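For example, for the n-arrow Kronecker quiver with d = (1, 1) and θ = θ_- as in Exercise 2.23, this gives
$$\dim M^{\theta_--gs}_d(Q) = 1 - \langle(1,1),(1,1)\rangle_Q = 1 - (2 - n) = n - 1,$$
in agreement with the identification of the GIT quotient with P^{n−1}.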
Remark 2.25. In fact, it is a theorem of Gabriel that the Tits form q Q (d) := d, d Q is positive definite if and only if the underlying graph of Q is a simply-laced Dynkin diagram; this is in turn equivalent to each of the following statements:
i) q Q (d) ≥ 1, ii) there is an open G d -orbit in Rep d (Q),
iii) there are only finitely many G_d-orbits in Rep_d(Q). By Exercise 2.14 we see (ii) implies (i) and the equivalence of (ii) and (iii) holds as the closure of any orbit is a union of finitely many orbits and, as Rep_d(Q) is irreducible, any open orbit is dense.
For an algebraically closed field k, the closed points of M^{θ-ss}_d(Q) are in bijection with S-equivalence classes of G_d(k)-orbits of χ_θ-semistable rational points, where two k-representations M_1 and M_2 are S-equivalent if their orbit closures intersect in Rep_d(Q)^{χ_θ-ss}. By [33, Proposition 3.2.(ii)], this is the same as the S-equivalence of M_1 and M_2 as θ-semistable representations of Q. Moreover, for k algebraically closed, we have M^{θ-s}_d(k) = Rep_d(Q)^{θ-s}(k)/G_d(k).

Remark 2.26. For a non-algebraically closed field, the rational points of these moduli spaces do not in general correspond to rational orbits. In [25], for a perfect field k, we show using Galois cohomology and descent that there is an injection
$$\operatorname{Rep}_d(Q)^{\theta-gs}(k)/G_d(k) \hookrightarrow M^{\theta-gs}_d(Q)(k),$$
and that the remaining points in M^{θ-gs}_d(Q)(k) that do not come from isomorphism classes of k-representations can be described as representations of Q over central division algebras over k (or equivalently as twisted quiver representations). More precisely, as the moduli stack
$$[\operatorname{Rep}_d(Q)^{\theta-gs}/\operatorname{GL}_d] \to M^{\theta-gs}_d(Q)$$
is a G_m-gerbe, one can associate to each point of M^{θ-gs}_d(Q)(k) a central division algebra over k, that is, a class in the Brauer group Br(k), and the points of M^{θ-gs}_d(Q)(k) are then interpreted in [25] as representations of Q over the corresponding division algebras. Let us mention two special cases of this result:
• For k = R, we have Br(R) = {R, H} and so the points in M θ−gs d (Q)(R) are rational or quaternionic quiver representations (and the latter only occur if 2|d).
• For a finite field k = F q , the Brauer group is trivial; thus M θ−gs d (Q)(F q ) is precisely the set of isomorphism classes of θ-geometrically stable d-dimensional F q -representations of Q.
2.3. Symplectic construction of complex quiver varieties. Over the complex numbers, the Kempf-Ness theorem [32] relates certain geometric invariant theory quotients by reductive groups to smooth symplectic reductions by maximal compact subgroups [32]. In the case of quiver moduli, we have a complex reductive group GL_d acting on a complex affine space Rep_d(Q), and via a Kempf-Ness theorem, the GIT quotient of GL_d acting on Rep_d(Q) with respect to χ_θ : GL_d → G_m is homeomorphic to the smooth symplectic reduction of the action of a maximal compact subgroup of GL_d on Rep_d(Q) as described by King [33]. In this section, we briefly explain this alternative symplectic construction.
The complex reductive group GL d is the complexification of the maximal compact subgroup
$$U_d = \prod_{v\in V} U(d_v).$$
We consider the Hermitian form H : Rep_d(Q) × Rep_d(Q) → C defined by
$$H(X, Y) = \sum_{a\in A}\operatorname{Tr}(X_a Y_a^\dagger),$$
where Y^† is the complex conjugate transpose of Y. We are interested in the situation where K acts linearly on a complex vector space M = C^n. A K-invariant Hermitian inner product H on M gives M the structure of a Kähler manifold, as we can write H = g − iω, where g is a metric and ω a Kähler form. In this situation, by the following exercise, a moment map always exists but it is not necessarily unique as we can always shift it by a central value of k^*.
Exercise 2.28. Let K act linearly on M = C n and pick a K-invariant Hermitian inner product H = g − iω on M . Prove that a moment map for the K-action on (M, ω) is given by
$$\mu_R : M\to\mathfrak{k}^*\quad\text{with}\quad \mu_R(m)\cdot B := \tfrac{i}{2}H(B_m, m).$$
Furthermore, show that we can shift this moment map by any central value χ ∈ k * .
In particular, there is a moment map µ R : Rep d (Q) → u * d for the action of U d on Rep d (Q) given by
$$\mu_R(X)\cdot B = \tfrac{i}{2}H(B_X, X) = \sum_{a\in A}\operatorname{Tr}\big((B_{h(a)}X_a - X_a B_{t(a)})X_a^\dagger\big).$$
By identifying u_d ≅ u_d^* using the Killing form, we obtain a map µ_R^* : Rep_d(Q) → u_d, where
$$\mu_R^*(X) = \sum_{a\in A}[X_a, X_a^\dagger] = \Big(\sum_{a\,:\,h(a)=v}X_a X_a^\dagger - \sum_{a\,:\,t(a)=v}X_a^\dagger X_a\Big)_{v\in V}.$$
Moreover, any tuple θ = (θ v ) v∈V ∈ Z V defines a character χ θ : GL d → G m whose restriction to U d has image in U(1). Hence, we can view the derivative dχ θ | U d : u d → u(1) ∼ = 2πiR as a coadjoint fixed point of u * d (often we also denote this by θ); this coadjoint fixed point can be used to shift the moment map. Such shifting of the moment map by a central value merely corresponds to considering different fibres of the moment map; this choice can be used to produce different symplectic reductions as follows.
Definition 2.29. For a symplectic K-action on (M, ω) with moment map µ R : M → k * , we define the (smooth) symplectic reduction of the K-action on M at a coadjoint fixed point χ ∈ k * to be the topological quotient µ −1 R (χ)/K. We note that the level set µ −1 R (χ) ⊂ M is K-invariant by the equivariance of µ R . If χ is a regular value of the moment map, then µ −1 R (χ) ⊂ M is a smooth submanifold. If K is a compact Lie group acting freely on µ −1 R (χ), then the topological quotient µ −1 R (χ)/K is a smooth manifold by the slice theorem and moreover, it inherits a (smooth) symplectic form from the form ω on M by the Marsden-Weinstein Theorem [37]. If in fact, (M, ω) is a Kähler manifold (i.e. there is a compatible complex structure and Riemannian metric on M ), then the symplectic reduction is also Kähler, provided it is smooth. If K acts on µ −1 R (χ) with finite stabilisers, then µ −1 R (χ)/K is a symplectic orbifold, and more generally if K acts with positive dimensional stabilisers, then µ −1 R (χ)/K can be given the structure of a stratified symplectic manifold.
Let (M, ω) be a Kähler manifold that is a smooth affine (or projective) complex variety with a Fubini-Study form. If there is a linear action of a complex reductive group G on M for which a maximal compact subgroup K < G preserves ω, then the Kempf-Ness theorem [32] provides a homeomorphism between the geometric invariant theory quotient of M by G and the symplectic reduction of M by K. We recall that the GIT quotient depends on a choice of linearisation of the action. In the case when M is projective, this work extends to give a comparison between an algebraic GIT stratification and a symplectic Morsetheoretic stratification [34,44]. In the affine setting M ⊂ A n , if χ : G → G m is a character which is used to linearise this action, then we can restrict χ to maximal compact subgroups and take derivatives to obtain dχ| K : k → u(1) ∼ = 2πiR; this defines an element of k * , which by abuse of notation we also denote by χ. Then the Kempf-Ness theorem gives an inclusion µ −1 R (χ) ֒→ M χ−ss which induces a homeomorphism µ −1 R (χ)/K → M// χ G. More precisely, the Kempf-Ness theorem states that the G-orbit closure of a χ-semistable orbit in M meets the level set µ −1 R (χ) in a unique K-orbit and the inclusion µ −1 R (χ) ֒→ M ss induces the above homeomorphism; for details in this affine setting, see [33] and [23], which also relates the GIT instability stratification with the symplectic Morse-theoretic stratification.
Let us return to the action of G = GL_d on M = Rep_d(Q); then θ determines a character χ_θ of GL_d and a coadjoint fixed point θ ∈ u_d^*. The inclusion µ_R^{-1}(χ_θ) ⊂ Rep_d(Q)^{θ-ss} induces a homeomorphism
$$(2.4)\qquad \mu_R^{-1}(\chi_\theta)/U_d \simeq \operatorname{Rep}_d(Q)/\!/_{\chi_\theta}\operatorname{GL}_d,$$
and if θ is generic with respect to d (so that semistability and stability with respect to θ coincide for d-dimensional representations of Q), then this symplectic reduction is a smooth symplectic (in fact, Kähler) manifold.
3. Moduli spaces of vector bundles over curves
The moduli problem of classifying algebraic vector bundles over a smooth projective curve has many similarities with that of quiver representations, which we explain in this section.
3.1. Vector bundles over a curve. Let X be a smooth projective curve over a field k. The genus of X is g(X) := h 0 (X, ω X ), where ω X := Ω 1 X is the canonical line bundle. We will often use the equivalence between the category of (algebraic) vector bundles on X and the category of locally free sheaves on X. We recall that this equivalence is given by associating to a vector bundle F → X the sheaf F of sections of F . One should be careful when going between vector bundles and locally free sheaves, as this correspondence does not preserve subobjects in general.
Although the category of locally free sheaves is not abelian, the category Coh(X) of coherent sheaves of O X -modules is abelian. The category Coh(X) has homological dimension 1, as Ext-groups can be described as sheaf cohomology groups:
Ext i (E, F ) = H i (X, Hom(E, F ))
which vanish for i ≥ 2 as dim X = 1. Moreover, the first cohomology groups can be described using Serre duality.
We can also define an Euler characteristic by
χ(E) := dim H 0 (X, E) − dim H 1 (X, E)
and for a pair E and F of locally free sheaves, we define an Euler form by
$$\langle E, F\rangle := \chi(E^\vee\otimes F) = \dim\operatorname{Hom}(E, F) - \dim\operatorname{Ext}^1(E, F).$$
In fact, using the Riemann-Roch formula, this Euler characteristic is entirely described by the invariants of these sheaves:
$$\langle E, F\rangle = \operatorname{rk}E\deg F - \operatorname{rk}F\deg E + \operatorname{rk}E\operatorname{rk}F(1 - g)$$
analogously to the case for quiver representations.
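For example, taking E = O_X recovers the Riemann-Roch formula for F:
$$\langle\mathcal{O}_X, F\rangle = \deg F + \operatorname{rk}F(1 - g) = \chi(F),$$
and for two line bundles L and M one finds ⟨L, M⟩ = deg M − deg L + 1 − g.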
3.2. Construction of moduli spaces of vector bundles. In this section, we outline some different constructions of moduli spaces of (algebraic) vector bundles of rank n and degree d over X. We start with an algebraic approach using geometric invariant theory which generalises to the construction of moduli spaces of coherent sheaves over projective schemes. We then survey the gauge theoretic construction over the complex numbers as an infinite dimensional symplectic reduction, which generalises to principal bundles and hyperkähler analogues, such as Higgs bundles (cf. §4.3).
Slope stability for vector bundles.
Definition 3.1. The slope of a non-zero vector bundle E over X is the ratio
$$\mu(E) := \frac{\deg E}{\operatorname{rk}E}.$$
A vector bundle E is slope stable (resp. semistable) if every proper non-zero vector subbundle E ′ ⊂ E satisfies
µ(E ′ ) < µ(E) (resp. µ(E ′ ) ≤ µ(E) for semistability).
A vector bundle E is polystable if it is a direct sum of stable bundles of the same slope.
Remark 3.2. Since the degree and rank are both additive on short exact sequences of vector bundles
0 → E → F → G → 0,
the following statements hold.
(1) If two out of the three bundles have the same slope µ, the third also has slope µ.
(2) µ(E) < µ(F ) (resp. µ(E) > µ(F )) if and only if µ(F ) < µ(G) (resp. µ(F ) > µ(G)).
Exercise 3.3. Let L be a line bundle and E a vector bundle over
X; then show i) L is stable, ii) if E is stable (resp. semistable), then E ⊗ L is stable (resp. semistable).
If we fix a rank n and degree d such that n and d are coprime, then the notion of semistability for vector bundles with invariants (n, d) coincides with the notion of stability.
Exercise 3.4 (Stable vector bundles are simple). Let f : E → F be a non-zero homomorphism of vector bundles on X over k = k; then prove the following statements.
i) If E and F are semistable, µ(E) ≤ µ(F ).
ii) If E and F are stable of the same slope, then f is an isomorphism. iii) Every stable vector bundle E is simple: End(E) = k.
GIT construction.
The moduli problem of rank n and degree d vector bundles over X is unbounded, in the sense that there is no finite type k-scheme parameterising all such vector bundles. We can overcome this problem by restricting to moduli of semistable vector bundles, which is bounded by work of Le Potier and Simpson [57]. The first construction of moduli spaces of semistable vector bundles over X were given by Mumford [41], Seshadri [55] and Newstead [45,46]. In these notes, we will essentially follow the construction due to Simpson [57] which generalises the curve case to a higher dimensional projective scheme. An excellent indepth treatment of the construction following Simpson can be found in the book of Huybrechts and Lehn [27]. We will exploit the fact that we are over a curve to simplify some of the arguments; for example, the boundedness of semistable sheaves is significantly easier over a curve, and in fact if we assume that the degree is sufficiently large, we have the following boundedness result.
Lemma 3.5. Let F be a locally free sheaf over X of rank n and degree d > n(2g − 1). If the associated vector bundle F is semistable, then the following statements hold:
i) H 1 (X, F ) = 0;
ii) F is generated by its global sections.
Proof. For i), we argue by contradiction using Serre duality: if H 1 (X, F ) = 0, then dually there would be a non-zero homomorphism f : F → ω X . We let K ⊂ F be the vector subbundle generically generated by the kernel of f which is a vector subbundle of rank n − 1 with
deg K ≥ deg ker f ≥ deg F − deg ω X = d − (2g − 2).
In this case, by semistability of F , we have
$$\frac{d - (2g-2)}{n-1} \le \mu(K) \le \mu(F) = \frac{d}{n};$$
this gives d ≤ n(2g − 2), which contradicts our assumption on the degree of F . For ii), we let F x denote the fibre of the vector bundle at a point x ∈ X. If we consider the fibre F x as a torsion sheaf over X, then we have a short exact sequence
0 → F (−x) := F ⊗ O X (−x) → F → F x = F ⊗ k x → 0
which gives rise to an associated long exact sequence in cohomology and it suffices to show that H 1 (X, F (−x)) = 0. To prove this vanishing, we apply part i) above to the sheaf
F (−x) = F ⊗O X (−x) which is also semistable with deg(F (−x)) = d − n > n(2g − 2).
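Spelling out the inequality used in part i) (for n ≥ 2, so that both denominators are positive): clearing denominators in
$$\frac{d-(2g-2)}{n-1}\le\frac{d}{n}$$
gives nd − n(2g − 2) ≤ nd − d, i.e. d ≤ n(2g − 2), which contradicts the assumption d > n(2g − 1) ≥ n(2g − 2).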
Given a locally free sheaf F of rank n and degree d that is generated by its global sections, we can consider the evaluation map ev F : H 0 (X, F ) ⊗ O X → F which is, by assumption, surjective. If also H 1 (X, F ) = 0, then by the Riemann-Roch formula
χ(F ) = d + n(1 − g) = dim H 0 (X, F ) − dim H 1 (X, F ) = dim H 0 (X, F );
that is, the dimension of the 0th cohomology is fixed and equal to N := d + n(1 − g). Therefore, we can choose an isomorphism H 0 (X, F ) ∼ = k N and combine this with the evaluation map for F , to produce a surjection q F : O ⊕N X ։ F from a fixed trivial vector bundle. Such surjective homomorphisms from a fixed coherent sheaf are parametrised by a Quot scheme, which is a natural generalisation of the Grassmannian varieties (for a thorough treatement of Quot schemes, see [47]).
Let Q := Quot n,d X (O ⊕N X ) be the Quot scheme of rank n, degree d quotient sheaves of the trivial rank N vector bundle. Let Q µ−(s)s ⊂ Q denote the open subscheme consisting of quotients q : O ⊕N X → F such that F is a slope (semi)stable locally free sheaf and H 0 (q) is an isomorphism.
For a semistable sheaf F , we note that different choices of isomorphism H 0 (X, F ) ∼ = k N give rise to different points in Q µ−ss . Any two choices of the above isomorphism are related by an element in the general linear group GL N and this gives rise to an action of GL N on the Quot scheme Q such that the orbits in Q µ−(s)s (k) are in bijective correspondence with the isomorphism classes of (semi)stable locally free sheaves on X with invariants (n, d). In fact, the diagonal G m < GL N acts trivially, and so it suffices to take a quotient by the action of SL N to construct a moduli space.
We linearise this action to give an equivariant projective embedding in order to construct a GIT quotient. There is a natural family of invertible sheaves on the Quot scheme arising from Grothendieck's embedding of the Quot scheme into the Grassmannians: for sufficiently large m, we have a closed immersion
Q = Quot n,d X (O ⊕N X ) ֒→ Gr(H 0 (O X (m) ⊕N ), M ) ֒→ P(∧ M H 0 (O X (m) ⊕N ) ∨ ) where M = mr + d + r(1 − g). We let L m denote the pull back of O P (1) to the Quot scheme via this closed immersion. There is a natural linear action of SL N on H 0 (O X (m) ⊕N ) = k N ⊗ H 0 (O X (m)), which induces a linear action of SL N on P(∧ M H 0 (O X (m) ⊕N ) ∨ ); hence, L m admits a linearisation of the SL N -action.
This linearised action has a GIT quotient
Q ss → Q// Lm SL N
where Q ss denotes the GIT semistable locus and, as Q is projective, this GIT quotient is also projective. Provided we take d sufficiently large and m sufficiently large, the notion of GIT semistability for this SL Naction coincides with slope semistability; that is Q µ−ss = Q ss [57]. Over an algebraically closed field, GIT stability corresponds to slope stability, and over an arbitrary field k, GIT stability corresponds to geometric stability (cf. [35]). The above GIT quotient is a moduli space M ss C (n, d) := Q// Lm SL N for semistable rank n degree d vector bundles over X (up to S-equivalence), and its restriction to the GIT stable locus is a moduli space for geometrically stable vector bundles (up to isomorphism). As we are over a curve, the open subscheme Q ss ⊂ Q is smooth; however, the GIT quotient M ss C (n, d) of this smooth variety may be singular, as the action is not necessarily free.
For coprime rank and degree, semistability and stability coincide and, as stable vector bundles are simple, it follows that the GIT quotient is a PGL N -principal bundle (by Luna's étale slice theorem). In this case, the projective moduli space M = M ss C (n, d) is also smooth. Moreover, using the deformation theory of vector bundles, one can describe the Zariski tangent spaces to M by
T [E] M ∼ = Ext 1 (E, E).
In particular, dim M = n^2(g − 1) + 1. Over a higher dimensional base, we have the same description of the tangent spaces at stable sheaves, except now the obstruction to smoothness of the moduli space (and Quot scheme) lies in a second Ext group, which could be non-zero; see [27, Corollary 4.52].
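The dimension count follows from the Euler form of §3.1: a stable bundle E of rank n is simple, so
$$\dim\operatorname{Ext}^1(E, E) = \dim\operatorname{Hom}(E, E) - \langle E, E\rangle = 1 - n^2(1 - g) = n^2(g - 1) + 1,$$
since ⟨E, E⟩ = rk E · deg E − rk E · deg E + (rk E)^2(1 − g) = n^2(1 − g).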
Functorial construction.
Álvarez-Cónsul and King [1] provide a construction of moduli spaces of semistable sheaves by functorially embedding this moduli problem into a moduli problem for quiver representations. More precisely, Simpson's GIT construction [57] of moduli spaces of sheaves on a polarised variety (X, O X (1)) depends on choices of natural numbers m >> n >> 0 (first one takes n sufficiently large, so all semistable sheaves are n-regular and can be parametrised by a Quot scheme, and then one takes m sufficiently to embed this Quot scheme in a Grassmannian and give a linearisation of the action). In [1], a functor
Φ n,m := Hom(O X (−n) ⊕ O X (−m), − ) : Coh(X) → Rep(K n,m )
from the category of coherent sheaves on X to the category of representations of a Kronecker quiver K n,m with two vertices n, m and dim H 0 (O X (m − n)) arrows from n to m is used for m >> n >> 0 to provide an embedding of the subcategory of semistable sheaves with Hilbert polynomial P into a subcategory of semistable quiver representations of fixed dimension (where both the semistability parameter and dimension vector depend on n, m and P ). This functorial approach is then used to construct the moduli space of semistable sheaves on X with Hilbert polynomial P using King's GIT construction of quiver moduli spaces.
3.2.4. Gauge-theoretic construction. Over k = C, Atiyah and Bott [2] use an alternative gauge theoretic construction of this moduli space as a symplectic (in fact, Kähler) reduction.
In this complex setting, the curve X can be viewed as a compact Riemann surface. Rather than working in the algebraic category, we can switch to the holomorphic category, by using the GAGA-equivalence, which gives an equivalence between the category of algebraic bundles on X (viewed as an algebraic curve) and the category of holomorphic vector bundles on X (viewed as a complex manifold); for details, see [54]. A holomorphic vector bundle can be viewed as a complex vector bundle with a holomorphic structure, which one can equivalently view as a Dolbeault operator, as the integrability condition holds trivially for dimension reasons. For a fixed complex vector bundle E → X, we consider
C = C(E) := {holomorphic structures on E};
this is an infinite dimensional complex vector space which is modelled on Ω 0,1 (X, End(E)). Furthermore, we can pull back holomorphic structures along bundle homomorphisms of E and so this gives an action of
G C := Aut(E)
on C such that the orbits are precisely the isomorphism classes of holomorphic structures on E. The central group C * < G, which corresponds to scalar multiples of the identity map on E, acts trivially on C.
In order to construct a quotient of such an action, Atiyah and Bott relate the G C -space C to a space of unitary connections. We recall that the bundle of frames of E is a principal GL n (C)-bundle, where n = rk E. Since U(n) is a maximal compact subgroup of GL n (C), any principal GL n (C)-bundle admits a reduction to U(n), which we can equivalently think of as a Hermitian metric h on E. We can thus fix a Hermitian metric h on E.
Definition 3.6. An affine connection on E is a linear map ∇ : Ω 0 (X, E) → Ω 1 (X, E) satisfying the Leibniz rule. We say ∇ is h-unitary if dh(s 1 , s 2 ) = h(∇(s 1 ), s 2 ) + h(s 1 , ∇(s 2 )) for all sections s i of E. Let A = A(E, h) denote the space of h-unitary affine connections on E; this is an infinite dimensional complex affine space which is modelled on Ω 1 (X, End(E, h)), where End(E, h) denotes the bundle of h-skew
Hermitian endomorphisms of E. We can also view A as the space of connections on the principal U(n)-bundle associated to (E, h).
Definition 3.7. Let G := Aut(E, h) denote the h-unitary automorphisms of E; then G C = Aut(E) is the complexification of G.
We call G the unitary gauge group and G C the complex gauge group.
The unitary gauge group G acts on A of unitary connections and we can relate this to the action of the complex gauge group G C on C using the following isomorphism.
Lemma 3.8 (Atiyah-Bott isomorphism). There is an isomorphism A(E, h) → C(E) given by ∇ → ∇ (0,1) ,
which we view as a Dolbeault operator on E.
As we are working on a curve, there is no integrability condition and ∇ (0,1) defines a holomorphic structure on E. The inverse is given by taking the Chern connection ∇ ∂E ,h associated to a holomorphic structure ∂ E on E and a Hermitian metric h. Locally the Atiyah-Bott isomorphism corresponds to the isomorphism
Ω 1 (u(n)) ∼ = Ω 0,1 (gl n ).
Although the space A and the isomorphism A ∼ = C both depend on the choice of Hermitian metric h, we can identify the space of Hermitian metrics on E with Aut(E)/ Aut(E, h) = G C /G. Thus any two Hermitian metrics on E are related by a complex gauge transformation.
The space A has the structure of a smooth symplectic manifold: if we identify
T_∇A ≅ Ω^1(X, End(E, h)), then ω_R : T_∇A × T_∇A → R is defined by
$$\omega_R(\beta, \gamma) := \int_X\operatorname{Tr}(\beta\wedge\gamma),$$
where Tr(β ∧γ) ∈ Ω 2 (X). In this infinite dimensional setting, ω R being non-degenerate means that it induces an injection T ∇ A → T * ∇ A. The inner product given by the trace also induces an isomorphism
Lie G * = Ω 0 (X, End(E, h)) * ∼ = Ω 2 (X, End(E, h)).
We recall that the curvature of ∇ ∈ A is the form F ∇ := ∇ 2 ∈ Ω 2 (X, End(E, h)).
Lemma 3.9. The G-action on A is symplectic with moment map µ : A → Ω 2 (X, End(E, h)) ∼ = Lie G * given by taking the curvature (modulo a sign):
µ(∇) = −F ∇ .
The sign here appears due to our sign conventions for the infinitesimal lifting property of the moment map. We leave the verification of this infinitesimal lifting property and the equivariance of µ as an exercise.
To construct a symplectic reduction of the G-action on A, we need to take the level set at a coadjoint fixed point. Since the Lie algebra of the unitary group has centre Z(u(n)) = iRI_n, we similarly have that all imaginary scalar multiples of the identity map on E are central in Lie G. Fix a Riemannian metric on X whose associated volume form induces the given orientation on X; then there is an associated Hodge star operator ⋆ : Ω^k(X) → Ω^{2−k}(X). Using this Hodge star operator, we can view the moment map as taking values in Lie G:
$$\mu : \mathcal{A} \to \operatorname{Lie}\mathcal{G}, \qquad \nabla \mapsto -\star F_\nabla.$$
Definition 3.10. An h-unitary connection ∇ on E is projectively flat if ⋆F_∇ ∈ Ω^0(X, End(E, h)) is a constant element in the centre of u(n); that is, an imaginary scalar multiple of Id_E.
In fact, the scalar appearing for such projectively flat connections is related to the slope µ(E).
Exercise 3.11. Using the fact that the degree of E can be defined using the curvature F ∇ of any connection on E via
$$\deg(E) := \int_X \frac{i}{2\pi}\operatorname{Tr}(F_\nabla),$$
prove that if ∇ is a projectively flat h-unitary connection (i.e. ⋆F_∇ = −iµ Id_E for some constant µ ∈ R), then the constant µ is equal to the slope of E (provided we normalise our Riemannian metric so the integral of its associated volume form over X is equal to 2π).
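Concretely, if ⋆F_∇ = −iµ Id_E, then F_∇ = −iµ Id_E ⊗ vol and Tr(F_∇) = −iµ n vol, so
$$\deg(E) = \int_X\frac{i}{2\pi}\operatorname{Tr}(F_\nabla) = \frac{\mu n}{2\pi}\int_X\mathrm{vol} = \mu n$$
with the normalisation ∫_X vol = 2π; hence µ = deg(E)/n = µ(E).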
The symplectic reduction of the G-action on
A at the central value iµ(E)Id E ∈ Lie G is a moduli space M proj.flat E,h := µ −1 (iµ(E)Id E )/G
for unitary gauge equivalence classes of projectively flat h-unitary connections on E. In fact, as A ∼ = C has a compatible complex structure, it is naturally an infinite-dimensional Kähler manifold and so the associated moduli space inherits a Kähler structure if G/U(1) acts freely on the level set µ −1 (iµ(E)Id E ), which is the case if E has coprime rank and degree.
In order to relate this symplectic reduction with holomorphic structures, we need the following definition.
Definition 3.12. A Hermitian-Einstein connection on a complex vector bundle E is a projectively flat affine connection that is unitary for some Hermitian metric on E.
The moduli space of semistable vector bundles is homeomorphic to the moduli space of representations π 1 (X) → U(n) by the Narasimhan-Seshadri correspondence [43]. An alternative gauge theoretic interpretation of this result was provided by Donaldson [13] and Uhlenbeck and Yau [58] by relating the moduli space of projectively flat h-unitary connections on E to the moduli space of holomorphic structures on E; this is called the Kobayashi-Hitchin correspondence.
Theorem 3.13 (Kobayashi-Hitchin correspondence [13,43,58]). A holomorphic vector bundle E is slope polystable if and only if its underlying complex vector bundle admits a Hermitian-Einstein connection. Moreover, this connection is unique up to unitary gauge transformations.
A holomorphic structure is slope semistable if and only if its G C -orbit closure contains a holomorphic structure that is polystable; let C ss (resp. C ps ) denote the set of semistable (resp. polystable) holomorphic structures. By the Kobayashi-Hitchin correspondence, every point in C ss has G C -orbit closure that meets
µ −1 (iµ(E)Id E ) in a unique G-orbit. The inclusion µ −1 (iµ(E)Id E ) ֒→ C ss
induces a real-analytic isomorphism between the moduli space of projectively flat unitary connections and the moduli space of S-equivalence classes of semistable holomorphic bundles C ss //G C ≃ C ps /G C . This homeomorphism can be viewed as an infinite-dimensional version of the Kempf-Ness theorem, as it relates the symplectic reduction of the G-action on A with the S-equivalence classes of orbits of the complexified group G C in the semistable locus C ss .
4. Hyperkähler analogues of these moduli spaces

4.1. Algebraic symplectic quiver varieties. Throughout this section we assume that k is a field of characteristic different from 2. One key motivation for introducing the doubled quiver Q̄ (obtained from Q by adding, for each arrow a : v → w in A, a reversed arrow a* : w → v) is that
$$\operatorname{Rep}_d(\overline{Q}) = \operatorname{Rep}_d(Q)\times\operatorname{Rep}_d(Q)^* \cong T^*\operatorname{Rep}_d(Q)$$
is an algebraic symplectic variety, with the Liouville symplectic form ω on this cotangent bundle. Explicitly, if X = (X_a, X_{a*})_{a∈A} and Y = (Y_a, Y_{a*})_{a∈A} are points in Rep_d(Q̄), then
$$(4.1)\qquad \omega(X, Y) = \sum_{a\in A}\operatorname{Tr}(X_a Y_{a^*} - X_{a^*} Y_a).$$
The action of GL_d on Rep_d(Q̄) preserves this symplectic form and there is an algebraic moment map µ : Rep_d(Q̄) → gl_d^*, where gl_d := Lie(GL_d); explicitly, for X ∈ Rep_d(Q̄) and B ∈ gl_d we have
$$(4.2)\qquad \mu(X)\cdot B = \sum_{a\in A}\operatorname{Tr}\big(X_{a^*}(B_X)_a\big) = \sum_{a\in A}\operatorname{Tr}\big(X_{a^*}(B_{h(a)}X_a - X_a B_{t(a)})\big),$$
where B_X = (B_{h(a)}X_a − X_a B_{t(a)})_{a∈A}. For a character χ of GL_d and a coadjoint fixed point η ∈ gl_d^*, the algebraic symplectic reduction of the GL_d-action on Rep_d(Q̄) at (χ, η) is the GIT quotient µ^{-1}(η)//_χ GL_d.
If GIT semistability and stability for the G d -action on µ −1 (η) with respect to χ agree, then the variety µ −1 (η)// χ GL d is a smooth algebraic symplectic variety, with algebraic symplectic form induced by the Liouville form on T * Rep d (Q) by an algebraic version of the Marsden-Weinstein Theorem (see [15]).
The closed subvariety µ −1 (η) ֒→ Rep d (Q) induces a closed immersion
µ −1 (η)// χ θ GL d ֒→ M θ−ss d (Q).
Nakajima quiver varieties can also be constructed in this manner (for example, see [15]).
The fibre µ^{-1}(η) consists of the representations M = (M_a, M_{a*})_{a∈A} of the doubled quiver Q̄ satisfying the relations
$$\mathcal{R}_\eta :\quad \sum_{a\,:\,t(a)=v} M_a M_{a^*} - \sum_{a\,:\,h(a)=v} M_{a^*} M_a = \eta_v I_{d_v} \qquad \forall\, v\in V.$$
Hence µ^{-1}(η)//_{χ_θ} GL_d is the moduli space of θ-semistable d-dimensional representations of (Q̄, R_η). Under the correspondence between k-representations of Q̄ and modules over the path algebra k(Q̄), the representations satisfying the relations R_η correspond to modules over certain quotients of k(Q̄). More precisely, the category of k-representations of (Q̄, R_η) corresponds to the category of modules over the algebra
$$\Pi_\eta := k(\overline{Q})\Big/\Big(\sum_{a\in A}[a, a^*] - \sum_{v\in V}\eta_v e_v\Big)$$
where e v denotes the trivial path at v. The algebra Π 0 at η = 0 is called the preprojective algebra of Q.
Exercise 4.4. Prove that a necessary condition for the existence of a k-representation of (Q̄, R_η) of dimension d is that η · d = Σ_{v∈V} η_v d_v = 0 holds in k (Hint: take traces of the equations defining these relations).
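The hint amounts to the following computation: summing the traces of the relations over all vertices, each arrow a contributes Tr(M_a M_{a^*}) once and Tr(M_{a^*} M_a) once with opposite signs, so
$$\sum_{v\in V}\eta_v d_v = \sum_{v\in V}\operatorname{Tr}(\eta_v I_{d_v}) = \sum_{a\in A}\big(\operatorname{Tr}(M_a M_{a^*}) - \operatorname{Tr}(M_{a^*} M_a)\big) = 0.$$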
In fact, the equation η · d = 0 in k ensures that η ∈ gl_d(k) actually lies in g_d(k), which is a necessary condition for µ^{-1}(η)(k) to be non-empty, as µ has image in g_d. Proof. It suffices to prove this claim after base changing to an algebraic closure of k and so we can assume k is algebraically closed and check the statement on closed points. By Exercise 4.4, if there exists a d′-dimensional k-representation of (Q̄, R_θ), then θ · d′ = 0 holds in k. Since θ is generic with respect to d, for all non-zero dimension vectors d′ < d we have θ · d′ ≠ 0 in k, when k has characteristic 0 or p ≫ 0. Hence any k-representation of (Q̄, R_θ) of dimension d is θ-semistable (and θ-stable), as it has no proper non-zero subrepresentations, which proves the claim.
4.2. Hyperkähler quiver varieties. Over the complex numbers, the algebraic symplectic reduction has a hyperkähler structure, as it can be interpreted as a hyperkähler reduction via the Kempf-Ness theorem. Indeed the cotangent bundle of a complex vector space is naturally hyperkähler and the action of the maximal compact subgroup U_d < GL_d on Rep_d(Q̄) ≅ T^*Rep_d(Q) preserves this hyperkähler structure, so one can instead perform a hyperkähler reduction [22].
More generally, we can perform a hyperkähler reduction of the cotangent bundle of a complex vector space M = C^n. A Hermitian form H on M gives a symplectic form on M and an identification M ≅ M^*. Using the identification C × C ≅ H given by (m, α) ↦ m − jα in each coordinate, we obtain an identification T^*M ≅ M × M ≅ H^n which we can use to equip T^*M with a hyperkähler structure. More precisely, we obtain complex structures I, J and K corresponding to right multiplication by i, j and k on H^n and the hyperkähler metric g is the real part of the quaternionic inner product
$$Q : \mathbb{H}^n\times\mathbb{H}^n\to\mathbb{H}, \qquad (z, w)\mapsto\sum_{l=1}^n z_l w_l^\dagger,$$
where w † l denotes the quaternionic conjugate. Thus we can write
Q = g − iω I − jω J − kω K , such that ω I (−, −) = g(I−, −), ω J (−, −) = g(J−, −) and ω K (−, −) = g(K−, −).
We thus obtain a hyperkähler structure (g, I, J, K, ω I , ω J , ω K ) on T * M . We often write the Kähler structures as a pair (ω R , ω C ), where ω R = ω I and ω C = ω J + iω K , which is the Liouville algebraic symplectic form on T * M . Now suppose we additionally have a linear action of a complex reductive group G on M = C n and a maximal compact subgroup K < G for which the Hermitian form H is invariant. Then the induced K-action on T * M preserves the symplectic forms (ω R , ω C ) and there is a hyperkähler moment map µ HK := (µ R , µ C ), where µ R : T * M → k * is a smooth moment map for the K-action and µ C : T * M → g * is an algebraic moment map for the G-action. Explicitly, we have
µ R (m, α) · B := i 2 (H(B m , m) − H(B α , α)) and µ C (m, α) · C = α(C m ),
where B ∈ k and C ∈ g and (m, α) ∈ T * M . For a pair (χ, η) ∈ k * × g * of coadjoint fixed points, the hyperkähler reduction of the K-action on T * M at (χ, η) is the topological quotient of the K-action on µ −1 HK (χ, η) := µ −1 R (χ) ∩ µ −1 C (η). By the Kempf-Ness theorem, this hyperkähler reduction is homeomorphic to the GIT quotient of G acting on µ −1 (η) with respect to the character of G obtained from χ by exponentiating and complexifying; thus
µ^{-1}_{HK}(χ, η)/K ≅ µ^{-1}_C(η)//_χ G.
In particular, if K acts with finite stabilisers on this level set of the hyperkähler moment map, then this hyperkähler reduction inherits an orbifold hyperkähler structure [22].
Let us apply this to the quiver setting: we have M = Rep d (Q) and G = GL d and we take the Hermitian form H on Rep d (Q) as in §2.3, which is invariant under the action of K = U d . The hyperkähler metric g on T * M ∼ = Rep d (Q) is given by
g(X, Y) = Re( ∑_{a∈A} Tr(X_a Y_a^†) );
thus, ω_R = ω_I is given by ω_R(X, Y) = Im( ∑_{a∈A} Tr(X_a^† Y_a) ) and ω_C = ω_J + iω_K is the Liouville algebraic symplectic form ω described in (4.1). Moreover, µ_C : Rep_d(Q) → gl_d^* is the algebraic moment map µ given by (4.2) and µ_R : Rep_d(Q) → u_d^* is given by
µ_R(X) · B = (i/2) ∑_{a∈A} Tr(B_{h(a)} X_a X_a^† − B_{t(a)} X_a^† X_a).
Via the identification u_d ≅ u_d^*, we obtain a map µ_R^* : Rep_d(Q) → u_d given by µ_R^*(X) = (i/2) ∑_{a∈A} [X_a, X_a^†]. If χ_θ-semistability coincides with χ_θ-stability on µ^{-1}(η), then we obtain a hyperkähler structure on the algebraic variety µ^{-1}(η)//_{χ_θ} GL_d via the Kempf-Ness homeomorphism.
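As a quick numerical sanity check (a minimal sketch, not from the text): each commutator [X_a, X_a^†] is Hermitian, so the value µ_R^*(X) = (i/2)∑[X_a, X_a^†] is anti-Hermitian and therefore lies in u_d, as claimed.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 3

# Random complex matrices standing in for the components X_a over a few arrows.
Xs = [rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)) for _ in range(4)]

S = sum(X @ X.conj().T - X.conj().T @ X for X in Xs)   # sum of commutators [X_a, X_a^dagger]
mu_R = 0.5j * S

print(np.allclose(S, S.conj().T))          # True: the sum of commutators is Hermitian
print(np.allclose(mu_R.conj().T, -mu_R))   # True: mu_R^*(X) is anti-Hermitian, i.e. lies in u_d
```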
Remark 4.6. Let θ be a generic stability parameter with respect to d. Then θ-semistability and θ-stability for C-representations of Q coincide and the moduli space M θ−ss d (Q) of θ-semistable C-representations of Q is a smooth algebraic variety with a natural Kähler structure coming from the Kempf-Ness homeomorphism. As in work of Proudfoot [48], we can view the hyperkähler reduction of Rep d (Q) at (θ, 0) as a hyperkähler analogue of the kähler manifold M θ−ss d (Q) in the sense that
T*M^{θ-ss}_d(Q) ⊂ M^{θ-ss}_d(Q, R_0) = µ^{-1}_C(0)//_{χ_θ} GL_d ≃ (µ^{-1}_R(θ) ∩ µ^{-1}_C(0))/U_d is contained as a dense open subset (provided M^{θ-ss}_d(Q) ≠ ∅). Indeed, if π : Rep_d(Q)^{θ-ss} → M^{θ-ss}_d(Q)
denotes the GIT quotient, which is a principal G_d-bundle as θ is generic, then for X ∈ Rep_d(Q)^{θ-ss} we have a short exact sequence
0 → T_X(G_d · X) → T_X Rep_d(Q) → T_{π(X)} M^{θ-ss}_d(Q) → 0
and dually
T*_{π(X)} M^{θ-ss}_d(Q) = {ξ ∈ T*_X Rep_d(Q) : ξ(A_X) = 0 ∀ A ∈ g_d} = {ξ ∈ T*_X Rep_d(Q) : µ_C(X, ξ) = 0}.
Thus, we have
T*M^{θ-ss}_d(Q) ≅ {(X, ξ) ∈ µ^{-1}_C(0) ⊂ T*Rep_d(Q) : X ∈ Rep_d(Q)^{θ-ss}}/G_d ⊂ µ^{-1}_C(0)//_{χ_θ} G_d.
4.3.
Higgs bundles. Let X be a smooth projective complex curve and let ω_X denote its canonical bundle.
Definition 4.7. A holomorphic Higgs bundle over X is a pair (E, Φ) consisting of a holomorphic vector bundle E over X and a holomorphic homomorphism Φ : E → E ⊗ ω_X called a Higgs field. We define slope semistability for (E, Φ) by checking the inequality of slopes for all holomorphic Higgs subbundles (i.e. holomorphic subbundles E′ ⊂ E that are Φ-invariant in the sense that Φ(E′) ⊂ E′ ⊗ ω_X).
Remark 4.8. For coprime rank and degree, semistability and stability coincide for Higgs bundles.
We recall that the gauge theoretic construction of the moduli space M = M^{ss}_X(n, d) of semistable vector bundles is as the space of S-equivalence classes of complex gauge orbits in the space of semistable holomorphic structures C^{ss}:
M^{ss}_X(n, d) = C^{ss}//G_C ≃ C^{ps}/G_C,
and by the Kobayashi-Hitchin correspondence, this space is homeomorphic to the symplectic reduction of G on the space of unitary connections (A, ω_R).
Let us fix a complex vector bundle E and Hermitian metric h. The space C of holomorphic structures (or Dolbeault operators ∂ E ) on E has cotangent bundle T * C ∼ = C × Ω 1,0 (X, End(E)). We write elements of T * C as pairs (∂ E , Φ), where Φ ∈ Ω 1,0 (X, End(E)) defines a (not necessarily holomorphic) Higgs field. The cotangent space T * C is an affine space modelled on Ω 0,1 (X, End(E)) × Ω 1,0 (X, End(E)) and its Liouville form is a holomorphic symplectic form ω C for the complex structure I (coming from the complex structure on E → X). Moreover, the natural G C -action on T * C admits a holomorphic moment map µ C : T * C → Lie G * C given by
µ C (∂ E , Φ) := 2i∂ E Φ.
We note that the zero level set of this moment map consists of pairs (∂ E , Φ), where Φ defines a holomorphic Higgs field on the holomorphic bundle E = (E, ∂ E ); that is (E, Φ) is a holomorphic Higgs bundle. We let µ −1 C (0) ss denote the subset of slope semistable holomorphic Higgs bundles and we define the moduli space of Higgs bundles as the holomorphic symplectic reduction H ss C (n, d) := µ −1 C (0) ss //G C equal to the set of S-equivalence classes of semistable G C -orbits in µ −1 C (0) (or equivalently, the set of polystable stable G C -orbits).
In fact, T * C is naturally an infinite dimensional flat hyperkähler manifold, as via the Atiyah-Bott isomorphism C ∼ = A, we can equip T * C with a real symplectic form ω R and associated Kähler metric (coming from the real symplectic form ω R on A in §3.2.4). More precisely, we can identify T * A ∼ = A × Ω 1 (X, End(E, h)) on which the unitary gauge group G naturally acts. We note that there is an isomorphism
T * C ∼ = A × Ω 1 (X, End(E, h)) given by (∂ E , Φ) → (∇ ∂ E ,h , Φ − Φ * ),
where ∇ ∂ E ,h denotes the Chern connection associated to (∂ E , h). In fact, (∂ E , Φ) ∈ T * C determines a GL n (C)-connection ∇ ∂E ,h + Φ + Φ * on the associated principal GL n (C)-bundle. Therefore, we can think of this cotangent bundle as the space of complex connections on E. The real moment map for the induced G-action on T * C is given by
µ R (∂ E , Φ) = −F ∂ E − [Φ, Φ * ],
where F ∂E denote the curvature of the associated Chern connection ∇ ∂E ,h and [α, β] := α ∧ β + β ∧ α is the extension of the Lie bracket to Lie algebra-valued forms.
In particular, we have a hyperkähler moment map µ_HK = (µ_R, µ_C) for the G-action on T*C. The zero level set of the hyperkähler moment map is the set of solutions of Hitchin's self-duality equations [21]. Any such solution determines an associated GL_n(C)-connection which is flat and thus requires d = 0. To deal with vector bundles of non-zero degree, we take the level set at the value (⋆iµ(E)Id_E, 0) ∈ Lie G* × Lie G_C*; then consider the hyperkähler reduction M_Hit := (µ^{-1}_R(⋆iµ(E)Id_E) ∩ µ^{-1}_C(0))/G, which is a moduli space of solutions to Hitchin's equations (appropriately modified for d ≠ 0) up to gauge equivalence. Then M_Hit admits a triple of complex structures I, J and K and a hyperkähler metric on its smooth locus. If n and d are coprime, then M_Hit is a smooth hyperkähler manifold. A generalisation of the Kobayashi-Hitchin correspondence for vector bundles to Higgs bundles due to Hitchin [21] and Simpson [56] states that a holomorphic Higgs bundle (E, Φ) is slope polystable if and only if (E, Φ) admits a Hermitian metric h such that
−(F ∂E + [Φ, Φ * ]) = ⋆iµ(E)Id E .
Hence, the complex structure I on M Hit gives the moduli space of Higgs bundles. The inclusion T * M ⊂ H is strict in general, as there are unstable vector bundles which can be equipped with a Higgs field for which the associated Higgs pair is stable; for example, this is the case if there are no Higgs subbundles. Indeed we have the following example due to Hitchin [21]. Exercise 4.10. Suppose that X has genus at least 2 and that L is a square root of ω X . Then prove that E = L ⊕ L −1 is unstable as a vector bundle, but admits a Higgs field Φ such that (E, Φ) is stable.
4.4. Branes.
Branes are submanifolds of hyperkähler manifolds with particularly rich geometry (in the sense that they are either Lagrangian or holomorphic with respect to a triple of Kähler structures). In this section, we summarise some constructions of branes in the quiver and bundle settings arising from fixed loci of automorphisms on these moduli spaces. We will use the language of branes as in [31] as follows.
Definition 4.11. A brane in a hyperkähler manifold (M, g, I, J, K, ω I , ω J , ω K ) is a submanifold which is either holomorphic or Lagrangian with respect to each of the three Kähler structures on M . A brane is called of type A (respectively B) with respect to a given Kähler structure if it is Lagrangian (respectively holomorphic) for this Kähler structure. We note that the brane-type depends on choosing a triple of Kähler structures (although often there is a natural choice). All triples of hyperkähler structures can be related using hyperkähler rotations.
4.4.1.
Branes in hyperkähler quiver varieties. Starting from a quiver Q, moduli spaces of representations of the doubled quiver Q (satisfying some relations) have a natural algebraic symplectic structure and, over k = C, a natural hyperkähler structure, provided these varieties are smooth (cf. §4.2). The study of branes in Nakajima quiver varieties was initiated in [14], where the authors use involutions such as complex conjugation, multiplication by −1 and transposition, to construct different branes. In [24] we construct branes associated to quiver automorphisms in the following sense.
We say that a pair of bijections σ = (σ_V : V → V, σ_A : A → A) is a
(1) covariant automorphism of Q if σ_A(a) : σ_V(t(a)) → σ_V(h(a)) for all a ∈ A,
(2) contravariant automorphism of Q if σ_A(a) : σ_V(h(a)) → σ_V(t(a)) for all a ∈ A.
Under certain compatibility conditions of an automorphism σ of Q with the dimension vector d and stability parameter θ, we show this automorphism determines an automorphism of M^{θ-ss}_d(Q) and we describe the components of the fixed locus. In the hyperkähler setting, for an automorphism of a doubled quiver Q we then describe, in the language of branes, the geometry of the fixed locus of the induced action on the associated hyperkähler reduction.
Theorem 4.14 ( [24]). Let σ be an involution of Q such that σ(a * ) = σ(a) * for all a ∈ A. For choices of d, θ and η that are σ-compatible, σ induces an involution on H := µ −1 (η)// χ θ GL d . If θ is generic with respect to d, then H is a smooth hyperkähler manifold and the fixed locus has the following brane type
              | if σ(A) ⊂ A | if σ(A) ⊂ A*
   H^σ        |     BBB     |     BAA
   H^{σ•τ}    |     ABA     |     AAB
where τ : C → C denote complex conjugation.
In particular, we see that all four types of branes (BBB, BAA, ABA and AAB) can be constructed as the fixed locus of an involution. In fact, we can also construct BBB-branes as fixed loci of a subgroup of quiver automorphisms of order higher than 2. Moreover, we provide a decomposition of these fixed loci using group cohomology and give moduli-theoretic description of each of the components appearing in these decompositions. For a quiver involution σ (or more generally a group of quiver automorphisms), the fixed loci components are described in terms of twisted equivariant quiver representations [24], and for complex conjugation τ , the components of the fixed locus are described in terms of real or quaternionic quiver representations [25].
4.4.2.
Branes in Higgs moduli spaces. The gauge theoretic construction of moduli spaces of Higgs bundles naturally generalises from the general linear group to any complex reductive group G. In this way, one obtains moduli spaces H G of G-Higgs bundles which inherit a hyperkähler structure on their smooth locus. We let I, J and K denote the complex structures as above, such that I corresponds to the original complex structure on X and gives the moduli space of Higgs bundles. Branes in H G have been constructed in [3,4,7,8] as fixed point sets of involutions on H G associated to anti-holomorphic involutions on G and X. In [3,4,7,8], some components of the fixed loci have been given moduli-theoretic descriptions: H σG contains a moduli space of G σX -Higgs bundles, H σX contains a component corresponding to representations of the orbifold fundamental group of (X, σ X ), and components of H σG•σX can be described as moduli spaces of pseudo-real Higgs bundles. Baraglia and Schaposnik [4] conjecture that under Langlands duality, which relates the moduli spaces H G and H G L of Higgs bundles for G and its Langlands dual group G L , the BAA-brane H σG G ⊂ H G corresponds to a BBB-brane H H ⊂ H G L , where H < G L is a complex subgroup (the so-called Nadler group) corresponding to the involution σ G .
5. Counting indecomposable objects and Betti numbers
For both quiver representations and vector bundles, there is a surprising link between the counts of absolutely indecomposable objects over finite fields and the Betti numbers of the (complex) hyperkähler moduli spaces described above. This was first discovered for indivisible dimension vectors on quivers without loops by Crawley-Boevey and Van den Bergh [12], and was motivated by a conjecture of Kac concerning the nonnegativity of the coefficients in the polynomial A Q,d (q) counting absolutely indecomposable d-dimensional F q -representations of Q. The proof of Kac's positivity conjecture for arbitrary Q and d was given by Hausel, Letellier and Rodriguez-Villegas [18].
In these works, the key idea is to provide a cohomological interpretation of the coefficients of A Q,d (q). In [12], this cohomological interpretation for indivisible dimension vectors is as the Betti numbers of hyperkähler quiver varieties associated to the doubled quiver. In [18], for an arbitrary Q and d, by attaching legs to each vertex in Q, they obtain as associated quiverQ d and indivisible dimension vectord. The generic algebraic symplectic reduction for this extended quiver is smooth, and its compactly supported cohomology admits an action by a finite group generated by the reflections at the new vertices. They interpret the coefficients of the Kac polynomials as the dimensions of the sign isotypical component of this cohomology by making use of an arithmetic Fourier transform. Furthermore, they give similar cohomological interpretations of the refined Donaldson-Thomas invariants of quivers.
This work on quiver representations inspired Schiffmann [52] to formulate and prove an analogous statement for bundles in the coprime setting, which led to formulae for the Betti numbers of moduli spaces of Higgs bundles and eventually gave a proof of the conjectures of Hausel and Rodriguez-Villegas [19] on these Betti numbers.
In this section, we focus on the proof of this result in the quiver setting following the arguments of Crawley-Boevey and Van den Bergh. After this proof, we discuss the parallel argument in the bundle setting.
The statement in the quiver setting. Let Q be a quiver and d be a dimension vector. Motivated by questions in representation theory of quiver representations, Kac studied the properties of the count of absolutely indecomposable quiver representations over finite fields [28,30].
Definition 5.1. Let q be a prime power. Then a quiver representation W over F_q is absolutely indecomposable if the base change of W to an algebraic closure of F_q is an indecomposable quiver representation. Let A_{Q,d}(q) denote the number of isomorphism classes of absolutely indecomposable representations of Q over F_q with dimension vector d. We note the following two basic facts.
(1) If W is absolutely indecomposable, then W is indecomposable.
(2) The converse holds if d is an indivisible dimension vector.
Kac proved that A Q,d (q) is a polynomial in q with integer coefficients, and conjectured that the coefficients are natural numbers (see §5.5 below). In order to formulate the result required for the proof of this conjecture for quivers without loops and indivisible dimension vectors given by Crawley-Boevey and Van den Bergh [12], we recall that there is an algebraic moment map µ : Rep d (Q) → gl d for the GL d -action on the space of representations of the doubled quiver over any field k. The zero level set of the moment map defines relations R 0 on the doubled quiver Q such that µ −1 (0) = Rep d (Q, R 0 ) is the space of representations of the preprojective algebra. Choose a generic stability parameter θ with respect to d; then θ-semistability and θ-stability (and θ-geometric stability) coincide for d-dimensional k-representations of Q (and also for the doubled quiver Q). The associated algebraic symplectic reduction
X 0 := µ −1 (0)// χ θ G d = M θ−ss d (Q, R 0 )
is a moduli space of θ-stable d-dimensional representations of (Q, R 0 ). Moreover, as semistability coincides with stability and all stable representations are simple, X 0 is a smooth algebraic variety which inherits an algebraic symplectic structure from Rep d (Q). If k = C, then X 0 is a (non-compact) hyperkähler manifold such that T * M θ−ss d (Q) ⊂ X 0 . Theorem 5.3 (Crawley-Boevey and Van den Bergh [12]). Let Q be a quiver without loops and d be an indivisible dimension vector. For a generic stability parameter θ with respect to d and for a finite field F q of sufficiently large prime characteristic, we have
A_{Q,d}(q) = ∑_{i=0}^{e} dim H^{2e−2i}(X_0(C), C) q^i,   where e = (1/2) dim X_0 = dim M^{θ-ss}_d(Q).
In particular, A Q,d (q) is a polynomial in q with coefficients in N.
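To make Theorem 5.3 concrete, here is a brute-force check for the smallest interesting example (a hypothetical sketch, not from the text): for the 2-arrow Kronecker quiver with the indivisible dimension vector d = (1, 1), a representation is a pair (x, y) ∈ F_q^2, it is decomposable exactly when x = y = 0, and isomorphism classes of indecomposables are the scaling orbits of non-zero pairs. Hence A_{Q,d}(q) = |P^1(F_q)| = q + 1, a polynomial with non-negative coefficients; this matches the theorem with e = 1 (in this example X_0 is commonly identified with the surface T*P^1, whose only non-zero Betti numbers are b_0 = b_2 = 1).

```python
# Brute-force count of (absolutely) indecomposable (1,1)-representations of the
# 2-arrow Kronecker quiver over small prime fields F_q.
def count_indecomposables(q):
    reps = {(x, y) for x in range(q) for y in range(q)} - {(0, 0)}
    classes = set()
    for (x, y) in reps:
        # isomorphism classes are orbits under simultaneous scaling by F_q^*
        classes.add(frozenset(((s * x) % q, (s * y) % q) for s in range(1, q)))
    return len(classes)

for q in [2, 3, 5, 7]:
    print(q, count_indecomposables(q), q + 1)   # the two counts agree
```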
A summary of the strategy of the proof. Let us first outline the main steps involved in the proof.
Step 1: Deforming the moment map fibre to produce a cohomologically trivial family.
We will construct a family X → A 1 over any field k whose special fibre over 0 is X 0 and whose general fibre is isomorphic to X := M θ−ss d (Q, R θ ) = µ −1 (θ)// χ θ G d by taking X := µ −1 (L)// χ θ GL d for the line L ⊂ g d ⊂ gl d joining 0 and θ. Working over k = C, we use the hyperkähler structure on Rep d (Q) to show that this family is topologically trivial (and so the singular cohomology of X 0 and X are isomorphic). From this we will deduce that X and X 0 have the same point count over a finite field of sufficiently large characteristic (see Step 6).
Step 2: Purity of the special fibre X 0 via the scaling action.
We show that the natural dilation action on Rep d (Q) given by scaling the morphisms over each arrow induces a G m -action on X 0 that is semi-projective; that is, the fixed locus (X 0 ) Gm is projective and the limit of all points in X 0 under the action of t ∈ G m as t → 0 exists. Consequently, one can construct a Białynicki-Birula decomposition of X 0 which gives rise to a description of the cohomology (and other algebro-geometric invariants) of X 0 in terms of its G m -fixed locus, which is smooth and projective. In particular, this enables us to deduce that X 0 is cohomologically pure in Step 3.
Step 3: Purity and point counting over finite fields.
In this step, we explain how the Poincaré polynomial of X and X 0 can be computed by counting points over finite fields. The Weil conjectures and comparison theorems between singular and ℓ-adic cohomology, enable one to calculate the Betti numbers of a smooth projective complex variety Y with good reduction Z mod p by counting the F q -points of Z, where q is a power of p. Unfortunately, X and X 0 are not projective; however, we explain that the same conclusions still hold for a smooth variety Z over F q if Z is pure and has polynomial point count (that is, |Z(F q r )| is a polynomial in q r ). The plan is to apply this to X 0 , which is smooth and pure by Step 2. In the next two steps, we will show that X has polynomial point count over finite fields of sufficiently large characteristic
Step 4: Point counting for the general fibre X and absolutely indecomposable representations.
The goal of this step is to relate the F q -point count |X(F q )|, which is the number of isomorphism classes of θ-stable d-dimensional F q -representations of (Q, R θ ), with the number A Q,d (q) of absolutely indecomposable d-dimensional F q -representations of Q, where q is a power of a sufficiently large prime p. More precisely, we will show that for F q of large characteristic
A Q,d (q) = q −e |X(F q )|
where e := (1/2) dim X. For p sufficiently large, we will show that all points in µ −1 (θ) are θ-stable and the relationship between these two counts follows from work of Crawley-Boevey [11] studying the lifting of Q-representations to (Q, R θ )-representations under the restriction of the projection Rep d (Q) → Rep d (Q) to the level set µ −1 (θ) = Rep d (Q, R θ ). More precisely, Crawley-Boevey proves that the image on F q -points of π : µ −1 (θ) → Rep d (Q) is the set of indecomposable d-dimensional F q -representations of Q and also describes the fibres using self-extension groups of quiver representations.
Step 5: Kac's theorem on absolutely indecomposable quiver representations.
In this step, we survey Kac's work on absolutely indecomposable quiver representations over finite fields. The starting point for this work is a beautiful theorem of Gabriel, which describes the indecomposable complex representations of a quiver whose underlying graph is a Dynkin diagram in terms of the positive roots of the Lie algebra associated to this Dynkin diagram. Kac generalised this work to arbitrary quivers by associating to such a quiver Q (or strictly speaking its underlying graph) a root system ∆ Q ⊂ Z V (for a quiver without loops, this is the root system of an associated Kac-Moody Lie algebra g Q ). More precisely, he shows that absolutely indecomposable quiver representations of dimension d exist over a finite field precisely when d is a positive root of ∆ Q and proves that the count A Q,d (q) is polynomial in q with integer coefficients. One of Kac's conjectures on A Q,d (q) was the non-negativity of the coefficients; the proof of this conjecture follows from [12,18] as we see in the final step.
Step 6: Specialisation and relating the cohomology of the special fibre and general fibre.
Finally we relate various cohomology groups associated to X and X 0 in order to prove the main result. In order to pass between the GIT quotients over the field of complex numbers and various finite fields, we first state a result concerning GIT over the integers and base change. Since the varieties Rep d (Q) and GL d , as well as the moment map µ, are defined over the integers, the family X → A 1 is also defined over the integers. The key result we need is that over an open subset of Spec Z the construction of these GIT quotients commutes with base change and the family X → A 1 is smooth.
Using the (topological) triviality of the family X → A 1 over C and the comparison theorem together with Deligne's base change result for direct images, we obtain isomorphisms between the compactly supported ℓ-adic cohomology of the base changes of X 0 and X to F p for p ≫ 0. By the Grothendieck-Lefschetz trace formula, we deduce that for a finite field F q of sufficiently large characteristic p, the point counts of X and X 0 coincide
|X 0 (F q )| = |X(F q )|.
There is a more direct proof of this equality due to Nakajima which utilises the Białynicki-Birula decompositions on X and appears as an appendix in [12]; however, we have chosen to present the original proof of Crawley-Boevey and Van den Bergh in Step 1, as it utilises the hyperkähler structure in a rather ingenious way.
Over a finite field F q of characteristic p ≫ 0, the F q -variety X 0 is pure and smooth and has polynomial point count equal to q e A Q,d (q); hence, this polynomial is the ℓ-adic Poincaré polynomial of X 0 × Fq F p for p ≫ 0 and ℓ ≠ p. Since X 0 is the mod q reduction of the complex variety X 0,C , we then deduce Theorem 5.3 from the comparison theorem and Poincaré duality.
5.1.
Deforming the moment map to produce a cohomologically trivial family. As the stability parameter θ satisfies θ · d = 0, it determines a central element (θI dv ) v∈V ∈ g d , which we also denote by θ. Let L = kθ ⊂ g d denote the line joining θ and 0. Then we consider the fibres of the moment map over points in L; let
X := µ −1 (L)// χ θ G d ,
which we naturally view as a family over L ∼ = A 1 . The special fibre of X over 0 ∈ A 1 is precisely the variety X 0 considered above and the general fibre of X over a non-zero point in A 1 is isomorphic to the variety
X := M θ−ss d (Q, R θ ) = µ −1 (θ)// χ θ G d .
We note that we can construct the family X → A 1 over any field k and also over Spec Z, as the varieties Rep d (Q) and GL d and the morphism µ are all defined over the integers.
Proposition 5.4 ([12, Lemma 2.3.3]).
Over k = C, the family X → A 1 is topologically trivial.
Proof. We recall that Rep d (Q) is hyperkähler and so it has a 2-sphere of Kähler structures, as the multiplicative group H * acts (by right multiplication) on Rep d (Q); this permutes the complex structures and the subgroup SU(2) ∼ = {β ∈ H : ββ † = 1} acts isometrically with respect to the hyperkähler metric. Let us write the hyperkähler moment map for the action of the maximal compact subgroup U d < GL d as a map
µ HK : Rep d (Q) → Im(H) ⊗ R u * d X → i ⊗ µ I (X) + j ⊗ µ J (X) + k ⊗ µ K (X),
where µ I = µ R and µ J + iµ K = µ C = µ. For the H * -action on Im(H) given by β · α = βαβ † , the hyperkähler moment map is H * -equivariant: for β ∈ H * and X ∈ Rep d (Q), we have µ HK (X · β) = βµ HK (X)β † .
We will use the transitivity of the H * -action on Im(H) • := Im(H) − {0} to construct a trivialisation X 0 × C ∼ = X. Since this action is transitive, for fixed α ∈ Im(H) • the action map (−) · α : H * → Im(H) • admits a continuous section s : C → H * over any contractible subset C ⊂ Im(H) • containing α. For any coadjoint fixed point θ ∈ u * d , we obtain a local continuous trivialisation of the hyperkähler moment map
µ −1 HK (α ⊗ θ) × C ∼ = µ −1 HK (C ⊗ θ), (X, c) → X · s(c)
which is U d -equivariant and so gives rise to a continuous isomorphism
µ −1 HK (α ⊗ θ)/U d × C ∼ = µ −1 HK (C ⊗ θ)/U d . We apply this to α = i ∈ C = {i + jC} ⊂ Im(H) • . Then µ −1 HK (α ⊗ θ) ∼ = µ −1 R (θ) ∩ µ −1 C (0) and µ −1 HK (C ⊗ θ) = µ −1 HK ((i + jC) ⊗ θ) = µ −1 R (θ) ∩ µ −1 C (Cθ) = µ −1 R (θ) ∩ µ −1 C (L)
and so we obtain a continuous trivialisation over
C ∼ = C (µ −1 R (θ) ∩ µ −1 C (0))/U d × C ∼ = (µ −1 R (θ) ∩ µ −1 C (L))/U d .
By the Kempf-Ness theorem this gives a homeomorphism
X 0 × C = µ −1 C (0)// χ θ GL d × C ∼ = µ −1 C (L)// χ θ GL d = X
which proves that the family X → C is topologically trivial.
We will apply this result to deduce that over a finite field F q of sufficiently large prime characteristic the F q -varieties X 0 and X have the same point count; an algebraic proof is also given by Nakajima in [12].
5.2.
Purity of the special fibre X 0 via the scaling action. In this section, we consider the GIT quotient X 0 over a field k. We recall that X 0 := µ −1 (0)// χ θ G d is projective over the affine variety Aff(X 0 ) := µ −1 (0)//GL d , which is equal to the spectrum of the ring of GL d -invariants on µ −1 (0) = Rep d (Q, R 0 ). Thus we have a commutative diagram
µ^{-1}(0)^{θ-ss} ----→ µ^{-1}(0)
        |                    | π
        ↓                    ↓
       X_0 ------ p -----→ Aff(X_0)
where the map p is projective and the map π denotes the affine GIT quotient. Since θ is generic with respect to d, the k-variety X 0 is smooth (as in the proof of Lemma 5.22 below).
There is a dilating G m -action on Rep d (Q) given by scalar multiplication on the matrices over all arrows with a unique fixed point corresponding to the origin. Moreover, the limit of every point in Rep d (Q) under the action of t ∈ G m as t → 0 exists and is equal to the origin. Hence, this is a semi-projective G m -action in the sense of the following terminology introduced in [20].
Definition 5.5. A G m -action on a smooth quasi-projective variety Z is semi-projective if Z Gm is projective and for all z ∈ Z the limit lim t→0 t · z exists in Z.
Here by this limit existing, we mean that the map G m → Z given by t → t · z extends to a morphism A 1 → Z (such an extension is unique if it exists, as Z is separated).
Example 5.6. The moduli space of semistable Higgs bundles of coprime rank and degree over a smooth projective algebraic curve has a semi-projective G m -action given by scaling the Higgs field [56].
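For intuition, here is a hypothetical toy comparison (not from the text) of two G_m-actions on A^2: scaling both coordinates is semi-projective (the only fixed point is the origin and every limit as t → 0 exists), whereas the action t · (x, y) = (tx, t^{-1}y) is not, since the limit fails to exist whenever y ≠ 0.

```python
import sympy as sp

t, x, y = sp.symbols('t x y', positive=True)

# Action 1: t.(x, y) = (t*x, t*y); both coordinates flow to the origin as t -> 0.
print(sp.limit(t * x, t, 0, '+'), sp.limit(t * y, t, 0, '+'))   # 0 0

# Action 2: t.(x, y) = (t*x, y/t); the second coordinate blows up, so the limit
# does not exist in A^2 and the action is not semi-projective.
print(sp.limit(y / t, t, 0, '+'))                               # oo
```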
The key feature of semi-projective G m -actions is that they give rise to a Białynicki-Birula decomposition [6] of Z, which gives a description of the cohomology (and other invariants, such as the Chow groups and motive) of Z in terms of that of its fixed locus. Since the fixed locus is smooth and projective, we will deduce that Z is (cohomologically) pure in §5.3.
The scaling G m -action on Rep d (Q) commutes with the GL d -action and the algebraic moment map is G m -equivariant with respect to this action and the G m -action on gl d of weight 2. Hence, there is an induced G m -action on µ −1 (0) and its GIT quotients X 0 and Aff(X 0 ) such that the map p : X 0 → Aff(X 0 ) is G m -equivariant. We can then prove that this G m -action on X 0 is semiprojective as in [20].
Proposition 5.7. This scaling action of G m on X 0 is semi-projective.
Proof. This argument is given in [12] and in [20]. We first show that this statement holds for the affine variety Aff(X 0 ). Let x 0 := π(0) ∈ Aff(X 0 ) denote the image of the origin 0 ∈ µ −1 (0) under the affine GIT quotient π. Then x 0 is fixed by the G m -action as π : µ −1 (0) → Aff(X 0 ) is G m -equivariant. In fact, this is the only G m -fixed point in Aff(X 0 ) and all other points x ∈ Aff(X 0 ) satisfy lim t→0 t · x = x 0 , as the same statement holds for µ −1 (0) and thus the G m -action on Aff(X 0 ) = Spec O(µ −1 (0)) GL d induces a grading on O(µ −1 (0)) GL d which is concentrated in non-positive degrees and with weight zero piece isomorphic to k.
Since p is projective and G m -equivariant, the fixed locus X Gm 0 = p −1 (x 0 ) is projective and the flow under the G m -action as t → 0 exists for all points in X 0 . Thus the G m -action on X 0 is semi-projective.
Hence, there is an associated Białynicki-Birula decomposition [6] of X 0 and the flow X 0 → p −1 (x 0 ) under this G m -action defines a homotopy retract. In particular, the cohomology of X 0 can be described in terms of the cohomology of the smooth projective variety p −1 (x 0 ). By Proposition 5.11 below, we deduce that X 0 is (cohomologically) pure.
5.3.
Purity and point counting over finite fields. By the Weil conjectures and comparison theorems between singular and ℓ-adic cohomology, the Betti numbers of a smooth projective complex variety Y , which is defined over a number field and has good reduction Z modulo a prime p, can be calculated by counting points of Z over F q where q is a power of p. In this section, we will explain a generalisation of the above statement to smooth pure varieties.
Example 5.8. Let us consider the point count of P^n. Over F_q, we have |P^n(F_q)| = (q^{n+1} − 1)/(q − 1) = 1 + q + q^2 + · · · + q^n and the coefficients are precisely the even Betti numbers of P^n.
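This can be checked directly; the sketch below (a minimal brute-force computation over small prime fields) counts the points of P^n(F_q) as scaling orbits of non-zero vectors and compares with 1 + q + · · · + q^n.

```python
from itertools import product

def count_Pn(q, n):
    # Points of P^n(F_q): orbits of non-zero vectors in F_q^{n+1} under simultaneous scaling.
    vectors = [v for v in product(range(q), repeat=n + 1) if any(v)]
    classes = set()
    for v in vectors:
        classes.add(frozenset(tuple((s * c) % q for c in v) for s in range(1, q)))
    return len(classes)

for q in [2, 3, 5]:
    for n in [1, 2, 3]:
        print(q, n, count_Pn(q, n), sum(q**i for i in range(n + 1)))   # the counts agree
```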
Let us start by recalling the properties of ℓ-adic cohomology that we will need to define purity. Let p be a prime number and q be a power of p and fix a prime ℓ ≠ p. For an F q -variety Z, we write Z := Z × Fq F q for the base change to the algebraic closure. The (compactly supported) ℓ-adic cohomology groups of Z
H^i_c(Z, Q_ℓ) := ( lim_{←r} H^i_{c,ét}(Z, Z/ℓ^r Z) ) ⊗_{Z_ℓ} Q_ℓ
are finite-dimensional Q ℓ -vector spaces that have many of the properties of the usual (compactly supported) singular cohomology groups defined for varieties over k ⊂ C. Let Z and Y be F q -varieties; then we have the following properties.
• Functoriality: for proper morphisms f : Z → Y we have H^i_c(Y, Q_ℓ) → H^i_c(Z, Q_ℓ).
• Künneth isomorphisms: H^*_c(Y × Z, Q_ℓ) ≅ H^*_c(Y, Q_ℓ) ⊗ H^*_c(Z, Q_ℓ).
• Vanishing properties: H^i_c(Z, Q_ℓ) ≠ 0 only for 0 ≤ i ≤ 2 dim Z.
• For a Zariski-locally trivial A^n-fibration Y → Z, we have H^i_c(Y, Q_ℓ) ≅ H^{i−2n}_c(Z, Q_ℓ) ⊗ H^2_c(A^1, Q_ℓ)^{⊗n}.
• Gysin long exact sequences for closed subvarieties Z ⊂ Y with U := Y − Z:
· · · → H^i_c(U, Q_ℓ) → H^i_c(Y, Q_ℓ) → H^i_c(Z, Q_ℓ) → H^{i+1}_c(U, Q_ℓ) → · · ·
• Poincaré duality for smooth F_q-varieties.
For an in-depth treatment of étale cohomology and the Weil conjectures, see the book of Milne [39]. Let F q be a finite field of positive characteristic p. For an F q -variety Z, we let Fr Z : Z → Z denote the relative Frobenius. The fixed points of the relative Frobenius on Z are precisely the set of F q -points in Z and similarly the fixed points of Fr n Z are Z(F q n ). In fact, the number of such points can be computed using the induced Frobenius action on H i c (Z, Q ℓ ). Theorem 5.9 (The Grothendieck-Lefschetz trace formula). Let Z be a smooth variety over a finite field F q of characteristic p > 0. Then for ℓ ≠ p, we have
|Z(F_{q^n})| = ∑_{i=0}^{2 dim Z} (−1)^i Tr(Fr_Z^n : H^i_c(Z, Q_ℓ)).
The final part of the Weil conjectures was Deligne's proof of the Riemann hypothesis: for a smooth and projective F_q-variety Z all eigenvalues of Fr_Z on H^i_c(Z, Q_ℓ) have absolute value q^{i/2} (for any choice of embedding Q_ℓ ↪ C). This motivates the following definition of purity.
Definition 5.10. An F q -variety Z is (cohomologically) pure if all eigenvalues of Fr Z on H i c (Z, Q l ) have absolute value q i/2 .
Thus Deligne proved that all smooth projective varieties are pure. We can now give a standard proof of the purity of a smooth quasi-projective variety with a semi-projective G m -action using the Białynicki-Birula decomposition [6]. In particular, this will provide a proof of the purity of the F q -variety X 0 mentioned at the end of §5.2.
Proposition 5.11 ([12, Lemma A.2]). Let Z be a smooth quasi-projective F q -variety with a semi-projective G m -action; then Z is pure.
Proof. The assumptions imply that Z has the following Białynicki-Birula decomposition [6]. Let Z Gm = ∪ j∈J Z j denote the decomposition of the fixed locus into connected components; then there is a decomposition
Z = ⊔_{j∈J} Z^+_j,   where Z^+_j := {z ∈ Z : lim_{t→0} t · z ∈ Z_j},
and the limit map p_j : Z^+_j → Z_j is a Zariski locally trivial affine space fibration. By assumption, the smooth varieties Z_j are projective, and thus pure; the same also holds for the smooth strata Z^+_j, as p_j : Z^+_j → Z_j is a Zariski locally trivial affine fibration. Finally, the fact that Z is quasi-projective means that there is a filtration of Z by closed subsets whose successive differences are the strata Z^+_j, and one can show that the Gysin sequences associated to this filtration of Z split into short exact sequences using the purity of the strata Z^+_j. Consequently, we deduce that Z is also pure. For certain pure smooth F_q-varieties, their ℓ-adic Betti numbers can be described using the following result.
Lemma 5.12 ([12, Lemma A.1]). Let Z be a smooth variety defined over F q which is pure and has polynomial point count over F q r ; that is |Z(F q r )| = P (q r ) for a polynomial P (t) ∈ Z[t]. Then
P(q) = ∑_{i≥0} dim H^{2i}_c(Z, Q_ℓ) q^i, and in particular P(t) ∈ N[t].
Let us finally note that via the comparison theorem between étale and singular cohomology, one can relate this result concerning ℓ-adic cohomology with the usual singular cohomology. We will return to this statement in the final step.
In fact, we will want to compare the Betti cohomology of a complex variety with the point count of a reduction of this variety to a finite field using a theorem of Katz, which appears as an appendix in [19]. For a complex variety Z_C, we can choose a spreading out Z_R of Z_C over a finitely generated Z-algebra R (i.e. Z_C = Z_R ×_R C) and let Z be a reduction of Z_R to some finite field F_q. If Z has polynomial point count P_Z(t) ∈ Z[t], then the E-polynomial of Z_C (whose coefficients are the virtual Hodge numbers) is given by E_{Z_C}(x, y) = P_Z(xy). If additionally the compactly supported cohomology of Z is pure, then P_Z(q) = E_{Z_C}(q^{1/2}, q^{1/2}) = P_c(Z_C, q^{1/2}) (where P_c denotes the compactly supported Poincaré polynomial).
5.4.
Point count for the general fibre X and absolutely indecomposable representations. In this section, we let p be a prime number and q be a power of p. The goal is to compute |X(F q )|. The first result we need is that for all sufficiently large primes, all points in µ −1 (θ) are θ-stable by the following lemma.
Lemma 5.13. Let θ be a generic stability parameter with respect to d. Then for a field k = F q of sufficiently large prime characteristic, we have µ −1 (θ) θ−ss = µ −1 (θ) θ−s = µ −1 (θ).
Proof. This follows by Lemma 4.5 as µ −1 (θ) = Rep d (Q, R θ ).
In order to count points of µ −1 (θ) over F q , we will relate such representations of (Q, R θ ) with absolutely indecomposable representations of Q using a theorem of Crawley-Boevey [11,Theorem 3.3] concerning the liftings of indecomposable representations of Q to (Q, R θ ). We recall that there is a natural projection Rep d (Q) → Rep d (Q), whose restriction to the fibre of the moment map over θ we denote by
π : µ −1 (θ) → Rep d (Q).
Theorem 5.14 (Crawley-Boevey [11]). For θ generic with respect to d, the image of π : µ −1 (θ) → Rep d (Q) on F q -points is the set of indecomposable representations. Moreover, the fibre of π over an indecomposable d-dimensional F q -representation W of Q is identified with the dual of the space of self-extensions of W
π −1 (W ) ∼ = Ext 1 Q (W, W ) * . Proof. For W ∈ Rep d (Q)(F q ),
we consider the dual of the exact sequence in Exercise 2.3
0 → Ext^1_Q(W, W)^* → Rep_d(Q^{op}) → gl_d^* → End(W)^* → 0. Then W lifts to a representation of (Q, R_θ) if and only if θ is in the image of Rep_d(Q^{op}) → gl_d^*; that is, ∑_{v∈V} θ_v Tr(f_v) = 0 for any f ∈ End(W). If W = W_1 ⊕ W_2 and f ∈ End(W)
is the projection onto W 1 , then it follows that θ · dim(W 1 ) = 0. Since θ is assumed to be generic with respect to d, we see that only indecomposable representations of Q can lift to (Q, R θ ). To prove that an indecomposable representation lifts, one uses the fact that End(W ) is local for W indecomposable (see Lemma 5.17).
A final technical tool required to relate the point count of X with absolutely indecomposable representations of Q over finite fields of large characteristic is Burnside's formula for the number of orbits under a finite group action.
Lemma 5.15 (Burnside's formula). Let G be a finite group acting on a finite set S. Then the number of orbits is
|S/G| = (1/|G|) ∑_{s∈S} |Stab_G(s)|.
Proposition 5.16. Let θ be generic with respect to the indivisible dimension vector d. Then for a finite field F_q of sufficiently large characteristic we have
A_{Q,d}(q) = q^{−e} |X(F_q)|,
where e := (1/2) dim X.
Proof. For primes p ≫ 0 and q = p^r, we have that all points in the F_q-variety µ^{-1}(θ) are θ-stable by Lemma 5.13. Hence µ^{-1}(θ) → X = µ^{-1}(θ)//_{χ_θ} G_d is a principal G_d-bundle. Furthermore, as the Brauer group of F_q is trivial, the rational points of X are isomorphism classes of F_q-representations of (Q, R_θ) so
X(F q ) ∼ = µ −1 (θ)(F q )/G d (F q )
and as G_d acts freely on µ^{-1}(θ), we have
(5.1)   |X(F_q)| = |µ^{-1}(θ)(F_q)| / |G_d(F_q)|.
We now relate this point count to A_{Q,d}(q) using Theorem 5.14. Since d is indivisible, absolutely indecomposable and indecomposable d-dimensional representations coincide, so by Burnside's formula (Lemma 5.15) applied to the G_d(F_q)-action on the set of absolutely indecomposable points of Rep_d(Q)(F_q), we obtain
A_{Q,d}(q) = (1/|G_d(F_q)|) ∑_{W ∈ Rep_d(Q)^{a.i.}(F_q)} q^{−1} |End_Q(W)|,
where we use that Stab_{G_d}(W) ≅ Aut_Q(W)/G_m and so |Stab_{G_d}(W)| = q^{−1} |End_Q(W)| by Lemma 5.17 below. Then by Theorem 5.14, we obtain
(5.2)   A_{Q,d}(q) = (1/|G_d(F_q)|) ∑_{W ∈ µ^{-1}(θ)(F_q)} q^{−1} |End(π(W))| / |Ext^1(π(W), π(W))|.
Since dim π(W) = d, we have that
⟨d, d⟩_Q = dim End_Q(π(W)) − dim Ext^1_Q(π(W), π(W))
by Exercise 2.3, so that |End(π(W))| / |Ext^1(π(W), π(W))| = q^{⟨d,d⟩_Q}. Therefore, combining (5.1) and (5.2) we obtain
A_{Q,d}(q) = q^{⟨d,d⟩_Q − 1} |X(F_q)|.
Finally, we recall from (2.2) that ⟨d, d⟩_Q = dim GL_d − dim Rep_d(Q), and so dim M^{θ-ss}_d(Q) = 1 − ⟨d, d⟩_Q, as G_m ≅ ∆ ⊂ GL_d acts trivially on Rep_d(Q). Hence e = (1/2) dim X = dim M^{θ-ss}_d(Q) = 1 − ⟨d, d⟩_Q and q^{⟨d,d⟩_Q − 1} = q^{−e}, which completes the proof.
It remains for us to describe the endomorphism ring of an absolutely indecomposable representation.
Lemma 5.17. Let W be an indecomposable F q -representation of Q. Then the following statements hold.
(1) Every endomorphism of W is either nilpotent or invertible.
(2) End Q (W ) is local with nilpotent radical End nil Q (W ). (3) k W := End Q (W )/ End nil Q (W ) is a finite field containing F q . If W is absolutely indecomposable, then k W = F q and
|End_Q(W)| / |Aut_Q(W)| = q/(q − 1).
Proof. By the fitting lemma, for an endomorphism f of W we have W = ker(f r ) ⊕ Im(f r ) for some r as W has finite length, and thus either f is nilpotent or invertible. As a corollary, any finite-dimensional algebra which has only 0 and 1 as idempotents, is a local ring with nilpotent radical. This proves the first two statements. For any local ring with nilpotent radical, the quotient by this ideal is a division algebra. Hence k W := End Q (W )/ End nil Q (W ) is a finite division algebra and by Wedderburn's theorem, we deduce that k W is a finite field k W containing F q . This proves the first three statements.
Let n = [k W : F q ] and W ′ = W ⊗ Fq k W ; then as F q is perfect, we have
k_{W′} = End_Q(W′)/End^{nil}_Q(W′) ≅ (End_Q(W)/End^{nil}_Q(W)) ⊗_{F_q} k_W = k_W ⊗_{F_q} k_W = k_W^{⊕n}.
Hence W′ is a direct sum of n pairwise non-isomorphic indecomposable k_W-representations. In particular, if W is absolutely indecomposable, then W′ is indecomposable and thus k_W = F_q. For an absolutely indecomposable representation W, let p : End_Q(W) → End_Q(W)/End^{nil}_Q(W) ≅ F_q denote the projection; then as End^{nil}_Q(W) = p^{-1}(0) and Aut_Q(W) = p^{-1}(F_q^×), we have
|End^{nil}_Q(W)| / |Aut_Q(W)| = 1/(q − 1).
The final formula then follows, as | End Q (W )| = | End nil Q (W )| + | Aut Q (W )|.
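As a sanity check of the final formula (a hypothetical sketch, not from the text): take the quiver with one vertex and one loop and the absolutely indecomposable 2-dimensional representation W given by a nilpotent 2 × 2 Jordan block over F_q. A brute-force computation of its endomorphisms and automorphisms confirms |End_Q(W)| / |Aut_Q(W)| = q/(q − 1).

```python
from fractions import Fraction
from itertools import product

def check(q):
    N = ((0, 1), (0, 0))   # the representation W: a single nilpotent Jordan block

    def mul(A, B):
        return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % q for j in range(2)) for i in range(2))

    def det(A):
        return (A[0][0] * A[1][1] - A[0][1] * A[1][0]) % q

    # Endomorphisms of W are the 2x2 matrices over F_q commuting with N; automorphisms are the invertible ones.
    mats = [((a, b), (c, d)) for a, b, c, d in product(range(q), repeat=4)]
    End = [A for A in mats if mul(A, N) == mul(N, A)]
    Aut = [A for A in End if det(A) != 0]
    return Fraction(len(End), len(Aut)) == Fraction(q, q - 1)

print([check(q) for q in [2, 3, 5]])   # [True, True, True]
```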
5.5.
Kac's theorem on absolutely indecomposable quiver representations. The starting point for Kac's work [28,30] is a remarkable discovery of Gabriel, which relates the indecomposable representations of quivers of finite representation type (i.e. with only finitely many isomorphism classes of indecomposable representations) and the positive roots of semisimple Lie algebras. Before stating this theorem, we recall that for a quiver without oriented cycles, the simple objects in Rep(Q, k) are in bijection with the set of vertices V . Hence, the dimension vector induces an isomorphism
dim : K 0 (Rep(Q, k)) → Z V
from the Grothendieck group of this category to the free abelian group generated by V .
Theorem 5.18 (Gabriel). Let Q be a connected quiver without oriented cycles.
(1) Q is of finite type if and only if the underlying graph of Q is a simply-laced Dynkin diagram.
(2) In this case, if g Q denotes the corresponding semisimple complex Lie algebra for this Dynkin diagram, then dim : K 0 (Rep(Q, k)) → Z V induces a bijection between the set of isomorphism classes of indecomposable representations of Q and the set of positive roots of g.
A nice exposition of this result is given in [9]. Bernstein, Gelfand and Ponomarev [5] provided a proof of this result which enhances this remarkable link between quiver representations and Lie algebras, by using reflection functors associated to the vertices of Q to construct all indecomposable representations of a quiver Q of finite representation type from simpler ones analogously to the way all positive roots in the corresponding Lie algebra arise from the simple roots by reflections given by elements of the Weyl group.
Kac [28] associates to a quiver Q with n vertices a root system ∆_Q ⊂ Z^n and Weyl group W_Q that only depend on the underlying graph of Q as follows; in [29] the necessary modifications for quivers with loops are given. For a quiver Q, we recall that the Euler form ⟨−, −⟩_Q on the lattice Z^n defines a matrix
B_Q = (b_{ij})   where   b_{ij} := 1 − |{a : i → i}| if i = j,   and   b_{ij} := −|{a : i → j}| if i ≠ j.
The symmetrised form (−, −)_Q has associated symmetric matrix A_Q = B_Q + B_Q^t.
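For instance, here is a small computation (a hypothetical illustration using the formulas above) for the 2-arrow Kronecker quiver: B_Q and A_Q can be written down directly, A_Q is the generalised Cartan matrix of affine type A_1, the fundamental reflections square to the identity, and the symmetrised form vanishes on the dimension vector d = (1, 1).

```python
import numpy as np

# 2-arrow Kronecker quiver: vertices {1, 2}, two arrows from 1 to 2, no loops.
B = np.array([[1, -2],
              [0,  1]])            # the matrix B_Q = (b_ij) defined above
A = B + B.T                        # symmetrised matrix A_Q
print(A)                           # [[ 2 -2] [-2  2]]

def reflection(i):
    # r_i(alpha_j) = alpha_j - a_ij * alpha_i, written as a matrix acting on Z^2
    R = np.eye(2, dtype=int)
    for j in range(2):
        R[i, j] -= A[i, j]
    return R

r1, r2 = reflection(0), reflection(1)
print(np.array_equal(r1 @ r1, np.eye(2, dtype=int)),
      np.array_equal(r2 @ r2, np.eye(2, dtype=int)))   # True True

d = np.array([1, 1])
print(d @ A @ d)                   # 0: the symmetrised form (d, d)_Q vanishes for d = (1, 1)
```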
If Q is a quiver without loops, then A_Q is a symmetric generalised Cartan matrix. Let {α_1, . . . , α_n} denote the standard basis of Z^n and we define the set of fundamental roots Π_Q = {α_i : a_ii = 2} to be the basis vectors corresponding to vertices without loops. Then each fundamental root α_i ∈ Π_Q determines a reflection r_i ∈ Aut(Z^n) defined by r_i(α_j) = α_j − a_ij α_i and we define the Weyl group W_Q to be the subgroup generated by the fundamental reflections. There is an associated root system ∆_Q = ∆^+_Q ∪ −∆^+_Q where ∆^+_Q is a set of positive roots, which decompose into real and imaginary roots. The real roots are the images of the fundamental roots under the Weyl group; these are the only roots if Q is of finite representation type. For the construction of the imaginary roots, see [29, §1.1]. If Q is a quiver without loops, then there is a (typically infinite-dimensional) Lie algebra g_Q called the Kac-Moody Lie algebra associated to the symmetric generalised Cartan matrix A_Q.
Theorem 5.19 (Kac [28,30]).
(1) The number A_{Q,d}(q) of absolutely indecomposable d-dimensional quiver representations over F_q does not depend on the orientation of Q and satisfies A_{Q,w(d)}(q) = A_{Q,d}(q) for w ∈ W_Q.
(2) The map dim : K_0(Rep(Q, k)) → Z^V induces a surjective map from the set of isomorphism classes of indecomposable representations of Q over an algebraically closed field k onto the set ∆^+_Q of positive roots.
(3) A Q,d (q) is a polynomial in q with integral coefficients.
For the polynomial behaviour of A Q,d (q) it suffices to prove that the number I Q,d (q) of isomorphism classes of indecomposable d-dimensional F q -representations of Q is given by a polynomial in q by using standard reductions involving Galois descent. By the Krull-Schmidt theorem and induction on d, it then suffices to show the count M Q,d (q) of isomorphism classes of d-dimensional F q -representations of Q is polynomial in q. Kac computes M Q,d (q) using Burnside's theorem, where one must sum over all conjugacy classes of GL d for all d by enumerating all possible Jordan normal forms using polynomials (giving the splitting field) and partitions (giving the sizes of the Jordan blocks). He then deduces the polynomial behaviour of M Q,d (q) (and thus A Q,d (q)). Kac proves the independence of the orientation of Q using reflection functors and the fact that indecomposable representations correspond to orbits in Rep d (Q) with unipotent stabiliser group.
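As a toy instance of this counting (a hypothetical brute-force check, not from the text): for the quiver with one vertex and one loop, M_{Q,d}(q) is the number of conjugacy classes of d × d matrices over F_q; for d = 2 an enumeration of the possible canonical forms gives q^2 + q, which the sketch below confirms over F_2 and F_3.

```python
from itertools import product

def conjugacy_classes_2x2(q):
    # Count conjugacy classes of 2x2 matrices over the prime field F_q,
    # i.e. M_{Q,2}(q) for the quiver with one vertex and one loop.
    def mul(A, B):
        return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % q for j in range(2)) for i in range(2))

    def inv(A):
        d = (A[0][0] * A[1][1] - A[0][1] * A[1][0]) % q
        dinv = pow(d, -1, q)
        return ((A[1][1] * dinv % q, -A[0][1] * dinv % q),
                (-A[1][0] * dinv % q, A[0][0] * dinv % q))

    mats = [((a, b), (c, d)) for a, b, c, d in product(range(q), repeat=4)]
    GL = [g for g in mats if (g[0][0] * g[1][1] - g[0][1] * g[1][0]) % q != 0]
    classes = set()
    for A in mats:
        classes.add(frozenset(mul(mul(g, A), inv(g)) for g in GL))
    return len(classes)

for q in [2, 3]:
    print(q, conjugacy_classes_2x2(q), q**2 + q)   # the brute-force count matches q^2 + q
```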
Hua [26] provided more explicit formulae for the polynomials A_{Q,d}(q) by considering generating functions for these counts, where one sums over all dimension vectors by introducing formal variables {X_v ; v ∈ V}. For each d = (d_v)_{v∈V}, we write X^d = ∏_{v∈V} X_v^{d_v}; then the Krull-Schmidt theorem for Rep(Q, k) gives a formal identity
(5.3)   ∑_{d∈N^V} M_{Q,d}(q) X^d = ∏_{d∈N^V \ {0}} (1 − X^d)^{−I_{Q,d}(q)}.
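For the quiver with one vertex and one loop this identity can be checked by hand in low degree. The sketch below is a hypothetical verification using the standard counts M_{Q,2}(q) = q^2 + q (conjugacy classes of 2 × 2 matrices, cf. the computation above), I_{Q,1}(q) = q and I_{Q,2}(q) = (q^2 + q)/2: the coefficient of X^2 on the right-hand side of (5.3) is binom(I_1 + 1, 2) + I_2, which indeed equals M_2.

```python
import sympy as sp

q = sp.symbols('q')

# Standard counts for the one-loop quiver, taken here as input assumptions:
M2 = q**2 + q             # isomorphism classes of 2-dimensional representations
I1 = q                    # indecomposables in dimension 1
I2 = (q**2 + q) / 2       # indecomposables in dimension 2

# Coefficient of X^2 in (1 - X)^(-I1) * (1 - X^2)^(-I2):
#   (1 - X)^(-I1)   contributes binomial(I1 + 1, 2) = I1*(I1 + 1)/2
#   (1 - X^2)^(-I2) contributes I2
coeff = I1 * (I1 + 1) / 2 + I2
print(sp.simplify(coeff - M2) == 0)   # True: identity (5.3) holds in degree 2
```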
For very simple quivers and low dimension vectors, it is possible to directly calculate A Q,d (q).
Exercise 5.20. For each of the following quivers and dimension vectors, calculate the Kac polynomial A Q,d (q):
(1) The Jordan quiver with dimension vector n ∈ N.
(2) The 2-arrow Kronecker quiver with dimension vector d = (1, 1).
5.6.
Specialisation and relating the cohomology of the special fibre and general fibre. In this final step, we will relate various cohomology groups associated to X and X 0 in order to prove the main result. In order to pass between the field of complex numbers and various finite fields, we will need to first state some results concerning GIT over the integers and base change. Indeed the affine space Rep d (Q), the group GL d , and the moment map µ are all defined over the integers, and so we can instead consider the above GIT quotients over Spec Z using Seshadri's GIT over a (Nagata) base ring. Although extensions of base fields commute with taking invariants (and thus taking the semistable set and the formation of the GIT quotient commute with base field extensions), the same is not true over rings. Let us consider the following set up: let R be a finitely generated Z-algebra with a maximal ideal p ⊂ R such that R/p ∼ = F q and fix an embedding R ֒→ C. Then for a variety Z over S := Spec R, we can construct by base change:
(1) an F q -variety Z := Z × Spec R Spec F q (the reduction of Z mod q) and (2) a complex variety Z C := Z × Spec R Spec C. Now suppose that G is a reductive group scheme over S acting on Z with respect to an ample linearisation, then we want to know whether formation of the GIT quotient commutes with these various base changes. In fact, for our purposes, it suffices to understand this for R = Z N = Z[ 1 N ] where N ∈ Z such that p ∤ N and so we can apply the following result.
Lemma 5.21 ([12, Appendix B]). For S := Spec Z N , we let G be a reductive group scheme over S acting on a quasi-projective S-scheme Z with respect to an ample linearisation. Then there is a non-empty open subscheme U ⊂ S over which the formation of the GIT semistable set and GIT quotient commutes with base change; that is, for all points s : Spec k → U , we have Z ss × S k = (Z × S k) ss and (Z//G) × S k ∼ = (Z × S k)//(G × S k).
We note that an open subset U ⊂ S := Spec Z N has the form U = Spec Z M for some N |M , and this means we just need to replace N by a sufficiently large multiple. If moreover p ∤ N , then we can base change from S = Spec Z N to F p when applying the above result. More precisely, for a variety Z over S we have the following base changes
Z_{F̄_p} ----→ Z_{F_{p^r}} ----→ Z ←---- Z_C
Spec F̄_p ----→ Spec F_{p^r} ----→ S ←---- Spec C
and provided N is sufficiently large these base changes all commute with the formation of GIT quotients and semistable sets by the above lemma.
We will also need the following preliminary result concerning the smoothness of X_0 and X over finite fields of large characteristic.
Lemma 5.22. Let θ be generic with respect to d. If k has characteristic zero or sufficiently large characteristic, then the morphism f : X → A^1 constructed above is smooth; in particular, X_0 and X are smooth k-varieties.
Proof. It suffices to prove that f is smooth after base changing to k = Q, as by a spreading out argument it is sufficient to prove that f is smooth over Q (as then the same statement holds over Z after inverting finitely many primes) and smoothness can be checked after any field extension. Since θ is generic, the notions of θ-stability and θ-semistability for d-dimensional k-representations coincide. As any θ-stable k-representation of Q is simple, we see that G_d acts freely on Rep_d(Q)^{θ-s}. Hence the infinitesimal stabiliser at these points is trivial and so it follows that the restriction of the moment map µ to Rep_d(Q)^{θ-s} is smooth, as µ lifts the infinitesimal action. Consequently, for the line L = kθ in the Lie algebra of G_d, we see that the induced morphism µ^{-1}(L)^{θ-s} → L is smooth, and as G_d acts freely on µ^{-1}(L)^{θ-s}, the G_d-quotient µ^{-1}(L)^{θ-s} → X is also smooth. Hence, we deduce that f : X → L ≅ A^1 is also smooth over k.
We are now in a position to complete the proof. The first goal is to relate the point count of X 0 and X in large characteristic.
Proposition 5.23. For a finite field F q of sufficiently large characteristic p, we have
|X 0 (F q )| = |X(F q )|.
Proof. Using the (topological) triviality of the family X → A 1 over C and the comparison theorem together with Deligne's base change result for direct images, we deduce that for p ≫ 0 and ℓ = p, there are isomorphisms
H i c (X × Fq F p , Q ℓ ) ∼ = H i c (X 0 × Fq F p , Q ℓ )
in ℓ-adic cohomology that are compatible with the Frobenius endomorphisms. By applying the Grothendieck-Lefschetz trace formula to both X and X 0 , which are both smooth F q -varieties in large characteristic by Lemma 5.22, we deduce the claim.
Remark 5.24. A more direct proof of this result is given by Nakajima as an appendix in [12], which involves comparing the Białynicki-Birula decomposition on the total space of the family X with the decompositions on the fibres of this family.
Putting all of the above together, we obtain the proof of Crawley-Boevey and Van den Bergh.
Proof of Theorem 5.3. Let F q be a finite field of sufficiently large characteristic p so that the construction of the GIT quotient X commutes with base change and the family X → A 1 is smooth. By Proposition 5.16 and Theorem 5.19, we see that X has polynomial point count given by
|X(F_q)| = q^e A_{Q,d}(q).
Provided p ≫ 0, this point count coincides with that of X 0 by Proposition 5.23. Since X 0 is pure by Propositions 5.7 and 5.11, we deduce that the ℓ-adic Poincaré polynomial of the F q -variety X 0 for q = p r and p sufficiently large is given by
(5.4)   A_{Q,d}(t) = t^{−e} ∑_{i≥0} dim H^{2i}_c(X_0 ×_{F_q} F_p, Q_ℓ) t^i
where e = (1/2) dim X_0 and ℓ ≠ p is prime. Now consider the family X → A^1 over Spec Z_N for sufficiently large N indivisible by p, then by base change we can obtain the F_p-variety X_0 ×_{F_q} F_p and the complex variety X_{0,C} and these base changes commute with the formation of the GIT quotient. In particular, the complex variety X_{0,C} is defined over Q and the F_p-variety X_0 is a mod p reduction of this complex variety. By smooth base change results and the comparison theorem [SGA4:3, Exposé XVI, Theorem 4.1], we obtain from (5.4) the corresponding equality for the Poincaré polynomial of the sheaf cohomology of X_{0,C} with values in the constant sheaf C
A_{Q,d}(q) = q^{−e} ∑_{i≥0} dim H^{2i}_c(X_{0,C}, C) q^i.
By Poincaré duality for the smooth variety X 0,C of dimension 2e, we deduce
A_{Q,d}(q) = ∑_{i=0}^{e} dim H^{2e−2i}(X_{0,C}(C), C) q^i,
where now we switch from sheaf cohomology to the singular cohomology of the analytic variety X_{0,C}(C).
5.7.
A brief survey of Schiffmann's results for bundles. Let X be a smooth projective curve over a finite field F q . Then the category of coherent sheaves over X is an abelian category of homological dimension 1 with a group homomorphism
cl : K 0 (Coh(X)) → Z 2 , [F ] → cl(F ) = (rk(F ), deg(F ))
through which the Euler form factors
⟨E, F⟩ = rk E · rk F · (1 − g) + rk E · deg F − rk F · deg E.
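For example (a trivial numerical illustration, not from the text): taking E = O_X in this formula recovers the Riemann–Roch expression ⟨O_X, F⟩ = χ(F) = deg F + rk F · (1 − g).

```python
def euler_form(rkE, degE, rkF, degF, g):
    # <E, F> = rk E * rk F * (1 - g) + rk E * deg F - rk F * deg E
    return rkE * rkF * (1 - g) + rkE * degF - rkF * degE

g = 3
for (rkF, degF) in [(1, 0), (2, 5), (3, -1)]:
    # <O_X, F> agrees with chi(F) = deg F + rk F * (1 - g)
    print(euler_form(1, 0, rkF, degF, g), degF + rkF * (1 - g))
```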
Moreover, this is a Krull-Schmidt category and so there is naturally a notion of (absolutely) indecomposable objects. For coprime rank n and degree d, Schiffmann discovered an analogous relationship between the count A n,d (X) of isomorphism classes of absolutely indecomposable vector bundles on X/F q and the Betti cohomology of the moduli space of semistable Higgs bundles on X. In fact, the case of vector bundles involves several technical issues which did not arise in the quiver setting:
(1) For quivers, the count of absolutely indecomposable representations of Q was polynomial in the size of the finite field and was independent of the orientation. Moreover, there was a representation theoretic interpretation: the underlying graph of the quiver determined a root system and Weyl group, under which the polynomial counting absolutely indecomposable representations was invariant. For bundles on curves, one would like to understand the behaviour of this count as X varies in the moduli space of genus g curves over F q and give a representation theoretic interpretation of the associated polynomial.
(2) There are many more vector bundles than quiver representations: while the stack of quiver representations of fixed dimension vector is a finite type stack, the stack of vector bundles of fixed class is only locally of finite type. Hence, to prove the polynomial behaviour of the count of indecomposable bundles, one cannot use the Krull-Schmidt theorem and count all vector bundles, as this is infinite.
(3) Although the moduli space of Higgs bundles admits a gauge theoretic construction as a holomorphic symplectic reduction, we need an algebraic version for working over finite fields. Furthermore, to relate Higgs bundles to indecomposable vector bundles, it is necessary to employ a similar trick to the above trick of Crawley-Boevey and Van den Bergh: one needs to find a suitable family of algebraic symplectic reductions over the affine line that contains the moduli space of Higgs bundles as the special fibre.
The description of the behaviour of this count as X varies in the moduli space of genus g curves is achieved by Schiffmann in [52]: he proves that there is a polynomial (depending on (n, d) and the genus g of the curve) in the Weil numbers of a curve over a finite field which gives the counts A_{n,d}(X) for any curve X over a finite field by evaluation at the Weil numbers of X; moreover, by work of Mellit [38], this polynomial is actually independent of the degree d. To state this precisely, we recall that the Weil numbers of a genus g smooth projective curve X/F_q are the eigenvalues σ_1, · · · , σ_{2g} of the Frobenius acting on H^1_et(X ×_{F_q} F̄_q, Q_ℓ). If we fix an embedding Q_ℓ ↪ C, then we can view the Weil numbers of X as a tuple of complex numbers of absolute value q^{1/2} and order them as complex conjugate pairs (σ_{2i−1}, σ_{2i}) which satisfy σ_{2i−1} σ_{2i} = q for all 1 ≤ i ≤ g. This tuple gives rise to a point in the torus T_g := {(σ_1, · · · , σ_{2g}) ∈ G_m^{2g} : σ_{2i−1} σ_{2i} = σ_{2j−1} σ_{2j} ∀ 1 ≤ i, j ≤ g} and the natural action of W_g := S_g ⋉ (S_2)^g takes care of the choices made in the above ordering into complex conjugate pairs. Let π : T_g(C) → T_g(C)/W_g denote the quotient map and define σ_X := π(σ_1, · · · , σ_{2g}) to be the image of the Weil numbers of X.
Let R_g := Q[z_1, ..., z_{2g} : z_{2i−1} z_{2i} = z_{2j−1} z_{2j} ∀ 1 ≤ i, j ≤ g]^{W_g}. Then we can evaluate any element in R_g at σ_X for any genus g smooth projective curve X over a finite field. In genus 0, we set R_0 = Q[q^{±1}].

Theorem 5.25 (Schiffmann [52]). For a fixed genus g and pair (n, d) ∈ N × Z, there is a unique element A_{g,n,d} ∈ R_g such that for any smooth projective geometrically connected curve X of genus g over a finite field, we have A_{g,n,d}(σ_X) = A_{n,d}(X).
Mellit [38] showed that this polynomial A_{g,n,d} is actually independent of the degree d, so we can write simply A_{g,n}; his proof is combinatorial and does not give a geometric reason for this independence. Moreover, these polynomials also exist in rank n = 0, where they count absolutely indecomposable torsion sheaves.
In fact, Schiffmann also provides a representation theoretic interpretation of W g , T g and R g : the Frobenius Fr X on H 1 et (X × Fq F q , Q l ) ∼ = Q l 2g is an element of the general symplectic group GSp(H 1 et (X × Fq F q , Q l )) where we equip this vector space with the intersection form. The character ring of this general symplectic group is R g and (T g , W g ) are a maximal torus and associated Weyl group of GSp(2g, Q l ).
Schiffmann's proof of this theorem gives rise to explicit (but complicated) formulae for these polynomials; these were later substantially simplified combinatorially by Mellit [38]. The fact that the number of absolutely indecomposable vector bundles over X of fixed class is finite follows from the observation that any sufficiently unstable coherent sheaf is decomposable, as its Harder-Narasimhan filtration must split in some place, and so the stack of absolutely indecomposable vector bundles is a constructible substack of a finite type stack. Therefore, A_{n,d}(X) also counts isomorphism classes of absolutely indecomposable coherent sheaves over X of this class. The standard arguments for quivers involving Galois cohomology and the Krull-Schmidt theorem still apply to reduce the problem to counting all isomorphism classes of coherent sheaves on X of this class, but this number is infinite for n > 0. In fact, we should also point out that this count is not the same as the stacky volume of the stack of coherent sheaves, where one weights the count by the inverses of the sizes of the automorphism groups (this stacky volume has a very elegant formula involving the zeta function of the curve [16]). Instead, Schiffmann uses a suitable truncation of the category of coherent sheaves on X given by the subcategory of positive coherent sheaves (i.e. sheaves whose HN subquotients have positive degrees). Similar to the case of quivers, one can perform a unipotent reduction and partition the stack of positive sheaves by Jordan normal types. The representation theoretic interpretation of this polynomial involves a spherical Hall algebra and is ongoing work of Schiffmann and collaborators; for a nice overview of this work, see [53]. Furthermore, the coefficients of these polynomials satisfy some form of positivity. For this, Schiffmann and Mozgovoy [52, 40] relate the polynomial A_{g,n}(t, ..., t) ∈ Q[t] with the (compactly supported) Poincaré polynomial of moduli spaces of semistable Higgs bundles over a genus g smooth complex projective curve.
Theorem 5.26 (Schiffmann [52]). Let X_C be a smooth complex projective curve of genus g and H^ss_{X_C}(n, d) denote the moduli space of semistable Higgs bundles over X_C of coprime rank and degree. Then

Σ_{i≥0} dim H^i_c(H^ss_{X_C}(n, d), Q) t^i = t^{2(1+(g−1)n^2)} A_{g,n}(t, ..., t).
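Two quick sanity checks (ours, not from the text): for coprime (n, d) and g ≥ 1, the exponent 2(1 + (g − 1)n^2) is precisely the complex dimension of H^ss_{X_C}(n, d); and in the degenerate case (g, n) = (0, 1), which may sit outside the hypotheses as stated, H^ss_{P^1}(1, d) is a single point, since O(d) is the only degree-d line bundle and H^0(P^1, ω_{P^1}) = 0 leaves no room for a Higgs field, while A_{0,1} = 1 because O(d) is the unique (absolutely indecomposable) rank one bundle of degree d over any F_q. Both sides of the identity are then equal to 1.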
We note that for coprime (n, d), the notions of absolutely indecomposable and indecomposable coincide. In fact, Schiffmann's proof of this theorem is inspired by the work of Crawley-Boevey and Van den Bergh: it involves relating A g,n to the point count of moduli spaces of semistable Higgs bundles on a curve over a finite field (provided the characteristic is sufficiently large) by fitting this moduli space into a family over A 1 . Indeed the forgetful map from the stack of stable Higgs bundles to the stack of vector bundles does not land in the indecomposable locus (for example, consider Exercise 4.10). Therefore, one needs a slightly perturbed model of the Higgs bundle moduli space to compare with indecomposable vector bundles.
To construct such a family, Schiffmann uses a variant of the functorial construction of moduli of sheaves due to Álvarez-Cónsul and King [1] which depends on a choice of two polarising line bundles (L_1, L_2) on X (rather than two twists of the same bundle); the choice of two line bundles enables a construction of a family of algebraic symplectic reductions Y → A^1 over any field k such that Y_0 = H^ss_X(n, d) and, moreover, the fibre Y_1 can be compared with indecomposable vector bundles. More precisely, Y is constructed as the GIT quotient of the preimage of a line under an algebraic moment map on the cotangent bundle of an affine variety. The affine variety in question arises as a closed subvariety in the representation space of a Kronecker quiver with h^0(L_2^∨ ⊗ L_1) arrows, where the dimension vector is determined by the class (n, d) and the degrees of the pair of line bundles using the Euler form for sheaves on X (for the detailed construction, see [52, §6.3-6.8]).
After constructing the family Y → A 1 with Y 0 = H ss X (n, d), Schiffmann shows that over any finite field F q of sufficiently large characteristic the following statements hold:
(1) the schemes Y 0 and Y 1 are smooth (which uses standard properties about GIT quotients of free group actions, see [52, Lemma 6.5]), (2) the point counts of Y 0 and Y 1 over F q coincide (which is proved by using a contracting G m -action on Y, see [52, Proposition 6.9]),
(3) the point count of Y_1 is polynomial (this is proved by relating it to the count A_{n,d}(X) by the following formula, for details see [52, Lemma 6.4 and §6.9]):
(5.5) |Y_1(F_q)| = q^{1+(g−1)n^2} A_{n,d}(X).
The moduli space of stable Higgs bundles over a finite field is already known to be cohomologically pure (for example, see [20, §1.3]) and so (5.5) also gives an explicit formula for the ℓ-adic Poincaré polynomial of the moduli space of stable Higgs bundles on X. Finally, by spreading out a smooth projective curve X_Q of genus g defined over Q to some localisation R := Z[1/N] of the integers, one can relate the ℓ-adic Poincaré polynomial of H^ss_{X_Q}(n, d) with that of the moduli space H^ss_{X_R ×_R F_q}(n, d) of the base change of X_Q to a finite field F_q of large characteristic, and by the comparison theorem one can also relate this to the singular Poincaré polynomial (with C-coefficients) of the base change X_C = X_R ×_R C (in the above, we use cohomology with compact supports). Since the diffeomorphism class of the complex variety H^ss_{X_C}(n, d) is independent of the choice of smooth projective curve X_C of genus g (as they are all diffeomorphic to the genus g character variety for GL_n), this enables Schiffmann to relate A_{g,n} with the Poincaré polynomial of H^ss_{X_C}(n, d) for any X_C (see [52, §6.10] for further details).
Definition 2.1. A k-representation of Q is a tuple W := ((W_v)_{v∈V}, (φ_a)_{a∈A}) where W_v is a finite-dimensional k-vector space for each vertex v ∈ V, and φ_a : W_{t(a)} → W_{h(a)} is a k-linear map for each arrow a ∈ A.
Exercise 2.3. For two k-representations W := ((W_v)_{v∈V}, (φ_a)_{a∈A}) and W′ := ((W′_v)_{v∈V}, (φ′_a)_{a∈A}) of a quiver Q, show that a tuple φ = (φ_a)_{a∈A} ∈ ⊕_{a∈A} Hom(W_{t(a)}, W′_{h(a)}) determines a representation of Q on e(W, W′, φ) := (W′_v ⊕ W_v)_{v∈V} fitting into a short exact sequence of quiver representations 0 → W′ → e(W, W′, φ) → W → 0. Show that this defines a map β : ⊕_{a∈A} Hom(W_{t(a)}, W′_{h(a)}) → Ext^1_Q(W, W′) which fits in an exact sequence
0 → Hom_Q(W, W′) → ⊕_{v∈V} Hom(W_v, W′_v) --α--> ⊕_{a∈A} Hom(W_{t(a)}, W′_{h(a)}) --β--> Ext^1_Q(W, W′) → 0,
where α is defined by α((f_v)_{v∈V}) := (f_{h(a)} ∘ φ_a − φ′_a ∘ f_{t(a)})_{a∈A}. In particular, deduce that dim Hom_Q(W, W′) − dim Ext^1_Q(W, W′) = ⟨dim W, dim W′⟩_Q.
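The last identity is easy to check numerically. The following minimal numpy sketch is our own illustration (Kronecker quiver with two vertices and two parallel arrows, randomly chosen structure maps): it builds the matrix of α on flattened bases and compares dim ker α − dim coker α with the Euler form of the dimension vectors.

import numpy as np

# Kronecker quiver: vertices {1, 2}, two parallel arrows a : 1 -> 2.
arrows = [(1, 2), (1, 2)]

def alpha_matrix(d, d2, phi, phi2):
    # Matrix of alpha: (+)_v Hom(W_v, W'_v) -> (+)_a Hom(W_{t(a)}, W'_{h(a)}),
    # (f_v)_v |-> (f_{h(a)} phi_a - phi'_a f_{t(a)})_a, expressed on flattened bases.
    cols = []
    for v in sorted(d):
        for i in range(d2[v]):
            for j in range(d[v]):
                f = {w: np.zeros((d2[w], d[w])) for w in d}
                f[v][i, j] = 1.0
                image = [f[h] @ phi[a] - phi2[a] @ f[t]
                         for a, (t, h) in enumerate(arrows)]
                cols.append(np.concatenate([m.ravel() for m in image]))
    return np.array(cols).T

d, d2 = {1: 1, 2: 1}, {1: 2, 2: 1}            # dim W = (1, 1), dim W' = (2, 1)
rng = np.random.default_rng(0)
phi = [rng.standard_normal((d[h], d[t])) for (t, h) in arrows]     # maps of W
phi2 = [rng.standard_normal((d2[h], d2[t])) for (t, h) in arrows]  # maps of W'

A = alpha_matrix(d, d2, phi, phi2)
rank = np.linalg.matrix_rank(A)
hom, ext = A.shape[1] - rank, A.shape[0] - rank   # dim ker alpha, dim coker alpha
euler = sum(d[v] * d2[v] for v in d) - sum(d[t] * d2[h] for (t, h) in arrows)
print(hom - ext == euler)                         # True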
Example 2.10. For any quiver Q, there are simple representations S(v) indexed by the vertices of V, where S(v)_w is zero for all w ≠ v and S(v)_v = k and all the linear maps are zero. In fact, if Q is a quiver without oriented cycles, then these are the only simple representations (see [9, Proposition 1.3.1]).
is a G m -gerbe over M θ−gs d (Q), we can pull this gerbe back along Spec k → M θ−gs d (Q) to obtain a type map T : M θ−gs d (Q)(k) → Br(k) := H 2 et (k, G m ).
ω R (−, −) := −ImH(−, −); then ω R is a (smooth) symplectic form on the manifold Rep d (Q). In fact, the complex structure on Rep d (Q) is compatible with this form, and so Rep d (Q) is naturally a (flat) Kähler manifold. Moreover, the form ω R is preserved by the action of U d , as H is U d -invariant. Definition 2.27. A moment map for a symplectic action of a compact Lie group K on a smooth symplectic manifold (M, ω) is a smooth map µ R : M → k * := Lie(K) * which is equivariant with respect to the given K-action on M and the coadjoint action of K on k * and lifts the infinitesimal action in the sense that d m µ R (ξ) · B = ω m (B m , ξ) for all m ∈ M and ξ ∈ T m M and B ∈ k, where B m ∈ T m M denotes the infinitesimal action of B on m.
Definition 4.1 (Doubled quiver). The double of a quiver Q = (V, A, h, t) is the quiver Q̄ = (V, Ā, h̄, t̄) where Ā = A ⊔ A* for A* := {a* : h(a) → t(a)}_{a∈A}.
where B_X = (B_{h(a)} X_a − X_a B_{t(a)})_{a∈Ā} is the infinitesimal action of B on (X_a)_{a∈Ā}. This algebraic moment map is a GL_d-equivariant morphism satisfying the infinitesimal lifting property d_X µ(η) · B = ω(B_X, η). The Killing form on the Lie algebra of each general linear group induces an identification gl_d ≅ gl_d^*, so we can view the moment map as a morphism µ : Rep_d(Q̄) → gl_d given by µ(X) = Σ_{a∈A} [X_a, X_{a*}]. The group G_d = GL_d/∆ has Lie algebra g_d := {(B_v)_{v∈V} ∈ gl_d : Σ_{v∈V} Tr(B_v) = 0} consisting of tuples of matrices with total trace zero. We note that the image of this moment map lies in g_d.
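As a toy illustration (ours, not from the text) of the displayed formula and of why the image of µ lies in g_d (the trace of a commutator vanishes), consider the double of the Jordan quiver, one vertex with one loop a: a representation of dimension d is then a pair of d × d matrices and the moment map is a single commutator.

import numpy as np

rng = np.random.default_rng(1)
d = 4
X, Y = rng.standard_normal((d, d)), rng.standard_normal((d, d))   # X_a and X_{a*}
mu = X @ Y - Y @ X                                                 # mu(X) = [X_a, X_{a*}]
print(abs(np.trace(mu)) < 1e-12)                                   # image lies in the trace-zero part g_d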
Definition 4.2. Let χ : GL_d → G_m be a character and let η ∈ gl_d be a coadjoint fixed point; then GL_d acts on µ^{−1}(η) by the equivariance property of the moment map. The algebraic symplectic reduction of the GL_d-action at level η with respect to χ is the GIT quotient µ^{−1}(η) //_χ GL_d.
Remark 4.3. A tuple η = (η_v)_{v∈V} ∈ k^V determines an adjoint fixed point η = (η_v Id_{d_v})_{v∈V} ∈ gl_d(k). Moreover, we have that µ^{−1}(η) = Rep_d(Q, R_η) is the subvariety of Rep_d(Q) of representations satisfying the relations R_η given by Σ_{a∈A} [x_a, x_{a*}] = (η_v Id_{d_v})_{v∈V}.
Lemma 4.5. Let θ be a generic stability parameter with respect to d; then for a field k of characteristic zero or sufficiently large prime characteristic, we have Rep_d(Q, R_θ) = Rep_d(Q, R_θ)^{θ−ss} = Rep_d(Q, R_θ)^{θ−gs}.
4.3. Moduli spaces of Higgs bundles. Let X be a smooth projective complex curve and fix a rank n and degree d. Then the moduli space of Higgs bundles can be viewed as a hyperkähler analogue of the moduli space M = M^ss_X(n, d) of semistable vector bundles of rank n and degree d over X. By the gauge theoretic construction, M is homeomorphic to a symplectic reduction of the action of the unitary gauge group G on the space of unitary connections A. In this section, we upgrade this to a hyperkähler setting by considering the action of G on the cotangent bundle T*A. This will give us a moduli space H = H^ss_X(n, d) of semistable Higgs bundles which contains the cotangent bundle T*M as a dense open subset. The deformation theory of vector bundles gives a description of the tangent spaces to M at an isomorphism class [E] of a stable locally free sheaf: T_{[E]}M ≅ Ext^1(E, E) ≅ H^1(X, End(E)), and by Serre duality, we have T*_{[E]}M ≅ H^0(X, End(E) ⊗ ω_X). The elements in this cotangent space are holomorphic Higgs fields on E in the sense of the following definition.
Remark 4.9. Let E be a semistable vector bundle corresponding to a point in M := M^ss_X(n, d); then for any Φ ∈ T*_{[E]}M, we note that (E, Φ) is a semistable Higgs bundle. In fact, we have an inclusion T*M ⊂ H and we can view H as a hyperkähler analogue of M analogously to the notion of a hyperkähler analogue of the GIT quotient of a complex affine space (see [48] and Remark 4.6). We recall that the quiver moduli space M^{θ−ss}_d(Q) has a hyperkähler analogue given by the moduli space M^{θ−ss}_d(Q, R_0) of representations of the doubled quiver satisfying the equations R_0 imposed by the zero level set of the complex moment map.
Exercise 4.12. By using the quaternionic relations between the 3 complex structures, show that there are 4 possible types of branes: BBB, BAA, ABA and AAB.
Definition 4.13. For a quiver Q = (V, A, h, t), a pair of automorphisms σ
Theorem 4.15 ([3,4,7,8]). Let H := H G be a smooth Higgs moduli space and σ G : G → G and σ X : X → X be anti-holomorphic involutions. Then there are induced involutions σ G and σ X on H such that (1) H σG is a BAA-brane, (2) H σX is a ABA-brane, (3) H σG•σX is a AAB-brane.
Exercise 5.2. For an F_q-representation W of Q of dimension d prove the following.
Lemma 5.15 (Burnside's formula). Let G be a finite group acting on a finite set Y; then

|Y/G| := (1/|G|) Σ_{g∈G} |Y^g| = (1/|G|) Σ_{y∈Y} |Stab_G(y)|.

Now we can state and prove the main result of this section.

Proposition 5.16 (Crawley-Boevey and Van den Bergh). Let d be indivisible and θ be generic with respect to d. Then for a prime p ≫ 0 and q a power of p, we have A_{Q,d}(q) = q^{−e} |X(F_q)|
F q -representations of dimension d are indecomposable if and only if they are absolutely indecomposable. We let Rep d (Q) a.i. denote the constructible subset of absolutely indecomposable d-dimensional representations of Q. By definition of A Q,d (q), we have A Q,d (q) := | Rep d (Q) a.i. (F q )/G d (F q )| and by Burnside's formula (Lemma 5.15) this equals
acts trivially. Since X is an algebraic symplectic reduction of the action on Rep d (Q) at a regular value, we have dim X = 2 dim M θ−ss d (Q). Thus 1 − d, d Q = 1 2 dim X, which completes the proof.
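A quick numerical illustration of Burnside's formula (Lemma 5.15), which drives the orbit count above, on a toy action that is ours and not from the text: the cyclic group of rotations of a square acting on 2-colourings of its vertices, where both sides of the formula give the 6 colourings up to rotation.

from itertools import product

colourings = list(product([0, 1], repeat=4))
rotations = [lambda c, r=r: c[r:] + c[:r] for r in range(4)]   # the cyclic group C_4

fixed = sum(1 for g in rotations for c in colourings if g(c) == c)   # sum of |Y^g|
orbits = {frozenset(g(c) for g in rotations) for c in colourings}    # orbits counted directly

print(fixed // len(rotations), len(orbits))   # both equal 6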
Lemma 5.22. Let θ be generic with respect to d; then there is a non-empty open subset U ⊂ Spec Z over which the morphism f : X → A^1 is smooth.
The symmetrised Euler form is defined by (d, d′)_Q := ⟨d, d′⟩_Q + ⟨d′, d⟩_Q and the associated Tits quadratic form is defined by q_Q(d) := ⟨d, d⟩_Q. By Exercise 2.3, the Euler form relates the dimensions of the Hom and Ext groups: ⟨dim W, dim W′⟩_Q = dim Hom_Q(W, W′) − dim Ext^1_Q(W, W′). In fact, we can view this as the Euler characteristic of Hom^•_Q(W, W′) as all higher Ext groups vanish for quiver representations by Exercise 2.6 below.

Remark 2.5.
A quiver Q is of finite representation type if there are only finitely many isomorphism classes of indecomposable representations of Q.
[1] L. Álvarez-Cónsul and A. King. A functorial construction of moduli of sheaves. Invent. Math., 168(3):613-666, 2007.
[2] M. F. Atiyah and R. Bott. The Yang-Mills equations over Riemann surfaces. Phil. Trans. R. Soc. Lond. Ser. A, 308(1505):523-615, 1982.
[3] D. Baraglia and L. P. Schaposnik. Higgs bundles and (A,B,A)-branes. Commun. Math. Phys., 331:1271-1300, 2014.
[4] D. Baraglia and L. P. Schaposnik. Real structures on moduli spaces of Higgs bundles. Adv. Theor. Math. Phys., 20(3):525-551, 2016.
[5] I. N. Bernstein, I. M. Gelfand, and V. A. Ponomarev. Coxeter functors, and Gabriel's theorem. Uspehi Mat. Nauk, 28(2):19-33, 1973.
[6] A. Białynicki-Birula. Some theorems on actions of algebraic groups. Ann. of Math. (2), 98:480-497, 1973.
[7] I. Biswas, O. García-Prada, and J. Hurtubise. Pseudo-real principal G-bundles over a real curve. J. Lond. Math. Soc. (2), 93(1):47-64, 2016.
[8] I. Biswas and O. García-Prada. Anti-holomorphic involutions of the moduli spaces of Higgs bundles. J. Éc. polytech. Math., 2:35-54, 2015.
[9] M. Brion. Representations of quivers. In Geometric methods in representation theory. I, volume 24 of Sémin. Congr., pages 103-144. Soc. Math. France, Paris, 2012.
[10] W. Crawley-Boevey. Lectures on representations of quivers. Available at www.math.uni-bielefeld.de/ wcrawley/quivlecs.pdf, 1992.
[11] W. Crawley-Boevey. Geometry of the moment map for representations of quivers. Compositio Math., 126(3):257-293, 2001.
[12] W. Crawley-Boevey and M. Van den Bergh. Absolutely indecomposable representations and Kac-Moody Lie algebras. Invent. Math., 155(3):537-559, 2004. With an appendix by Hiraku Nakajima.
[13] S. K. Donaldson. A new proof of a theorem of Narasimhan and Seshadri. J. Diff. Geom., 18:269-277, 1983.
[14] E. Franco, M. Jardim, and S. Marchesi. Branes in the moduli space of framed instantons. arXiv:1504.05883, 2015.
[15] V. Ginzburg. Lectures on Nakajima's quiver varieties. arXiv:0905.0686, 2009.
[16] G. Harder. Chevalley groups over function fields and automorphic forms. Ann. of Math. (2), 100:249-306, 1974.
[17] G. Harder and M. S. Narasimhan. On the cohomology groups of moduli spaces of vector bundles on curves. Math. Ann., 212:215-248, 1974/75.
[18] T. Hausel, E. Letellier, and F. Rodriguez-Villegas. Positivity for Kac polynomials and DT-invariants of quivers. Ann. of Math. (2), 177(3):1147-1168, 2013.
[19] T. Hausel and F. Rodriguez-Villegas. Mixed Hodge polynomials of character varieties. Invent. Math., 174(3):555-624, 2008. With an appendix by Nicholas M. Katz.
[20] T. Hausel and F. Rodriguez Villegas. Cohomology of large semiprojective hyperkähler varieties. Astérisque, 370:113-156, 2015.
[21] N. J. Hitchin. The self-duality equations on a Riemann surface. Proc. London Math. Soc. (3), 55(1):59-126, 1987.
[22] N. J. Hitchin, A. Karlhede, U. Lindström, and M. Roček. Hyper-Kähler metrics and supersymmetry. Comm. Math. Phys., 108(4):535-589, 1987.
[23] V. Hoskins. Stratifications associated to reductive group actions on affine spaces. Q. J. Math., 65(3):1011-1047, 2014.
[24] V. Hoskins and F. Schaffhauser. Group actions on quiver varieties and applications. arXiv:1612.06593, 2016.
[25] V. Hoskins and F. Schaffhauser. Rational points of quiver moduli spaces. arXiv:1704.08624, 2017.
[26] J. Hua. Counting representations of quivers over finite fields. J. Algebra, 226(2):1011-1033, 2000.
[27] D. Huybrechts and M. Lehn. The geometry of moduli spaces of sheaves. Cambridge Mathematical Library. Cambridge University Press, Cambridge, second edition, 2010.
[28] V. G. Kac. Infinite root systems, representations of graphs and invariant theory. Invent. Math., 56(1):57-92, 1980.
[29] V. G. Kac. Some remarks on representations of quivers and infinite root systems. In Representation theory, II (Proc. Second Internat. Conf., Carleton Univ., Ottawa, Ont., 1979), volume 832 of Lecture Notes in Math., pages 311-327. Springer, Berlin, 1980.
[30] V. G. Kac. Root systems, representations of quivers and invariant theory. In Invariant theory (Montecatini, 1982), volume 996 of Lecture Notes in Math., pages 74-108. Springer, Berlin, 1983.
[31] A. Kapustin and E. Witten. Electric-magnetic duality and the geometric Langlands program. Commun. Number Theory Phys., 1(1):1-236, 2007.
[32] G. Kempf and L. Ness. The length of vectors in representation spaces. In Algebraic Geometry, volume 732 of Lecture Notes in Mathematics, pages 233-243. Springer Berlin / Heidelberg, 1979.
[33] A. D. King. Moduli of representations of finite dimensional algebras. Quart. J. Math., 45:515-530, 1994.
[34] F. Kirwan. Cohomology of Quotients in Symplectic and Algebraic Geometry. Number 31 in Mathematical Notes. Princeton University Press, 1984.
[35] A. Langer. Semistable sheaves in positive characteristic. Ann. of Math. (2), 159(1):251-276, 2004.
[36] L. Le Bruyn and C. Procesi. Semisimple representations of quivers. Trans. Amer. Math. Soc., 317(2):585-598, 1990.
[37] J. Marsden and A. Weinstein. Reduction of symplectic manifolds with symmetry. Rep. Math. Phys., 5:121-130, 1974.
[38] A. Mellit. Poincaré polynomials of moduli spaces of Higgs bundles and character varieties (no punctures). arXiv:1707.04214, 2017.
[39] J. S. Milne. Lectures on étale cohomology. Available at www.jmilne.org/math/, 2013.
[40] S. Mozgovoy and O. Schiffmann. Counting Higgs bundles and type A quiver bundles. arXiv:1705.04849, 2017.
[41] D. Mumford, J. Fogarty, and F. Kirwan. Geometric Invariant Theory. Springer, third edition, 1993.
[42] H. Nakajima. Instantons on ALE spaces, quiver varieties, and Kac-Moody algebras. Duke Math. J., 76:365-416, 1994.
[43] M. Narasimhan and C. Seshadri. Stable and unitary vector bundles on a compact Riemann surface. Ann. of Math., 82(2):540-567, 1965.
[44] L. Ness. A stratification of the null cone via the moment map (with an appendix by D. Mumford). Amer. J. Math., 106(6):1281-1329, 1984.
[45] P. E. Newstead. Characteristic classes of stable bundles of rank 2 over an algebraic curve. Trans. Am. Math. Soc., 169:337-345, 1972.
[46] P. E. Newstead. Introduction to moduli problems and orbit spaces, volume 51 of Tata Institute of Fundamental Research Lectures on Mathematics and Physics. Tata Institute of Fundamental Research, Bombay; by the Narosa Publishing House, New Delhi, 1978.
[47] N. Nitsure. Construction of Hilbert and Quot schemes. (Fundamental algebraic geometry. Grothendieck's FGA explained). Math. Surveys Monogr., 123:105-137, 2005.
[48] N. Proudfoot. Hyperkähler analogues of Kähler quotients. PhD thesis, U.C. Berkeley, 2004.
[49] M. Reineke. The Harder-Narasimhan system in quantum groups and cohomology of quiver moduli. Invent. Math., 152(2):349-368, 2003.
[50] M. Reineke. Every projective variety is a quiver Grassmannian. Algebr. Represent. Theory, 16(5):1313-1314, 2013.
[51] O. Schiffmann. Lectures on Hall algebras. In Geometric methods in representation theory. II, volume 24 of Sémin. Congr., pages 1-141. Soc. Math. France, Paris, 2012.
[52] O. Schiffmann. Indecomposable vector bundles and stable Higgs bundles over smooth projective curves. Ann. of Math. (2), 183(1):297-362, 2016.
[53] O. Schiffmann. Kac polynomials and Lie algebras associated to quivers and curves. arXiv:1802.09760, 2018.
[54] J. P. Serre. Géométrie algébrique et géométrie analytique. Ann. Inst. Fourier, 6:1-42, 1956.
[55] C. S. Seshadri. Space of unitary vector bundles on a compact Riemann surface. Ann. of Math. (2), 85:303-336, 1967.
[56] C. T. Simpson. Higgs bundles and local systems. Inst. Hautes Études Sci. Publ. Math., 75:5-95, 1992.
[57] C. T. Simpson. Moduli of representations of the fundamental group of a smooth projective variety. Inst. Hautes Études Sci. Publ. Math., 79:47-129, 1994.
[58] K. Uhlenbeck and S.-T. Yau. On the existence of Hermitian-Yang-Mills connections in stable vector bundles. Comm. Pure Appl. Math., 39(S, suppl.):S257-S293, 1986. Frontiers of the mathematical sciences: 1985 (New York, 1985).
Freie Universität Berlin, Arnimallee 3, Raum 011, 14195 Berlin, Germany. E-mail address: [email protected]
|
[] |
[
"BeeHIVE: Behavioral Biometric System based on Object Interactions in Smart Environments",
"BeeHIVE: Behavioral Biometric System based on Object Interactions in Smart Environments"
] |
[
"Klaudia Krawiecka [email protected] \nUniversity of Oxford Oxford\nUnited Kingdom\n",
"Simon Birnbach [email protected] \nUniversity of Oxford Oxford\nUnited Kingdom\n",
"Simon Eberz [email protected] \nUniversity of Oxford Oxford\nUnited Kingdom\n",
"Ivan Martinovic [email protected] \nUniversity of Oxford Oxford\nUnited Kingdom\n"
] |
[
"University of Oxford Oxford\nUnited Kingdom",
"University of Oxford Oxford\nUnited Kingdom",
"University of Oxford Oxford\nUnited Kingdom",
"University of Oxford Oxford\nUnited Kingdom"
] |
[] |
The lack of standard input interfaces in the Internet of Things (IoT) ecosystems presents a challenge in securing such infrastructures. To tackle this challenge, we introduce a novel behavioral biometric system based on naturally occurring interactions with objects in smart environments. This biometric leverages existing sensors to authenticate users without requiring any hardware modifications of existing smart home devices. The system is designed to reduce the need for phone-based authentication mechanisms, on which smart home systems currently rely. It requires the user to approve transactions on their phone only when the user cannot be authenticated with high confidence through their interactions with the smart environment.We conduct a real-world experiment that involves 13 participants in a company environment, using this experiment to also study mimicry attacks on our proposed system. We show that this system can provide seamless and unobtrusive authentication while still staying highly resistant to zero-effort, video, and in-person observation-based mimicry attacks. Even when at most 1% of the strongest type of mimicry attacks are successful, our system does not require the user to take out their phone to approve legitimate transactions in more than 80% of cases for a single interaction. This increases to 92% of transactions when interactions with more objects are considered.
| null |
[
"https://arxiv.org/pdf/2202.03845v2.pdf"
] | 246,652,598 |
2202.03845
|
b299f9802fd543316a5bd328fc74291f2fdb1e48
|
BeeHIVE: Behavioral Biometric System based on Object Interactions in Smart Environments
Klaudia Krawiecka [email protected]
University of Oxford Oxford
United Kingdom
Simon Birnbach [email protected]
University of Oxford Oxford
United Kingdom
Simon Eberz [email protected]
University of Oxford Oxford
United Kingdom
Ivan Martinovic [email protected]
University of Oxford Oxford
United Kingdom
BeeHIVE: Behavioral Biometric System based on Object Interactions in Smart Environments
The lack of standard input interfaces in the Internet of Things (IoT) ecosystems presents a challenge in securing such infrastructures. To tackle this challenge, we introduce a novel behavioral biometric system based on naturally occurring interactions with objects in smart environments. This biometric leverages existing sensors to authenticate users without requiring any hardware modifications of existing smart home devices. The system is designed to reduce the need for phone-based authentication mechanisms, on which smart home systems currently rely. It requires the user to approve transactions on their phone only when the user cannot be authenticated with high confidence through their interactions with the smart environment.We conduct a real-world experiment that involves 13 participants in a company environment, using this experiment to also study mimicry attacks on our proposed system. We show that this system can provide seamless and unobtrusive authentication while still staying highly resistant to zero-effort, video, and in-person observation-based mimicry attacks. Even when at most 1% of the strongest type of mimicry attacks are successful, our system does not require the user to take out their phone to approve legitimate transactions in more than 80% of cases for a single interaction. This increases to 92% of transactions when interactions with more objects are considered.
INTRODUCTION
Figure 1: An overview of the BeeHIVE system. As the user interacts with the printer, sensors embedded in smart objects surrounding the user and the printer record these interactions. Physical signals generated from the user's movements are picked up by sensors such as accelerometers, pressure sensors and microphones, and are used to profile them. The system authenticates the user before allowing them to perform certain actions, such as payments.

Projections indicate that by the end of 2021 smart environments will account for over 35% of all households in North America and over 20% in Europe [26]. The growing number of smart devices that are incorporated into such environments leads to a wider presence of a variety of sensors. These sensors can be leveraged to improve the security of smart environments by providing essential input about user activities. In many environments, the control over specific devices or financial transactions should only be available for an authorized group of users. For example, smart windows in a child's bedroom should not open when the parent is not present, and the child should not be able to order hundreds of their favorite candy bars using a smart refrigerator. Similarly, not all office workers should have access to a smart printer's history, nor should the visitors in a guesthouse be able to change credentials on smart devices that do not belong to them. But while there is a need for
authentication, smart devices offer limited interfaces for implementing security measures. This can be mitigated by requiring that the user initiates or approves every transaction through a privileged companion app running on the user's smartphone. However, this can be very cumbersome as the user needs to have their phone at hand and thus negates many advantages that smart environments offer in the first place.
On-device sensors such as microphones, passive infrared (PIR) sensors, and inertial measurement units (IMUs) have been extensively used to recognize different activities performed by users in the area of Human Activity Recognition (HAR) [15]. Prior work has focused on using one type of input data to authenticate users, such as voice, breath, or gait [10,20,25,27].
In order to make attacks more difficult, several systems have been proposed that rely on diverse types of inputs [1,9,13,17,19,23]. While these approaches are promising, they often do not utilize the full potential of co-located heterogeneous devices in smart environments. In this paper, we propose the BeeHIVE system that uses sensor data collected during day-to-day interactions with physical objects to implicitly authenticate users without requiring users to change smart home hardware or adapt their behavior. This system can be used to complement phone-based authentication mechanisms that require users to explicitly approve transactions through privileged apps. By using BeeHIVE in conjunction with a phone-based authentication mechanism as a fallback, smart environments can become more seamless and unobtrusive for users without sacrificing their security.
We conducted a 13-person experiment in a company environment to explore the effectiveness of imitation attacks against our model. The proposed technique is assessed in three modes of operation that use (1) features from sensors placed on the object with which the user interacts, (2) features only from sensors on co-located objects, and (3) features from both on-device and co-located sensors. Overall, the outcome of our analysis shows that the system achieves desirable security properties, regardless of the number of smart office users or the environment configuration.
We make the following contributions in the paper:
• We propose a novel biometric based on interactions with physical objects in smart environments. • We collect a separate 13-person dataset in a company setting to study video-based and in-person imitation attacks. • We make all data and code needed to reproduce our results available online.
BACKGROUND AND RELATED WORK
Existing biometric authentication systems that utilize data collected from mobile and smart devices are generally categorized into single-biometric or multi-biometric approaches [1,9]. The systems from the first category collect inputs of a specific type (e.g., sounds, images, acceleration readings) and search for unique patterns. On the other hand, multi-biometric systems combine the data extracted from multiple sources to create unique signatures based on different sensor types. They provide more flexibility and lift a number of limitations posed by single-biometric systems, including dependency on certain types of equipment and environmental conditions. Moreover, they are less prone to mimicry attacks due to the complexity of spoofing multiple modalities simultaneously [32].
Single-biometric systems
The vast majority of existing commercial and non-commercial systems used in smart environment contexts [4,7,20,25] primarily rely on voice recognition to authenticate users. Since these systems are often vulnerable to voice spoofing and hijacking attacks [8,11,33,34], research efforts shifted towards hardening voice recognition systems by leveraging anti-spoofing mechanisms like proximity detection or second factors [7,20].
With the recent development of new types of Internet of Things and wearable devices, the possibility of using unconventional biometric traits has emerged. For instance, Chauhan et al. observed that microphones can be used to extract breathing acoustics when the user is present in a smart environment [10]. Similarly, the built-in accelerometers in mobile and IoT devices have been used to characterize gait or human body movements to facilitate authentication [5,21,22,27]. These approaches were the first steps taken to explore the full potential of smart environments to turn contextual and behavioral data into biometric traits for seamless authentication.
Multi-biometric systems
To improve adaptability and reduce the inaccuracy of single-biometric systems, various multi-biometric systems have been proposed [1,9,13,17,19,23]. One approach is to combine two biometric traits to fingerprint users [13,19,23]. For example, Olazabal et al. [23] proposed a biometric authentication system for smart environments that uses the feature-level fusion of voice and facial features. These solutions, however, still require users to actively participate (e.g. by shaking devices or repeating specific hand wave patterns) in the authentication process and rely on the presence of specific sensors in the smart environment. To address such limitations, the MUBAI system [1] employs multiple smart devices to extract various behavioral and contextual features based on well-known biometric traits such as facial features and voice recognition.
Interaction-based biometric systems
Interaction-based biometric systems have emerged from the observation that physical interactions with devices can uniquely identify users. Such systems have been widely discussed for mobile platforms [29]. Typically, on-device sensors are employed to measure touch dynamics or user gestures [18,28,29]. For example, users can be profiled based on how they pick up their phones or how they hold them [3]. Similar techniques have been used in smart environments; however, most of the existing solutions not only require the user to actively participate in the authentication process but also rely on a specific setup. Our goal is to introduce a biometric system that continuously and seamlessly authenticates the users while they are interacting with the devices around them without restrictions on sensor placement.
2.3.1 SenseTribute. Closest to our work is SenseTribute [14], which performs occupant identification by extracting signals from physical interactions using two on-device sensors-accelerometers and gyroscopes. Its main objective is to attribute physical activities to specific users. To cluster such activities, SenseTribute uses supervised and unsupervised learning techniques, and segments and ensembles multiple activities.
There is a palpable risk in real-world smart environments that users will attempt to execute actions that they are not authorized for. This requires means for not just identification, but also authentication. Therefore-in contrast to SenseTribute, which focuses on user identification-the main objective of our system is user authentication, for which we conduct a more extensive experiment evaluating various types of active attacks. In office and home environments, it is easy for anyone to observe interactions made by authorized users, and it is natural that, for example, kids may seek to imitate their parents. Going beyond previous work, we therefore evaluate the robustness of our system against mimicry attacks based on real-time observation or video recordings.
Furthermore, SenseTribute expects all objects to be equipped with sensors. However, this is not always a realistic assumption, as sensors are often deployed only near (but not on) interaction points. Thus, we propose a system that uses nearby sensors present in co-located IoT devices to authenticate user interactions.
SYSTEM DESIGN
The heterogeneous nature of smart devices makes it possible to sample different types of user interactions. The main purpose of this work is to show that such interactions with various objects in smart environments are distinctive and can be used to profile users. The expansion of smart devices, and hence smart environments, will soon make such methods necessary to quickly authorize certain activities, including payments or management of smart devices. Figure 1 shows an overview of the system design. The proposed BeeHIVE system is meant to complement existing app-based authentication mechanisms used to secure current smart home platforms. Our system authenticates the user through their interactions with the smart environment and only requires the user to approve transactions through the app as a fallback if it cannot authenticate the user with confidence itself. In this way, BeeHIVE can be used to reduce the reliance on these app-based authentication mechanisms without compromising on the security of the smart home platform.
Design goals
In order to inform the system design and evaluation methodology, we define the following design goals: Unobtrusiveness. The system should not require users to perform explicit physical actions for the purpose of authentication nor require them to modify their usual behavior.
Low false accept rate. As the system is designed to be used alongside app-based authentication, it should prioritize low false accept rates to avoid significantly weakening the security of the overall smart environment system.
Low friction. The system should provide a seamless experience to the user wherever possible. This means that false reject rates should be kept low to reduce the need of falling back on the usual app-based authentication of the underlying smart environment platform. However, this should not come at the cost of higher false accept rates.
No restrictions on sensor placement. The system should use data from existing sensors without making restrictions on their placement or orientation. This ensures that the system can be applied to existing deployments purely through software. In addition, the system should not require sensors on each object but instead use sensors on other nearby devices.
Robustness to imitation attacks. Due to the ease of observation in home environments, the system's error rates should not increase significantly even when subjected to imitation attacks.
System model
In this work, we consider smart environments where objects such as fridges or cupboards are augmented by smart devices that monitor their state and provide access to enhanced functionality. People naturally interact with many of these smart objects during their daily activities. Each activity consists of a set of intermediate tasks.
For instance, to prepare a meal, a user has to walk to the fridge and open it to collect ingredients. The user then has to walk to the cupboard to pick up the plates. Behavioral data of these tasks are measured with different types of sensors with which smart devices are frequently equipped. As some objects might not have any suitable sensors attached to them, we also consider nearby sensors to profile object interactions. This is particularly true for physical objects without smart capabilities (e.g., cupboards or drawers). In order to illustrate these different possible deployment settings, we consider three system configurations:
• On-object, where sensors are mounted directly on the object • Off-object, where only co-located sensor data are considered • Combined, which uses sensor data of both the device on the object as well as from co-located devices
We use sequences of interactions to increase confidence in system decisions. This way, the user can be better authenticated if they perform several tasks in succession. As a simplification, we focus on authenticating one user at a time and do not consider multiple users interacting with objects simultaneously. It is important to note that in our system a failed authentication does not mean that the user is barred from making transactions. Instead, they can simply not benefit from the seamless authentication provided by our system and are required to use their phone to approve the requested transaction.
Deployment scenarios
The need to authenticate users arises because physical access to smart objects does not imply authorization to use them. We consider scenarios in which children or visitors may abuse the trust of their parents or hosts to initiate sensitive operations through smart devices that are unwanted by the owners of said devices. These operations can include making payments to other people, ordering goods online, changing the configuration of smart devices, or accessing sensitive information stored on these devices. For example, a child might want to exploit the restocking mechanism of the fridge to order their favorite sweets while a visitor might unintentionally or intentionally access the viewing history of the smart TV and learn intimate details about their hosts.
Other possible settings include offices where smart devices are accessible to staff and visitors alike. Often these smart devices allow users to complete administrative tasks through them, such as reordering supplies or accessing the print job history of smart printers. But access to this functionality should be restricted to authorized personnel. In these cases, an implicit authentication of the person executing these tasks as done by our system can avoid cumbersome external authentication methods.
Adversary model
An adversary's (A) main objective is to convince the smart environment that they are a legitimate user (U). Such a misclassification can result in permitting A to execute on-device financial transactions or any other types of sensitive operations on behalf of U. We assume that A has physical access to the environment, but is otherwise an unprivileged user such as a child or a visitor. Moreover, A cannot tamper with the smart devices by, for example, connecting to the debug port to flash the device firmware. We also assume that smart devices and the user's smartphone are not compromised; thus, they can be considered a reliable data source. Based on these assumptions, we also exclude the possibility of the attacker interrupting the training phase, which could result in the generation of incorrect biometric signatures of authorized users. In order to achieve their goal, A may attempt to mimic the behavior of U to generate a matching biometric fingerprint. Successful mimicry attacks on various biometric systems have been previously demonstrated [16]. In our scenarios, we consider three types of such attacks: (1) zero-effort attackers who interact with the environment naturally without attempting to change their behavior, (2) in-person attacks in which A can observe legitimate users interacting with IoT devices in person, and (3) video-based attacks in which A possesses a video recording of the user interacting with the IoT devices in a smart environment. While in-person attacks give the possibility to inspect U's interactions more closely and potentially capture more details, recordings can provide additional time to learn U's behavior.
EXPERIMENTAL DESIGN
In order to evaluate the feasibility of authenticating users seamlessly based on their interactions with smart devices, we conducted an experiment in a smart office environment with thirteen participants. This experiment is further used to study attackers that attempt to copy the behavior of the legitimate user to execute mimicry attacks.
Data collection
For our experiment, we collected data from a wide range of typical smart home interactions using sensors similar to those already present in most smart environments. Since raw sensor data in smart devices are typically inaccessible for developers, we deploy Raspberry Pis equipped with the same types of sensors to simulate such an environment and study object interactions. We use a total of ten Raspberry Pis equipped with magnetic contact switches, USB microphones (recording sound pressure levels), and ICM20948 IMUs (providing an accelerometer, a gyroscope, and a magnetometer) to collect the data for the experiments. The Raspberry Pis are fitted to typical home appliances (e.g., fridge or coffee machine) and kitchen furniture (e.g., drawers or cupboards). The magnetic contact switches are used in place of a typical type of smart office device (i.e., a door/window contact sensor) and they provide the ground truth for the occurrence of interactions with smart objects (e.g., the opening of a kitchen cupboard augmented with a contact sensor). The IMUs measure the motion sensor data from the interaction (i.e., acceleration, gyroscopic motion, and orientation) and are being polled through the I2C interface of the Raspberry Pis. The inputs from the USB microphones are only used to calculate sound pressure levels, but no actual audio data is being stored. See Figure 2 for an example deployment of one of our measurement devices. The Raspberry Pis are connected to a smartphone running a wireless hotspot. The data is securely streamed to a remote server and is additionally stored locally on the devices. A mobile app running on the smartphone is used for labeling and timestamping each run of the experiment and provides time synchronization.
Recruitment of participants
We rented an office space and invited 13 employees of the same company. We conducted this experiment in adherence to local Covid-19 restrictions and social distancing was observed at all times. All the participants in our study were compensated for their time and effort. This project has been reviewed by and received clearance by the responsible research ethics committee at our university, reference number CS_C1A_20_014-1.
Mimicry attacks
Following Covid-19 regulations, this experiment is conducted remotely in the office kitchen of a hotel company. In this experiment, we use 8 devices and the device setup is completed by the participants. They are given a set of our Raspberry Pi sensor boards and they have to set up the devices themselves, according to the provided step-by-step user manual. An overview of the deployment and the room layout are shown in Figure 3. As object interactions, we consider in this experiment: 4 cupboards, 1 mini oven, 1 pull-out drawer, 1 microwave, and 1 coffee machine. Apart from the coffee machine, all of these interactions involve the opening and closing of the doors of the interaction point. To get the ground truth for the coffee machine interaction, the user first opens a lid on top of the coffee machine which is outfitted with a magnetic contact switch. The user then proceeds with pressing buttons on the coffee machine, before they end the interaction by closing the lid on top of the machine again.
Each of the participants performs 20 runs of interactions. Then, one of the participants is randomly chosen as the legitimate user and victim of the attack. The rest of the participants are split into two groups of six attackers who can observe the user's interactions with the smart environment. The first group can only observe the victim in-person, whereas the second group has access to video recordings of previous object interactions which they can study in their own time.
The participants from both groups of attackers then execute the same interactions as the victim, carefully trying to mimic the victim's behavior. The attackers from the first group (i.e., the inperson group) have to perform this attack on the same day when the observation took place. The participants in the second attack group can watch video recordings of the victim from different angles overnight and only have to execute the attack on the following day.
METHODS
In this paper, we formally define a task τ as a physical interaction initiated by the user with an object o. Each task can be modeled as a time series T = {x_1, x_2, ..., x_n}, which is constructed from the data collected by on-device sensors, including microphones, accelerometers, gyroscopes, and magnetometers. The variable x_t represents a physical signal generated by the user while they interact with the smart object at time t, in the form of a vector of sensor values. Depending on the combination of sensors on the devices, they can collect diverse inputs. For example, a smart refrigerator equipped with Inertial Measurement Units (IMUs) can collect acceleration values as vectors ⟨a_x, a_y, a_z⟩ when the user opens or closes its door. However, a smart coffee machine may be equipped with both an IMU and a microphone, which results in collecting more input data. Figure 4 presents the system overview and explains its processing pipeline. Base-learners are weak classifiers that are combined to form an ensemble to facilitate the decision-making process. When the user performs a sequence of tasks τ_1 to τ_s on several smart objects, the system extracts the features for these tasks from on-object sensors as well as sensors in proximity. Next, the features become an input to the base-learners corresponding to those tasks, resulting in predictions p_1 to p_s. These predictions can either indicate a probability that an observed sample belongs to a certain class or a concrete label from the set of labels L = {l_1, l_2, ..., l_u}, depending on the framework configuration. Finally, the meta-learner gathers all predictions made by all the base-learners and decides on the final prediction in the second-level prediction layer. This way, a smart environment can benefit from the heterogeneous character of smart devices and their built-in sensors by performing a decision-level fusion to improve the classification accuracy.
Preprocessing
The multitude of the user's interactions in a smart environment translates into physical signals that are received by the sensors of smart devices. Figure 5 presents the sensor readings when one of the participants interacts with the narrow cabinet during the experiment. While (a) shows the signal that the gyroscope sensor of the cabinet has captured, (b) reveals what has been registered by a co-located sensor. Co-located sensors are all sensors in proximity to an object that can capture physical signals originating from interaction with this object. The microphone on the wide cupboard recorded two events: opening and closing the door of the cabinet. These movements are part of the task τ performed on smart object o. The start and end of τ are time-stamped by the contact sensors and denoted as t_0 and t_1 respectively (marked with red dotted lines in Figure 5). The signals from o are segmented between t_0 − 1 and t_1 + 1 before proceeding to the feature extraction phase. The time-series signals are converted to values characteristic of the h sensor types. As a result, for each τ, a set of corresponding matrices M_1, M_2, ..., M_h exists that contains vectors of length n of different sensor values between t_0 and t_1. Thus, with i ∈ {1, 2, ..., h} and the sensor components j ∈ {1, 2, ..., k_i}, we refer to a single matrix as M_i with columns c_1, c_2, ..., c_{k_i}. The number of columns for M_i is determined by the sensor components (e.g., the three axes of a gyroscope sensor s_1 give k_1 = 3). For a smart object with two corresponding sensors, two such matrices will be generated. The variable c_{j,t} represents a component-specific input value for a column c_j generated by a physical signal received by the sensor. These matrices are then passed as input to the feature extraction function.
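The segmentation step can be illustrated with a minimal Python sketch; the function and field names are our own and not taken from the paper, and the one-second padding mirrors the margin described above.

import numpy as np

def segment_interaction(stream, t0, t1, pad=1.0):
    # Keep the samples whose timestamps fall in [t0 - pad, t1 + pad];
    # t0/t1 come from the contact switch marking the start/end of the task.
    mask = (stream["time"] >= t0 - pad) & (stream["time"] <= t1 + pad)
    return stream["values"][mask]

# Toy usage with a fake 100 Hz three-axis accelerometer stream.
t = np.arange(0.0, 10.0, 0.01)
stream = {"time": t, "values": np.random.default_rng(0).standard_normal((t.size, 3))}
window = segment_interaction(stream, t0=4.2, t1=6.8)   # samples between 3.2 s and 7.8 s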
Feature extraction
For each physical interaction with an object o, the system extracts h matrices with m = Σ_{i=1}^{h} k_i columns of time-series data segments of particular sensor components of this object and of co-located objects. Based on these columns, the features are extracted. More formally, for each smart object the system extracts a set of features
F = { f_j(c[s]) : 1 ≤ j ≤ d },   where c[s] = c_{t_0} . . . c_{t_1}    (1)
F represents a set of feature values extracted from a component c (e.g., one axis of an accelerometer) in M_i of length n. d denotes the total number of different functions that extract features.
The features retrieved from physical signals are listed in Table 1. The statistical functions are computed from each column of M_i, which contains sensor values extracted between t_0 − 1 and t_1 + 1. We add windows of a second to account for signals that originate from the starting and ending movements. These features are categorized into two groups: time-domain features and frequency-domain features. The majority of the extracted features originate from the time domain because such features are typically well-suited for systems that process large volumes of data due to their low computational complexity. These features help to analyze the biomechanical effect of a given interaction on physical signals and identify characteristics of movements [24]. For microphone data, we extract sound pressure levels (SPLs) instead of actual audio recordings. Thus, statistical functions are applied to SPL values.
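A hedged sketch of how the time-domain features of Table 1 could be computed for a single sensor-component column is shown below (Python with NumPy/SciPy); the exact feature definitions used by the system may differ slightly.

```python
# Sketch of the time-domain feature functions listed in Table 1, applied to one column.
import numpy as np
from scipy.signal import find_peaks
from scipy.stats import kurtosis, skew


def time_domain_features(x):
    x = np.asarray(x, dtype=float)
    diffs = np.diff(x)
    rms = np.sqrt(np.mean(x ** 2))
    return {
        "min": x.min(),
        "max": x.max(),
        "mean": x.mean(),
        "median": np.median(x),
        "std": x.std(),
        "var": x.var(),
        "kurtosis": kurtosis(x),
        "skewness": skew(x),
        "shape_factor": rms / np.mean(np.abs(x)),
        "abs_energy": np.sum(x ** 2),
        "mean_2nd_derivative": np.mean(np.diff(x, n=2)) if x.size > 2 else 0.0,
        "mean_abs_change": np.mean(np.abs(diffs)),
        "sum_abs_change": np.sum(np.abs(diffs)),
        "n_peaks": len(find_peaks(x)[0]),
    }


features = time_domain_features(np.random.randn(200))  # one column of a segment
```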
Feature selection
For each smart object, the system selects a subset of extracted features using a filtering method. This method focuses on verifying whether features are relevant by analyzing their association with the target variable. The univariate feature selection method used in this work relies on statistical tests to investigate the relationship between variables. The features selected from various sensors are aggregated and become an input to an object-specific base-classifier. This is a necessary step in our framework to improve the model performance as well as reduce the computational complexity.
Mutual information.
Mutual information (MI) is used to examine the distinctiveness of a set of features and to test the null hypothesis (H_0) that negates the existence of a relationship between a feature and an associated target variable. This method can capture statistical dependencies between variables, explaining whether one variable can provide relevant information about the other one [6]. By accepting H_0, we assume that the extracted feature is not relevant, indicating that it is independent of the target variable. On the other hand, rejecting H_0 suggests that the variables can be dependent, so the feature should be considered relevant. In practice, the null hypothesis is rejected or accepted after examining the resulting non-negative MI scores. The higher the score, the more significant the feature may be. The zero score indicates independence between the variables. As there can be many relevant features, the system has been configured to choose only the top 20 of such features ranked by the highest scores for each object.
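This filter can be expressed, for instance, with scikit-learn's univariate selection utilities; the sketch below keeps the 20 highest-scoring features per object. Variable names and the random data are ours for illustration.

```python
# Mutual-information-based filtering of extracted features (scikit-learn sketch).
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# X: (n_interactions, n_features) feature matrix for one smart object; y: user labels.
X = np.random.randn(120, 80)
y = np.random.randint(0, 2, size=120)

selector = SelectKBest(score_func=mutual_info_classif, k=20)
X_selected = selector.fit_transform(X, y)

# Non-negative MI scores; a score near zero suggests the feature is independent of y.
scores = selector.scores_
top_features = np.argsort(scores)[::-1][:20]
```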
Multi-sensor fusion
As stated in Section 3.2, we explore and evaluate the potential of heterogeneous smart environments to authenticate users while they carry out their daily activities. Every node in a smart environment extracts different sets of characteristics from user interactions due to their placement, purpose, and composition of built-in sensors. Various fusion approaches exist that can boost the detection accuracy and system effectiveness in multi-sensor environments [2]. Among these fusion techniques, we focus on decision-level methods which allow the introduction of multiple classifiers, base-learners, that independently undertake a classification task. This gives a certain degree of autonomy to individual base-learners trained on specific smart object interactions. Moreover, such an approach allows us to select the most effective feature sets, classifiers and their hyper-parameters for each of the base-learners. As shown in Figure 4, after each first-level base-classifier makes a prediction, the second-level meta-classifier determines the final outcome. The efficiency and effectiveness of various fusion techniques at the decision level have been extensively studied in the area of Human Activity Recognition (HAR) [2]. While our focus is on user authentication, we hypothesize that similar approaches can be just as effective in our case. As such, we compare two ensemble learning techniques that use fundamentally different classification methods but show promise for good performance in our multi-user smart environment scenarios.
Stacking.
A meta-learner is trained using the labels obtained from the first-layer base-learners as its features [31]. We chose stacking as a method of linking heterogeneous classifiers since it typically achieves high accuracy and introduces less variance than other approaches. Combining a multitude of smart objects and their classifiers can be helpful because some of these interactions can classify certain users better than others. The optimal parameters of the meta-classifier are determined during the training phase by using cross-validation on the training dataset to avoid overfitting. Based on the combinations of predictions that the meta-learner receives from the base-classifiers, it computes the most accurate label. Stacking allows combining various classifiers (e.g., k-Nearest Neighbours, Random Forests, Decision Trees, etc.) using different sets of features for each. In our scenario, the biggest advantage of this approach is that the meta-classifier learns which object interactions predict labels more accurately. For instance, after a training phase, a meta-classifier may have learned that tasks performed on o_1 or o_3 are more effective in recognizing u than tasks performed on o_2. Therefore, it will account for this in the future while making predictions. The varying classification effectiveness of individual object interactions and their sensors was a major factor in deciding whether to include the ensemble learning methods in our system.
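A stacked ensemble of this kind can be assembled, for example, with scikit-learn as sketched below. In the real system each base-learner only sees the features of its own object interaction; here the single feature matrix and the particular base-learner choices are purely illustrative.

```python
# Sketch of a stacking ensemble: per-object base-learners, logistic-regression meta-learner.
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

base_learners = [
    ("fridge_rf", RandomForestClassifier(n_estimators=100, max_depth=7)),
    ("coffee_svm", make_pipeline(StandardScaler(), SVC(C=0.1, gamma=0.01, probability=True))),
]

stack = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(),
    cv=10,  # cross-validated base predictions are used to fit the meta-learner
)
# stack.fit(X_train, y_train); y_pred = stack.predict(X_test)
```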
Voting.
Voting is another ensemble learning method discussed in this paper. In comparison to stacking, this technique does not require a separate machine learning model to make final predictions. Instead, it uses the deterministic hard voting algorithm to compute the result. In our scenario, voting simply means that the most frequently reported class label from the set of predictions D will be selected as the outcome P.
P = mode(D)    (2)
For example, let's assume that there is a set of predictions D = {p_1, p_2, p_3} computed for three smart object interactions. For our first object, a smart fridge, the system computed p_1, which resulted in label l_1 belonging to L. Then, a smart microwave did not recognize u. But for the coffee machine the system calculated p_3, which again pointed to l_1. Thus, the final prediction P would be l_1. We decided to assign uniform weights to all base-classifiers and test how different combinations of such classifiers affect the performance of the ensemble system.
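The hard-voting rule of Eq. (2) is simple enough to write out directly; the function and labels below are illustrative.

```python
# Hard voting: the final decision P is the most frequent label among base-learner outputs.
from collections import Counter


def hard_vote(predictions):
    """predictions: list of labels emitted by the base-learners for one user session."""
    return Counter(predictions).most_common(1)[0][0]


# Fridge and coffee machine both point to user "u1"; the microwave disagrees.
print(hard_vote(["u1", "other", "u1"]))  # -> "u1"
```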
Classification tasks
In this paper, we discuss a supervised learning task that focuses on binary classification to evaluate the authentication performance of our system. We extract features for each base classifier in three different system configurations, namely On-object, Off-object, and Combined. These base classifiers create the first-level predictions p_1 to p_n and, on their basis, the meta-classifier generates the second-level prediction P. Since both the Support Vector Machine (SVM) and Random Forest (RF) models tend to outperform other classifiers in HAR tasks [12], we chose them as the base classifiers in our task. For each object interaction classifier, a grid search is performed to find the optimal set of hyper-parameters. More details can be found in Tables 2 and 3. For all SVM-based base-learners, the selected features are first standardized. Models are trained and tested using 10-fold cross-validation to avoid information leakage. The resulting classification accuracy is averaged over different folds and used to select the best models. For stacking, as the second-layer model, we chose Logistic Regression due to its simplicity and ease of interpretation [30].
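The per-object model selection just described could be carried out roughly as in the following scikit-learn sketch, which searches the hyper-parameter spaces of Tables 2 and 3 with 10-fold cross-validation and standardizes features for the SVM learners; the variable names and the scoring choice are our assumptions.

```python
# Grid search for one object-interaction classifier (sketch; spaces from Tables 2 and 3).
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

svm_search = GridSearchCV(
    Pipeline([("scale", StandardScaler()), ("svc", SVC())]),
    param_grid={
        "svc__C": [0.1, 1, 10, 100],
        "svc__gamma": [1.0, 0.1, 0.01, 0.001],
        "svc__kernel": ["linear", "poly", "rbf", "sigmoid"],
    },
    cv=10,
    scoring="accuracy",
)

rf_search = GridSearchCV(
    RandomForestClassifier(),
    param_grid={
        "n_estimators": [10, 50, 100, 200],
        "max_depth": [2, 4, 5, 6, 7, 8],
        "max_features": ["sqrt", "log2"],
    },
    cv=10,
    scoring="accuracy",
)
# svm_search.fit(X_object, y_object); rf_search.fit(X_object, y_object)
```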
EVALUATION

Distinctiveness of sensor features
In order to judge the distinctiveness of features by different types of sensors, we use relative mutual information (RMI). RMI is defined as
RMI(A, B) = (H(A) − H(A|B)) / H(A)
where H(A) is the entropy of A and H(A|B) denotes the entropy of A conditioned on B.
Here, A denotes the ground truth of the user performing the object interaction, whereas B is the vector of extracted features. Tables 4 and 5 show the RMI for individual sensors that have been placed on household objects as part of our experiment. These scores represent aggregated maximum values of RMI for a particular sensor on a specific household object o, given different configurations of the system. Each of these objects introduces a different way for a user to interact with the smart environment. Analysis of the distinctiveness of the features extracted from these sensors allows us to understand which ones contribute to better classification performance for a specific type of interaction. Each device has been equipped with an accelerometer (ACC), a magnetometer (MAG), a gyroscope (GYRO), and a microphone (MIC). Generally, we observe that the features extracted from GYRO and ACC exhibit high distinctiveness for most of the interaction types. For On-object, the most distinctive features originate from GYRO whereas for Off-object, ACC appears to supply the most distinctive features. We observe that, in many cases, the inputs from co-located objects generate higher RMI scores. On the other hand, the features extracted from MIC appear to have relatively low distinctiveness in comparison to other attributes for the majority of interactions.
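RMI scores like those in Tables 4 and 5 can be estimated as I(A;B)/H(A), since H(A) − H(A|B) equals the mutual information I(A;B). The sketch below uses scikit-learn and SciPy with a simple equal-width binning of the feature; the binning choice and names are our assumptions.

```python
# Estimating RMI(A, B) = (H(A) - H(A|B)) / H(A) = I(A; B) / H(A) for one discretized feature.
import numpy as np
from scipy.stats import entropy
from sklearn.metrics import mutual_info_score


def relative_mutual_information(a_labels, b_feature, bins=10):
    b_binned = np.digitize(b_feature, np.histogram_bin_edges(b_feature, bins=bins))
    _, counts = np.unique(a_labels, return_counts=True)
    h_a = entropy(counts, base=2)                              # H(A) in bits
    i_ab = mutual_info_score(a_labels, b_binned) / np.log(2)   # I(A; B), nats -> bits
    return i_ab / h_a


rmi = relative_mutual_information(np.random.randint(0, 2, 500), np.random.randn(500))
```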
Despite its generally low distinctiveness for most interactions, MIC achieves higher RMI values for interactions with the pull-out drawer and is the second most distinctive sensor for the coffee machine when we consider features extracted only from its on-device sensors. This can be explained as the drawer's contents make sounds continuously, changing based on how far the drawer is extended, whereas for most other events the main sounds were caused by the closing of doors, with little difference between users. Pressing the buttons of the coffee machine, on the other hand, makes faint sounds which differ between users with regard to the timing of the button presses.
GYRO shows particularly high distinctiveness for most interactions for On-object, with the exceptions of the narrow cabinet and the pull-out drawer. The cabinet used in the experiment has a very stiff door that leads to abrupt openings with little variation between users. While this reduces the effectiveness of the recognition of users by sensors directly placed on the cabinet, such abrupt openings allow co-located sensors to capture stronger vibrations, hence, provide more accurate distinction. The lower RMI values for GYRO for the pull-out drawer can be explained by a lack of rotational movement. Instead, the most distinctive movement characteristics are the sounds and the acceleration which is why MIC and ACC are the most distinctive sensor types for this interaction.
ACC appears to provide the most distinctive features captured by co-located sensors. Interestingly, the vibration signals picked up by the co-located sensors exhibit the highest feature distinctiveness during interactions with the coffee maker. Overall, we notice that Off-object features provide better distinctiveness than the features gathered only by On-object sensors. This suggests that the system can accurately authenticate users by their interactions with objects that do not have sensors directly placed on them.
Authentication performance
In our experiment, we focus on analyzing the system performance against three types of attacks. The first part of the dataset contains the samples from the victim as well as zero-effort attack samples from each of the remaining 12 participants. This dataset is split using 10-fold cross-validation. Each test fold is used to evaluate a group of zero-effort attacks since it contains the samples of attackers' regular interactions with objects. The remaining attack samples are supplied to the zero-effort attack-trained classifier. To compare and evaluate the effectiveness of different types of attacks on the environment, we report False Reject Rates (FRRs) at different thresholds of False Acceptance Rates (FARs). The FAR metric allows us to determine how many of the attacker's attempts were successful. On the other hand, FRR specifies how many legitimate samples from a victim have been misclassified as an attack. Note that, rather than completely preventing the user from executing a transaction, this merely means that the user will have to approve the transaction explicitly through their phone. First, we examine FRRs for individual smart objects that the user interacts with. Next, we inspect the performance of ensembles of base-classifiers that are responsible for interpreting different interactions with objects. Finally, we compare the performance of voting and stacking meta-classifiers by examining the receiver operating characteristic (ROC) curve for an ensemble of all available object interactions.
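Reading off the FRR at a fixed FAR threshold from per-sample scores can be done directly from the ROC curve; below is a minimal sketch using scikit-learn (the function name and the convention that higher scores mean "legitimate user" are our assumptions).

```python
# FRR at a fixed FAR threshold (e.g. 1% or 10%), computed from classifier scores.
import numpy as np
from sklearn.metrics import roc_curve


def frr_at_far(y_true, scores, target_far):
    """y_true: 1 for the legitimate user (victim), 0 for attack samples.
    scores: classifier confidence that a sample comes from the legitimate user."""
    fpr, tpr, _ = roc_curve(y_true, scores)   # FPR plays the role of FAR here
    frr = 1.0 - tpr                           # fraction of legitimate samples rejected
    idx = np.searchsorted(fpr, target_far, side="right") - 1
    return frr[max(idx, 0)]


# e.g. frr_at_far(y_test, clf.predict_proba(X_test)[:, 1], target_far=0.01)
```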
In Table 6, we present FRRs at 1% and 10% FAR thresholds averaged across all objects for three types of attacks targeting a dedicated user. Figure 6 shows their averaged ROC curves. Table 7 presents FRRs for individual smart objects in respect to zero-effort attacks without a dedicated victim, i.e., the results are averaged across all users being considered a victim. For each attack, we calculate FRRs and FARs using different system configurations, including On-object, Off-object, and Combined. For Off-object, only the top performing features are selected. For each smart object, Table 8 shows what other objects these features were extracted from.
In the training phase, we only use samples collected during participants' regular interactions with the smart environment. This is because we consider an attacker who has access to the facilities. For example, a malicious co-worker whose typical interaction samples would be known by the system. A zero-effort attack, in which the attacker does not attempt to mimic the behavior of a legitimate user, is an indication of the baseline performance of the system. Other types of attacks involve attackers who either watched the video of the victim interacting with objects or observed the victim personally.
We observe that for authentication using Off-object sensors, we achieve an average false reject rate of less than 3% with a 1% false acceptance rate for zero-effort attacks. FRRs increase to 19% for video-based attacks and to 15% for in-person observation-based attacks, considering the same false acceptance rate. This means that even when defending against strong video-based attacks, the system does not require the user to explicitly approve transactions in more than 80% of cases, as the system can instead authenticate the user through their interactions with the smart environment.
For the FAR of 10%, the FRR for zero-effort attacks drops to less than 1%. Similarly, FRRs for video-based and in-person attacks decrease to 18% and 13% respectively. The On-object configuration exhibits the worst performance among all of the configuration types, resulting in false reject rates of 25% for the zero-effort attacks, and 58% and 51% for the other types of attacks. The Combined configuration guarantees better performance than On-object; however, it exhibits worse performance than Off-object due to the inclusion of features extracted from on-device sensors. It is noteworthy that the microwave door and the narrow cabinet classifiers perform significantly worse than others, which impacts the average scores. Since this effect is universal across users, this suggests that poorly-performing objects should be excluded by the meta-classifier. Table 7 compares the performance of On-object and Off-object configurations across all smart objects. The narrow cabinet and the microwave door exhibit the worst FRRs in the On-object configuration, resulting in false reject rates of 26% and 30% given a 1% false acceptance rate for zero-effort attacks. The FRRs drop to 0.4% and 2% when the model includes features extracted from co-located sensors. Since the Off-object configuration exhibits the best performance, we focus on it for the remainder of this section.
The attackers from the video group could watch the video of the victim performing interactions with objects as often as desired for 24 hours. On the other hand, the attackers who observed the victim in person could follow them closely and look at the exact body and hand movements. To understand this phenomenon, we asked the participants to describe their strategies. The participants from the video-based attack group watched the video three times on average before attempting to mimic the victim. When viewing the video, participants report that they paid attention to the strength with which the victim interacted with the objects, the use of the hands (left or right), the speed of the interaction, and the body position. The participants in the second group, on the other hand, focused mainly on the pace, strength, and rhythm of the interaction. All attackers focused their strategy on mimicking the power and speed with which the victim interacted with objects. Additionally, most of them attempted to spend a similar amount of time per interaction as the victim did. One of the attackers even counted the seconds spent on interactions with each object.
Considering multiple interactions with various objects can further improve the system's performance. Figure 7 shows averaged FRRs at two FAR thresholds of 10% and 1% for different ensembles of objects for the voting and stacking meta-classifiers given the Off-object configuration. We focus on the Off-object configuration here, as it exhibits the best performance out of the three considered configurations, and thus best demonstrates the potential performance gains that can be achieved. This could be further improved by adjusting the weights, i.e., assigning smaller ones to interactions that exhibit worse performance. Generally, allowing the system to consider more interactions before authenticating the user results in better performance.
Overall, the voting method outperforms the stacking meta-classifier in our scenario. This method is also computationally less complex since it does not involve training another classifier with the predictions of the base-classifiers. The voting meta-classifier achieves a false reject rate of less than 1% with an FAR of 1% whereas the stacking classifier obtains an FRR of 2% for the zero-effort attacks. The video-based attacks for the stacking classifier achieve an FRR of 32% when considering the ensemble of two unique objects given an FAR of 1%. On the other hand, the voting classifier obtains 8% FRR given the same FAR threshold. This means that for the voting classifier, the system can spare the user an explicit phone-based authentication in 92% of cases. We included only four smart objects in this analysis but considering more unique smart objects results in further improvements of the system performance.
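The object-ensemble analysis above can be prototyped by enumerating subsets of objects and fusing their per-object accept/reject decisions by majority vote, as in the sketch below. It assumes the per-object decisions are aligned to the same evaluation sessions, which is a simplification of the actual protocol; all names are ours.

```python
# Evaluating ensembles of k unique objects with hard majority voting (illustrative sketch).
from itertools import combinations

import numpy as np


def evaluate_ensembles(decisions, y_true, k):
    """decisions: dict object_name -> (n_sessions,) array of 0/1 accept decisions,
    aligned to the same sessions; y_true: 1 for the victim, 0 for attack samples."""
    y_true = np.asarray(y_true)
    results = {}
    for subset in combinations(decisions, k):
        votes = np.mean([decisions[o] for o in subset], axis=0) > 0.5
        far = np.mean(votes[y_true == 0])    # attack sessions wrongly accepted
        frr = np.mean(~votes[y_true == 1])   # victim sessions wrongly rejected
        results[subset] = (far, frr)
    return results
```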
LIMITATIONS
No concurrent device use. In our experiment, we limit interactions with any device to a single user at a time. In the experiments, this was necessary to obtain accurate identity labels to establish the distinctiveness of device interactions. This limitation may lead to two potential problems in practice. If two users are interacting with different devices in the same room simultaneously or in short sequence, this may lead to decisions made using multiple device interactions to be wrong. This can be avoided by only using interactions with the target device (the device requiring authentication) to make the decision. In addition, simultaneous interactions may affect the sensor signatures and make it harder to match fingerprints for either of them. However, simultaneous interactions are easily detected and either accounted for or ignored entirely.
Limited number of users and interactions. Due to time considerations and the unique requirements of the ongoing Covid-19 pandemic, we could only capture device interactions in a single session. This limits our analysis for different levels of FAR and FRR, as the total number of samples and attacker/victim pairs are too low to make a statistically robust analysis of extremely low FAR levels. Given the promising results shown by our current analysis, we plan to collect an additional large-scale dataset in the future.
Consecutive user sessions. In our experiment, sessions for different users were conducted one after the other. In theory, it would be possible for environmental effects to be present during one user's session but not for others, thereby leading to classifiers learning these effects as a proxy for user identity. For example, a sound pressure sensor may pick up increased ambient noise during a user's session. However, the fairly strong increase in FAR caused by imitation attacks (video and in-person) suggests that the classifiers capture (somewhat imitable) true user behavior as it is unlikely users would attempt to match the original environmental conditions during their attack.
CONCLUSION

In this paper, we have introduced a system to authenticate users in smart environments based on naturally occurring interactions with objects around them. Notably, our system does not require any sensors on the object itself but makes use of sensors placed arbitrarily in the room. We have conducted an experiment in real-world settings with a total of 13 participants, which shows that using these kinds of smart object interactions for authentication is feasible. This is a crucial finding because there is a need for stronger authorization controls in such environments, but many smart devices offer only limited interfaces to implement security features. Therefore, current systems often rely on cumbersome app-based authentication methods that require the user to always have their phone at hand. Our system can complement such phone-based authentication methods and reduce how often a user has to explicitly approve a transaction in the smart home companion app.
We show that our system demonstrates good authentication performance against zero-effort attacks, with less than 1% of transactions requiring external approval at an FAR of 1% when considering a single object interaction.
When attackers attempting to imitate the victim's behavior after observing them in-person or through video footage are considered, the user has to approve more transactions explicitly to maintain a 1% FAR. However, the system can still authenticate more than 80% of transactions unobtrusively when considering video-based attackers, rising to 85% of transactions for in-person attacks. We also show that the system's confidence in the authentication decision can be significantly improved if more than one object interaction is considered. Including more interactions with objects can further increase the authentication success rates to 92% even when considering the strongest attacker.
These promising results and the potential for easy deployment make this behavioral biometric system a good candidate to improve the security of smart environments in a seamless and unobtrusive manner. We make our entire dataset and the code needed to reproduce our results available online to allow researchers to build on our work.
Figure 2: Raspberry Pi attached to a kitchen cupboard. The white magnetic contact switch is used to detect the opening and closing of the cupboard, the USB microphone is used to measure sound pressure levels, and the IMU connected on top of the Raspberry Pi records acceleration, gyroscopic movement and orientation of the interaction.
Figure 3: A simplified layout of the room and the arrangement of the objects o_1 - o_8 the participants interacted with during the experiment.
Figure 4: The diagram provides an overview of the processing pipeline of a multi-sensor fusion system. The system extracts relevant features from u's interactions with objects o_1 to o_n and supplies them to their base-classifiers. Then, the first-level predictions p_1 to p_n are fed into a meta-classifier (i.e., a voting or stacking classifier) that computes the final prediction P.
Figure 5: As users interact with smart devices, signals from on-device sensors are collected and processed by the system. (a) Interaction of the user u_1 with the narrow cabinet as read by one axis of its gyroscope. (b) The same interaction of u_1 with the narrow cabinet picked up by the microphone of the co-located wide cupboard o_2. Signals (a) and (b) are generated by the participant u_1 interacting with the smart object o_4; o_2 picked up additional input from the same interaction with object o_4 as they were co-located. More specifically, these plots illustrate the gyroscope movements (a) of o_4 and the RMS values (b) picked up by the microphone of o_2. Red dotted lines indicate the start (t_0) and end (t_1) of the task while the green dashed lines denote ±1 second windows.
Figure 6: The plots show the ROC curves for the three system configurations respectively, based on average FARs from single interaction types. Each curve represents a different group of attacks, i.e., zero-effort (blue), in-person (green), and video-based (yellow) attacks.
Figure 7: Averaged False Reject Rates (FRRs) at different False Acceptance Rate (FAR) thresholds, calculated based on the performance of different ensembles of unique objects for the two meta-classifiers and the Off-object configuration. Each such ensemble is trained and tested separately, then the scores are averaged across the ensembles of the same type (e.g., pairs or triples of unique objects).
Table 1: The decomposition of features that smart devices extract from users' physical interactions.

Types                       Features
Time-domain features        Min, Max, Mean, Median, Std, Var, Kurtosis, Skewness, Shape factor, Absolute energy, Mean of central approx. of 2nd derivative, Mean/Sum of abs change, Peaks
Frequency-domain features   Fourier entropy
Table 2: Search space for Random Forest (RF) hyperparameters. As each base classifier chooses its own parameters, the optimal values given here are the most commonly chosen ones.

Parameter             Search space        Optimal value
Number of estimators  10, 50, 100, 200    100
Tree depth            2, 4, 5, 6, 7, 8    7
Number of features    √n, log n           √n
Table 3: Search space for Support-Vector Machine (SVM) hyperparameters. As each base classifier chooses its own parameters, the optimal values given here are the most commonly chosen ones.

Parameter        Search space                      Optimal value
C                0.1, 1, 10, 100                   0.1
γ                1., 0.1, 0.01, 0.001              0.01
Kernel function  linear, polynomial, rbf, sigmoid  rbf
Table 4: Aggregated maximum values of RMI in percentages for the On-object configuration, given different types of on-device sensors.

Object Type          ACC    MAG    GYRO   MIC
Right cupboard door  31.60  40.58  72.32  20.72
Wide cupboard        50.81  47.30  64.45  25.37
Left cupboard door   62.90  39.50  73.90  8.09
Narrow cabinet       30.39  33.28  19.41  15.09
Oven door            32.24  64.46  49.20  16.30
Coffee machine       34.07  41.64  60.48  41.67
Pull-out drawer      35.19  27.21  19.28  44.63
Microwave door       35.93  54.04  41.12  15.10
Table 5: Aggregated maximum values of RMI in percentages for the Off-object configuration, given different types of co-located sensors.

Object Type          ACC    MAG    GYRO   MIC
Right cupboard door  78.63  71.49  72.08  17.33
Wide cupboard        85.86  53.72  59.10  28.07
Left cupboard door   77.69  75.53  74.21  17.49
Narrow cabinet       76.23  73.43  61.28  23.31
Oven door            74.67  73.52  60.76  17.93
Coffee machine       98.19  74.01  90.41  42.96
Pull-out drawer      80.61  69.31  56.81  25.50
Microwave door       86.09  54.78  58.37  20.33
Table 6: False Reject Rates (FRRs) for interactions with different types of objects in respect to three kinds of attacks given different FAR thresholds. The On-object column presents FRRs for the model with features extracted only from on-device sensors. Off-object shows FRRs considering only features from co-located sensors, whereas Combined reveals FRRs for the model that uses the combined features from the co-located and on-device sensors. The results are averaged across all smart objects in our experiment.

        On-object FRR                        Off-object FRR                       Combined FRR
FAR     Zero-effort   Video    In-person    Zero-effort   Video    In-person    Zero-effort   Video    In-person
10%     0.1375        0.3708   0.2250       0.0063        0.1750   0.1250       0.0125        0.1938   0.1500
1%      0.2486        0.5812   0.5087       0.0250        0.1938   0.1500       0.0250        0.2500   0.2875
Table 7: FRRs at two distinctive FAR thresholds for interactions with different types of objects in respect to zero-effort attacks given On-object and Off-object configurations. These configurations are compared to emphasize the improvement offered by considering co-located sensors. Presented results are averaged across all users being considered a victim.

                      FAR = 10%                          FAR = 1%
Object Type           On-object FRR   Off-object FRR     On-object FRR   Off-object FRR
Right cupboard door   0.0526          0.0039             0.1401          0.0154
Wide cupboard         0.0577          0.000              0.2231          0.0077
Left cupboard door    0.0369          0.0039             0.1077          0.0039
Narrow cabinet        0.1141          0.0                0.2577          0.0039
Oven door             0.0305          0.0                0.1020          0.0
Coffee machine        0.0154          0.0                0.0731          0.0
Pull-out drawer       0.0385          0.0                0.1180          0.0
Microwave door        0.0987          0.0115             0.2962          0.0192
Table 8: For the Off-object configuration, we list all the co-located objects whose sensors provided the best features for individual object classifiers.

Object                Features from
Right cupboard door   Left cupboard door
Wide cupboard         Left cupboard door, Coffee machine, Microwave door
Left cupboard door    Right cupboard door, Wide cupboard, Oven door, Coffee machine, Pull-out drawer
Narrow cabinet        Right cupboard door, Wide cupboard, Left cupboard door, Oven door, Coffee machine, Pull-out drawer, Microwave door
Oven door             Right cupboard door, Wide cupboard, Left cupboard door, Narrow cabinet, Coffee machine, Pull-out drawer, Microwave door
Coffee machine        Right cupboard door, Wide cupboard, Pull-out drawer, Microwave door
Pull-out drawer       Right cupboard door, Wide cupboard, Left cupboard door, Oven door
Microwave door        Right cupboard door, Wide cupboard, Narrow cabinet, Coffee machine
REFERENCES

[1] Andrea F. Abate, Maria De Marsico, Daniel Riccio, and Genny Tortora. Mubai: multiagent biometrics for ambient intelligence. Journal of Ambient Intelligence and Humanized Computing, 2(2):81-89, 2011.
[2] Antonio A. Aguileta, Ramon F. Brena, Oscar Mayora, Erik Molino-Minero-Re, and Luis A. Trejo. Multi-sensor fusion for activity recognition-a survey. Sensors, 19(17):3808, 2019.
[3] Buriro Attaullah, Bruno Crispo, Filippo Del Frari, and Konrad Wrona. Hold & sign: A novel behavioral biometrics for smartphone user authentication. 2016.
[4] Hugo B. Barra. Voice authentication and command, September 24, 2013. US Patent 8,543,834.
[5] Samera Batool, Nazar A. Saqib, and Muazzam A. Khan. Internet of things data analytics for user authentication and activity recognition. In 2017 Second International Conference on Fog and Mobile Edge Computing (FMEC), pages 183-187. IEEE, 2017.
[6] Mario Beraha, Alberto Maria Metelli, Matteo Papini, Andrea Tirinzoni, and Marcello Restelli. Feature selection via mutual information: New theoretical insights, 2019.
[7] Logan Blue, Hadi Abdullah, Luis Vargas, and Patrick Traynor. 2ma: Verifying voice commands via two microphone authentication. In Proceedings of the 2018 on Asia Conference on Computer and Communications Security, pages 89-100, 2018.
[8] Nicholas Carlini, Pratyush Mishra, Tavish Vaidya, Yuankai Zhang, Micah Sherr, Clay Shields, David Wagner, and Wenchao Zhou. Hidden voice commands. In 25th USENIX Security Symposium (USENIX Security 16), pages 513-530, 2016.
[9] A. Castiglione, K. R. Choo, M. Nappi, and S. Ricciardi. Context aware ubiquitous biometrics in edge of military things. IEEE Cloud Computing, 4(6):16-20, 2017.
[10] Jagmohan Chauhan, Suranga Seneviratne, Yining Hu, Archan Misra, Aruna Seneviratne, and Youngki Lee. Breathing-based authentication on resource-constrained iot devices using recurrent neural networks. Computer, 51(5):60-67, 2018.
[11] Wenrui Diao, Xiangyu Liu, Zhe Zhou, and Kehuan Zhang. Your voice assistant is mine: How to abuse speakers to steal information and control your phone. In Proceedings of the 4th ACM Workshop on Security and Privacy in Smartphones & Mobile Devices, pages 63-74, 2014.
[12] Enrique Garcia-Ceja, Carlos E. Galván-Tejada, and Ramon Brena. Multi-view stacking for activity recognition with sound and accelerometer data. Information Fusion, 40:45-56, 2018.
[13] Mikhail Gofman, Narciso Sandico, Sinjini Mitra, Eryu Suo, Sadun Muhi, and Tyler Vu. Multimodal biometrics via discriminant correlation analysis on mobile devices. In Proceedings of the International Conference on Security and Management (SAM), pages 174-181, 2018.
[14] Jun Han, Shijia Pan, Manal Kumar Sinha, Hae Young Noh, Pei Zhang, and Patrick Tague. Smart home occupant identification via sensor fusion across on-object devices. ACM Transactions on Sensor Networks (TOSN), 14(3-4):1-22, 2018.
[15] Naomi Irvine, Chris Nugent, Shuai Zhang, Hui Wang, and Wing W. Y. Ng. Neural network ensembles for sensor-based human activity recognition within smart environments. Sensors, 20(1), 2020.
[16] Hassan Khan, Urs Hengartner, and Daniel Vogel. Augmented reality-based mimicry attacks on behaviour-based smartphone authentication. In Proceedings of the 16th Annual International Conference on Mobile Systems, Applications, and Services, pages 41-53, 2018.
[17] Dong-Su Kim and Kwang-Seok Hong. Multimodal biometric authentication using teeth image and voice in mobile environment. IEEE Transactions on Consumer Electronics, 54(4):1790-1797, 2008.
[18] Wei-Han Lee, Xiaochen Liu, Yilin Shen, Hongxia Jin, and Ruby B. Lee. Secure pick up: Implicit authentication when you start using the smartphone. In Proceedings of the 22nd ACM on Symposium on Access Control Models and Technologies, pages 67-78, 2017.
[19] Nemanja Maček, Igor Franc, Mitko Bogdanoski, and Aleksandar Mirković. Multimodal biometric authentication in iot: Single camera case study. 2016.
[20] Yan Meng, Wei Zhang, Haojin Zhu, and Xuemin Sherman Shen. Securing consumer iot in the smart home: Architecture, challenges, and countermeasures. IEEE Wireless Communications, 25(6):53-59, 2018.
[21] Pratik Musale, Duin Baek, and Bong Jun Choi. Lightweight gait based authentication technique for iot using subconscious level activities. In 2018 IEEE 4th World Forum on Internet of Things (WF-IoT), pages 564-567. IEEE, 2018.
[22] Pratik Musale, Duin Baek, Nuwan Werellagama, Simon S. Woo, and Bong Jun Choi. You walk, we authenticate: Lightweight seamless authentication based on gait in wearable iot systems. IEEE Access, 7:37883-37895, 2019.
[23] Oscar Olazabal, Mikhail Gofman, Yu Bai, Yoonsuk Choi, Noel Sandico, Sinjini Mitra, and Kevin Pham. Multimodal biometrics for enhanced iot security. In 2019 IEEE 9th Annual Computing and Communication Workshop and Conference (CCWC), pages 0886-0893. IEEE, 2019.
[24] Samanta Rosati, Gabriella Balestra, and Marco Knaflitz. Comparison of different sets of features for human activity recognition by wearable sensors. Sensors, 18(12):4189, 2018.
[25] A. Saleema and Sabu M. Thampi. Voice biometrics: The promising future of authentication in the internet of things. In Handbook of Research on Cloud and Fog Computing Infrastructures for Data Science, pages 360-389. IGI Global, 2018.
[26] Benjamin K. Sovacool and Dylan D. Furszyfer Del Rio. Smart home technologies in europe: A critical review of concepts, benefits, risks and policies. Renewable and Sustainable Energy Reviews, 120:109663, 2020.
[27] Fangmin Sun, Chenfei Mao, Xiaomao Fan, and Ye Li. Accelerometer-based speed-adaptive gait authentication method for wearable iot devices. IEEE Internet of Things Journal, 6(1):820-830, 2018.
[28] Amir E. Sarabadani Tafreshi, Sara C. Sarabadani Tafreshi, and Amirehsan Sarabadani Tafreshi. Tiltpass: using device tilts as an authentication method. In Proceedings of the 2017 ACM International Conference on Interactive Surfaces and Spaces, pages 378-383, 2017.
[29] Pin Shen Teh, Ning Zhang, Andrew Beng Jin Teoh, and Ke Chen. A survey on touch dynamics authentication in mobile devices. Computers & Security, 59:210-235, 2016.
[30] Wouter van Loon, Marjolein Fokkema, Botond Szabo, and Mark de Rooij. Stacked penalized logistic regression for selecting views in multi-view learning. Information Fusion, 61:113-123, 2020.
[31] David H. Wolpert. Stacked generalization. Neural Networks, 5(2):241-259, 1992.
[32] Roman V. Yampolskiy. Mimicry attack on strategy-based behavioral biometric. In Fifth International Conference on Information Technology: New Generations (ITNG 2008), pages 916-921. IEEE, 2008.
[33] Guoming Zhang, Chen Yan, Xiaoyu Ji, Tianchen Zhang, Taimin Zhang, and Wenyuan Xu. Dolphinattack: Inaudible voice commands. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pages 103-117, 2017.
[34] Nan Zhang, Xianghang Mi, Xuan Feng, XiaoFeng Wang, Yuan Tian, and Feng Qian. Understanding and mitigating the security risks of voice-controlled third-party skills on amazon alexa and google home. arXiv preprint arXiv:1805.01525, 2018.
|
[] |
[
"The Dichotomous Nucleon: Some Radical Conjectures for the Large N c Limit",
"The Dichotomous Nucleon: Some Radical Conjectures for the Large N c Limit"
] |
[
"Yoshimasa Hidaka \nDepartment of Physics\nKyoto University\nSakyo-ku606-8502KyotoJapan\n",
"Toru Kojo \nRIKEN/BNL Research Center\nBrookhaven National Laboratory\nUptonNY-11973USA\n",
"Larry Mclerran \nRIKEN/BNL Research Center\nBrookhaven National Laboratory\nUptonNY-11973USA\n\nDepartment of Physics\nBrookhaven National Laboratory\nUptonNY-11973USA\n",
"Robert D Pisarski \nDepartment of Physics\nBrookhaven National Laboratory\nUptonNY-11973USA\n"
] |
[
"Department of Physics\nKyoto University\nSakyo-ku606-8502KyotoJapan",
"RIKEN/BNL Research Center\nBrookhaven National Laboratory\nUptonNY-11973USA",
"RIKEN/BNL Research Center\nBrookhaven National Laboratory\nUptonNY-11973USA",
"Department of Physics\nBrookhaven National Laboratory\nUptonNY-11973USA",
"Department of Physics\nBrookhaven National Laboratory\nUptonNY-11973USA"
] |
[] |
We discuss some problems with the large N c approximation for nucleons which arise if the axial coupling of the nucleon to pions is large, g A ∼ N c . While g A ∼ N c in non-relativistic quark and Skyrme models, it has been suggested that Skyrmions may collapse to a small size, r ∼ 1/f π ∼ Λ −1 QCD / √ N c . (This is also the typical scale over which the string vertex moves in a string vertex model of the baryon.) We concentrate on the case of two flavors, where we suggest that to construct a nucleon with a small axial coupling, that most quarks are bound into colored diquark pairs, which have zero spin and isospin. For odd N c , this leaves one unpaired quark, which carries the spin and isospin of the nucleon. If the unpaired quark is in a spatial wavefunction orthogonal to the wavefunctions of the scalar diquarks, then up to logarithms of N c , the unpaired quark only costs an energy ∼ Λ QCD . This naturally gives g A ∼ 1 and has other attractive features. In nature, the wavefunctions of the paired and unpaired quarks might only be approximately orthogonal; then g A depends weakly upon N c . This dichotomy in wave functions could arise if the unpaired quark orbits at a size which is parametrically large in comparison to that of the diquarks. We discuss possible tests of these ideas from numerical simulations on the lattice, for two flavors and three and five colors; the extension of our ideas to more than three or more flavors is not obvious, though.
|
10.1016/j.nuclphysa.2011.01.008
|
[
"https://arxiv.org/pdf/1004.2261v3.pdf"
] | 115,136,657 |
1004.2261
|
effec90536e9de5cff924e72d0f60af1490512de
|
The Dichotomous Nucleon: Some Radical Conjectures for the Large N c Limit
28 Jan 2011
Yoshimasa Hidaka
Department of Physics
Kyoto University
Sakyo-ku606-8502KyotoJapan
Toru Kojo
RIKEN/BNL Research Center
Brookhaven National Laboratory
UptonNY-11973USA
Larry Mclerran
RIKEN/BNL Research Center
Brookhaven National Laboratory
UptonNY-11973USA
Department of Physics
Brookhaven National Laboratory
UptonNY-11973USA
Robert D Pisarski
Department of Physics
Brookhaven National Laboratory
UptonNY-11973USA
The Dichotomous Nucleon: Some Radical Conjectures for the Large N c Limit
28 Jan 2011. Preprint submitted to Elsevier Science, 1 February 2011. Keywords: Dense quark matter; Chiral symmetry breaking; Large N c expansion. PACS: 12.39.Fe, 11.15.Pg, 21.65.Qr.
We discuss some problems with the large N c approximation for nucleons which arise if the axial coupling of the nucleon to pions is large, g A ∼ N c . While g A ∼ N c in non-relativistic quark and Skyrme models, it has been suggested that Skyrmions may collapse to a small size, r ∼ 1/f π ∼ Λ −1 QCD / √ N c . (This is also the typical scale over which the string vertex moves in a string vertex model of the baryon.) We concentrate on the case of two flavors, where we suggest that to construct a nucleon with a small axial coupling, that most quarks are bound into colored diquark pairs, which have zero spin and isospin. For odd N c , this leaves one unpaired quark, which carries the spin and isospin of the nucleon. If the unpaired quark is in a spatial wavefunction orthogonal to the wavefunctions of the scalar diquarks, then up to logarithms of N c , the unpaired quark only costs an energy ∼ Λ QCD . This naturally gives g A ∼ 1 and has other attractive features. In nature, the wavefunctions of the paired and unpaired quarks might only be approximately orthogonal; then g A depends weakly upon N c . This dichotomy in wave functions could arise if the unpaired quark orbits at a size which is parametrically large in comparison to that of the diquarks. We discuss possible tests of these ideas from numerical simulations on the lattice, for two flavors and three and five colors; the extension of our ideas to more than three or more flavors is not obvious, though.
Introduction
The large N c limit of 't Hooft [1] for the description of baryons has been developed by Adkins, Nappi and Witten [2]. In this limit, the nucleon is a topological excitation of the pion field, where the pion field is described by a non-linear sigma model plus a Skyrme term [3]. This topological excitation is described by a stable soliton solution of size r ∼ 1/Λ QCD , which is a Skyrmion; Λ QCD is a mass scale typical of the strong interactions.
The action of the Skyrmion is ∼ N c , and so it contains of order N c coherent pions. In the Skyrme model, the nucleon pion coupling constant is enhanced from its naive value, g πN N ∼ √ N c , which arises from counting the number of quarks inside a nucleon, to become g πN N ∼ N 3/2 c . This is a consequence of the coherent nature of the pions which compose the Skyrmion. By the Goldberger-Treiman relation [4], the axial coupling g A is then of order N c . Such a strong axial coupling generates strong spin-isospin dependent forces, of order N c , out to distances which are large in comparison to the size of the nucleon, ∼ 1/Λ QCD . In the limit of massless pions, these interactions are of infinite range. In Monte-Carlo computations of the nucleon-nucleon force on the lattice, no strong long range tails are seen; indeed, even at intermediate ranges the forces do not appear to be large [5] (Some cautions on the interpretation of lattice results were raised in [6]). In addition, the magnetic moment of the proton would be of order N c , which would also generate strong electromagnetic interactions [7].
Such a description of the nucleon at infinite N c appears to be rather different from what we observe for N c = 3. At finite N c , these problems might be fixed by a fine tuning of parameters. For example, in the Skyrme model description of Ref. [2], the parameter 1/e 2 that controls the strength of the Skyrme term, and which stabilizes the Skyrmion at a non-zero radius, should be of order N c . To provide a phenomenologically viable description of the nucleon for N c = 3, though, it is taken to be 3.3 × 10 −2 .
Another generic problem is the nature of nuclear matter. Some of the channels for the long distance spin-isospin dependent forces are attractive. This means that the ground state of nuclear matter is a crystal and the binding energy is of order N c Λ QCD [8]. On the other hand, ordinary nuclear matter is very weakly bound, with a binding energy δE ∼ 16 MeV [9]. This number seems to be closer to Λ QCD /N c than to N c Λ QCD , the value typical of a Skyrme crystal. Moreover, nuclear matter appears to be in a liquid state, and not a crystal.
An excellent discussion of the properties of the nucleon-nucleon force is found in Ref. [10][11][12]. Many of the relationships derived there are generic relationships between the magnitudes of various forces, and these seem to work quite well. Thus it is somewhat of a mystery why the large N c limit for baryons can work well in some contexts, but provide qualitative disagreement in others.
Yet another problem is the mass splitting between the nucleon and ∆. Consistency conditions at large N c and standard large N c counting indicate that this mass difference is ∼ Λ QCD /N c [10,11]. In QCD, though, it is ∼ 300 MeV, which is ∼ Λ QCD .
A large value of g A also generates problems in writing a chiral effective theory for the nucleon. In the linear sigma model, chiral symmetry implies that there is a large coupling to the sigma meson, g σN N = g πN N ∼ N 3/2 c . Such a large coupling generates self-energy corrections to the nucleon that would be larger than N c . In addition, if the axial coupling is of order N c , self-interactions associated with an axial-vector current should result in a significant contribution to the nucleon mass. If there is some way to lower the axial coupling, which does not greatly increase the mass of the nucleon, then it is plausible that nature would realize this possibility.
Ultimately, large self-energies for the nucleon might destabilize a nucleon of size ∼ 1/Λ QCD . One might be tempted to argue that this cannot happen in QCD, since the action in QCD is of order N c , and a collapsed soliton, with a size other than Λ QCD , should have a mass which is not linear in N c . This would be a strong argument if the nucleon appeared as a purely classical solution of the QCD equations of motion, as a Skyrmionic soliton for example. Following others, however, we suggest that the Skyrmionic soliton may collapse [13][14][15][16][17]. If so, at short distances the nucleons are more naturally described by quarks rather than by coherent pions. The quarks cannot collapse to a small size without paying a price of order N c /R in quark kinetic energy. The relevance of quark descriptions inside of the nucleon was also emphasized in [18].
A key observation in this paper is that such constituent quarks are the main origin of the axial charge g A , which is the source of pion fields. If N c constituent quarks totally have a small axial charge, g A ∼ 1, then the problems related to large coherent pions will be solved.
We suggest such a nucleon wavefunction. Most quarks are bound into colored diquarks [19]. For odd N c , that leaves one unpaired quark. We then put that unpaired quark into a wavefunction which is approximately orthogonal to those of the paired quarks. This can be accomplished by making the spatial extent of the unpaired quark larger than that of the paired diquarks: it is "dichotomous". Putting the additional quark into such a wavefunction costs an energy of order Λ QCD , up to logarithms of N c (as we show later). Such a construction results in small self-energies from the pion-nucleon self-interactions, as a result of g A ∼ 1. It is also clear that long-range nucleon-nucleon interactions are no longer strong. This is a minimal modification of the naive non-relativistic quark model of the nucleon. There quarks are paired into diquarks, save for one quark that carries the quantum numbers of the nucleon. It is usually assumed, however, that all of the quarks, paired or not, have the same spatial wavefunction. This gives g A = (N c + 2)/3, and the problems discussed above [7,20].
A trace of the collapsed Skyrmion might appear at a scale size of order 1/f π . This size corresponds to the intrinsic scale of a quantum pion. Since f π ∼ √ N c Λ QCD at large N c , the size of the nucleon shrinks to zero as N c → ∞. We will also show that such a small size naturally arises in a string vertex model of the nucleon, as the root mean square fluctuations in the position of the string vertex. Of course, the contribution of the string vertex to the mass of the nucleon is of order f π , as most of the mass of the nucleon is generated by a cloud of quarks and quark-antiquark pairs surrounding the collapsed Skyrmion, or string vertex. The picture we develop has some aspects in common with bag models [21], and particularly the hybrid descriptions of Brown and Rho [22,23].
The collapsed Skyrmion we conjecture has some features which are similar to the nucleon in the Sakai-Sugimoto model [24]. They suggest that the Skyrmion, computed in the action to leading order in strong coupling, is unstable with respect to collapse. It is stabilized by ω vector meson interaction, which is of higher order correction in strong coupling. It is argued that the nucleon has a size of order 1/( √ g 2 N c Λ QCD ). The methods used to derive this result are questionable at sizes 1/Λ QCD , but at least this shows that there is a small object in such theories. It is quite difficult for ω exchange or other strong coupling effects to stabilize the nucleon once it acquires a size much less than 1/Λ QCD . The basic problem is that mesons will decouple from small objects due to form factor effects. Without form factors, the ω interaction generates a term ∼ 1/R, which resists collapse; form factors convert this into a factor of ∼ R, which is harmless as R shrinks to zero.
The outline of this paper is as follows: In Sec. 2 we review the sigma model and its predictions for nucleon structure. We show that its predictions for the large N c properties of the nucleon are at variance with the large N c limit predicted for a Skyrmion of size ∼ 1/Λ QCD . In Sec. 3 we discuss the general form of nucleon-nucleon interactions in the sigma model and in the Skyrme model. In Sec. 4 we argue that the Skyrmion might collapse to a size scale of order 1/f π [13][14][15][16][17]. In Sec. 5 we discuss the string vertex model of Veneziano [25]. In particular, we argue that the spatial extent of the string vertex is typically of order 1/f π , which is the minimal size for the string vertex. Such a vertex might be thought of as the localization of baryon number. Quarks attached to the ends of strings will nevertheless have a spatial extent of order 1/Λ QCD to avoid paying a huge price in quark kinetic energies. In Sec. 6 we compute the contribution to g A arising from quarks. Using the non-relativistic quark model, we find that if the wavefunction of the quarks paired as diquarks and that of the unpaired quark have a small overlap, then g A is parametrically smaller in N c than the canonical value of g A = (N c + 2)/3. An explicit computation of g A and the magnetic moments for such a variable overlap is carried out in Appendices A and B. In Sec. 7 we present arguments about how, dynamically, such a small overlap might be achieved. In Sec. 8 we summarize our arguments, and discuss how they might be tested through numerical simulations on the lattice.
The Sigma Model
Let us begin by reviewing how the long range nucleon-nucleon interaction depends on N c in the sigma model. The linear sigma model is written in the form
$$ S = \int d^4x\,\Big[\,\tfrac{1}{2}\big(\partial_\mu\sigma\,\partial^\mu\sigma + \partial_\mu\pi^a\,\partial^\mu\pi^a\big) - \tfrac{\mu^2}{2}\big(\sigma^2 + (\pi^a)^2\big) + \tfrac{\lambda}{4}\big(\sigma^2 + (\pi^a)^2\big)^2 + \bar\psi\big(-i\slashed{\partial} + g(\sigma + i\pi\cdot\tau\,\gamma_5)\big)\psi\,\Big]\,, \qquad (1) $$
where ψ denotes the nucleon field. Our metric convention is g 00 = −1. The naive arguments of large N c QCD would have the mass term µ of order Λ QCD , the four meson coupling λ ∼ 1/N c , and the pion nucleon coupling g ∼ √ N c .
Upon extremizing the action, we find that M σ ∼ µ, M π = 0, M N ∼ gµ/ √ λ ∼ N c µ. Therefore the typical large N c assignments of couplings are consistent with the nucleon mass being of order N c , the sigma mass of order one, and a weakly coupled pionic and sigma system. Note that the sigma is strongly coupled to the nucleon, consistent with large N c phenomenology.
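To make these estimates explicit: minimizing the classical potential in Eq. (1), keeping numerical factors only where they are unambiguous, gives
$$ \langle\sigma\rangle = v = \frac{\mu}{\sqrt{\lambda}}\,, \qquad M_\sigma = \sqrt{2}\,\mu \sim \Lambda_{\rm QCD}\,, \qquad M_\pi = 0\,, \qquad M_N = g\,v = \frac{g\,\mu}{\sqrt{\lambda}} \sim \sqrt{N_c}\cdot\sqrt{N_c}\,\mu \sim N_c\,\mu\,, $$
using g ∼ √ N c and λ ∼ 1/N c .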
What about the pion coupling? It is naively of order √ N c but the γ 5 matrix, because of the negative parity of the pion, suppresses pion emission when the momentum of the pion is much less than that of the nucleon. A non relativistic reduction of the pion nucleon interaction gives
$$ g\,\pi^a\,\bar\psi\,\tau^a\gamma_5\,\psi \;\sim\; \frac{g}{2M_N}\,(\partial_\mu\pi^a)\,\bar\psi\,\tau^a\gamma^\mu\gamma_5\,\psi\,. \qquad (2) $$
This equation means that one pion emission is not of order √ N c at long distances, but of order 1/ √ N c . Thus the potential due to one pion exchange is of order 1/N c , and not of order N c .
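The counting behind this statement can be displayed in one line (dropping the spin-isospin structure and numerical factors, and using M N ∼ N c Λ QCD , g ∼ √ N c ):
$$ V_{1\pi}(r)\;\sim\;\Big(\frac{g}{2M_N}\Big)^2\frac{1}{r^3}\;\sim\;\frac{N_c}{(N_c\Lambda_{\rm QCD})^2}\,\frac{1}{r^3}\;\sim\;\frac{1}{N_c}\,\frac{1}{\Lambda_{\rm QCD}^2\,r^3}\,, $$
so at r ∼ 1/Λ QCD the one pion exchange potential is of order Λ QCD /N c . Equivalently, by Goldberger-Treiman, g/(2M N ) ∼ g A /(2f π ), so the strength is ∼ g A 2 /f π 2 ∼ 1/N c for g A ∼ 1.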
One might object that in higher orders this is not true, since one might expect the non-relativistic decoupling of the pions to disappear when one considers two pion exchange. If one considers the diagrams in Fig. 1, their contribution is naively of order N c . The sum of the two diagrams cancels to leading order when the pion momenta q, k are small compared to M , making it again naively of order one. However, when the diagram in Fig. 2 is included, which is also of order one in powers of N c , there is a cancellation with the above two diagrams when the momentum of the pions is small compared to µ. When all is said and done, we conclude that for momentum small compared to the QCD scale, the interaction is of order 1/N c . This corresponds to a suppression of 1/ √ N c for each pion emitted.
[Fig. 1 appears here: the two two-pion-exchange diagrams (a) and (b), with nucleon momentum p and pion loop momenta q, k.]
In fact, Weinberg proves, by an operator transformation on the sigma model action, that this cancellation persists to all orders in perturbation theory, and that soft pion emission is always suppressed by 1/ √ N c for each emitted pion [26]. This conclusion about the strength of the nucleon force is consistent with what we know about nuclear matter. Nuclear matter is weakly bound, and has a binding energy which is of order Λ 2 /M N ∼ 1/N c . Such a parametric dependence on N c is seen in nuclear matter computations where pion exchange is augmented by a hard core interaction [27]. The hard core interaction presumably arises when the momentum transferred is of order Λ QCD , and interactions become of order one in powers of N c . In nuclear matter computations, the hard core essentially tells the nucleons they cannot go there, and its precise form is not too important.
[Fig. 2 appears here: two pions produced by sigma exchange, with nucleon momentum p and pion momenta q, k.]
It is useful to consider the non-linear sigma model, as this is the basis of the Skyrme model treatment. The non-linear sigma model is essentially the infinite sigma particle mass limit of the linear sigma model. It should be valid at distance scales much larger than 1/Λ QCD , which is also the range of validity of the linear sigma model. The action for the non-linear sigma model is
$$ S = \int d^4x\,\Big[\, f_\pi^2\,{\rm tr}\,\partial_\mu U\,\partial^\mu U^\dagger + \bar\Psi\big(-i\slashed{\partial} + M\,\mathcal{U}\big)\Psi \,\Big]\,. \qquad (3) $$
In this equation,
$$ U = e^{\,i\tau\cdot\pi/f_\pi}\,, \qquad (4) $$
and
$$ \mathcal{U} = e^{\,i\tau\cdot\pi\,\gamma_5/f_\pi}\,, \qquad (5) $$
where f π ∼ √ N c Λ QCD , and the nucleon mass M ∼ N c Λ QCD .
Weinberg's trick is to rotate away the interactions in the mass term by a chiral rotation,
$$ \mathcal{U} \;\rightarrow\; V^{-1/2}\,\mathcal{U}\,V^{-1/2} = 1\,. \qquad (6) $$
After this rotation of the nucleon fields, the action becomes
$$ S = \int d^4x\,\Big[\, f_\pi^2\,{\rm tr}\,\partial_\mu U\,\partial^\mu U^\dagger + \bar\Psi\Big(\frac{1}{i}\gamma^\mu\big(\partial_\mu + \gamma_5\,V^{1/2}\partial_\mu V^{-1/2}\big) + M\Big)\Psi \,\Big]\,. \qquad (7) $$
We do not need to know the explicit form of V to extract the essential physics. The point is that the expansion in powers of the pion-nucleon interaction involves a factor of 1/ √ N c for each power of the pion field. This arises because the coupling to the pions is a derivative coupling, and to get the dimensions right each power of the derivative times the pion field must be accompanied by a factor of 1/f π . Notice also that the first term in the expansion in the pion field is of order 1/f π , and couples to the nucleonic axial-vector current. The nucleonic axial-vector current is one for free fermions, and the interactions in this theory, corresponding to higher powers of 1/ √ N c , do not change the parametric dependence upon N c .
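Schematically, suppressing the isospin structure and numerical coefficients of order one, the expansion of the rotated coupling reads
$$ \gamma_5\,V^{1/2}\partial_\mu V^{-1/2} \;\sim\; \gamma_5\,\frac{\tau\cdot\partial_\mu\pi}{f_\pi}\,\Big(1 + \mathcal{O}\big(\pi^2/f_\pi^2\big)\Big)\,, $$
so the leading term couples ∂ µ π to the nucleon axial-vector current with a coefficient of order one, and each additional pion costs a factor 1/f π ∼ 1/( √ N c Λ QCD ).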
While in these models above g A ∼ 1, it is possible to obtain g A ∼ N c by the addition of further terms to the effective Lagrangian. In the linear sigma model, consider adding a term [28,29]
$$ \frac{g}{\Lambda_{\rm QCD}^2}\,\Big[\,\bar\psi_L\,\Phi^\dagger\,\slashed{\partial}\Phi\,\psi_L + \bar\psi_R\,\Phi\,\slashed{\partial}\Phi^\dagger\,\psi_R\,\Big]\,. \qquad (8) $$
Here ψ L,R are chiral projections of the nucleon field, and Φ transforms under SU(2) L × SU(2) R . This term is non-renormalizable, with the coupling having an overall dimension of inverse mass squared. We take this mass scale to be Λ QCD , so the coupling g is dimensionless.
Taking Φ ∼ f π U , this term generates an axial vector coupling of the pion to the nucleon, and g A ∼ N c g; see, e.g., Eqs. (19.5.48), (19.5.49), and (19.5.50) of Ref. [29].
Thus if we allow the addition of non-renormalizable terms to the linear sigma model, g A can be treated as a free parameter. Our point, however, is that if one takes g A ∼ N c , then at large distances, where the nucleon-nucleon interaction is determined by pion exchange, the corresponding interactions are strong, ∼ N c . While certainly logically possible, this does not agree with the phenomenology of nucleon scattering, which sees no long range tails of large magnitude [5].
The General Structure of the Nucleon-Nucleon Force in the Sigma Model and the Skyrme Model
It is useful to compute the general form of the nucleon-nucleon force in both the Skyrme model and the sigma model. Let us first consider the Skyrme model,
$$ S_{\rm Skyrme} = \int d^4x\,\Big[\, \frac{f_\pi^2}{16}\,{\rm tr}\,\partial_\mu U\,\partial^\mu U^\dagger + \frac{1}{32e^2}\,{\rm tr}\,[\partial_\mu U, \partial_\nu U]^2 \,\Big]\,. \qquad (9) $$
The last term in this equation is the Skyrme term. It has a coefficient 1/e 2 that is assumed to be of order N c , and is positive. There is a topological winding number in the theory, and this winding number can be related to the total baryon number by an anomaly. The nucleon corresponds to the solution with winding number one. The size of the baryon is found to be R baryon ∼ 1/(ef π ) ∼ 1/Λ QCD , and is independent of N c . If the Skyrme term were zero or negative, the solution would collapse to zero size.
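The scaling argument behind this size estimate can be written out explicitly (a and b are numbers of order one):
$$ E(R) \;\sim\; a\,f_\pi^2\,R \;+\; b\,\frac{1}{e^2 R}\,, \qquad \frac{dE}{dR}=0 \;\Rightarrow\; R_{\rm baryon}\sim\frac{1}{e\,f_\pi}\sim\frac{1}{\Lambda_{\rm QCD}}\,,\qquad E(R_{\rm baryon})\sim\frac{f_\pi}{e}\sim N_c\,\Lambda_{\rm QCD}\,, $$
using e 2 ∼ 1/N c and f π ∼ √ N c Λ QCD ; if the second term is absent or negative the energy decreases monotonically as R → 0.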
The two-nucleon force is derived by considering a two Skyrmion solution and computing the energy of separation [30]. If we simply redefine scale sizes in the Skyrme action by defining a dimensionless pion field π̃ = π/f π , and rescaling coordinates by Λ QCD , the Skyrme action becomes explicitly proportional to N c when all dimensional quantities are so expressed. Therefore, the potential between two nucleons is of the form
$$ V_{\rm Skyrme}(r) \;\sim\; \frac{N_c}{r}\,F_{\rm Skyrme}(\Lambda_{\rm QCD}\,r)\,. \qquad (10) $$
This is clearly inconsistent with the result one gets from the Weinberg action.
Here the lowest order diagram which contributes at distances much larger than 1/Λ QCD is due to one pion emission. Its strength is of order 1/(r (rf π ) 2 ) ∼ 1/N c . In the Skyrme model this difficulty is evaded by arguing that the strength of the axial coupling is of order N c rather than of order one. Since the derivative of the pion field couples to the axial-vector current, and there are two such vertices in the potential, one can get a long distance force of order N c . Therefore the strong force due to pion exchange at long distances and the large value of the axial coupling in the Skyrme model are related.
It is useful to understand the nature of the potential in the sigma model, due to higher order pion exchanges. First, let us look at the contributions to the potential. Note that if the vertices were not derivatively coupled, each pion exchange would bring in a factor of 1/r. Due to the derivative coupling at the vertices, there are two derivatives for each exchange. There is also a factor of 1/f 2 π . This means the potential predicted by the non-linear sigma model is of the form
$$ V_\sigma(r) = \frac{1}{r}\,\mathcal{V}_\sigma(f_\pi r)\,. \qquad (11) $$
Note that this potential has the scale R ∼ 1/f π ∼ 1/( √ N c Λ QCD ), so that it is much smaller than that of the standard nucleonic Skyrmion.
Perhaps it is easier to think about the pion field. In the linear limit when we treat the nucleon field as a point source, the pion field satisfies the equation
$$ -\nabla^2\pi^a = \frac{1}{f_\pi}\,\nabla_i\,\delta^{(3)}(\vec r\,)\,\sigma_i\tau^a\,, \qquad (12) $$
where σ i is a Pauli matrix. This means that in lowest order, the pion field is of the order of π ∼ f π (1/(rf π )) 2 . Higher corrections give
$$ \pi_\sigma = f_\pi\,G_\sigma(f_\pi r)\,. \qquad (13) $$
This is to be compared to that for the pion field of the Skyrme model,
$$ \pi_{\rm Skyrme} = f_\pi\,G_{\rm Skyrme}(\Lambda_{\rm QCD}\,r)\,. \qquad (14) $$
We see that in the Skyrme case π/f π ∼ 1 for r ∼ 1/Λ QCD , while for the sigma model this occurs at the much smaller distance scale r ∼ 1/f π .
The axial vector coupling g A is estimated from the pion behavior at long distance, g A ∼ f 2 π R 2 , where R is the size of the pion cloud [2]. In the Skyrme case, g A ∼ f 2 π /Λ 2 QCD ∼ N c , while in the sigma model g A ∼ f 2 π /f 2 π ∼ 1.
There are several subtleties in extracting the result for the Skyrmion case. Note that for any solution with a size scale R ≪ 1/Λ QCD , the argument which led to the Skyrme term has broken down. For such solutions, the Skyrme term itself is very small compared to the zeroth order non-linear sigma model contribution at the size scale 1/Λ QCD . This is because in addition to the derivatives, there are four powers of the field, which are very small. Nevertheless, there should be a breakdown of the Skyrmion model at such a scale, arising from QCD corrections of the underlying theory. The sigma model solution sits at a distance scale small compared to where the Skyrmion action is applicable, and one should ask what is the nature of the corrections to the Skyrme model action at such distance scales. In addition, there is good room for skepticism about the Skyrme model treatment of the nucleon. In the Skyrme model, 1/e 2 ∼ N c , but phenomenologically it is of the order 3 × 10 −2 . If we were to naively take parametrically 1/e 2 ∼ 1, the nucleon-nucleon force of the Skyrme model would be parametrically the same as that in the sigma model. The mass would not be correct however, as it would be of order f π . In later sections we will see that this picture has features of what we have in mind: a string vertex whose size is 1/f π , surrounded by a cloud of quarks.
What Might Be Wrong with the Skyrme Term?
What might be wrong with the Skyrme model solution, other than that it is not consistent with the sigma model? Is there any problem with internal inconsistency? When attempts were made to derive the Skyrme term from QCD, one found a Skyrme term which was generated, but also other terms [13][14][15][16]. When these terms were added together and only terms of fourth order in derivatives were retained, the Skyrmion was found to be unstable to collapse. If this tendency to collapse were maintained to all orders, then the Skyrmion might collapse to sizes much less than the QCD distance scale. One could not describe the nucleon within the conventional assumptions of the Skyrme model. (Strictly speaking, the Skyrme model comes from a derivative expansion, and keeping only the lowest order terms is justified only when size scales R ≫ 1/Λ QCD are considered.) One can ask whether or not higher order terms might stabilize the Skyrmion. Following Refs. [13][14][15][16], we postulate that such corrections are generated by a quark determinant in the presence of a background pion field. We might hope that such a description would be valid down to a Skyrmion size of order 1/f π . It is at this scale that high order terms in the pion nucleon sigma model generate quantum corrections which are large. This is also the natural size scale for a pion, since pion-pion interactions are of order 1/N c , and even deep inelastic scattering off of a pion is suppressed by 1/N c . Interactions with other mesons are suppressed by 1/N c . If we assume that the quark-pion interaction is parameterized by a vertex that is pointlike down to a distance scale of order 1/f π , with an interaction strength of order g πQQ ∼ 1/ √ N c , then one finds a contribution to the Skyrme term that is leading order in N c . This is because there are N c quark loops. Evaluating the leading term in the derivative expansion of the pion field, there is the Skyrme term plus two others, that have signs that cause the Skyrmion to collapse. (It should be noted that the intrinsic size scale over which quarks are distributed inside the meson is more likely 1/Λ QCD , and the small apparent size of the pion arises from the nature of interactions of these quarks in the large N c limit, rather than their intrinsic scale of spatial distribution.) Subsequent to this [17], it was argued that in chiral soliton models the chiral soliton is stable against collapse when the full quark determinant is computed. This happens when there are bound fermions in the presence of a nontrivial background field, and the energy of the bound quarks is included. This suggests that the Skyrmion could be metastable if all orders in the fermion determinant are included.
We shall argue below that the Skyrmion at a size scale R ∼ 1/Λ QCD is absolutely unstable. Sufficiently small Skyrmions, R ≪ 1/Λ QCD , always collapse, and they have an energy parametrically small compared to that of the nucleon.
The quark contribution to the non-linear sigma model action modifies the non-linear sigma model by
$$ \delta S = N_c\,\ln\,\det\big(-i\slashed{\partial} + M\,\mathcal{U}\big)\,. \qquad (15) $$
Here the quark mass M is the constituent quark mass. We now use Weinberg's trick to rewrite this as a coupling to an axial-vector background field that is a pure gauge transform of the vacuum:
$$ A^\mu_5 = \frac{1}{i}\,V^{1/2}\partial^\mu V^{-1/2}\,. \qquad (16) $$
Here V 1/2 is a function of the pion field and is a unitary matrix. The determinant becomes
$$ \delta S = N_c\,\ln\,\det\Big(\frac{1}{i}\gamma^\mu\big(\partial_\mu - \gamma_5 A_{\mu 5}\big) + M\Big)\,. \qquad (17) $$
In the limit where M = 0, the fermion determinant is gauge invariant. This means that all functions of A generated by the determinant are gauge invariant, and they vanish when evaluated on an A µ 5 which is a gauge transformation of the vacuum field. Now for fields that are slowly varying, this determinant may be computed by the method of Refs. [13][14][15][16]. This yields the result that in leading order the Skyrmion collapses. We also see that if the Skyrmion is parametrically small compared to 1/Λ QCD , we can ignore the mass term in the fermion propagator, and no potential is generated to resist the collapse of the Skyrmion. Since a Skyrmion with size much less than 1/Λ QCD has an energy, arising from the non-linear sigma model contribution to the action, that is parametrically small compared to N c Λ QCD , the Skyrmion is absolutely unstable.
It should be noted that it would be very difficult to resist the collapse on very general grounds. The collapse is prevented by fields that are singular at short distances. It is very difficult to generate such singular terms on scale sizes much less than 1/Λ QCD , since QCD interactions are typically spread out on a distance scale of order 1/Λ QCD . The exception to this is pion self-interactions, which are presumably special because the pion is a Goldstone boson. The reason for the lack of a short distance singularity on scale sizes much less than 1/Λ QCD is that the nucleonic core is color singlet, and interactions that would produce a 1/r singularity would need to couple to a non-zero color charge. The evasion of this conclusion arises from quark kinetic energies: confining the quarks to a very small size scale would generate a 1/R term.
The String Vertex Model and a Collapsed Skyrmion
If the Skyrmion is unstable against collapse, a reasonable conjecture for its minimum size is where quantum corrections to the non-linear sigma model for the pions are large, at R ∼ 1/f π . This limiting size can be understood from the Skyrme model itself. Recall that the energy of a Skyrmion of size R is
$$ E = \int d^3x\,\frac{f_\pi^2}{16}\,{\rm tr}\,\nabla U\cdot\nabla U^\dagger \;\sim\; f_\pi^2\,R\,. \qquad (18) $$
Here we have ignored the possibility of a Skyrme term, since for small Skyrmions, we have argued there is no such term. The energy of each constituent of the Skyrmion is 1/R, so that the number of quanta in the Skyrmion is
$$ N = f_\pi^2\,R^2\,. \qquad (19) $$
For a Skyrmion of size R ∼ 1/Λ QCD this is N ∼ N c . For the collapsed Skyrmion, where R ∼ 1/f π , N ∼ 1. This is the limit where the quantum nature of the Skyrmion cannot be ignored.
The obvious problem with the collapsed Skyrmion is that it has a size parametrically small compared to 1/Λ QCD . On such size scales, surely quark degrees of freedom are important. Since quarks carry a conserved charge they cannot be collapsed to small sizes without paying a price in kinetic energy E ∼ 1/R, and so to keep the baryon mass from growing larger than N c Λ QCD , the quarks cannot be compressed to smaller than the QCD scale. Therefore if there is some remnant of the collapsed Skyrmion it must include quarks and quark-antiquark pairs at the QCD size scale. It is in these degrees of freedom that the energy of the nucleon must reside. The collapsed Skyrmion can only have an energy of order f π and so does not contribute much to the energy.
How can this picture of the nucleon be consistent with that of the quark model? Imagine that the nucleon is produced by the operator
$$ O_B(x) = \int d^3x_1\cdots d^3x_N\;\Big[\,q_{a_1}(x_1)\,U_{a_1,b_1}(x_1,x)\cdots q_{a_N}(x_N)\,U_{a_N,b_N}(x_N,x)\,\Big]\,\epsilon^{b_1\cdots b_N}\,. \qquad (20) $$
Here, a path ordered phase along some path that connects the quark operator and the position of the baryon is denoted by U (x, y). This operator is the topological baryon number operator of Veneziano [25]. It is shown pictorially in Fig. 3. In this picture, quarks are joined together by lines of colored flux tubes at a central point. The quark operators are at a distance of order 1/Λ QCD away from the central point. We can identify the central point as the place where the baryon number sits. This is natural if we think about hadronizing mesons along the lines of color flux. This happens by production of quark-antiquark pairs, and so it is ambiguous whether to think about the baryon number as centered at the multiple string junction or at the ends of the strings. That one can think about the baryon either as made of quarks or as a topological object is a fundamental dualism of the theory: we can think about the baryon number as being delocalized on quark degrees of freedom, or as localized at the string junction. This is reflected in Cheshire Cat models of the baryon [31,32].
[Fig. 3 appears here: quarks q 1 , . . . , q N attached by strings to a common vertex.]
In fact, it is easy to see that the degree of localization of the string vertex is the same as that of the collapsed Skyrmion. Let us identify the string vertex position with the average center of mass coordinate of the quarks,
$$ \vec R = \frac{1}{N_c}\big(\vec r_1 + \cdots + \vec r_{N_c}\big)\,. \qquad (21) $$
We work in a frame where ⟨ r i ⟩ = 0. The typical dispersion in the position of the center of the string is therefore ⟨R 2 ⟩ ∼ ⟨r 2 1 ⟩/N c = 1/f 2 π . In the Skyrmion picture, one imagines the collapsed Skyrmion as corresponding to the string junction and having a high average density of baryon number in a localized region, a picture that is dual to the quark model description.
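Spelled out, for N c quarks with uncorrelated positions of typical extent ⟨r i 2 ⟩ ∼ 1/Λ QCD 2 (so that the cross terms average to zero),
$$ \langle \vec R^{\,2}\rangle \;=\; \frac{1}{N_c^2}\sum_{i=1}^{N_c}\langle \vec r_i^{\,2}\rangle \;\sim\; \frac{1}{N_c\,\Lambda_{\rm QCD}^2} \;\sim\; \frac{1}{f_\pi^2}\,, $$
which is the same localization scale as the collapsed Skyrmion or string vertex.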
The topological string model generates lines of colored electric flux from the position of the string vertex. This presumably results in linear confinement of the quarks at distances far from the vertex. Close to the vertex, each quark feels a strong color Coulombic interaction that can be computed as the mean field of the color Coulombic fields of the other quarks. The Coulombic energy of all the quarks would be of order N c /R times the 't Hooft coupling, and the kinetic energy for relativistic quarks would be of order N c /R. The quarks sit at R ∼ 1/Λ QCD in order not to make the nucleon energy larger than N c Λ QCD .
The Quark Distributions
The picture above does not directly resolve the problems associated with a large axial-vector coupling or large matrix elements of the vector isospin currents. To understand what happens, we will take the non-relativistic quark model as a starting point. We will consider a matrix element of the nonrelativistic expression of the axial-vector current, q̄ γ 5 γ 3 τ 3 q, which takes the form
$$ \langle N|R_3|N\rangle \;\equiv\; \Big\langle N\Big|\sum_{q=1}^{N_c} I_3^{(q)}\,S_3^{(q)}\Big|N\Big\rangle\,, \qquad (22) $$
where an operator O (q) acts on the q-th quark wavefunction contained in the nucleon wavefunction.
In the nonrelativistic limit, spins characterize the irreducible representations of the Hamiltonian, so wavefunctions can be written as |color⟩ ⊗ |flavor⟩ ⊗ |spin⟩ ⊗ |space⟩. Since the color part is totally antisymmetric, we should totally symmetrize the spin-flavor-space wavefunction.
A frequently used construction of baryon wavefunctions is to use spin and isospin singlet diquark wavefunctions [33]. We will denote the number of diquark pairs by n d . We take a direct product of such diquark states (and an extra quark if N c is odd), then totally symmetrize the spin-flavor-space wavefunction. In this construction, an N c = 2n d baryon is a spin-isospin singlet, while the spin-isospin quantum numbers of an N c = 2n d + 1 baryon are solely determined by the extra quark. There is nothing nontrivial when we compute matrix elements related to spin and isospin operators.
The situation differs for the computation of R 3 . The reason is that neither the diquark nor the nucleon states are eigenstates of R 3 , in contrast to spin and isospin. Below we will see this explicitly in terms of the SU (2N f ) representation of states.
To compute R 3 , it is useful to use representations of the nonrelativistic SU (4) symmetry [34]. The SU (4) algebra is formed by the following fifteen generators
$$ T_a = \sum_q I_a^{(q)}\,, \qquad S_j = \sum_q S_j^{(q)}\,, \qquad R_{aj} = \sum_q I_a^{(q)}\,S_j^{(q)}\,, \qquad (23) $$
where j = 1, 2, 3 and a = 1, 2, 3. The Cartan subalgebra of SU (4) is formed by three generators I 3 , S 3 and R 3 ≡ R 33 , and states are characterized by eigenvalues of these generators and the dimension D of the irreducible representations.
We will denote such a state as |I 3 , S 3 , R 3 ; D . In the following we often omit D as far as it brings no confusions.
As a preparation, let us express our spin-isospin singlet diquark state in terms of SU (4) representations. It can be expressed as
$$ |D\rangle = \frac{1}{2}\Big(\,|u\!\uparrow d\!\downarrow\rangle + |d\!\downarrow u\!\uparrow\rangle - |u\!\downarrow d\!\uparrow\rangle - |d\!\uparrow u\!\downarrow\rangle\,\Big) = |0,0,1/2\rangle_{\rm TS} - |0,0,-1/2\rangle_{\rm TS}\,, \qquad (24) $$
where the subscript TS means that wavefunctions are totally symmetrized. Note that the diquark wavefunction has definite third components of isospin and spin, while it is a mixture of different R 3 eigenstates. This is what makes the computation of ⟨N |R 3 |N ⟩ nontrivial.
We first consider the simpler case of N c = 2n d nucleons. Such nucleons are characterized by states with isospin and spin zero, while they are mixtures of states with different R 3 eigenvalues. We assume that the spatial wavefunctions are common for all quarks, so that the spin-flavor (SF) wavefunctions of our nucleons are obtained by totally symmetrizing a direct product of the diquark spin-flavor wavefunctions:
$$ |N\rangle^{\rm SF}_{2n_d} = \Big[\,|0,0,1/2\rangle - |0,0,-1/2\rangle\,\Big]^{n_d}_{\rm TS}\,. \qquad (25) $$
If we use SU (4) expressions and omit subscripts on isospin and spin components (since they are zero),
$$ |N\rangle^{\rm SF}_{2n_d} = \Big|\tfrac{n_d}{2}\Big\rangle_{\rm TS} - {}_{n_d}C_1\,\Big|\tfrac{n_d}{2}-1\Big\rangle_{\rm TS} + \cdots + (-1)^{n_d}\,\Big|\!-\!\tfrac{n_d}{2}\Big\rangle_{\rm TS} = \sum_{m=0}^{[n_d/2]} (-1)^m\,{}_{n_d}C_m\,\Big[\,\Big|\tfrac{n_d}{2}-m\Big\rangle_{\rm TS} + (-1)^{n_d}\,\Big|\!-\!\big(\tfrac{n_d}{2}-m\big)\Big\rangle_{\rm TS}\,\Big]\,, \qquad (26) $$
where [n d /2] equals n d /2 for n d even and (n d − 1)/2 for n d odd. Now it is easy to see
$$ R_3\,|N\rangle^{\rm SF}_{2n_d} = \sum_{m=0}^{[n_d/2]} (-1)^m\,{}_{n_d}C_m\,\Big(\tfrac{n_d}{2}-m\Big)\Big[\,\Big|\tfrac{n_d}{2}-m\Big\rangle_{\rm TS} + (-1)^{n_d+1}\,\Big|\!-\!\big(\tfrac{n_d}{2}-m\big)\Big\rangle_{\rm TS}\,\Big]\,. \qquad (27) $$
Note that the relative sign in the second term is flipped after the action of R 3 . This gives ⟨N |R 3 |N ⟩ SF 2n d = 0 due to cancellations for each index m.
From the above discussion, we see that the matrix element of R 3 vanishes not because the nucleon states have small R 3 , but because cancellations occur among contributions from different eigenstates. Such cancellations are subtle, and depend strongly on the fact that N c is even. Once we consider N c odd baryons by adding one extra quark with the same spatial wavefunction as the others, this situation completely changes: the terms which avoid the cancellations lead to a large value, ⟨N |R 3 |N ⟩ SF 2n d +1 = (N c + 2)/12. So we come back to the original problem, a large value of g A .
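These statements are easy to check by brute force in the nonrelativistic quark model. The sketch below is our own illustration (the helper names and basis conventions are not from the text): it builds the totally symmetrized spin-flavor state from n d singlet diquarks plus one unpaired u↑ quark, all sharing one spatial wavefunction, and evaluates ⟨R 3 ⟩ numerically; for N c = 3 and 5 it returns 5/12 and 7/12, i.e. (N c + 2)/12.

import itertools
import numpy as np

# Single-quark spin-flavor basis: index = 2*flavor + spin,
# with flavor u=0, d=1 and spin up=0, down=1.
def ket(flavor, spin):
    v = np.zeros(4)
    v[2*flavor + spin] = 1.0
    return v

I3 = np.diag([0.5, 0.5, -0.5, -0.5])   # third component of isospin
S3 = np.diag([0.5, -0.5, 0.5, -0.5])   # third component of spin

u_up, u_dn, d_up, d_dn = ket(0, 0), ket(0, 1), ket(1, 0), ket(1, 1)

# Spin-isospin singlet diquark, Eq. (24)
D = 0.5*(np.kron(u_up, d_dn) + np.kron(d_dn, u_up)
         - np.kron(u_dn, d_up) - np.kron(d_up, u_dn))

def symmetrize(psi, n):
    """Totally symmetrize an n-quark spin-flavor state (all quarks in
    the same spatial wavefunction, i.e. overlap x = 1)."""
    t = psi.reshape((4,)*n)
    out = np.zeros_like(t)
    for perm in itertools.permutations(range(n)):
        out = out + np.transpose(t, perm)
    return out.reshape(-1)

def R3_expectation(n_d):
    n = 2*n_d + 1                      # N_c quarks: n_d diquarks plus one u-up
    psi = u_up
    for _ in range(n_d):
        psi = np.kron(D, psi)
    psi = symmetrize(psi, n)
    IS = I3 @ S3                       # I_3 S_3 acting on a single quark
    val = 0.0
    for q in range(n):                 # R_3 = sum over quarks of I_3^(q) S_3^(q)
        op = np.array([1.0])
        for slot in range(n):
            op = np.kron(op, IS if slot == q else np.eye(4))
        val += psi @ op @ psi
    return val / (psi @ psi)

for n_d in (1, 2):                     # N_c = 3 and 5
    Nc = 2*n_d + 1
    print(Nc, R3_expectation(n_d), (Nc + 2)/12.0)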
However, a comparison of N c odd and even baryons suggests the following way to avoid a large g A . In the above, we always assumed that all quarks occupy the same spatial wavefunction. Here let us see what happens when we adopt a spatial wavefunction for the unpaired quark which is different from that of the quarks paired into diquarks.
We call the spatial wavefunction of the quarks inside the diquarks A( r), and that of the extra quark B( r). If we introduce a quantity x which characterizes the overlap between the wavefunctions A and B,
$$ x \;\equiv\; \big|\,\langle A|B\rangle\,\big| \;=\; \Big|\int d^3r\,A^*(\vec r\,)\,B(\vec r\,)\Big|\,, \qquad (28) $$
then the expectation value of R 3 in the |p ↑ state is
$$ \langle p\!\uparrow\!|R_3|p\!\uparrow\rangle_{\rm SFS} \;=\; \frac{1}{12}\,\frac{(N_c-1)(N_c+6)\,x^2 + 12}{(N_c-1)\,x^2 + 4}\,. \qquad (29) $$
The derivation is a little bit cumbersome, so we give it in Appendix A.
Let us see the physical implications of this result. First, the reason why x 2 , not x, appears is that a permutation exchanging A and B always produces two factors of ⟨A|B⟩. For instance, ⟨AAB|ABA⟩ = x 2 .
For x = 1, the matrix element reproduces the conventional result, (N c + 2)/12, as it should.
On the other hand, for x = 0, when A and B are completely orthogonal to each other, cancellations analogous to those of the N c = 2n d baryon take place in the diquark part, so that R 3 is simply that of the leftover quark, 1/4.
To get g A ∼ N 0 c , x must be of order 1/N c . Note also that g A remains of order N c until the overlap is reduced to x ∼ 1/ √ N c . This means that in order to reduce g A from ∼ N c , the overlap must be very small. This disparity in wavefunctions suggests the term "dichotomous" baryon.
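The scaling with x can be read off numerically from Eq. (29). The short sketch below is our own check, identifying g A = 4⟨R 3 ⟩ as in the nonrelativistic quark model (so that x = 1 gives (N c + 2)/3); it shows that g A still grows linearly with N c for x ∼ 1/ √ N c and settles to a number of order one only for x ∼ 1/N c .

import numpy as np

def R3(Nc, x):
    # Eq. (29): <p up| R_3 |p up> as a function of the spatial overlap x
    return ((Nc - 1)*(Nc + 6)*x**2 + 12) / (12.0*((Nc - 1)*x**2 + 4))

def gA(Nc, x):
    return 4.0*R3(Nc, x)   # nonrelativistic quark-model identification

for Nc in (3, 27, 243):
    print(Nc,
          gA(Nc, 1.0), (Nc + 2)/3.0,        # full overlap: canonical value
          gA(Nc, 1.0/np.sqrt(Nc)),          # x ~ 1/sqrt(Nc): still grows with Nc
          gA(Nc, 1.0/Nc))                   # x ~ 1/Nc: of order one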
Why Small Overlap?
Apparently the above small-g A baryons are not energetically favored in the shell model picture of quarks. But so far we did not take into account the contributions from the fields surrounding the valence quarks. These affect the nucleon self-energy via virtual mesonic loops (polarization effects of the medium). If the axial charge of the valence quarks is ∼ N c , such a large charge induces a big change in the effective mass.
As argued above, we expect that the mass of the baryon is affected by a large value of g A . A leading contribution to the g A -dependent self-energy comes from the vertex ∂ µ π a /f π × g A N̄γ µ γ 5 τ a N , and it generates g 2 A /f 2 π ∼ N c for g A ∼ N c [11] (no additional N c dependence arises from the nucleon propagator). We would expect other vector mesons to generate similar self-energies through coupling to the axial-vector current. From this self-energy dependence and the x 2 dependence of g A , we suggest that self-energy effects generate a term like
H g A ∼ N c | ψ paired | 2 | ψ unpaired | 2 ,(30)
in the effective Hamiltonian for the valence quarks. Let us minimize this effective term. Since there are of order N c paired quarks, deforming their wavefunctions costs a lot of energy. However, deforming the wavefunction of the unpaired quark only costs an energy of order Λ QCD , while the gain in
H g A is ∼ N c .
This deformation is most easily accomplished by having the unpaired quarks exist in the region outside of the paired quarks. The paired quark wavefunction in a string model falls as exp (−κ(rΛ QCD ) 3/2 ) at large distances. If the unpaired quark is excluded, due to its hard core interaction with the paired quarks, from a size scale r ≤ ln 2/3 (N c )/Λ QCD , then g A can be reduced.
How large is the reduction in g A ? If the self-energy is of order g 2 A /N c , then we would expect that when g A ∼ √ N c , the trade off in energy associated with deforming the unpaired quark wavefunction is balanced by self-energy effects. Such a reduction most likely allows for a phenomenologically acceptable large N c limit.
Magnetic moments have been computed in Appendix B, and are proportional to g A . Magnetic interactions will be of order α em g 2 A , so that for sufficiently large N c ∼ 1/α em , these effects would also work toward reducing g A to a value of order 1.
It is also possible that a large value of g A might generate even more singular self-energy terms at large N c , resulting in a greater reduction of g A . For example, there might in principle be effects that contribute to the energy with higher powers than linear in N c when g A ∼ N c . These terms will tend to further reduce the parametric dependence of g A upon N c . We have not been able to find a compelling argument for such effects from strong interactions, though.
We turn next to a discussion of the splitting between the nucleon and the ∆. In a conventional non-relativistic quark model, there is an SU (2N f ) symmetry which requires the N and ∆ masses to be degenerate. This degeneracy is split by the color hyperfine interaction,
$$ \sum_{i\neq j} V_{ss}(\vec r_{ij}) \;\sim\; \frac{\lambda}{N_c}\,\sum_{i\neq j}\frac{\vec S_i\cdot\vec S_j}{M_i M_j}\,\delta(\vec r_{ij})\,; \qquad (31) $$
λ = g 2 N c , and the M i ∼ 1 are constituent quark masses. Assuming all quark masses are the same, the expectation value for a state with spin S is
$$ \Big\langle \sum_{i\neq j} V_{ss}(\vec r_{ij}) \Big\rangle \;\sim\; \frac{\lambda}{N_c}\,\Big[\,S(S+1) - \frac{3}{4}N_c\,\Big]\,\frac{|\phi_{\rm relative}(0)|^2}{M^2}\,. \qquad (32) $$
Masses are split by the first term. If the difference in spins is ∼ 1, as for the nucleon and the ∆, the mass splitting is ∼ 1/N c . This agrees with the Skyrme model, identifying the ∆ as the first spin excitation of the nucleon. More general arguments can be found in [11,35].
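Explicitly, taking the difference of Eq. (32) between S = 3/2 and S = 1/2, and recalling that λ = g 2 N c is of order one while |φ relative (0)| 2 /M 2 ∼ Λ QCD ,
$$ M_\Delta - M_N \;\sim\; \frac{\lambda}{N_c}\,\Big[\frac{15}{4}-\frac{3}{4}\Big]\,\frac{|\phi_{\rm relative}(0)|^2}{M^2} \;\sim\; \frac{\Lambda_{\rm QCD}}{N_c}\,. $$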
In contrast, there is no SU (2N f ) symmetry in the model of a Dichotomous Baryon. The masses of the nucleon and the ∆ are not nearly degenerate, but split ∼ Λ QCD . This arises from polarization effects via the axial coupling of the ∆, g ∆A .
Consider what a dichotomous ∆, with I 3 = S 3 = 3/2, is like. This can be obtained by breaking apart one diquark pair:
$$ |D\rangle \;\longrightarrow\; |D'\rangle = |u\!\uparrow u\!\uparrow\rangle = |1,1,1/2\rangle_{\rm TS}\,. \qquad (33) $$
Suppose that these |u ↑ occupy the same spatial wavefunctions as those in the diquark pairs. Then g ∆A is ∼ N c , and ∆ has a large vacuum polarization of ∼ N c . As with the unpaired quark in the nucleon, this can be avoided by putting both u quarks into a spatial wavefunction which is orthogonal to that of the paired quarks. This costs an excitation energy ∼ Λ QCD , not ∼ Λ QCD /N c .
If M ∆ − M N ∼ Λ QCD and g ∆N A ∼ 1, the width of the ∆ is
$$ \Gamma_\Delta \;\sim\; \frac{g_{\Delta N A}^2}{f_\pi^2}\,\Big(\frac{M_\Delta^2 - M_N^2}{M_\Delta}\Big)^3 \;\sim\; \frac{g_{\Delta N A}^2}{N_c}\,\Lambda_{\rm QCD}\,. \qquad (34) $$
Thus whether g ∆N A is ∼ 1 or ∼ √ N c , the ∆ remains a narrow resonance at large N c . We note that in QCD, the ∆ is not broad, Γ ∆ ∼ 118MeV [36]. By the Adler-Weisberger relation [37], g A is of the same order as g ∆N A [38].
Summary, and tests on the lattice
In this paper we considered the properties of baryons for a large number of colors. It is certainly possible that our analysis only applies for an unphysically large value of N c .
The question of how g A grows with N c has important implications for nuclear physics, though. The central conundrum of nuclear physics is that the binding energy of nuclear matter is much smaller than any other mass scale in QCD. This is usually explained as the result of a nearly exact cancellation between repulsion, from the exchange of ω-mesons, and attraction, from σ-exchange.
In QCD, the σ meson is light, ∼ 600 MeV, and very broad. This may be because for three colors, the σ meson is really a state involving two quark anti-quark pairs [39].
For a large number of colors, though, the lightest scalar meson only has a single quark anti-quark pair, and is probably heavy, with a mass significantly greater than that of the ω-meson [40]. In this case, there cannot be any approximate cancellation between ω and σ exchange: for distances greater than the inverse mass of the ω, there is just repulsion, ∼ N c .
For distances greater than ∼ log(N c )/m ω , then, the only interaction is due to pions. This can be analyzed in chiral perturbation theory [27], where two pion exchange gives a result ∼ g 2 A . If g A ∼ N c , this looks too large, but because of the cancellations of Dashen and Manohar [11], the nucleon-nucleon potential is only ∼ N c .
If, however, the present analysis is correct, and g A does not grow with N c , then the nucleon-nucleon potential is much smaller, ∼ 1/N c . Thus having a value of the axial coupling which does not grow with N c may help understand why nuclear matter is so weakly bound. The price we have to pay is that we may have to give up the elegant contracted spin-flavor SU (2N f ) symmetry which was derived under the assumption of g A ∼ N c [11].
One glaring shortcoming of our analysis is that we only consider two light flavors. An extension of our diquark based construction to the three flavor case is not straightforward. While diquarks behave as singlets for two flavors, they are anti-triplets in the SU (3) flavor representation. This means that diquarks carry charges under the SU (3) flavor symmetry, which should be cancelled to reduce Goldstone boson clouds. Thus we have to combine diquarks carefully, or instead it may be better to look for other basic ingredients which play roles similar to diquarks in the two flavor theory.
In this paper, we implicitly took the view that the strange quark is heavy enough to suppress kaon clouds, so we did not try to reduce the nucleon-kaon axial couplings. An obvious problem is that this treatment badly breaks the SU (3) flavor symmetry, which explains mass splittings among octet or decuplet baryons by regarding the strange quark mass as a perturbation. To reduce these gaps, we plan to study the SU (3) chiral limit, which will be discussed elsewhere.
On the other hand, the questions which we raise for two flavors can be addressed, at least indirectly, by numerical simulations on the lattice. We thus conclude by discussing these results, and suggest further study.
The spectrum of baryons has been studied on the lattice [41]. While the simulations are for quark masses with pions which are significantly heavier than the physical pion mass, there are striking differences from the observed spectrum of baryons in QCD. In particular, there are several states which are present in QCD, but not on the lattice with heavy quarks. Notably, this includes the Roper resonance N (1440), as well as other states. This is a puzzle for a non-relativistic, constituent quark model.
In this paper we have not considered baryon excitations, and so have not addressed the problem of the Roper resonance, or other similar states. We find it intriguing, however, that at present results from the lattice appear to demonstrate that some states in the baryon spectrum are very sensitive to the chiral limit.
Lattice simulations have also measured the axial charge of the baryon [42]. These results show that even for very heavy pion masses, m π ∼ 0.7 GeV, the axial charge of the nucleon is much smaller than the value of the constituent quark model, g A ∼ 1.2. For lighter pion masses, the axial charge decreases to a value near one. Again, such a sensitivity of the axial charge to the chiral limit is unexpected in a non-relativistic, constituent quark model.
Besides simulations with three colors, it would also be of interest to simulate baryons for five colors [43]. (It is necessary to take the number of colors to be odd, so that the lightest nucleon has nonzero spin.) Even in the quenched approximation, it would be interesting to know if the axial charge is close to the value in the non-relativistic quark model, (N c + 2)/3 = 7/3, or to unity.
A Computation of g A
The purpose of this appendix is to reproduce the well-known result ⟨R 3 ⟩ = 1/4 × (N c + 2)/3 for N c odd nucleons composed of nonrelativistic quarks occupying the same spatial wavefunction. We will also extend this result to the case with different spatial wavefunctions.
If we assume that all quarks have the same spatial wavefunction, we have only to completely symmetrize the spin-flavor wavefunctions to satisfy the Pauli principle. Here we will consider |p↑⟩ which, in our construction, takes the form of Eq. (A.1). Using N c = 2n d + 1, we reproduce the well-known result, ⟨p↑|R 3 |p↑⟩ SF /⟨p↑|p↑⟩ SF = (N c + 2)/12.
Next we extend the results slightly to the case where all the diquark quarks occupy the same spatial wavefunction, A( r), while the extra quark occupies a different spatial wavefunction, B( r). In such a case, it is no longer useful to treat spin-flavor and space separately. Rather, we will totally symmetrize spin-flavor-space (SFS) wavefunctions, expressing the space dependence explicitly as |u↑, A⟩, |u↑, B⟩, and so on.
Here we introduce a quantity x which characterizes the overlap between wavefunction A and B,
$$ x \;\equiv\; \big|\,\langle u\!\uparrow\!, A\,|\,u\!\uparrow\!, B\rangle\,\big|\,. \qquad ({\rm A}.7) $$
We will consider the following nucleon states. We distinguish |u↑, A⟩ and |u↑, B⟩, so that, compared to the previous case, the degeneracy factor D decreases by a factor 1/(n d − m + 1) while the number of independent states C increases by a factor (n d − m + 1).
Now, to see how a nonzero overlap of A and B arises, let us take the inner product of the bra and ket in (A.9):
$$ \Big(\langle u\!\uparrow\!, A;\cdots;u\!\uparrow\!, B| + \ldots\Big)\Big(|u\!\uparrow\!, A;\cdots;u\!\uparrow\!, B\rangle + \ldots\Big)\,. $$
The first term comes from diagonal matrix elements, while the second comes from off-diagonal terms. (Perhaps the simplest way to determine the coefficient of x 2 is to note that x = 1 must reproduce the previous result (A.4).)
The remaining calculations are just a repetition of the previous ones. The normalization factor ⟨p↑|p↑⟩ SFS is given in Eq. (A.11), and the expectation value of R 3 in Eq. (A.12).
B Magnetic Moments
The purpose of this section is to give a relationship between R 3 and magnetic moments:
$$ \mu_N = \Big\langle N\Big|\sum_{u}\mu_u\,S_3^{(u)} + \sum_{d}\mu_d\,S_3^{(d)}\Big|N\Big\rangle\,. \qquad ({\rm B}.1) $$
We assume |N⟩ is the spin-↑ state in the following. By introducing isospin projectors, the sums over u and d quarks can be extended to all quarks, so we can rewrite the sum in terms of the total spin and R 3 operators,
$$ \mu_N = \Big\langle N\Big|\sum_q \mu_u\,S_3^{(q)}\Big(\frac{1}{2}+I_3^{(q)}\Big) + \sum_q \mu_d\,S_3^{(q)}\Big(\frac{1}{2}-I_3^{(q)}\Big)\Big|N\Big\rangle = \frac{\mu_u+\mu_d}{2}\,\Big\langle N\Big|\sum_q S_3^{(q)}\Big|N\Big\rangle + (\mu_u-\mu_d)\,\big\langle N\big|R_3\big|N\big\rangle = \frac{(\mu_u+\mu_d) \pm g_A\,(\mu_u-\mu_d)}{4}\,, \qquad ({\rm B}.2) $$
where the + (−) sign applies for protons (neutrons).
Assuming the constituent masses of the u and d quarks are almost the same, and using Q = I 3 + B/2 = I 3 + 1/(2N c ), we denote the quark magnetic moments by Qμ̄ ≡ Qe/(2M q ):
$$ \mu_u = \frac{N_c+1}{2N_c}\,\bar\mu\,, \qquad \mu_d = -\,\frac{N_c-1}{2N_c}\,\bar\mu\,. \qquad ({\rm B}.3) $$
Therefore the proton and neutron magnetic moments are
$$ \mu_{p,n} = \frac{\bar\mu}{4}\,\Big[\frac{1}{N_c} \pm g_A\Big]\,. \qquad ({\rm B}.4) $$
For conventional baryon wavefunctions, N c = 3 and g A = 5/3, which gives µ n /µ p = −2/3 (exp:−0.685). For our wavefunction with x 2 = 0, g A = 1 and µ n /µ p = −1/2 (−1) for N c = 3 (∞).
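These numbers follow directly from Eq. (B.4):
$$ \frac{\mu_n}{\mu_p} \;=\; \frac{1/N_c - g_A}{1/N_c + g_A}\,, \qquad (N_c,g_A)=(3,\tfrac53)\;\Rightarrow\;-\tfrac23\,, \qquad (3,1)\;\Rightarrow\;-\tfrac12\,, \qquad (\infty,1)\;\Rightarrow\;-1\,. $$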
Fig. 1. (a) One of the two pion exchange diagrams; (b) the crossed diagram.
Fig. 2. Two pions produced by sigma exchange.
Fig. 3. The topological baryon number operator of Veneziano.
One of us thanks Tom Cohen and Dmitri Diakonov for heated discussions on this subject, and Ismail Zahed for critical observations. He gives enormous thanks to Jean-Paul Blaizot and Maciej Nowak, with whom he had many discussions in the early stages of this project; he also thanks the Theoretical Physics Division of CEA-Saclay, where this work was initiated, for its hospitality. We also thank Y. Aoki, K. Hashimoto, D. K. Hong, D. Kaplan, M. Karliner, K. Kubodera, S. Ohta, M. Rho, S. Sasaki, and M. Savage for discussions and comments. T. Kojo is supported by the Special Postdoctoral Research Program of RIKEN; he also thanks the Asia Pacific Center for Theoretical Physics for their hospitality during a visit in June, 2010. This manuscript has been authored under Contract No. DE-AC02-98CH0886 with the US Department of Energy. The research of Y. Hidaka is supported by the Grant-in-Aid for the Global COE Program "The Next Generation of Physics, Spun from Universality and Emergence" from the Ministry of Education, Culture, Sports, Science and Technology (MEXT) of Japan.
As in the text, we will omit the third components of spin and isospin of the diquark wavefunction for notational simplicity. The expression is
$$ |p\!\uparrow\rangle^{\rm SF} = \Big[\,|0,0,1/2\rangle - |0,0,-1/2\rangle\,\Big]^{n_d}_{\rm TS} \otimes |u\!\uparrow\rangle\,. \qquad ({\rm A}.1) $$
First we have to give a correct normalization. It is crucial to count the number of independent states contained in the maximally symmetrized R 3 eigenstates. For instance, |u↑u↑u↑⟩ TS has only one independent state, and a degeneracy factor 3! for symmetrization. On the other hand, |u↑u↑d↑⟩ TS has three independent states, with a correspondingly smaller degeneracy factor. From this counting the normalization factor of |p↑⟩ SF can be computed. The computation of ⟨p↑|R 3 |p↑⟩ SF is then straightforward: we have only to multiply an eigenvalue (n d /2 − m + 1/4) when we take the sum over m; using N c = 2n d + 1, this reproduces ⟨p↑|R 3 |p↑⟩ SF /⟨p↑|p↑⟩ SF = (N c + 2)/12.
For the case with two spatial wavefunctions, the totally symmetrized state is organized as a sum over m, with coefficients involving the number of independent states C and the degeneracy factor D introduced above; schematically,
$$ \ldots \;=\; D(n_d-m;\,n_d-m;\,m;\,m)\times\Big[\,|u\!\uparrow\!,A;\cdots;u\!\uparrow\!,B\rangle + \ldots\Big]\,. \qquad ({\rm A}.9) $$
The inner product of the bra and ket in (A.9) gives
$$ \Big(\langle u\!\uparrow\!,A;\cdots;u\!\uparrow\!,B| + \ldots\Big)\Big(|u\!\uparrow\!,A;\cdots;u\!\uparrow\!,B\rangle + \ldots\Big) \;=\; C(n_d-m;\,n_d-m;\,m;\,m)\times\Big[\,1 + x^2\,(n_d-m)\,\Big]\,. \qquad ({\rm A}.10) $$
The normalization factor is then
$$ \langle p\!\uparrow\!|p\!\uparrow\rangle_{\rm SFS} = (2n_d+1)!\sum_{m=0}^{n_d}\big({}_{n_d}C_m\big)^2\,D(n_d-m;\,n_d-m;\,m;\,m)\,\Big[\,1+x^2(n_d-m)\,\Big] = \frac{(2n_d+1)!}{(n_d!)^2}\times\frac{(n_d+1)(x^2 n_d+2)}{2}\,, \qquad ({\rm A}.11) $$
which of course reproduces the previous results for x = 1. The expectation value of R 3 is
$$ \langle p\!\uparrow\!|R_3|p\!\uparrow\rangle_{\rm SFS} = (2n_d+1)!\sum_{m=0}^{n_d}\big({}_{n_d}C_m\big)^2\,D(n_d-m;\,n_d-m;\,m;\,m)\,\Big[\,1+x^2(n_d-m)\,\Big]\Big(\frac{n_d}{2}-m+\frac{1}{4}\Big) = \frac{(2n_d+1)!}{(n_d!)^2}\times\frac{(n_d+1)(2n_d^2x^2+7n_dx^2+6)}{24}\,. \qquad ({\rm A}.12) $$
Finally, taking into account the normalization factor, we get
$$ \frac{\langle p\!\uparrow\!|R_3|p\!\uparrow\rangle_{\rm SFS}}{\langle p\!\uparrow\!|p\!\uparrow\rangle_{\rm SFS}} = \frac{2n_d^2x^2+7n_dx^2+6}{12\,(n_d x^2+2)}\,, \qquad ({\rm A}.13) $$
which, with N c = 2n d + 1, is Eq. (29) of the text.
G. 't Hooft, Nucl. Phys. B 72 (1974) 461.
G. S. Adkins, C. R. Nappi and E. Witten, Nucl. Phys. B 228 (1983) 552.
T. H. R. Skyrme, Proc. Roy. Soc. Lond. A 262 (1961) 237;
I. Zahed and G. E. Brown, Phys. Rep. 142 (1986) 1.
M. L. Goldberger and S. B. Treiman, Phys. Rev. 111 (1958) 354.
N. Ishii, S. Aoki and T. Hatsuda, Phys. Rev. Lett. 99 (2007) 022001; PoS LATTICE2008 (2008) 155.
S. R. Beane et al. [NPLQCD Collaboration], Phys. Rev. D 81 (2010) 054505;
S. R. Beane, W. Detmold, K. Orginos and M. J. Savage, arXiv:1004.2935 [hep-lat].
G. Karl and J. E. Paton, Phys. Rev. D 30 (1984) 238.
M. Kutschera, C. J. Pethick and D. G. Ravenhall, Phys. Rev. Lett. 53 (1984) 1041;
I. R. Klebanov, Nucl. Phys. B 262 (1985) 133;
I. Zahed, A. Wirzba, U. G. Meissner, C. J. Pethick and J. Ambjorn, Phys. Rev. D 31 (1985) 1114;
A. S. Goldhaber and N. S. Manton, Phys. Lett. B 198 (1987) 231;
N. S. Manton, Comm. Math. Phys. 111 (1987) 469;
A. D. Jackson, A. Wirzba and N. S. Manton, Nucl. Phys. A 495 (1989) 499;
H. Forkel, A. D. Jackson, M. Rho, C. Weiss, A. Wirzba and H. Bang, Nucl. Phys. A 504 (1989) 818;
M. Kugler and S. Shtrikman, Phys. Lett. B 208 (1988) 491;
Phys. Rev. D 40 (1989) 3421;
R. A. Battye and P. M. Sutcliffe, Phys. Rev. Lett. 79 (1997) 363; Rev. Math. Phys. 14 (2002) 29;
Nucl. Phys. B 705 (2005) 384;
Phys. Rev. C 73 (2006) 055205.
For example, G. E. Brown, "Unified Theory of Nuclear Models and Forces" (North-Holland, 1971).
E. Witten, Nucl. Phys. B 160 (1979) 57.
R. F. Dashen and A. V. Manohar, Phys. Lett. B 315 (1993) 425; ibid. 315 (1993) 438;
E. E. Jenkins and R. F. Lebed, Phys. Rev. D 52 (1995) 282.
D. B. Kaplan and A. V. Manohar, Phys. Rev. C 56 (1997) 76;
D. B. Kaplan and M. J. Savage, Phys. Lett. B 365 (1996) 244.
R. MacKenzie, F. Wilczek and A. Zee, Phys. Rev. Lett. 53 (1984) 2203.
I. Aitchison, C. Fraser, E. Tudor and J. Zuk, Phys. Lett. B 165 (1985) 162.
I. J. R. Aitchison, C. M. Fraser and P. J. Miron, Phys. Rev. D 33 (1986) 1994.
I. J. R. Aitchison and C. M. Fraser, Phys. Lett. B 146 (1984) 63.
G. Ripka and S. Kahana, Phys. Lett. B 155 (1985) 327.
D. Diakonov, V. Y. Petrov and P. V. Pobylitsa, Nucl. Phys. B 306 (1988) 809.
R. L. Jaffe and F. Wilczek, Phys. Rev. Lett. 91 (2003) 232003;
A. Selem and F. Wilczek, hep-ph/0602128.
A. Kakuto and F. Toyoda, Prog. Theor. Phys. 66 (1981) 2307.
A. Chodos, R. L. Jaffe, K. Johnson, C. B. Thorn and V. F. Weisskopf, Phys. Rev. D 9 (1974) 3471.
G. E. Brown and M. Rho, Phys. Lett. B 82 (1979) 177;
G. E. Brown, M. Rho and V. Vento, ibid. 84 (1979) 383.
A. W. Thomas, S. Theberge and G. A. Miller, Phys. Rev. D 24 (1981) 216.
T. Sakai and S. Sugimoto, Prog. Theor. Phys. 113 (2005) 843;
K. Hashimoto, T. Sakai and S. Sugimoto, ibid. 120 (2008) 1093; ibid. 122 (2009) 427.
G. C. Rossi and G. Veneziano, Nucl. Phys. B 123 (1977) 507.
S. Weinberg, Phys. Rev. Lett. 18 (1967) 188.
N. Kaiser, S. Fritsch and W. Weise, Nucl. Phys. A 724 (2003) 47.
D. Kaplan, private communication.
S. Weinberg, "The Quantum Theory of Fields" (Cambridge University Press, New York, 1996), Vol. II.
A. Jackson, A. D. Jackson and V. Pasquier, Nucl. Phys. A 432 (1985) 567;
R. Vinh Mau, M. Lacombe, B. Loiseau, W. N. Cottingham and P. Lisboa, Phys. Lett. B 150 (1985) 259;
M. Oka, Phys. Rev. C 36 (1987) 720;
H. Yamagishi and I. Zahed, Phys. Rev. D 43 (1991) 891;
V. Thorsson and I. Zahed, ibid. 45 (1992) 965.
S. Nadkarni, H. B. Nielsen and I. Zahed, Nucl. Phys. B 253 (1985) 308.
S. Nadkarni and I. Zahed, Nucl. Phys. B 263 (1986) 23.
For example, J. J. J. Kokkedee, "The quark model" (W. A. Benjamin, 1969).
For example, H. Georgi, "Lie algebras in particle physics" (Perseus Books, 1999).
C. Carone, H. Georgi and S. Osofsky, Phys. Lett. B 322 (1994) 227.
C. Amsler et al. (Particle Data Group), Phys. Lett. B 667 (2008) 1.
S. L. Adler, Phys. Rev. 140 (1965) B736;
W. I. Weisberger, Phys. Rev. 143 (1966) 1302.
F. D. Mazzitelli and L. Masperi, Phys. Rev. D 35 (1987) 368;
M. Uehara, Prog. Theor. Phys. 80 (1988) 768;
M. Uehara, A. Hayashi and S. Saito, Prog. Theor. Phys. 85 (1991) 181;
M. Soldate, Int. J. Mod. Phys. E 1 (1992) 301;
W. Broniowski, Nucl. Phys. A 580 (1994) 429.
M. G. Alford and R. L. Jaffe, Nucl. Phys. B 578 (2000) 367;
J. R. Pelaez, Phys. Rev. Lett. 92 (2004) 102001;
J. R. Pelaez and G. Rios, ibid. 97 (2006) 242002;
R. L. Jaffe, Prog. Theor. Phys. Supp. 168 (2007) 127.
H. W. Lin et al. [Hadron Spectrum Collaboration], Phys. Rev. D 79 (2009) 034502;
C. Gattringer, C. Hagen, C. B. Lang, M. Limmer, D. Mohler and A. Schafer, ibid. 79 (2009) 054501;
J. M. Bulava et al., ibid. 79 (2009) 034505;
J. Bulava et al., ibid. 82 (2010) 014507, arXiv:1004.5072 [hep-lat].
G. P. Engel, C. B. Lang, M. Limmer, D. Mohler and A. Schafer, ibid. 82 (2010) 034505; note, however, that the axial charge may be especially sensitive to finite size effects in the chiral limit: see, e.g., R. L. Jaffe, Phys. Lett. B 529 (2002) 105.
T. Yamazaki et al. [RBC+UKQCD Collaboration], Phys. Rev. Lett. 100 (2008) 171602;
H. W. Lin, T. Blum, S. Ohta, S. Sasaki and T. Yamazaki, Phys. Rev. D 78 (2008) 014505;
T. Yamazaki et al., ibid. 79 (2009) 114505.
S. Ohta, private communication.
Strategy to find the two Λ(1405) states from lattice QCD simulations
20 Feb 2012 (Dated: February 21, 2012)
A. Martínez Torres
Yukawa Institute for Theoretical Physics, Kyoto University, 606-8502 Kyoto, Japan
M. Bayar
Departamento de Física Teórica and IFIC, Centro Mixto, Institutos de Investigación de Paterna, Universidad de Valencia-CSIC, Aptdo. 22085, 46071 Valencia, Spain
Department of Physics, Kocaeli University, 41380 Izmit, Turkey
D. Jido
Yukawa Institute for Theoretical Physics, Kyoto University, 606-8502 Kyoto, Japan
J-PARC Branch, KEK Theory Center, Institute of Particle and Nuclear Studies, High Energy Accelerator Research Organization (KEK), Shirakata 203-1, Tokai, Ibaraki 319-1106, Japan
E. Oset
Departamento de Física Teórica and IFIC, Centro Mixto, Institutos de Investigación de Paterna, Universidad de Valencia-CSIC, Aptdo. 22085, 46071 Valencia, Spain
Theoretical studies within the chiral unitary approach, and recent experiments, have provided evidence of the existence of two isoscalar states in the region of the Λ(1405). In this paper we use the same chiral approach to generate energy levels in a finite box. In a second step, assuming that these energies correspond to lattice QCD results, we devise the best strategy of analysis to obtain the two states in the infinite volume case, with sufficient precision to distinguish them. We find out that using energy levels obtained with asymmetric boxes and/or with a moving frame, with reasonable errors in the energies, one has a successful scheme to get the two Λ(1405) poles.
I. INTRODUCTION
The history of the Λ(1405) as a composite state of meson baryon, dynamically generated from the meson baryon interaction, is rather long, starting from the works of Refs. [1,2]. Early works using the cloudy bag model also reached similar conclusions [3]. The advent of chiral unitary theory, combining chiral dynamics and unitarity in coupled channels, brought new light into this issue and the Λ(1405) was one of the cleanest examples of states dynamically generated within this approach [4][5][6]. Hints that there could be two states rather than one had also been reported using the cloudy bag model [7] and the chiral unitary approach [8][9][10]. A qualitative step forward was done in Ref. [11], where two different versions of the approach were used, the two poles remained, and their origin was investigated. It was found that in an SU (3) symmetric theory there were two degenerate octets and a singlet of dynamically generated resonances, but with the breaking of SU(3) the degeneracy was removed, one octet with isospin I = 0 moved to become the Λ(1670) and the other one moved close to the singlet, producing two poles close by in the region of the Λ(1405). One of the poles appears at energies around 1420 MeV, couples mostly toKN and has a small width of around 30 MeV. The other pole is around 1395 MeV, couples mostly to πΣ and is much wider, around 120 or 250 MeV depending on the model. After the work of Ref. [11], all further works on the chiral unitary approach have corroborated the two poles, with remarkable agreement for the pole at higher energy and larger variations for the pole at lower energies [12][13][14][15][16][17][18][19].
Suggestions of experiments to confirm this finding were made, and it was shown that one should not expect to see two peaks in the cross sections, but rather different shapes in different reactions. In this sense, a suggestion was made to look for the Λ(1405) peak in the K − p → γπΣ reaction [20], where the γ would be radiated from the initial state, making the K − p system to lose energy and go below threshold and then excite the high energy state of the Λ(1405), to which it couples most strongly. This reaction was not made, although it is planned for JPARC [21], but a similar one, where the photon was substituted by a pion, was implemented in Ref. [22] studying the K − p → π 0 π 0 Σ 0 reaction at p K = 514 MeV/c -750 MeV/c. A neat and narrow peak was seen at √ s = 1420 MeV, which was analyzed in Ref. [23] and interpreted in terms of the high energy pole of the Λ(1405). More recently it was noticed that old data on the K − d → πΣ n reaction from Ref. [24] produced a peak in the πΣ spectrum around √ s = 1420 MeV, with also a small width. These data were well reproduced in Ref. [25] within the chiral unitary approach and multiple scattering, and once again it was shown that it gave support to the existence of the second pole of the Λ(1405). It was shown there that the reaction proceeded with kaons in flight but not for stopped kaons, because the background from single scattering was too large in this latter case, obscuring the signal of the resonance that stems from double scattering. Even then, it was shown in Ref. [26] that kaons from the DAFNE facility, coming from the decay of the φ, would also be suited to search for this resonance if neutrons were measured in coincidence in order to reduce the background. The search for reactions where the Λ(1405) is produced has continued, showing that, as predicted, different reactions have different shapes. In this sense there have been recent photoproduction experiments [27,28] and proton induced experiments [29,30] where the shapes are indeed different and the peaks appear at lower energies, around 1405 MeV, as the nominal mass. There are also theoretical studies for these reactions where the peaks appear around these energies, and the larger contribution of the lower energy state that couples mostly to πΣ is mostly responsible for it [31][32][33][34][35].
Inasmuch as chiral dynamics is a good representation of QCD at low energies, the predictions of the chiral unitary approach for the Λ(1405) stand on firm ground.
Yet, it would also be very interesting to have these predictions confirmed by lattice QCD simulations. In this sense, the determination of hadron spectra is one of the challenging tasks of lattice QCD and many efforts are being devoted to this problem [36][37][38][39][40][41][42][43][44][45][46][47][48][49][50][51][52][53][54], some of them devoted in particular to the search for the Λ(1405) [55][56][57][58][59][60]. A review on the Λ(1405) and the attempts to see it from different points of view is given in Ref. [61]. In some works the "avoided level crossing" is taken as a signal of a resonance, but this criterion has been shown to be insufficient for resonances with a large width [62][63][64]. Sometimes the lattice spectra at finite volume are directly associated with the energies of the states in infinite volume, invoking a weak volume dependence of the results, as done recently in the search for the Λ(1405) resonance [65]. A more accurate method consists of using Lüscher's approach for resonances with one decay channel. The method allows one to reproduce the phase shifts for the decay channel from the discrete energy levels in the box [66,67]. This method has recently been simplified and improved in Ref. [64] by keeping the full relativistic two-body propagator (Lüscher's approach keeps the imaginary part of this propagator exactly but makes approximations on the real part) and by extending the method to two or more coupled channels. The method has also been applied in Ref. [68] to obtain finite-volume results from the Jülich model for the meson-baryon interaction, including spectra for the Λ(1405) in a finite volume, and in Ref. [69] to study the interaction of the DK and ηD_s system, where the D_{s0}^*(2317) resonance is dynamically generated from the interaction of these particles [70][71][72][73]. The case of the κ resonance in the Kπ channel is also addressed along the lines of Ref. [64] in Ref. [74].
In the work of Ref. [64], the inverse problem of getting phase shifts and resonances from lattice results using two channels was addressed, paying special attention to the evaluation of errors and the precision needed on the lattice results to obtain phase shifts and resonance properties with a desired accuracy. Further work along these lines is done in Ref. [74]. The main problem encountered is that the levels obtained from the box of a certain size range do not cover all the desired energy region that one would like to investigate. Several suggestions are given in order to produce extra levels, like using twisted boundary conditions or asymmetric boxes [64]. These are, however, not free of problems since it is unclear whether a full twisting can be done in actual QCD simulations including sea quarks, and the asymmetric boxes have the problem of the possible mixing of different partial waves. Another alternative is to evaluate levels for a system in a moving frame as done in Ref. [53], but this also poses problems of mixing in principle. The generalization of Lüscher's approach to the moving frame is done in Refs. [75][76][77][78][79], and it provides a convenient framework for lattice calculations since new levels can be obtained without enlarging the size of the box, with an economy in computational time. It is then quite convenient to carry out simulations using effective theories in a finite volume, preparing the grounds for future lattice calculations, trying to find an optimal strategy on which configurations to evaluate in order to obtain the desired observables in the infinite volume case.
The case of extracting the Λ(1405) parameters is especially challenging, particularly because two resonances must be found which are not too far from each other, which means that extra precision will be demanded of the lattice results. Furthermore, the two poles are not to be seen in the πΣ phase shifts since, as mentioned before, different amplitudes give different weight to the two poles, and the πΣ phase shifts alone provide insufficient information. The other reason is that the chiral unitary approach tells us that the two states couple strongly to K̄N and πΣ, so the use of the two channels in the analysis is mandatory, and the use of one channel, as in Lüscher's approach, is bound to produce incorrect results. In view of this, we face the problem using the two channels explicitly in the analysis and produce amplitudes in the coupled channels, from which we can extract the pole positions in the complex plane by means of an analytical continuation of these amplitudes. Even then, the problem is subtle because, using standard periodic boundary conditions and a wide range of lattice volumes, there is a gap in the energy levels of the box precisely at the energies where one finds the poles. Because of this problem one is forced to use either asymmetric boxes or the discretization in a moving frame in order to find eigenvalues of the box in the desired region. In the present paper we face all these problems and come up with strategies that we find better suited to determine the positions of the two Λ(1405) poles.
II. FORMALISM
In the chiral unitary approach the scattering matrix in coupled channels is given by the Bethe-Salpeter equation in its factorized form
T = [1 - V G]^{-1}\, V = [V^{-1} - G]^{-1}\,, \qquad (1)
where V is the matrix of transition potentials between the channels and G is a diagonal matrix whose i-th element, G_i, is given by the loop function of two propagators, a pseudoscalar meson and a baryon, defined as
G_i = i\,2M_i \int \frac{d^4 p}{(2\pi)^4}\, \frac{1}{(P-p)^2 - M_i^2 + i\epsilon}\, \frac{1}{p^2 - m_i^2 + i\epsilon}\,, \qquad (2)
where m i and M i are the masses of the meson and the baryon, respectively, and P the four-momentum of the global meson-baryon system.
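As a concrete illustration of how Eq. (1) is evaluated numerically once V and G are known, a minimal Python sketch is given below; the potential and loop values are placeholders chosen only to make the snippet runnable, not the chiral-model inputs of Refs. [6, 8, 11].

```python
import numpy as np

def t_matrix(V, G_diag):
    """Coupled-channel amplitude T = [1 - V G]^{-1} V of Eq. (1).

    V      : (n, n) array of transition potentials between the channels.
    G_diag : length-n array with the diagonal loop functions G_i.
    """
    G = np.diag(G_diag)
    n = V.shape[0]
    return np.linalg.inv(np.eye(n) - V @ G) @ V

# Two-channel example (channel 1 = pi Sigma, channel 2 = Kbar N);
# the numbers below are illustrative placeholders, not the chiral-model values.
V = np.array([[-2.0, 1.5],
              [ 1.5, -3.0]])
G_diag = np.array([-0.02 + 0.010j, -0.015 + 0.0j])
print(t_matrix(V, G_diag))
```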
The loop function in Eq. (2) needs to be regularized and this can be accomplished either with dimensional regularization or with a three-momentum cutoff. The equivalence of both methods was shown in Refs. [8,81].
In dimensional regularization the integral of Eq. (2) is evaluated and gives for meson-baryon systems [8,82]
G_i(s, m_i, M_i) = \frac{2 M_i}{(4\pi)^2} \Bigg\{ a_i(\mu) + \log\frac{m_i^2}{\mu^2} + \frac{M_i^2 - m_i^2 + s}{2s}\,\log\frac{M_i^2}{m_i^2} + \frac{Q_i(\sqrt{s})}{\sqrt{s}} \Big[ \log\big(s - (M_i^2 - m_i^2) + 2\sqrt{s}\,Q_i(\sqrt{s})\big) + \log\big(s + (M_i^2 - m_i^2) + 2\sqrt{s}\,Q_i(\sqrt{s})\big) - \log\big(-s + (M_i^2 - m_i^2) + 2\sqrt{s}\,Q_i(\sqrt{s})\big) - \log\big(-s - (M_i^2 - m_i^2) + 2\sqrt{s}\,Q_i(\sqrt{s})\big) \Big] \Bigg\}\,, \qquad (3)
where s = E^2, with E the energy of the system in the center-of-mass frame, Q_i the on-shell momentum of the particles in channel i, µ a regularization scale, and a_i(µ) a subtraction constant (note that there is only one degree of freedom, not two independent parameters).
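A direct numerical transcription of Eq. (3) is sketched below; Q_i(√s) is computed from the usual two-body phase-space formula, and the subtraction constant and scale used in the example call are placeholder values, not the fitted ones of Refs. [8, 82].

```python
import numpy as np

def G_dimreg(s, m, M, a, mu):
    """Meson-baryon loop function in dimensional regularization, Eq. (3).
    s: squared CM energy; m (M): meson (baryon) mass; a: subtraction
    constant at the regularization scale mu.  All dimensionful inputs in MeV."""
    # on-shell CM momentum Q(sqrt(s)), analytically continued below threshold
    Q = np.sqrt((s - (M + m) ** 2) * (s - (M - m) ** 2) + 0j) / (2.0 * np.sqrt(s))
    logs = (np.log(s - (M ** 2 - m ** 2) + 2.0 * np.sqrt(s) * Q)
            + np.log(s + (M ** 2 - m ** 2) + 2.0 * np.sqrt(s) * Q)
            - np.log(-s + (M ** 2 - m ** 2) + 2.0 * np.sqrt(s) * Q)
            - np.log(-s - (M ** 2 - m ** 2) + 2.0 * np.sqrt(s) * Q))
    return (2.0 * M / (4.0 * np.pi) ** 2) * (
        a + np.log(m ** 2 / mu ** 2)
        + (M ** 2 - m ** 2 + s) / (2.0 * s) * np.log(M ** 2 / m ** 2)
        + Q / np.sqrt(s) * logs)

# Example: Kbar N loop at sqrt(s) = 1420 MeV with placeholder a and mu.
print(G_dimreg(1420.0 ** 2, 495.0, 939.0, a=-2.0, mu=630.0))
```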
In other works one uses regularization with a cutoff in the three-momentum, once the p^0 integration is performed analytically [83], and one gets
G_i = \int_{|\vec p| < p_{\rm max}} \frac{d^3 p}{(2\pi)^3}\, \frac{2 M_i}{2\,\omega_1(\vec p)\,\omega_2(\vec p)}\, \frac{\omega_1(\vec p) + \omega_2(\vec p)}{E^2 - (\omega_1(\vec p) + \omega_2(\vec p))^2 + i\epsilon}\,, \qquad \omega_{1,2}(\vec p) = \sqrt{m_{1,2}^2 + \vec p^{\,2}}\,, \qquad (4)
with m_1, m_2 corresponding to m_i and M_i of Eq. (2). When one wants to obtain the energy levels in the finite box, instead of integrating over the energy states of the continuum, with p a continuous variable as in Eq. (4), one must sum over the discrete momenta allowed in a finite box of side L with periodic boundary conditions. We then have to replace G by \tilde G = {\rm diag}(\tilde G_1, \tilde G_2) (in two channels), where
\tilde G_j = \frac{2 M_j}{L^3} \sum_{\substack{\vec p \\ |\vec p| < p_{\rm max}}} \frac{1}{2\,\omega_1(\vec p)\,\omega_2(\vec p)}\, \frac{\omega_1(\vec p) + \omega_2(\vec p)}{E^2 - (\omega_1(\vec p) + \omega_2(\vec p))^2}\,, \qquad \vec p = \frac{2\pi}{L}\,\vec n\,, \quad \vec n \in \mathbb{Z}^3 \qquad (5)
This is the procedure followed in Ref. [64]. The eigenenergies of the box correspond to energies that produce poles in the T matrix, Eq. (1), which correspond to zeros of the determinant of 1 - V\tilde G,
\det(1 - V \tilde G) = 0\,. \qquad (6)
For the case of two coupled channels Eq. (6) can be written as
\det(1 - V \tilde G) = 1 - V_{11}\tilde G_1 - V_{22}\tilde G_2 + (V_{11} V_{22} - V_{12}^2)\,\tilde G_1 \tilde G_2 = 0\,. \qquad (7)
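Putting Eqs. (5)-(7) together, the box levels can be obtained by scanning the energy and locating the zeros of the determinant. The sketch below uses the cutoff form of G̃ and a naive sign-change search; the masses, box size and the constant test potential are placeholders (a realistic application would use the energy-dependent potentials discussed in Sec. III, and sign changes also occur at the free two-particle energies, where G̃ has poles).

```python
import numpy as np
from itertools import product

def G_box(E, m, M, L, pmax):
    """Finite-volume loop function of Eq. (5): sum over p = (2*pi/L) n, |p| < pmax."""
    nmax = int(pmax * L / (2.0 * np.pi)) + 1
    total = 0.0
    for n in product(range(-nmax, nmax + 1), repeat=3):
        p = (2.0 * np.pi / L) * np.linalg.norm(n)
        if p >= pmax:
            continue
        w1, w2 = np.sqrt(m ** 2 + p ** 2), np.sqrt(M ** 2 + p ** 2)
        total += 2.0 * M / (2.0 * w1 * w2) * (w1 + w2) / (E ** 2 - (w1 + w2) ** 2)
    return total / L ** 3

def det_1mVG(E, V, channels, L, pmax):
    """det(1 - V G~) of Eqs. (6)-(7) for an arbitrary number of channels."""
    G = np.diag([G_box(E, m, M, L, pmax) for (m, M) in channels])
    return np.linalg.det(np.eye(len(channels)) - V @ G)

# Placeholder two-channel setup: (m_meson, M_baryon) in MeV for pi Sigma and Kbar N.
channels = [(138.0, 1193.0), (495.0, 939.0)]
V = np.array([[-4.0, 2.0], [2.0, -6.0]]) * 1e-3   # constant test potential (arbitrary units)
L = 2.0 / 138.0                                    # box size ~ 2 m_pi^{-1}, in MeV^{-1}
pmax = 630.0                                       # three-momentum cutoff in MeV

energies = np.linspace(1300.0, 1500.0, 400)
dets = [det_1mVG(E, V, channels, L, pmax) for E in energies]
crossings = [0.5 * (energies[i] + energies[i + 1])
             for i in range(len(dets) - 1) if dets[i] * dets[i + 1] < 0]
print(crossings)   # candidate box levels (to be refined with a proper root finder)
```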
The problem of the K̄N interaction with its coupled channels and the Λ(1405) was addressed in Ref. [6] using the cutoff method, but more recently it has been addressed using dimensional regularization [8,82]. For this reason we will also use the dimensional regularization method for the finite box, which was developed in Ref. [69]. The change to be made is also very simple: the G function of dimensional regularization of Eq. (3) has to be substituted by
\tilde G(E) = G_D(E) + \lim_{p_{\rm max}\to\infty}\left[ \frac{1}{L^3} \sum_{\vec p_i}^{p_{\rm max}} I(\vec p_i) - \int_{|\vec p| < p_{\rm max}} \frac{d^3 p}{(2\pi)^3}\, I(\vec p) \right] \qquad (8)
where I(p) is given by
I(\vec p) = \frac{2 M_i}{2\,\omega_1(\vec p)\,\omega_2(\vec p)}\, \frac{\omega_1(\vec p) + \omega_2(\vec p)}{E^2 - (\omega_1(\vec p) + \omega_2(\vec p))^2 + i\epsilon}\,. \qquad (9)
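In practice the limit in Eq. (8) is approximated by a large but finite cutoff, at which the discrete sum and the continuum integral are evaluated with the same p_max and added to G_D (which could be computed, e.g., with the dimensional-regularization sketch above). A hedged outline of this difference follows; the angular integral is done analytically and the radial one by simple quadrature, which is only meaningful for energies where the integrand has no pole (below threshold) or after a principal-value treatment.

```python
import numpy as np
from itertools import product

def I_of_p(p, E, m, M):
    """Integrand I(p) of Eq. (9) in the limit epsilon -> 0 (p may be a scalar or array)."""
    w1, w2 = np.sqrt(m ** 2 + p ** 2), np.sqrt(M ** 2 + p ** 2)
    return 2.0 * M / (2.0 * w1 * w2) * (w1 + w2) / (E ** 2 - (w1 + w2) ** 2)

def G_tilde_dimreg(E, m, M, L, pmax, G_D, n_radial=4000):
    """Finite-box loop of Eq. (8): G_D(E) + (1/L^3) sum_p I(p) - int d^3p/(2 pi)^3 I(p)."""
    nmax = int(pmax * L / (2.0 * np.pi)) + 1
    box_sum = 0.0
    for n in product(range(-nmax, nmax + 1), repeat=3):
        p = (2.0 * np.pi / L) * np.linalg.norm(n)
        if p < pmax:
            box_sum += I_of_p(p, E, m, M)
    box_sum /= L ** 3
    # continuum subtraction: d^3p -> 4 pi p^2 dp, naive trapezoidal quadrature
    ps = np.linspace(1e-3, pmax, n_radial)
    integral = np.trapz(4.0 * np.pi * ps ** 2 * I_of_p(ps, E, m, M), ps) / (2.0 * np.pi) ** 3
    return G_D + box_sum - integral
```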
We will also consider the case where the meson-baryon system moves with a four-momentum P = (P^0, \vec P) in the box. In this case we still have to define the integrals and the sums in the CM frame, where p_max is defined, but the momenta of the two particles must be discretized in the box, where the system moves with momentum P. We follow the approach of Refs. [79,80] and use the boost transformation from the moving frame, with the variables p_{1,2}, to the CM frame, with the variables p*_{1,2},

\vec p^{\,*}_{1,2} = \vec p_{1,2} + \left[ \left(\frac{M_I}{P^0} - 1\right) \frac{\vec p_{1,2}\cdot\vec P}{|\vec P|^2} - \frac{p^{*0}_{1,2}}{P^0} \right] \vec P\,, \qquad (10)

where M_I^2 = P^2 = (P^0)^2 - \vec P^{\,2}, the subindices 1, 2 denote the meson and the baryon, respectively, and p^{*0}_{1,2} are the CM energies of the particles, given by

p^{*0}_{1,2} = \frac{M_I^2 + m_{1,2}^2 - m_{2,1}^2}{2 M_I}\,. \qquad (11)
Then we must do the substitution

\lim_{p_{\rm max}\to\infty} \frac{1}{L^3} \sum_{\vec p_i}^{p_{\rm max}} I(\vec p_i) \;\longrightarrow\; \frac{1}{L^3} \sum_{\substack{\vec p \\ |\vec p^{\,*}| < p_{\rm max}}} \frac{M_I}{P^0}\, I(\vec p^{\,*}_i)\,, \qquad \vec p = \frac{2\pi}{L}\,\vec n\,, \quad \vec n \in \mathbb{Z}^3 \qquad (12)

in Eq. (8) for the evaluation of the energies in the box, with p*_i given in terms of p_i by means of Eq. (10). Since p_1 and p_2 = P − p_1 must both satisfy the periodic boundary conditions, this forces P to be discretized as well, and thus we can only use values of P such that

\vec P = \frac{2\pi}{L}\,\vec N\,, \qquad \vec N \in \mathbb{Z}^3\,. \qquad (13)
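The kinematical step needed for the moving-frame levels, Eqs. (10)-(13), is simple to code: every allowed box momentum is boosted to the CM frame before it enters the sum of Eq. (12). A small helper (the total energy P^0 is taken as an input here, and the example numbers are placeholders) could be:

```python
import numpy as np

def boost_to_cm(p1, P, P0, m1, m2):
    """Boost the meson momentum p1 from the moving (box) frame to the CM frame, Eq. (10).
    P, P0: total three-momentum and energy of the meson-baryon pair;
    m1, m2: meson and baryon masses."""
    p1, P = np.asarray(p1, float), np.asarray(P, float)
    MI = np.sqrt(P0 ** 2 - P.dot(P))                        # invariant mass M_I
    p1_star0 = (MI ** 2 + m1 ** 2 - m2 ** 2) / (2.0 * MI)   # CM energy of particle 1, Eq. (11)
    P2 = P.dot(P)
    if P2 == 0.0:
        return p1
    return p1 + ((MI / P0 - 1.0) * p1.dot(P) / P2 - p1_star0 / P0) * P

# Allowed total momenta P = (2*pi/L) N with integer N, Eq. (13); example with N = (0, 0, 1).
L = 2.0 / 138.0                    # placeholder box size in MeV^-1
P = (2.0 * np.pi / L) * np.array([0.0, 0.0, 1.0])
```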
III. RESULTS
A. Energy levels in the box
In this section we show the energy levels obtained from the solution of Eq. (6) as a function of the side length of the box, L, for different physical cases, using periodic boundary conditions in: (1) a symmetric box, (2) an asymmetric box, and (3) a symmetric box but in a moving frame, i.e., with a non-zero value of the total center-of-mass momentum P (Eq. (13)).
Periodic boundary conditions in a symmetric box
In Fig. 1 we show the first six energy levels of the system formed by the coupled channels K̄N, πΣ, ηΛ and KΞ, which generate a double-pole structure for the Λ(1405) and a pole for the Λ(1670) [11]. These levels are obtained by solving Eq. (6) using the chiral model of Ref. [11] and imposing periodic boundary conditions in a symmetric box of side length L (measured in units of m_π^{-1}).
As can be seen in Fig. 1, the gap between levels 0 and 1, and especially between levels 1 and 2, is considerable, giving rise to the presence of only two levels in the energy region of interest, i.e., the energy range in which the two poles of the Λ(1405) are found (1390 − 1430 MeV). This fact shows the difficulty one faces in extracting information about the poles of the Λ(1405) in an infinite volume when taking these energy levels as reference.
Periodic boundary conditions in an asymmetric box
To see if we can obtain more energy levels in the region of the Λ(1405), it is also possible to solve Eq. (6) in an asymmetric box. To do this we just need to substitute L^3 by L_x L_y L_z and the momentum p of Eq. (12) by p = 2π (n_x/L_x, n_y/L_y, n_z/L_z). In Fig. 2 we show the first three energy levels determined in a box of side lengths L_x = L_y = L and L_z = zL, where z is varied between 0.5 and 2.5. In this way we get more energy levels in the region of interest, which can provide different information about the system and the poles of the Λ(1405).
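The only modification needed for the asymmetric box is thus the momentum quantization (and the replacement of L^3 by L_x L_y L_z in Eq. (5)). A short helper that enumerates the allowed momenta below the cutoff, with placeholder box sides, could read:

```python
import numpy as np
from itertools import product

def asymmetric_box_momenta(Lx, Ly, Lz, pmax):
    """Discrete momenta p = 2*pi*(nx/Lx, ny/Ly, nz/Lz) with |p| < pmax."""
    nx = int(pmax * Lx / (2.0 * np.pi)) + 1
    ny = int(pmax * Ly / (2.0 * np.pi)) + 1
    nz = int(pmax * Lz / (2.0 * np.pi)) + 1
    momenta = []
    for n in product(range(-nx, nx + 1), range(-ny, ny + 1), range(-nz, nz + 1)):
        p = 2.0 * np.pi * np.array([n[0] / Lx, n[1] / Ly, n[2] / Lz])
        if np.linalg.norm(p) < pmax:
            momenta.append(p)
    return momenta

# Example: L_x = L_y = L and L_z = 2 L (z = 2.0), placeholder units (MeV^-1 and MeV).
L = 2.0 / 138.0
print(len(asymmetric_box_momenta(L, L, 2.0 * L, 630.0)))
```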
Periodic boundary conditions in a moving frame
Another method to obtain more energy levels around the pole positions of the Λ(1405), and thus different information about the dynamics of the system under consideration, consists of imposing periodic boundary conditions in a symmetric box of side length L but considering the system in a moving frame, i.e., with non-zero center-of-mass momentum P. In Fig. 3 we show the results found in this case for the first three levels and for different values of the vector N (see Eq. (13)). As can be seen, the use of different values of P gives rise to a splitting of the levels. In particular, the splitting of level 1 lies precisely in the energy region of interest, 1390 − 1450 MeV. This is different from the case of the asymmetric box, where level 2 is required in order to have energy levels around 1420 − 1450 MeV, as can be seen in Fig. 2.
B. The inverse problem: getting the Λ(1405) poles from the energy levels of the box
In the following we refer to the problem of determining the pole positions of the Λ(1405) in the infinite volume using the energy levels shown in Figs. 1, 2, 3 as if they were provided to us by a lattice calculation. In our formalism, we can simulate lattice-like data considering points related to the energy levels of Figs. 1, 2, 3 and assigning to them a typical error of ± 10 MeV. We call the data generated in this form "synthetic" lattice data and the problem of getting the poles of the Λ(1405) from these data points "the inverse problem".
To solve the inverse problem we consider a potential with the same energy dependence as the chiral potential used to generate the energy levels shown in Figs. 1, 2, 3. In its non-relativistic version, this potential is given by [6]
V_{ij} = -\frac{C_{ij}}{4 f^2}\,(E_i + E_j)\,, \qquad (14)
with C_{ij} coefficients depending on the channel considered, f the pion decay constant, and E_i (E_j) the center-of-mass energy of the meson in the initial (final) state.
Using that for a particular channel l
E_l = \frac{E^2 + m_l^2 - M_l^2}{2E}\,, \qquad (15)
with m_l and M_l the masses of the meson and baryon which constitute the channel l, respectively, we can write Eq. (14) as
V_{ij} = -\frac{C_{ij}}{4 f^2}\left[ E + \frac{1}{2E}\left( m_i^2 + m_j^2 - (M_i^2 + M_j^2) \right) \right] \qquad (16)
Choosing a region of energies around a certain value of E, E_0, the function 1/E can be expanded in powers of E − E_0 to a good approximation. Taking E_0 to be the sum of the kaon and nucleon masses, i.e., m_K + M_N, we can write the potential in Eq. (16) as
V_{ij} = a_{ij} + b_{ij}\,[E - (m_K + M_N)]\,. \qquad (17)
The values of the coefficients a_{ij} and b_{ij} can be obtained by comparing Eq. (17) with Eq. (16) and substituting 1/E by its Taylor expansion around E_0 = m_K + M_N.
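Since 1/E ≈ 1/E_0 − (E − E_0)/E_0^2 around E_0 = m_K + M_N, the coefficients of Eq. (17) follow from Eq. (16) in closed form. A small sketch is given below; the C_ij, masses and f are free inputs, and the numerical values used in the example call (including C_ij = 2) are quoted only as illustrative assumptions.

```python
def linearized_coefficients(Cij, f, mi, mj, Mi, Mj, mK, MN):
    """a_ij and b_ij of Eq. (17) from the Taylor expansion of Eq. (16) around E0 = mK + MN."""
    E0 = mK + MN
    delta = mi ** 2 + mj ** 2 - (Mi ** 2 + Mj ** 2)
    a = -Cij / (4.0 * f ** 2) * (E0 + delta / (2.0 * E0))
    b = -Cij / (4.0 * f ** 2) * (1.0 - delta / (2.0 * E0 ** 2))
    return a, b

# Example for a diagonal Kbar N element; f and masses in MeV, Cij a placeholder.
print(linearized_coefficients(Cij=2.0, f=93.0, mi=495.0, mj=495.0,
                              Mi=939.0, Mj=939.0, mK=495.0, MN=939.0))
```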
To solve the inverse problem, we use the energy levels obtained from Eq. (6) with the potential of Eq. (17), but treating a_{ij} and b_{ij} as parameters which are determined by fitting the corresponding solutions for the energy levels to the "synthetic" lattice data considered. Since this potential has the same energy dependence as the chiral potential, the best fit we can perform will have as minimum value of the χ^2 the result χ^2_{min} = 0. However, other possible potentials, giving rise to solutions compatible with the error assumed in the data points, can also be found as an answer to the inverse problem. These solutions can be obtained by generating random numbers for the parameters a_{ij} and b_{ij} close to those of the minimum such that χ^2 ≤ χ^2_{min} + 1. It is important to note that the loop function \tilde G used in Eq. (6) needs to be regularized and, thus, depends on a cutoff or a subtraction constant. Consequently, so do the fitted parameters, but the T matrix obtained from Eq. (1) and the observables related to it should be independent of this regularization parameter. This means that the inverse method cannot depend on the cutoff or subtraction constant assumed in the evaluation of the \tilde G function. For the case of one channel, it is possible to show this independence of the choice of cutoff or subtraction constant analytically [64,69], but if more channels are involved it can only be checked numerically by changing the cutoff or subtraction constant within a reasonable physical range [64,69].
In the next sections we show the results found for the inverse problem. To accomplish this we have considered different sets of points extracted from the energy levels shown in Figs. 1, 2, 3 and fitted them with the solutions that Eq. (6) produces with the potential of Eq. (17). To solve Eq. (6) we have taken into account two coupled channels, πΣ (which we call channel 1) and K̄N (channel 2), which are the most relevant channels to describe the properties of the Λ(1405). This implies, as can be seen in Eq. (7), that we have to determine three potentials, V_{11}, V_{12} (V_{21} = V_{12}) and V_{22}, or equivalently six parameters a_{11}, a_{12}, a_{22}, b_{11}, b_{12} and b_{22}. Once the parameters, and thus the potentials, are known, we can use them to solve Eq. (1) and determine the pole positions of the Λ(1405) in an infinite volume.
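Operationally, the fit described above is an ordinary χ^2 minimization in the six parameters, followed by a random exploration of the parameter region with χ^2 ≤ χ^2_{min} + 1 to propagate the ±10 MeV errors into the pole positions. A schematic outline follows; the predictor `box_levels`, which should return the eigenenergies of Eq. (6) for a given parameter set, box size and level index, is assumed and not implemented here.

```python
import numpy as np

def chi2(params, data, box_levels):
    """params: array (a11, a12, a22, b11, b12, b22);
    data: iterable of tuples (L, level_index, E_measured, sigma)."""
    total = 0.0
    for L, level, E_meas, sigma in data:
        E_model = box_levels(params, L, level)
        total += ((E_model - E_meas) / sigma) ** 2
    return total

def sample_band(best, data, box_levels, n_try=5000, rel_step=0.05, seed=0):
    """Random parameter sets near the best fit fulfilling chi2 <= chi2_min + 1."""
    rng = np.random.default_rng(seed)
    best = np.asarray(best, float)
    chi2_min = chi2(best, data, box_levels)
    kept = []
    for _ in range(n_try):
        trial = best * (1.0 + rel_step * rng.standard_normal(best.size))
        if chi2(trial, data, box_levels) <= chi2_min + 1.0:
            kept.append(trial)
    return kept
```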
Periodic boundary conditions in a symmetric box
In Fig. 4 we show the results for the energy levels reconstructed from the best fits to the "synthetic" lattice data taken from Fig. 1. These data consist of 10 points for levels 0 and 1 obtained in a symmetric box of side length L, varying L in the range 1.5 m_π^{-1} to 3.34 m_π^{-1} and assigning an error of ±10 MeV to the eigenenergies of the box (from now on, we will always assume an error of ±10 MeV for the different "synthetic" data that we use). The shadowed band in the figure corresponds to the random choices of parameters satisfying the condition χ^2 ≤ χ^2_{min} + 1. Using the potentials obtained from the fit and the loop function G in infinite volume, we can solve Eq. (1) and calculate the two-body T matrix in the unphysical sheet, which allows us to determine the pole positions of the Λ(1405) associated with the band of solutions shown in Fig. 4. As a result we get a double-pole structure for the Λ(1405), with one pole in the region 1385-1433 MeV and half width between 93-137 MeV (which we call pole 1) and another one in the energy region 1416-1427 MeV with half width in the range 11-20 MeV (which we call pole 2). If we compare these results with those of the chiral model [11], 1390-i66 MeV and 1426-i16 MeV, respectively, we find a large dispersion in the determination of the real part of the first pole of the Λ(1405). This shows that the information which one can extract from the "synthetic" data considered in Fig. 4 is not sufficient to determine the poles associated with the Λ(1405) with more precision.
A way to delimit the poles of the Λ(1405) with more precision from lattice data could consist of going to larger volumes, since for large volumes the results in the box should be very close to those of an infinite volume. With this idea in mind, we can generate "synthetic" data points for levels 0 and 1 of Fig. 1, but in a larger range of L than the one considered in Fig. 4. The data points, as well as the results from the fits, are shown in Fig. 5. Similarly, if we now use the potentials associated with the band of solutions shown in Fig. 5 to solve Eq. (1) and calculate the T matrix in the unphysical sheet, we again get two poles in the complex energy plane associated with the Λ(1405): one in the region 1390-1433 MeV, with half width between 70-100 MeV, and another at 1410-1421 MeV with half width 17-30 MeV. Comparing them with the previous results, we find that the consideration of a bigger box has slightly improved the width associated with the first pole of the Λ(1405); however, we still have a similar energy dispersion for the real part of the pole. We could also try using levels different from the ones employed in Figs. 4 and 5 to see if we can get more reliable information from them. In Fig. 6 we consider "synthetic" data obtained from levels 1 and 2 of Fig. 1. We have taken into account 5 points for level 1 in a range of L between 1.5 m_π^{-1} and 3.9 m_π^{-1} and 4 points for level 2 for values of L between 2 m_π^{-1} and 3.9 m_π^{-1}. This is because for level 2 the points for values of L below 2 m_π^{-1} are influenced by the ηΛ and KΞ channels and, thus, it is not possible to fit them considering only the πΣ and K̄N channels, as we do. We can now use the potentials associated with the different fits shown in Fig. 6 to calculate the pole positions of the Λ(1405) in infinite volume by means of Eq. (1). In this case, we still get a double-pole structure for the Λ(1405), but this time one pole is at (1375−1430)−i(70−85) MeV and the other one is at (1412−1427)−i(21−34) MeV. The position of the second pole remains basically the same as in the two previous cases. However, the use of "synthetic" points generated from levels 1 and 2 instead of levels 0 and 1 has further restricted the imaginary part of the first pole, although we still get a similar energy dispersion for the real part of the pole position.
Finally, we could consider all the energy levels present in Fig. 1 below 1600 MeV to generate data points, to check if the consideration of more levels can further restrict the energy region in which the pole positions of the Λ(1405) are found. Following this idea, in Fig. 7 we consider a set of 14 points extracted from levels 0, 1 and 2 of Fig. 1. Similar to the previous results, the consideration of data points associated with three energy levels puts a restriction on the imaginary part of the first pole of the Λ(1405), which in this case is in the range 54-68 MeV (closer to the chiral solution, 66 MeV). However, the dispersion in the real part remains basically the same, 1400-1428 MeV. For the second pole we get (1408-1425)-i(29-40) MeV. These results show that the information which can be extracted from "synthetic" data constructed from the energy levels obtained in a symmetric box of volume L^3 is not enough to determine with precision the pole positions of the Λ(1405), a fact which is basically related to the absence of energy levels, and thus of information about the dynamics of the system, in the region between 1400 and 1500 MeV, as can be seen in Fig. 1.
Periodic boundary conditions in an asymmetric box
We now consider the case of an asymmetric box of side lengths L_x = L_y = L and L_z = zL to solve the inverse problem. In this case, we generate a set of 20 data points extracted from levels 0, 1 and 2 shown in Fig. 2. In particular, we use 5 points for level 0 calculated with z = 2.5, 10 points for level 1 (5 for the case z = 0.5 and 5 more for z = 2.0) and 5 points for level 2 obtained with z = 2.0. In this way we ensure the presence of some energy levels in the region of the Λ(1405), as can be seen in Fig. 8.
The solution of the Bethe-Salpeter equation in an infinite volume, Eq. (1), using the potentials related to the band of solutions plotted in Fig. 8, shows the presence of a double-pole structure for the Λ(1405), with pole positions at (1383−1407)−i(57−69) MeV and (1425−1434)−i(25−35) MeV. Thus, using this new set of data points, there is an improvement in the determination of the first pole of the Λ(1405), which is now quite close to the chiral result (1390−i66 MeV). However, the second pole appears at higher energies as compared to the case of a symmetric box and is sometimes far from the chiral solution (1426−i16 MeV), lying even close to the K̄N threshold.
Periodic boundary conditions in a moving frame
We can also study the information which can be extracted for the poles of the Λ(1405) using the levels obtained when we consider the system in a symmetric box, but in a moving frame, to generate the "synthetic" lattice data. In this case, we consider levels 0 and 1 of Fig. 3 determined for 5 different values of the center-of-mass momentum (the ones shown in the legend of Fig. 3) and two points on each of these curves. In particular, we take points at L = 1.757 m_π^{-1} and L = 2.014 m_π^{-1}, obtaining then 20 data points. The results are shown in Fig. 9. From the solution of the best fits, we can use the potentials obtained to solve Eq. (1), getting then two poles for the Λ(1405): one at (1388−1418)−i(59−77) MeV and another at (1412−1427)−i(16−34) MeV. In Fig. 10 we show the results for the pole positions of the Λ(1405) obtained from the different data sets considered in this work. As can be seen in Fig. 10, out of the different data sets considered to solve the inverse problem, the cases of an asymmetric box and of a symmetric box but in a moving frame seem to be the best suited to obtain the two poles of the Λ(1405) with more precision.

FIG. 9. Fits to the "synthetic" data extracted from the energy levels 0 and 1 of Fig. 3, which correspond to the case of a symmetric box, but with the particles being in a moving frame.
IV. CONCLUSIONS
We have made a study of the K̄N interaction with its coupled channels in a finite box and obtained the energy levels as a function of the box size. We have done this for standard periodic boundary conditions and symmetric boxes, for asymmetric boxes, and for symmetric boxes but with the particles in a moving frame. The aim of the work has been to solve the inverse problem in which, assuming that the levels in the box correspond to "QCD lattice results", we want to determine the pole positions in the complex plane for the two Λ(1405) states provided by the chiral unitary approach and supported by several experiments.
We found that the problem is not trivial: even the use of a large number of energies of the box, corresponding to different levels and volumes with standard periodic boundary conditions, cannot provide the mass and width of the states with the accuracy of the chiral unitary approach and present experiments. For this reason we investigated other possible strategies and found that the use of asymmetric boxes and of levels coming from the particles in moving frames helped considerably to narrow down the uncertainties in the determination of the mass and width of these resonances. The choices of levels and energies made for this analysis should be a guiding tool for future lattice QCD evaluations, showing the number of levels needed, the errors that should be demanded in the determination of the energies of the box, and the type of asymmetric boxes or total momenta of the pair of particles in the moving frames. Having this information beforehand is of tremendous value given the time-consuming nature of actual lattice QCD runs.
FIG. 1. Energy levels in a symmetric box of side length L.
FIG. 2. Energy levels in an asymmetric box of side lengths L_x = L_y = L and L_z = zL, with z = 0.5 − 2.5 in steps of 0.5.
FIG. 3. Energy levels in a symmetric box of side length L with the system having a center-of-mass momentum given by Eq. (13).
FIG. 4. First two energy levels as a function of the box side length L, reconstructed from fits to the "synthetic" data of Fig. 1 in a range of L between 1.5 m_π^{-1} and 3.34 m_π^{-1} using the potential of Eq. (17). The band corresponds to different choices of parameters within errors.
FIG. 5. Same as Fig. 4 but for a range of L between 1.5 m_π^{-1} and 4.93 m_π^{-1}.
FIG. 7. Fits to the levels 0, 1 and 2 of Fig. 1 constructed from the potential of Eq. (17).
FIG. 8. Fits to the "synthetic" data extracted from the energy levels 0, 1 and 2 of Fig. 2 in an asymmetric box of side lengths L_x = L_y = L and L_z = zL. The data points considered are generated from level 0 for z = 2.5, level 1 for z = 0.5 and z = 2.0, and level 2 for z = 2.0.
FIG. 10. Pole positions of the Λ(1405) reconstructed from the different sets of "synthetic" data generated for the different cases considered in this work. The shaded symbols correspond to the positions obtained for the first pole of the Λ(1405), while the empty symbols correspond to the second pole of the Λ(1405).
ACKNOWLEDGMENTS

We would like to thank Michael Döring for useful discussions. This work is partly supported by DGICYT contract FIS2011-28853-C02-01, by the Generalitat Valenciana in the program Prometeo, 2009/090, and by the EU Integrated Infrastructure Initiative Hadron Physics 3 Project under Grant Agreement no. 283286. This work is supported in part by the Grant for Scientific Research (No. 22105507 and No. 22540275) from MEXT of Japan. A part of this work was done in the Yukawa International Project for Quark-Hadron Sciences (YIPQS). The work of A. M. T. is supported by the Grant-in-Aid for the Global COE Program "The Next Generation of Physics, Spun from Universality and Emergence" from the Ministry of Education, Culture, Sports, Science and Technology (MEXT) of Japan.
. R H Dalitz, S F Tuan, Annals Phys. 10307R. H. Dalitz and S. F. Tuan, Annals Phys. 10, 307 (1960).
. R H Dalitz, T C Wong, G Rajasekaran, Phys. Rev. 1531617R. H. Dalitz, T. C. Wong and G. Rajasekaran, Phys. Rev. 153, 1617 (1967).
. E A Veit, B K Jennings, R C Barrett, A W Thomas, Phys. Lett. B. 137415E. A. Veit, B. K. Jennings, R. C. Barrett and A. W. Thomas, Phys. Lett. B 137, 415 (1984).
. N Kaiser, P B Siegel, W Weise, nucl-th/9505043Nucl. Phys. A. 594325N. Kaiser, P. B. Siegel and W. Weise, Nucl. Phys. A 594, 325 (1995) [nucl-th/9505043].
. N Kaiser, T Waas, W Weise, hep-ph/9607459Nucl. Phys. A. 612297N. Kaiser, T. Waas and W. Weise, Nucl. Phys. A 612, 297 (1997) [hep-ph/9607459].
. E Oset, A Ramos, nucl-th/9711022Nucl. Phys. A. 63599E. Oset and A. Ramos, Nucl. Phys. A 635, 99 (1998) [nucl-th/9711022].
. P J FinkJr, G He, R H Landau, J W Schnick, Phys. Rev. C. 412720P. J. Fink, Jr., G. He, R. H. Landau and J. W. Schnick, Phys. Rev. C 41, 2720 (1990).
. J A Oller, U G Meissner, hep-ph/0011146Phys. Lett. B. 500263J. A. Oller and U. G. Meissner, Phys. Lett. B 500, 263 (2001) [hep-ph/0011146].
. D Jido, A Hosaka, J C Nacher, E Oset, A Ramos, hep-ph/0203248Phys. Rev. C. 6625203D. Jido, A. Hosaka, J. C. Nacher, E. Oset and A. Ramos, Phys. Rev. C 66, 025203 (2002) [hep-ph/0203248].
. C Garcia-Recio, J Nieves, E Ruiz Arriola, M J Vicente, Vacas, hep-ph/0210311Phys. Rev. D. 6776009C. Garcia-Recio, J. Nieves, E. Ruiz Arriola and M. J. Vicente Vacas, Phys. Rev. D 67, 076009 (2003) [hep-ph/0210311].
. D Jido, J A Oller, E Oset, A Ramos, U G Meissner, nucl-th/0303062Nucl. Phys. A. 725181D. Jido, J. A. Oller, E. Oset, A. Ramos and U. G. Meiss- ner, Nucl. Phys. A 725, 181 (2003) [nucl-th/0303062].
. C Garcia-Recio, J Nieves, L L Salcedo, Phys. Rev. D. 7434025C. Garcia-Recio, J. Nieves and L. L. Salcedo, Phys. Rev. D 74 (2006) 034025 .
. T Hyodo, S I Nam, D Jido, A Hosaka, Phys. Rev. C. 6818201T. Hyodo, S. I. Nam, D. Jido and A. Hosaka, Phys. Rev. C 68, 018201 (2003) .
. B Borasoy, R Nissler, W Weise, arXiv:hep-ph/0505239Eur. Phys. J. A. 2579B. Borasoy, R. Nissler and W. Weise, Eur. Phys. J. A 25, 79 (2005) [arXiv:hep-ph/0505239].
. J A Oller, J Prades, M Verbeni, arXiv:hep-ph/0508081Phys. Rev. Lett. 95172502J. A. Oller, J. Prades and M. Verbeni, Phys. Rev. Lett. 95, 172502 (2005) [arXiv:hep-ph/0508081].
. J A Oller, arXiv:hep-ph/0603134Eur. Phys. J. A. 28J. A. Oller, Eur. Phys. J. A 28, 63 (2006) [arXiv:hep-ph/0603134].
. B Borasoy, U G Meissner, R Nissler, arXiv:hep-ph/0606108Phys. Rev. C. 7455201B. Borasoy, U. G. Meissner and R. Nissler, Phys. Rev. C 74, 055201 (2006) [arXiv:hep-ph/0606108].
. T Hyodo, D Jido, A Hosaka, arXiv:0803.2550Phys. Rev. C. 7825203nucl-thT. Hyodo, D. Jido and A. Hosaka, Phys. Rev. C 78, 025203 (2008) [arXiv:0803.2550 [nucl-th]].
. L Roca, T Hyodo, D Jido, arXiv:0804.1210Nucl. Phys. A. 80965hep-phL. Roca, T. Hyodo and D. Jido, Nucl. Phys. A 809, 65 (2008) [arXiv:0804.1210 [hep-ph]].
. J C Nacher, E Oset, H Toki, A Ramos, nucl-th/9902071Phys. Lett. B. 461299J. C. Nacher, E. Oset, H. Toki and A. Ramos, Phys. Lett. B 461, 299 (1999) [nucl-th/9902071].
. R Hayano, private communicationR. Hayano, private communication.
. S Prakhov, Crystall Ball CollaborationPhys. Rev. C. 7034605S. Prakhov et al. [Crystall Ball Collaboration], Phys. Rev. C 70, 034605 (2004).
. V K Magas, E Oset, A Ramos, Phys. Rev. Lett. 9552301V. K. Magas, E. Oset and A. Ramos, Phys. Rev. Lett. 95, 052301 (2005) .
. O Braun, H J Grimm, V Hepp, H Strobele, C Thol, T J Thouw, D Capps, F Gandini, Nucl. Phys. B. 1291O. Braun, H. J. Grimm, V. Hepp, H. Strobele, C. Thol, T. J. Thouw, D. Capps and F. Gandini et al., Nucl. Phys. B 129, 1 (1977).
. D Jido, E Oset, T Sekihara, arXiv:0904.3410Eur. Phys. J. A. 42257nucl-thD. Jido, E. Oset and T. Sekihara, Eur. Phys. J. A 42, 257 (2009) [arXiv:0904.3410 [nucl-th]].
. D Jido, E Oset, T Sekihara, arXiv:1008.4423Eur. Phys. J. A. 4742nucl-thD. Jido, E. Oset and T. Sekihara, Eur. Phys. J. A 47, 42 (2011) [arXiv:1008.4423 [nucl-th]].
. M Niiyama, H Fujimura, D S Ahn, J K Ahn, S Ajimura, H C Bhang, T H Chang, W C Chang, arXiv:0805.4051Phys. Rev. C. 7835202hep-exM. Niiyama, H. Fujimura, D. S. Ahn, J. K. Ahn, S. Ajimura, H. C. Bhang, T. H. Chang and W. C. Chang et al., Phys. Rev. C 78, 035202 (2008) [arXiv:0805.4051 [hep-ex]].
. K Moriya, CLAS CollaborationarXiv:0911.2705Nucl. Phys. A. 835325nucl-exK. Moriya et al. [CLAS Collaboration], Nucl. Phys. A 835, 325 (2010) [arXiv:0911.2705 [nucl-ex]].
. I Zychor, M Buscher, M Hartmann, A Kacharava, I Keshelashvili, A Khoukaz, V Kleber, V Koptev, arXiv:0705.1039Phys. Lett. B. 660167nucl-exI. Zychor, M. Buscher, M. Hartmann, A. Kacharava, I. Keshelashvili, A. Khoukaz, V. Kleber and V. Koptev et al., Phys. Lett. B 660, 167 (2008) [arXiv:0705.1039 [nucl-ex]].
. J Siebenson, HADES CollaborationarXiv:1009.0946PoS. 201052nucl-exJ. Siebenson et al. [HADES Collaboration], PoS BORMIO 2010, 052 (2010) [arXiv:1009.0946 [nucl-ex]].
. J C Nacher, E Oset, H Toki, A Ramos, nucl-th/9812055Phys. Lett. B. 45555J. C. Nacher, E. Oset, H. Toki and A. Ramos, Phys. Lett. B 455, 55 (1999) [nucl-th/9812055].
. T Hyodo, A Hosaka, E Oset, A Ramos, M J Vicente, Vacas, nucl-th/0307005Phys. Rev. C. 6865203T. Hyodo, A. Hosaka, E. Oset, A. Ramos and M. J. Vicente Vacas, Phys. Rev. C 68, 065203 (2003) [nucl-th/0307005].
. T Hyodo, A Hosaka, M J Vacas, E Oset, nucl-th/0401051Phys. Lett. B. 59375T. Hyodo, A. Hosaka, M. J. Vicente Vacas and E. Oset, Phys. Lett. B 593, 75 (2004) [nucl-th/0401051].
. S Nam, J. -H Park, A Hosaka, H. -C Kim, arXiv:0806.4029hep-phS. -i. Nam, J. -H. Park, A. Hosaka and H. -C. Kim, arXiv:0806.4029 [hep-ph].
. L S Geng, E Oset, arXiv:0707.3343Eur. Phys. J. A. 34405hep-phL. S. Geng and E. Oset, Eur. Phys. J. A 34, 405 (2007) [arXiv:0707.3343 [hep-ph]].
. Y Nakahara, M Asakawa, T Hatsuda, Phys. Rev. 6091503Y. Nakahara, M. Asakawa, T. Hatsuda, Phys. Rev. D60 (1999) 091503.
. K Sasaki, S Sasaki, T Hatsuda, Phys. Lett. B. 623208K. Sasaki, S. Sasaki and T. Hatsuda, Phys. Lett. B 623 (2005) 208.
. N Mathur, A Alexandru, Y Chen, Phys. Rev. 76114505N. Mathur, A. Alexandru, Y. Chen et al., Phys. Rev. D76 (2007) 114505.
. S Basak, R G Edwards, G T Fleming, Phys. Rev. 7674504S. Basak, R. G. Edwards, G. T. Fleming et al., Phys. Rev. D76 (2007) 074504.
. J Bulava, R G Edwards, E Engelson, Phys. Rev. 8214507J. Bulava, R. G. Edwards, E. Engelson et al., Phys. Rev. D82 (2010) 014507.
. C Morningstar, A Bell, J Bulava, AIP Conf. Proc. 1257779C. Morningstar, A. Bell, J. Bulava et al., AIP Conf. Proc. 1257 (2010) 779.
. J Foley, J Bulava, K J Juge, AIP Conf. Proc. 1257789J. Foley, J. Bulava, K. J. Juge et al., AIP Conf. Proc. 1257 (2010) 789.
. M G Alford, R L Jaffe, Nucl. Phys. B. 578367M. G. Alford and R. L. Jaffe, Nucl. Phys. B 578 (2000) 367.
. T Kunihiro, SCALAR CollaborationS Muroya, SCALAR CollaborationA Nakamura, SCALAR CollaborationC Nonaka, SCALAR CollaborationM Sekiguchi, SCALAR CollaborationH Wada, SCALAR CollaborationPhys. Rev. D. 7034504T. Kunihiro, S. Muroya, A. Nakamura, C. Nonaka, M. Sekiguchi and H. Wada [SCALAR Collaboration], Phys. Rev. D 70 (2004) 034504.
. F Okiharu, arXiv:hep-ph/0507187PoS. 200570F. Okiharu et al., arXiv:hep-ph/0507187. H. Suganuma, K. Tsumura, N. Ishii and F. Okiharu, PoS LAT2005 (2006) 070;
. Prog. Theor. Phys. Suppl. 168168Prog. Theor. Phys. Suppl. 168 (2007) 168.
. C Mcneile, UKQCD CollaborationC Michael, UKQCD CollaborationPhys. Rev. D. 7414508C. McNeile and C. Michael [UKQCD Collaboration], Phys. Rev. D 74 (2006) 014508.
. A Hart, UKQCD CollaborationC Mcneile, UKQCD CollaborationC Michael, UKQCD CollaborationJ Pickavance, UKQCD CollaborationPhys. Rev. D. 74114504A. Hart, C. McNeile, C. Michael and J. Pickavance [UKQCD Collaboration], Phys. Rev. D 74 (2006) 114504.
. H Wada, T Kunihiro, S Muroya, A Nakamura, C Nonaka, M Sekiguchi, Phys. Lett. B. 652250H. Wada, T. Kunihiro, S. Muroya, A. Nakamura, C. Non- aka and M. Sekiguchi, Phys. Lett. B 652 (2007) 250.
S Prelovsek, C Dawson, T Izubuchi, K Orginos, A Soni, Conf. Proc. C 0908171. 094503. S. Prelovsek, T. Draper, C. B. Lang, M. Limmer, K. F. Liu, N. Mathur and D. Mohler7094507Phys. Rev. DS. Prelovsek, C. Dawson, T. Izubuchi, K. Orginos and A. Soni, Phys. Rev. D 70 (2004) 094503. S. Prelovsek, T. Draper, C. B. Lang, M. Limmer, K. F. Liu, N. Mathur and D. Mohler, Conf. Proc. C 0908171 (2009) 508; Phys. Rev. D 82 (2010) 094507
. H. -W Lin, Hadron Spectrum CollaborationPhys. Rev. 7934502H. -W. Lin et al. [ Hadron Spectrum Collaboration ], Phys. Rev. D79, 034502 (2009).
. C Gattringer, C Hagen, C B Lang, M Limmer, D Mohler, A Schafer, Phys. Rev. 7954501C. Gattringer, C. Hagen, C. B. Lang, M. Limmer, D. Mohler, A. Schafer, Phys. Rev. D79, 054501 (2009).
. G P Engel, BGRPhys. Rev. 8234505CollaborationG. P. Engel et al. [ BGR [Bern-Graz-Regensburg] Col- laboration ], Phys. Rev. D82, 034505 (2010).
. M S Mahbub, W Kamleh, D B Leinweber, A Cais, A G Williams, Phys. Lett. 693M. S. Mahbub, W. Kamleh, D. B. Leinweber, A. O Cais, A. G. Williams, Phys. Lett. B693, 351-357 (2010).
Wallace. R G Edwards, J J Dudek, D G Richards, S J , Phys. Rev. 8474508R. G. Edwards, J. J. Dudek, D. G. Richards, S. J. Wal- lace, Phys. Rev. D84, 074508 (2011).
. C B Lang, D Mohler, S Prelovsek, M Vidmar, arXiv:1105.5636Phys. Rev. D. 8454503heplatC. B. Lang, D. Mohler, S. Prelovsek and M. Vidmar, Phys. Rev. D 84, 054503 (2011) [arXiv:1105.5636 [hep- lat]].
. S Prelovsek, C B Lang, D Mohler, M Vidmar, arXiv:1111.0409hep-latS. Prelovsek, C. B. Lang, D. Mohler and M. Vidmar, [arXiv:1111.0409 [hep-lat]].
. W Melnitchouk, S O Bilson-Thompson, F D R Bonnet, J N Hedditch, F X Lee, D B Leinweber, A G Williams, J M Zanotti, Phys. Rev. D. 67114506W. Melnitchouk, S. O. Bilson-Thompson, F. D. R. Bon- net, J. N. Hedditch, F. X. Lee, D. B. Leinweber, A. G. Williams and J. M. Zanotti et al., Phys. Rev. D 67, 114506 (2003).
. Y Nemoto, N Nakajima, H Matsufuru, H Suganuma, Phys. Rev. D. 6894505Y. Nemoto, N. Nakajima, H. Matsufuru and H. Sug- anuma, Phys. Rev. D 68, 094505 (2003).
. F X Lee, C Bennhold, Nucl. Phys. A. 754248F. X. Lee and C. Bennhold, Nucl. Phys. A 754, 248 (2005).
. T Burch, C Gattringer, L Y , T. Burch, C. Gattringer, L. Y. .
. C Glozman, D Hagen, C B Hierl, A Lang, Schafer, Phys. Rev. D. 7414504Glozman, C. Hagen, D. Hierl, C. B. Lang and A. Schafer, Phys. Rev. D 74, 014504 (2006).
. N Ishii, T Doi, M Oka, H Suganuma, Prog. Theor. Phys. Suppl. 168598N. Ishii, T. Doi, M. Oka and H. Suganuma, Prog. Theor. Phys. Suppl. 168, 598 (2007).
. T T Takahashi, M Oka, Phys. Rev. D. 8134505T. T. Takahashi and M. Oka, Phys. Rev. D 81, 034505 (2010)
. T Hyodo, D Jido, Prog. Part. Nucl. Phys. 6755T. Hyodo and D. Jido, Prog. Part. Nucl. Phys. 67, 55 (2012)
. V Bernard, U. -G Meissner, A Rusetsky, Nucl. Phys. 788V. Bernard, U. -G. Meissner, A. Rusetsky, Nucl. Phys. B788, 1-20 (2008).
. V Bernard, M Lage, U. -G Meissner, A Rusetsky, JHEP. 080824V. Bernard, M. Lage, U. -G. Meissner, A. Rusetsky, JHEP 0808, 024 (2008).
. M Doring, U. -G Meissner, E Oset, A Rusetsky, arXiv:1107.3988Eur. Phys. J. A. 47139heplatM. Doring, U. -G. Meissner, E. Oset and A. Rusetsky, Eur. Phys. J. A 47, 139 (2011) [arXiv:1107.3988 [hep- lat]].
. B J Menadue, W Kamleh, D B Leinweber, M S Mahbub, arXiv:1109.6716hep-latB. J. Menadue, W. Kamleh, D. B. Leinweber and M. S. Mahbub, arXiv:1109.6716 [hep-lat].
. M Lüscher, Commun. Math. Phys. 105153M. Lüscher, Commun. Math. Phys. 105 (1986) 153 (1986).
. M Lüscher, Nucl. Phys. B. 354531M. Lüscher, Nucl. Phys. B 354 (1991) 531.
. M Doring, J Haidenbauer, U. -G Meissner, A Rusetsky, Eur. Phys. J. A. 47163M. Doring, J. Haidenbauer, U. -G. Meissner, A. Rusetsky, Eur. Phys. J. A 47, 163 (2011).
. A Torres, L R Dai, C Koren, D Jido, E Oset, Phys. Rev. D. 8514027A. Martinez Torres, L. R. Dai, C. Koren, D. Jido and E. Oset, Phys. Rev. D 85, 014027 (2012).
. E E Kolomeitsev, M F M Lutz, Phys. Lett. 582E. E. Kolomeitsev, M. F. M. Lutz, Phys. Lett. B582, 39-48 (2004).
. J Hofmann, M F M Lutz, Nucl. Phys. 733J. Hofmann, M. F. M. Lutz, Nucl. Phys. A733, 142-152 (2004).
. F. -K Guo, P. -N Shen, H. -C Chiang, R. -G Ping, B. -S Zou, Phys. Lett. 641F. -K. Guo, P. -N. Shen, H. -C. Chiang, R. -G. Ping, B. -S. Zou, Phys. Lett. B641, 278-285 (2006).
. D Gamermann, E Oset, D Strottman, M J Vicente, Vacas, Phys. Rev. 7674016D. Gamermann, E. Oset, D. Strottman, M. J. Vicente Vacas, Phys. Rev. D76, 074016 (2007).
. M Doring, U G Meissner, JHEP. 12019M. Doring, U. G. Meissner, JHEP 1201, 009 (2012).
. K Rummukainen, S A Gottlieb, Nucl. Phys. 450K. Rummukainen, S. A. Gottlieb, Nucl. Phys. B450, 397-436 (1995).
. C H Kim, C T Sachrajda, S R Sharpe, Nucl. Phys. 727C. h. Kim, C. T. Sachrajda, S. R. Sharpe, Nucl. Phys. B727, 218-243 (2005).
. Z Davoudi, M J Savage, arXiv:1108.5371hep-latZ. Davoudi and M. J. Savage, arXiv:1108.5371 [hep-lat].
. Z Fu, arXiv:1110.0319hep-latZ. Fu, arXiv:1110.0319 [hep-lat].
. M Doring, U G Meissner, E Oset, A Rusetsky, to be submittedM. Doring, U.G. Meissner, E. Oset and A. Rusetsky, to be submitted.
. L Roca, E Oset, arXiv:1201.0438hep-latL. Roca and E. Oset, arXiv:1201.0438 [hep-lat].
. J A Oller, E Oset, J R Pelaez, hep-ph/9804209Phys. Rev. D. 5999903Erratum-ibid. D 60, 099906 (1999). Erratum-ibid. DJ. A. Oller, E. Oset and J. R. Pelaez, Phys. Rev. D 59, 074001 (1999) [Erratum-ibid. D 60, 099906 (1999)] [Erratum-ibid. D 75, 099903 (2007)] [hep-ph/9804209].
. E Oset, A Ramos, C Bennhold, Phys. Lett. 527E. Oset, A. Ramos, C. Bennhold, Phys. Lett. B527, 99- 105 (2002).
. J A Oller, E Oset, hep-ph/9702314Nucl. Phys. A. 620407Erratum-ibid. AJ. A. Oller and E. Oset, Nucl. Phys. A 620, 438 (1997) [Erratum-ibid. A 652, 407 (1999)] [hep-ph/9702314].
|
[] |
[
"Observation of Complete Photonic Bandgap in Low Refractive Index Contrast Inverse Rod-Connected Diamond Structured Chalcogenides",
"Observation of Complete Photonic Bandgap in Low Refractive Index Contrast Inverse Rod-Connected Diamond Structured Chalcogenides"
] |
[
"Lifeng Chen [email protected] \n†Department of Electrical and Electronic Engineering\n‡Optoelectronics Research Centre\nUniversity of Bristol\nMerchant Venturers Building, Woodland RoadBS8 1UBBristolUnited Kingdom\n\nUniversity of Southampton\nUniversity RoadSO17 1BJSouthamptonUnited Kingdom\n\n¶Oxford Instruments Plasma Technology\nNorth End\nBS49 4APYatton, BristolUnited Kingdom\n",
"Katrina A Morgan ",
"Ghada A Alzaidy ",
"Chung-Che Huang ",
"Ying-Lung Daniel Ho \n†Department of Electrical and Electronic Engineering\n‡Optoelectronics Research Centre\nUniversity of Bristol\nMerchant Venturers Building, Woodland RoadBS8 1UBBristolUnited Kingdom\n\nUniversity of Southampton\nUniversity RoadSO17 1BJSouthamptonUnited Kingdom\n\n¶Oxford Instruments Plasma Technology\nNorth End\nBS49 4APYatton, BristolUnited Kingdom\n",
"Mike P C Taverne \n†Department of Electrical and Electronic Engineering\n‡Optoelectronics Research Centre\nUniversity of Bristol\nMerchant Venturers Building, Woodland RoadBS8 1UBBristolUnited Kingdom\n\nUniversity of Southampton\nUniversity RoadSO17 1BJSouthamptonUnited Kingdom\n\n¶Oxford Instruments Plasma Technology\nNorth End\nBS49 4APYatton, BristolUnited Kingdom\n",
"Xu Zheng \n†Department of Electrical and Electronic Engineering\n‡Optoelectronics Research Centre\nUniversity of Bristol\nMerchant Venturers Building, Woodland RoadBS8 1UBBristolUnited Kingdom\n\nUniversity of Southampton\nUniversity RoadSO17 1BJSouthamptonUnited Kingdom\n\n¶Oxford Instruments Plasma Technology\nNorth End\nBS49 4APYatton, BristolUnited Kingdom\n",
"Zhong Ren ",
"Zhuo Feng ",
"Ioannis Zeimpekis ",
"Daniel W Hewak [email protected] ",
"John G Rarity "
] |
[
"†Department of Electrical and Electronic Engineering\n‡Optoelectronics Research Centre\nUniversity of Bristol\nMerchant Venturers Building, Woodland RoadBS8 1UBBristolUnited Kingdom",
"University of Southampton\nUniversity RoadSO17 1BJSouthamptonUnited Kingdom",
"¶Oxford Instruments Plasma Technology\nNorth End\nBS49 4APYatton, BristolUnited Kingdom",
"†Department of Electrical and Electronic Engineering\n‡Optoelectronics Research Centre\nUniversity of Bristol\nMerchant Venturers Building, Woodland RoadBS8 1UBBristolUnited Kingdom",
"University of Southampton\nUniversity RoadSO17 1BJSouthamptonUnited Kingdom",
"¶Oxford Instruments Plasma Technology\nNorth End\nBS49 4APYatton, BristolUnited Kingdom",
"†Department of Electrical and Electronic Engineering\n‡Optoelectronics Research Centre\nUniversity of Bristol\nMerchant Venturers Building, Woodland RoadBS8 1UBBristolUnited Kingdom",
"University of Southampton\nUniversity RoadSO17 1BJSouthamptonUnited Kingdom",
"¶Oxford Instruments Plasma Technology\nNorth End\nBS49 4APYatton, BristolUnited Kingdom",
"†Department of Electrical and Electronic Engineering\n‡Optoelectronics Research Centre\nUniversity of Bristol\nMerchant Venturers Building, Woodland RoadBS8 1UBBristolUnited Kingdom",
"University of Southampton\nUniversity RoadSO17 1BJSouthamptonUnited Kingdom",
"¶Oxford Instruments Plasma Technology\nNorth End\nBS49 4APYatton, BristolUnited Kingdom"
] |
[] |
Three-dimensional complete photonic bandgap materials or photonic crystals block light propagation in all directions. The rod-connected diamond structure exhibits the largest photonic bandgap known to date and supports a complete bandgap for the lowest refractive index contrast ratio down to n high /n low ∼ 1.9. We confirm this threshold by measuring a complete photonic bandgap in the infrared region in Sn-S-O (n ∼ 1.9) 1 arXiv:1905.00404v1 [physics.optics] 1 May 2019 and Ge-Sb-S-O (n ∼ 2) inverse rod-connected diamond structures. The structures were fabricated using a low-temperature chemical vapor deposition process via a singleinversion technique. This provides a reliable fabrication technique of complete photonic bandgap materials and expands the library of backfilling materials, leading to a wide range of future photonic applications. Keywords direct laser writing, two-photon lithography, chemical vapor deposition, chalcogenide materials, photonic bandgap, three-dimensional photonic crystals Three-dimensional (3D) complete photonic bandgap (PBG) structures have been widely studied since their invention in 1987 by John 1 and Yablonovitch. 2 A complete PBG structure can prohibit photon propagation in any direction and this strong confinement of light can be exploited for applications ranging through high precision sensing, 3 ultralow power and ultrafast optical switches, 4 low threshold nanolasers, 5 high efficiency single photon sources, 6 and integrated photonic circuits. 7 However, such 3D PBG materials are difficult to fabricate. Currently two main techniques of fabrication have been demonstrated: bottom-up and top-down. The bottom-up method refers to schemes where nano-objects self-assemble into structures that then exhibit a PBG. 8,9 The top-down approach refers to creating 3D structures using etching, ion-milling, lithography or laser writing that then produce PBGs. Many of the top-down techniques involve miscellaneous fabrication steps such as waferfusion and micromanipulation, 10,11 while others such as single prism holographic lithography 12,13 do not allow for local modification for defects or waveguides. Alternatively, direct laser writing (DLW) using two-photon polymerization (2PP) allows for a variety of high refractive index contrast (RIC) 3D photonic crystal (PhC) structures with complete PBGs in near-infrared 14 and visible 15 regions to be realized. To fulfill the high RIC (> 2 : 1) requirement high index material (silicon or titanium dioxide) needs to be deposited into 3D and Quantum Information (NSQI), University of Bristol, and the Optoelectronics Research Centre (ORC), University of Southampton, and computational facilities of the Advanced Computing Research Centre (ACRC), University of Bristol.
|
10.1021/acsphotonics.9b00184
|
[
"https://arxiv.org/pdf/1905.00404v1.pdf"
] | 141,463,837 |
1905.00404
|
0a4854c89787af81e9d3596996f2560fd97d82e2
|
Observation of Complete Photonic Bandgap in Low Refractive Index Contrast Inverse Rod-Connected Diamond Structured Chalcogenides
Lifeng Chen [email protected]
†Department of Electrical and Electronic Engineering
‡Optoelectronics Research Centre
University of Bristol
Merchant Venturers Building, Woodland RoadBS8 1UBBristolUnited Kingdom
University of Southampton
University RoadSO17 1BJSouthamptonUnited Kingdom
¶Oxford Instruments Plasma Technology
North End
BS49 4APYatton, BristolUnited Kingdom
Katrina A Morgan
Ghada A Alzaidy
Chung-Che Huang
Ying-Lung Daniel Ho
†Department of Electrical and Electronic Engineering
‡Optoelectronics Research Centre
University of Bristol
Merchant Venturers Building, Woodland RoadBS8 1UBBristolUnited Kingdom
University of Southampton
University RoadSO17 1BJSouthamptonUnited Kingdom
¶Oxford Instruments Plasma Technology
North End
BS49 4APYatton, BristolUnited Kingdom
Mike P C Taverne
†Department of Electrical and Electronic Engineering
‡Optoelectronics Research Centre
University of Bristol
Merchant Venturers Building, Woodland RoadBS8 1UBBristolUnited Kingdom
University of Southampton
University RoadSO17 1BJSouthamptonUnited Kingdom
¶Oxford Instruments Plasma Technology
North End
BS49 4APYatton, BristolUnited Kingdom
Xu Zheng
†Department of Electrical and Electronic Engineering
‡Optoelectronics Research Centre
University of Bristol
Merchant Venturers Building, Woodland RoadBS8 1UBBristolUnited Kingdom
University of Southampton
University RoadSO17 1BJSouthamptonUnited Kingdom
¶Oxford Instruments Plasma Technology
North End
BS49 4APYatton, BristolUnited Kingdom
Zhong Ren
Zhuo Feng
Ioannis Zeimpekis
Daniel W Hewak [email protected]
John G Rarity
Observation of Complete Photonic Bandgap in Low Refractive Index Contrast Inverse Rod-Connected Diamond Structured Chalcogenides
Three-dimensional complete photonic bandgap materials or photonic crystals block light propagation in all directions. The rod-connected diamond structure exhibits the largest photonic bandgap known to date and supports a complete bandgap for the lowest refractive index contrast ratio down to n high /n low ∼ 1.9. We confirm this threshold by measuring a complete photonic bandgap in the infrared region in Sn-S-O (n ∼ 1.9) 1 arXiv:1905.00404v1 [physics.optics] 1 May 2019 and Ge-Sb-S-O (n ∼ 2) inverse rod-connected diamond structures. The structures were fabricated using a low-temperature chemical vapor deposition process via a singleinversion technique. This provides a reliable fabrication technique of complete photonic bandgap materials and expands the library of backfilling materials, leading to a wide range of future photonic applications. Keywords direct laser writing, two-photon lithography, chemical vapor deposition, chalcogenide materials, photonic bandgap, three-dimensional photonic crystals Three-dimensional (3D) complete photonic bandgap (PBG) structures have been widely studied since their invention in 1987 by John 1 and Yablonovitch. 2 A complete PBG structure can prohibit photon propagation in any direction and this strong confinement of light can be exploited for applications ranging through high precision sensing, 3 ultralow power and ultrafast optical switches, 4 low threshold nanolasers, 5 high efficiency single photon sources, 6 and integrated photonic circuits. 7 However, such 3D PBG materials are difficult to fabricate. Currently two main techniques of fabrication have been demonstrated: bottom-up and top-down. The bottom-up method refers to schemes where nano-objects self-assemble into structures that then exhibit a PBG. 8,9 The top-down approach refers to creating 3D structures using etching, ion-milling, lithography or laser writing that then produce PBGs. Many of the top-down techniques involve miscellaneous fabrication steps such as waferfusion and micromanipulation, 10,11 while others such as single prism holographic lithography 12,13 do not allow for local modification for defects or waveguides. Alternatively, direct laser writing (DLW) using two-photon polymerization (2PP) allows for a variety of high refractive index contrast (RIC) 3D photonic crystal (PhC) structures with complete PBGs in near-infrared 14 and visible 15 regions to be realized. To fulfill the high RIC (> 2 : 1) requirement high index material (silicon or titanium dioxide) needs to be deposited into 3D and Quantum Information (NSQI), University of Bristol, and the Optoelectronics Research Centre (ORC), University of Southampton, and computational facilities of the Advanced Computing Research Centre (ACRC), University of Bristol.
templates. Most deposition temperatures are above the polymer melting point, hence double-inversion methods 16 or protective-layer methods 15 have been used to make complete PBG materials. There have also been successful demonstrations of DLW into photosensitive chalcogenide materials, followed by etching, showing bandgaps in the 3 − 4 µm wavelength range. 17,18 In this work, we have developed a low-temperature chemical vapor deposition (CVD) process for chalcogenide materials 19,20 to directly backfill unmodified polymer templates. Two materials were chosen for this work, Sn-S and Ge-Sb-S, due to their high refractive index values and low absorption in the near-infrared region, but also because they have attractive nonlinear optical properties suitable for applications such as optical switches. 20 The chalcogenide materials are conformally coated on polymeric RCD templates, 21 which are written by a commercial DLW system (Nanoscribe GmbH). The chalcogenide/polymer structure is then exposed to an oxygen plasma, resulting in selective etching of the polymeric scaffold. This novel approach results in chalcogenide inverse RCD structures, and here we successfully demonstrate measurements showing a complete PBG at near-infrared wavelengths (0.9 − 1.7 µm) with low-RIC materials (n_high/n_low ∼ 1.9:1 for Sn-S-O/air and 2:1 for Ge-Sb-S-O/air).
To achieve a complete PBG at smaller wavelengths, a structure with a higher PBG ratio (gap width to center wavelength ratio ∆λ/λ0) is required to increase the tolerance to fabrication errors from templates and depositions. The RCD structure, 22 from the A7 crystal family, 23 is reported to retain the highest complete PBG ratio among all crystal geometries and the lowest RIC (n_high/n_low ∼ 1.9:1) known to support a complete PBG. 24,25 This RCD structure is described as rods replacing bonds between atoms in a diamond crystal. The conventional cubic unit cell consists of four tetrahedrons (Figure 1a) stacked two by two in orthogonal directions (Figure 1b). The lattice constant, a, is used to define an RCD structure (Figure 1b). For a realistic simulation of the fabricated samples, elliptical rods are preferred, with a width w and a "transverse height" h (∼ 2/3 H, where H is the "vertical height"; Figure 1a). The translational symmetry of an RCD (and its inversion) is the same as a face-centered cubic (FCC) structure, and the first Brillouin zone is a truncated octahedron (red line in Figure 1b). XWKLU are the symmetry points on the Brillouin zone in an RCD structure. A high-quality RCD is not a layered structure and, thus, cannot be easily achieved using layer-by-layer 2D lithography methods. Moreover, the rod diameter required for a direct high-index RCD 26 is below the 2PP DLW system resolution with a 780 nm laser.
Fortunately, its inverse structure shows a slightly smaller PBG ratio of 11% (compared to 11.7 %) at the same RIC (2.4:1), material filling fraction (FF), and rod radius, as illustrated in Figure 1c. By utilizing this inverse structure, one can create a relatively low resolution (big rods) polymer template to realize an air-filled high index structure.
Results and Discussion
Using the MIT photonic band (MPB) software, 27 based on the plane wave expansion method, we calculated the normal and inverse RCDs photonic band structure. We also used Lumerical, 28 a commercial-grade simulator based on the finite-difference time-domain (FDTD) method, to calculate their angular dependent reflection spectra. For the FDTD simulations, we used a plane wave as source and set it to different propagation angles (relative to the normal incidence) to create angular spectrum results. Substrates are not included in all calculations as the substrate thickness is far bigger than the photonic crystal thickness and it generates barely visible differences compared to the simulations without substrate, but hugely increases the required computational resources. Figure 1c plots the PBG ratio and the normalized frequency (a/λ) of the PBG position as a function of the air rod radius (or high index material FF) in an inverse RCD structure. In this example, we assume rods are cylindrical and used three RICs 2.4:1, 2:1 and 1.9:1 (refractive index is averaged and nondispersive). A band structure calculation for inverse RCDs with RIC 2.4:1 at different FF ( Figure 1c) shows the complete PBG only appears within the range of air rod radius from 0.175a to 0.3a, with a corresponding FF of material from 60% to 10% and normalized frequency of PBG from 0.525 to 0.8(a/λ). The maximum complete PBG ratio is 11%, ranging from 0.63 to 0.71(a/λ), and appears with a 0.25a air rod radius. When reducing the RIC to 2:1 and 1.9, the FF and radius ranges to obtain a complete PBG decrease, while the PBG ratio decreases and the midgap frequency increases. However, the optimal radius stays around 0.23-0.25a for all three RICs. A practical resolution constant for the RCD structure a = 1 µm is chosen, which results in the PBG wavelength range from 1.3 to 1.9 µm and around 500 nm diameter template rods. An optimized commercialized DLW system based on 2PP has shown voxel resolution down to 200 nm lateral and 300 nm vertical. 29 The fabrication of an inverse RCD structure can be described in three main phases: template construction, high index material backfilling, and polymer removal, shown in Figure 2.
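As a quick arithmetic cross-check of the numbers quoted above (this snippet is an addition, not part of the original text), the normalized MPB frequencies a/λ can be converted to vacuum wavelengths for the assumed lattice constant a = 1 µm, and the gap-to-midgap ratio recomputed:

```python
# Convert MPB normalized frequencies (a/lambda) to wavelengths for a = 1 um
# and recompute the gap-to-midgap ratio quoted in the text.
a_um = 1.0  # lattice constant chosen in the text

def wavelength_um(norm_freq):
    """Vacuum wavelength in microns for a normalized frequency a/lambda."""
    return a_um / norm_freq

def gap_ratio(f_low, f_high):
    """Gap-to-midgap ratio; identical whether computed in frequency or wavelength."""
    return 2.0 * (f_high - f_low) / (f_high + f_low)

print(wavelength_um(0.8), wavelength_um(0.525))  # ~1.25-1.90 um: full PBG tuning range
print(wavelength_um(0.71), wavelength_um(0.63))  # ~1.41-1.59 um: widest gap near r = 0.25a
print(gap_ratio(0.63, 0.71))                     # ~0.12, close to the quoted ~11% ratio
```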
Phase one constructs a polymer RCD template ( Figure 2a) using a 2PP DLW method (see Materials and Methods for the details). We followed the optimized rod radius based on the MPB simulations (Figure 1c). The fabricated template has 6 lattice periods in z-axis and 14 in x-and y-axes (Figure 2d). To examine the 3D structure quality, an optical characterization of the template is performed prior to the backfilling process. This is done using a home-built Fourier imaging spectroscopy setup 30 (see Materials and Methods).
In phase two (Figure 2b), high refractive index chalcogenide materials (Sn-S or Ge-Sb-S) are conformally deposited into the polymer templates using an in-house built CVD system 31 (see Materials and Methods). The deposition rate of the CVD materials is controlled by the ratios between precursors and reactive gas, chamber pressure, deposition temperature, and gas flow. The low melting point nature of the polymer template limits the deposition temperature to 200℃ or below. The Sn-S deposition was carried out at room temperature whereas 150℃ was used for Ge-Sb-S deposition. A Scanning Electron Microscope (SEM) photo of a fully backfilled Ge-Sb-S-polymer RCD structure is shown in Figure 2e.
The final phase three is polymer removal. In phase two, the high index material grows omni-directionally inside and on top of the polymer templates. We used focused ion beam (FIB) milling to cut a thin layer off the top, to expose the buried polymer beneath, enabling the oxygen plasma to access the polymer template from above. The oxygen plasma reacts with the polymer to form gaseous compounds that escape from the template, but this treatment also partially oxidizes the chalcogenides, resulting in reduced refractive indexes (see Materials and Methods for the details). Once the polymer template is removed, the partially oxidized high index materials remain, forming an inverse RCD structure.
The photonic band structures of the polymer templates are measured using wide-angle Fourier imaging spectroscopy and compared with the FDTD simulations. 30 The RCD template fabricated via the 2PP process is an air-polymer-based crystal, where the RIC is approximately 1:1.5. Figure 3 shows the angular reflection spectra comparison between measured polymer templates and simulations via FDTD.
In the measurement result for the Sn-S-O structure (Figure 4a), a continuous reflection peak (the fundamental PBG) across symmetry points X-W-K and L-U-X appears at around 1250 − 1500 nm with 15 − 35% reflectivity. The fundamental band reflectivity in the K-L direction drops to around 15% due to the low RIC (1.9:1). Some higher order bands (reflection peaks) appear between 1000 − 1300 nm in the W-K and U-X directions, matching the MPB simulation results. The corresponding measurement result for the Ge-Sb-S-O structure (Figure 4d) is discussed below.
Materials and Methods
Direct Laser Writing (DLW)
The DLW system is a commercial system from Nanoscribe GmbH, based on 2PP, which contains a 780 nm femtosecond laser (pulse width ∼ 120 fs and repetition rate ∼ 80 MHz) and a high NA (= 1.4) oil immersion objective lens (100×, Zeiss). The laser writing power is set to 20% of mean output power (20 mW), with an adaptive piezo stage scanning speed of 50 µm/s and three repetitions per line. The photoresist used is a liquid negative resist, IP-L 780 (Nanoscribe GmbH), drop-casted onto a 22 mm × 22 mm × 170 µm glass substrate. An exposed template sample is developed using SU-8 developer for 30 min (to remove unpolymerized resist) and IPA for 5 min (to remove SU-8 developer).
Fourier Imaging Spectroscopy (FIS)
We used an identical system to that described in our previous work. 30 This home-built Fourier imaging spectroscope uses a 4× objective lens to collimate a fiber-coupled (200 µm diameter fiber) white light source (Bentham Ltd. WLS100, 300 − 2500 nm), focusing the light beam with an NA = 0.9, 60× objective lens on the sample. The detection plane is a projection image of the back focal plane of the objective lens. This image is scanned by a fiber (105 µm diameter) attached to an x-y motorized stage; the other end of the fiber connects to a spectrometer (Ocean Optics NIRQuest512), which has a 900 − 1700 nm spectral range.
The angular resolution of the system is ∼ 2°per scan step.
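For readers unfamiliar with Fourier imaging, the mapping from a position in the back focal plane to a detection angle follows the Abbe sine condition (r ∝ sin θ). The short sketch below illustrates this relation for the NA = 0.9 collection objective mentioned above; it is an illustrative addition, not code used by the authors.

```python
import numpy as np

NA = 0.9  # numerical aperture of the collection objective quoted above

def detection_angle_deg(r_norm, na=NA):
    """Detection angle (degrees) for a normalized back-focal-plane radius r_norm in [0, 1],
    using the Abbe sine condition r = f * sin(theta)."""
    return np.degrees(np.arcsin(np.clip(r_norm, 0.0, 1.0) * na))

print(detection_angle_deg(1.0))  # ~64 deg: maximum collection half-angle for NA = 0.9
print(detection_angle_deg(0.5))  # ~27 deg: half-way out in the back focal plane
```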
Chemical Vapor Deposition (CVD)
For Sn-S deposition: SnCl 4 (99.999% pure from Alfa Aesar) is used as the precursor to react with H 2 S gas (99.9% pure from Air Liquide) to form Sn-S at room temperature with the chamber pressure of 100 mbar controlled by a Vacuubrand MV10NT diaphragm pump.
A 30 mm O.D. × 1000 mm long quartz tube is used for CVD reaction and the precursor, SnCl 4 vapor, was delivered with Ar gas through a mass flow controller (MFC) at 10 sccm, whereas H 2 S gas was delivered through another MFC at 50 sccm.
For Ge-Sb-S deposition: GeCl 4 (99.9999% pure from Umicore) and SbCl 5 (99.999% pure from GWI) are used as the precursors to react with H 2 S gas (99.9% pure from Air Liquide) to form Ge-Sb-S at 150℃ with atmospheric chamber pressure. A 30 mm O.D. × 1000 mm long quartz tube is used for CVD reaction and the precursors, GeCl 4 and SbCl 5 vapors, are delivered individually with Ar gas through MFCs at 20 and 80 sccm, respectively, whereas H 2 S gas is delivered through another MFC at 50 sccm.
Inductively Coupled Plasma (ICP) Etching
We used an ICP system, PlasmalabSystem 100 (ICP 180), from Oxford Instruments in the polymer removal process. The process was run twice for Sn-S structure and each time uses SEM to confirm complete removal of polymer template. The first run duration time was 5 min 30 s with 30 mTorr chamber pressure, oxygen flow rate was 50 sccm, and 100 W RF forward power and 400 W ICP forward power, reaction temperature was 60℃. The second run reduced duration to 5 min and chamber pressure to 20 mTorr, other settings unchanged.
For Ge-Sb-S structure the ICP parameters changed to duration time 40 min, 30 mTorr chamber pressure, oxygen flow rate was 50 sccm, 20 W RF forward power and 400 W ICP forward power, reaction temperature decreased to 40℃ to reduce etching rate.
Evaluation of the Refractive Index Values
The refractive index of planar chalcogenide films grown under identical conditions and exposed to similar plasma etching was evaluated by ellipsometry. However, these measurements yielded refractive index values that were far too high (> 2.6 for GeSbS) to explain the optical measurements. It was suspected that this might be due to oxidation effects which would be confined to the surface in planar films, while our porous structures are effectively fully oxidized. EDX measurements of the composition of planar films confirmed limited oxidation. In contrast, EDX measured compositions of 3D RCD structures were Ge12Sb15S33O40 and Sn15S14O71, showing significant oxygen uptake and suggesting much lower refractive index values around 2.0 and 1.9, respectively. This was further confirmed by using the refractive index (RI) values as a hand-fitting parameter in the calculation of the expected reflection spectra at normal incidence using the FDTD method, as illustrated in Figures 5 and 6.
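The hand-fitting of the refractive index can be mimicked by a one-parameter search that minimizes the mismatch between a modeled and a measured reflectance. The toy sketch below is purely illustrative: it replaces the full FDTD forward model used in the paper by a single-interface Fresnel reflectance, and the "measured" value is a made-up placeholder rather than data from the experiment.

```python
import numpy as np

def fresnel_reflectance(n):
    """Normal-incidence reflectance of a single air/material interface (toy forward model)."""
    return ((n - 1.0) / (n + 1.0)) ** 2

R_target = 0.11  # placeholder reflectance level, NOT a measured value from the paper

candidates = np.linspace(1.5, 2.6, 111)  # trial refractive indices
errors = [(fresnel_reflectance(n) - R_target) ** 2 for n in candidates]
best_n = candidates[int(np.argmin(errors))]
print(best_n)  # index whose toy-model reflectance best matches the target (~2.0 here)
```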
Figure 1: (a) Smallest unit of an RCD with structure parameters, width w and heights h and H, labeled. (b) Individual RCD lattice with its Brillouin zone (red bold line); XWKLU are the symmetry points in reciprocal space. (c) Diagram calculated via MPB simulations: colored dotted lines show the complete bandgap ratio comparison of normal (blue) and inverse (orange) RCD at different high index material FF; bold black lines with shading in between indicate how the complete PBG region changes as a function of the normalized rod radius in an inverse RCD. The simulations assume cylindrical rods and a high-index material with n = 2.4, 2, and 1.9 in air (n = 1), respectively. The rod radius was varied with a step of 0.01a. Note that the bandgap ratios were plotted as a function of high-index filling fraction (computed by MPB), while the bandgap regions were plotted as a function of the normalized radius r/a. The relationship between filling fraction and radius is almost, but not perfectly, linear. This explains the misalignment between the two types of plots.
Figure 2: (a-c) Illustrations and (d-f) SEM photos for each fabrication step. (a) and (d) show the polymer template with a size around 14 × 14 µm in the x-y plane. (b) and (e) show the backfilled (Ge-Sb-S) template with no visible air gap. (c) and (f) are 45° oblique views of the inverse Ge-Sb-S-O RCD; insets show enlarged areas from the cross section, and parameters are measured as w ∼ 389 nm and H ∼ 402 nm.
Figure 2f shows the oblique view of a completed inverse RCD structure (Ge-Sb-S-O based) from SEM. Elliptical air gaps are visible in the cross section, with width w ∼ 389 nm and height h ∼ 501 nm (ascertained from the SEM measured H ∼ 402 nm and Figure 1a).
Figure 3: Intensity color plot for angular (a, c) reflection measurements and (b, d) FDTD simulations of RCD polymer templates. (a) and (b) are the results in the X-W-K directions; (c) and (d) are the results in the X-U-L directions.
Figure 3a,b is the optical response (unpolarized reflection) for a detection angle in XUL and Figure 3c,d is for a detection angle in XWK. Figure 3a and c demonstrate partial band gaps with reflectivity above 20% (30% in simulation) at around 1200 − 1300 nm wavelength at normal incidence (X) and blue-shifted reflection peaks at the second symmetry point (U and W, respectively), with reflectivity up to about 40% in measurements and 50% in FDTD simulation. FDTD simulations use structure parameters h = 500 nm, w = 400 nm, a = 0.925 µm (slightly less than the targeted 1 µm lattice size due to polymer shrinkage 21), and finite 10 by 10 lattice periods in the x-y plane and 6 periods in the z-direction. The simulated fundamental bandgaps closely match the experimental results from the polymer template, with minor differences appearing only in higher order bands, demonstrating that a high-quality template has been achieved.
Figure 4: Color plot (a, d) measured and (b, e) FDTD simulated angular reflection spectra, mapped with the photonic band diagram in the X-W-K-L-U-X directions. The reflection spectra color line plots (c, f) for measured structures in the X, W, K, L, and U directions individually have been noise-reduction processed to show the main features: the shaded area includes the main band features and the black area indicates the complete PBG; (a)-(c) are for Sn-S-O and (d)-(f) are for Ge-Sb-S-O.
Figure 4a, d and b, e show the optical response of the Sn-S-O (Ge-Sb-S-O) inverse RCDs in all symmetry points XWKLUX for measurement and FDTD simulation, respectively. MPB band structure calculations have been layered on top of all results in Figure 4. The refractive index of partially oxidized chalcogenides is estimated by using the energy-dispersive X-ray (EDX) spectroscopy technique to measure the material composition on both 3D structures and a test deposition on a wafer placed next to the sample in the deposition process. The dispersions of index for both materials are less than 0.1 in the range of 900 − 1700 nm, according to literature, 31,32 thus, we use averaged nondispersive refractive indexes for both materials in simulations. The resulting indexes are 1.9 for Sn-S-O and 2.0 for Ge-Sb-S-O at 900 − 1700 nm (see Materials and Methods for details).
For the Ge-Sb-S-O structure measurement result in Figure 4d, a continuous reflection peak across all symmetry points appears at around 1250 − 1550 nm, with a maximum 70% reflectivity at the W direction and lowest reflectivity around 30% at the L direction. Both FDTD simulations (Figure 4b, e) are based on finite size structures (10 lattices in the x-y plane). A small DC offset was applied to the color scale of measured data to suppress the background scattered light. The FDTD simulation parameters for the air rods are h = 400 nm and w = 450 nm for the Sn-S-O structure and h = 500 nm and w = 400 nm for the Ge-Sb-S-O structure, adjusted to best fit the optical results. This also confirms the values measured from the SEM results and supports our estimates of the refractive indexes. The FDTD simulation shows lowered reflections in the peak (dropping to less than 20% reflectivity) at high observation angles (in the K-L region) due to the finite size (in the x-y plane) of the structures and edge effects, in contrast to the MPB (infinite structure size) calculations. For the same reasons, the measurements also show this effect, although there is a slight discontinuity seen around the L direction from a limitation of the imaging lens numerical aperture (NA). Figure 4c and f demonstrate the reflection spectra at each symmetry point for the Sn-S-O and Ge-Sb-S-O structures, respectively. The overlapping reflection peaks at around 1425 nm for the Sn-S-O structure, and 1410 − 1430 nm for the Ge-Sb-S-O structure, indicate the appearance of complete bandgaps in both Sn-S-O (bandgap ratio > 0.3%) and Ge-Sb-S-O (bandgap ratio > 1%).
Conclusion
By introducing the single-inversion process using a low-temperature CVD technique, we demonstrate low RIC (Sn-S-O and Ge-Sb-S-O) inverse RCD structures with complete PBGs working in the near-infrared region for the first time. This single-inversion approach dramatically simplifies the 3D fabrication process. Using low-temperature CVD and removing polymer templates with oxygen plasma, we have shown it is possible to completely fill the nanoscale void space of 3D templates with chalcogenide materials. Optical modeling of the PBG material guided the design and, when compared with characterization results, enabled an estimation of device quality at each fabrication step. The complete PBGs (> 0.3% and > 1%) of inverse RCDs formed in low RIC (1.9:1 and 2:1), via Sn-S-O and Ge-Sb-S-O chalcogenide materials, were experimentally measured, with results compared with numerical simulations using the FDTD technique and the plane-wave expansion method. These results demonstrate the threshold of the lowest RIC supporting a complete PBG, experimentally validating results predicted by the topology optimization approach. 25 These results open the way for developing a process to reliably fabricate arbitrary photonic bandgap structures in technologically relevant wavelength regions (1.4 − 1.6 µm). Moreover, our simulation work 26 points the way toward micro/nanocavity designs capable of confining light in mode volumes down to 10^-3 cubic wavelengths. 33 Our 3D lithography approach can be directly adapted to writing these cavity and waveguide structures, incorporating emitters to open new regimes of single photon level interactions, 34 novel nanolasers 5 and high-bandwidth, lossless, and subwavelength scale optical circuits. 7 Future work will consider the fabrication of inverse RCD structures along different growth directions (in particular, with the L direction aligned with the substrate normal) in order to measure the reflectivity at these higher angles more accurately. However, the accuracy will then be affected by structural differences due to the elliptical voxel shape and the ensuing required differences in writing technique.
Figure 5: Ge-Sb-S-O refractive index fitted via FDTD simulations. The modeling results are compared with the measurement data (from the inverse RCD) at normal incident angle. The shadow region is where the PBG is located. In this case, the refractive index of the structure is close to 2.0.
Figure 6: Sn-S-O refractive index fitted via FDTD simulations. The modeling results are compared with the measurement data (from the inverse RCD) at normal incident angle. The shadow region is where the PBG is located. In this case, the refractive index of the structure is close to 1.9.
References
John, S. Strong localization of photons in certain disordered dielectric superlattices. Physical Review Letters 1987, 58, 2486-2489.
Yablonovitch, E. Inhibited Spontaneous Emission in Solid-State Physics and Electronics. Physical Review Letters 1987, 58, 2059-2062.
Sünner, T.; Stichel, T.; Kwon, S.-H.; Schlereth, T. W.; Höfling, S.; Kamp, M.; Forchel, A. Photonic crystal cavity based gas sensor. Applied Physics Letters 2008, 92, 261112.
Nozaki, K.; Tanabe, T.; Shinya, A.; Matsuo, S.; Sato, T.; Taniyama, H.; Notomi, M. Sub-femtojoule all-optical switching using a photonic-crystal nanocavity. Nature Photonics 2010, 4, 477-483.
Khajavikhan, M.; Simic, A.; Katz, M.; Lee, J. H.; Slutsky, B.; Mizrahi, A.; Lomakin, V.; Fainman, Y. Thresholdless nanoscale coaxial lasers. Nature 2012, 482, 204-207.
Aharonovich, I.; Castelletto, S.; Simpson, D. A.; Su, C.-H.; Greentree, A. D.; Prawer, S. Diamond-based single-photon emitters. Reports on Progress in Physics 2011, 74, 76501.
John, S. Why trap light? Nature Materials 2012, 11, 997.
Blanco, A.; Chomski, E.; Grabtchak, S.; Ibisate, M.; John, S.; Leonard, S. W.; Lopez, C.; Meseguer, F.; Miguez, H.; Mondia, J. P.; Ozin, G. A.; Toader, O.; van Driel, H. M. Large-scale synthesis of a silicon photonic crystal with a complete three-dimensional bandgap near 1.5 micrometres. Nature 2000, 405, 437-440.
Galisteo-López, J. F.; Ibisate, M.; Sapienza, R.; Froufe-Pérez, L. S.; Blanco, Á.; López, C. Self-Assembled Photonic Structures. Advanced Materials 2011, 23, 30-69.
Aoki, K. Practical approach for a rod-connected diamond photonic crystal operating at optical wavelengths. Applied Physics Letters 2009, 95, 191910.
Aoki, K.; Miyazaki, H. T.; Hirayama, H.; Inoshita, K.; Baba, T.; Sakoda, K.; Shinya, N.; Aoyagi, Y. Microassembly of semiconductor three-dimensional photonic crystals. Nature Materials 2003, 2, 117-121.
Park, S.-G.; Miyake, M.; Yang, S.-M.; Braun, P. V. Cu2O Inverse Woodpile Photonic Crystals by Prism Holographic Lithography and Electrodeposition. Advanced Materials 2011, 23, 2749-2752.
Campbell, M.; Sharp, D. N.; Harrison, M. T.; Denning, R. G.; Turberfield, A. J. Fabrication of photonic crystals for the visible spectrum by holographic lithography. Nature 2000, 404, 53.
Tétreault, N.; von Freymann, G.; Deubel, M.; Hermatschweiler, M.; Pérez-Willard, F.; John, S.; Wegener, M.; Ozin, G. A. New Route to Three-Dimensional Photonic Bandgap Materials: Silicon Double Inversion of Polymer Templates. Advanced Materials 2006, 18, 457-460.
Frölich, A.; Fischer, J.; Zebrowski, T.; Busch, K.; Wegener, M. Titania Woodpiles with Complete Three-Dimensional Photonic Bandgaps in the Visible. Advanced Materials 2013, 25, 3588-3592.
Varghese, L. T.; Fan, L.; Wang, J.; Xuan, Y.; Qi, M. Rapid and Low-Cost Prototyping of 3D Nanostructures with Multi-Layer Hydrogen Silsesquioxane Scaffolds. Small 2013, 9, 4237-4242.
Cumming, B. P.; Turner, M. D.; Schröder-Turk, G. E.; Debbarma, S.; Luther-Davies, B.; Gu, M. Adaptive optics enhanced direct laser writing of high refractive index gyroid photonic crystals in chalcogenide glass. Optics Express 2014, 22, 689-698.
Nicoletti, E.; Bulla, D.; Luther-Davies, B.; Gu, M. Planar defects in three-dimensional chalcogenide glass photonic crystals. Optics Letters 2011, 36, 2248.
Freeman, D.; Grillet, C.; Lee, M. W.; Smith, C. L.; Ruan, Y.; Rode, A.; Krolikowska, M.; Tomljenovic-Hanic, S.; de Sterke, C.; Steel, M. J.; Luther-Davies, B.; Madden, S.; Moss, D. J.; Lee, Y.-h.; Eggleton, B. J. Chalcogenide glass photonic crystals. Photonics and Nanostructures - Fundamentals and Applications 2008, 6, 3-11.
Hewak, D. W.; Brady, D.; Curry, R. J.; Elliott, G.; Huang, C.-C.; Hughes, M.; Knight, K.; Mairaj, A.; Petrovich, M.; Simpson, R.; Smith, C.; Sproat, C. In Photonic glasses and glass-ceramics; Murugan, G. S., Ed.; Research Signpost, 2010; pp 29-102.
Chen, L.; Taverne, M. P. C.; Zheng, X.; Lin, J.-D.; Oulton, R.; Lopez-Garcia, M.; Ho, Y.-L. D.; Rarity, J. G. Evidence of near-infrared partial photonic bandgap in polymeric rod-connected diamond structures. Optics Express 2015, 23, 26565.
Chan, C. T.; Ho, K. M.; Soukoulis, C. M. Photonic Band Gaps in Experimentally Realizable Periodic Dielectric Structures. EPL (Europhysics Letters) 1991, 16, 563.
Chan, C.; Datta, S.; Ho, K.; Soukoulis, C. A7 structure: A family of photonic crystals. Physical Review B 1994, 50, 1988-1991.
Maldovan, M.; Thomas, E. L. Diamond-structured photonic crystals. Nature Materials 2004, 3, 593-600.
Men, H.; Lee, K. Y. K.; Freund, R. M.; Peraire, J.; Johnson, S. G. Robust topology optimization of three-dimensional photonic-crystal band-gap structures. Optics Express 2014, 22, 22632.
Taverne, M. P. C.; Ho, Y.-L. D.; Zheng, X.; Liu, S.; Chen, L.-F.; Lopez-Garcia, M.; Rarity, J. G. Modelling defect cavities formed in inverse three-dimensional rod-connected diamond photonic crystals. EPL (Europhysics Letters) 2016, 116, 64007.
Johnson, S.; Joannopoulos, J. Block-iterative frequency-domain methods for Maxwell's equations in a planewave basis. Optics Express 2001, 8, 173-190.
Hermatschweiler, M.; Ledermann, A.; Ozin, G. A.; Wegener, M.; von Freymann, G. Fabrication of Silicon Inverse Woodpile Photonic Crystals. Advanced Functional Materials 2007, 17, 2273-2277.
Chen, L.; Lopez-Garcia, M.; Taverne, M. P. C.; Zheng, X.; Ho, Y.-L. D.; Rarity, J. Direct wide-angle measurement of a photonic band structure in a three-dimensional photonic crystal using infrared Fourier imaging spectroscopy. Optics Letters 2017, 42, 1584.
Huang, C. C.; Wu, C. C.; Knight, K.; Hewak, D. W. Optical properties of CVD grown amorphous GeSbS thin films. Journal of Non-Crystalline Solids 2010, 356, 281-285.
El-Nahass, M.; Zeyada, H.; Aziz, M.; El-Ghamaz, N. Optical properties of thermally evaporated SnS thin films. Optical Materials 2002, 20, 159-170.
Taverne, M. P. C.; Ho, Y.-L. D.; Zheng, X.; Chen, L.; Fang, C.-H. N.; Rarity, J. Strong light confinement in rod-connected diamond photonic crystals. Optics Letters 2018, 43, 5202-5205.
Kubo, Y.; Ong, F. R.; Bertet, P.; Vion, D.; Jacques, V.; Zheng, D.; Dréau, A.; Roch, J.-F.; Auffeves, A.; Jelezko, F.; Wrachtrup, J.; Barthe, M. F.; Bergonzo, P.; Esteve, D. Strong Coupling of a Spin Ensemble to a Superconducting Resonator. Physical Review Letters 2010, 105, 140502.
Renormalization factor of four fermi operators with clover fermion and Iwasaki gauge action

Yusuke Taniguchi [email protected]
Institute of Physics, University of Tsukuba, Tsukuba, Ibaraki 305-8571, Japan

XXIX International Symposium on Lattice Field Theory, July 10-16, 2011, Squaw Valley, Lake Tahoe, California (speaker)
arXiv:1111.2381 · doi:10.22323/1.139.0331

Renormalization factors of four-quark operators are perturbatively calculated for the improved Wilson fermion with clover term and the Iwasaki gauge action. A main application shall be the K → ππ decay amplitude and the calculation is restricted to the parity odd operator, for which the operators are multiplicatively renormalizable without mixing with wrong operators that have different chiral structures.
Introduction
Calculation of weak matrix elements of phenomenological interest is one of the major applications of lattice QCD. A calculation of four quark hadron matrix elements with the Wilson fermion encounters an obstacle, since unwanted mixing is introduced through quantum corrections with operators that have the wrong chirality.
One of the solutions is to make use of the parity odd operator. By using discrete symmetries of the parity, the charge conjugation and flavor exchanging transformations it was shown [1] that the parity odd four quark operator has no extra mixing with wrong operators even without chiral symmetry. One application of this virtue may be a calculation of the K → ππ decay amplitude with the Wilson fermion.
An improvement with the clover term is indispensable for the Wilson fermion. The RG improved gauge action of Iwasaki type has a good property at lattice spacings around a^{-1} ∼ 2 GeV, imitating that in the continuum. It is plausible to use a combination of the Iwasaki gauge action and the improved Wilson fermion with clover term for our numerical simulation. Unfortunately, renormalization factors of the ∆S = 1 four quark operators are not available for this combination of actions except for the ∆S = 2 part [2]. A purpose of this report is to give perturbatively the renormalization factors of the four quark operators which contribute to the K → ππ decay.
Four quark operators
We adopt the Iwasaki gauge action and the improved Wilson fermion action with the clover term. The Feynman rules for this action are given in Ref. [3]. We shall adopt the Feynman gauge and set the Wilson parameter r = 1 in the following.
We shall evaluate the renormalization factor of the following ten operators
$$Q^{(2n-1)} = (\bar s d)_L \sum_{q=u,d,s}\alpha^{(n)}_q\,(\bar q q)_L,\qquad Q^{(2n)} = (\bar s\times d)_L \sum_{q=u,d,s}\alpha^{(n)}_q\,(\bar q\times q)_L,\qquad (n = 1, 2, 5), \tag{2.1}$$
$$Q^{(2n-1)} = (\bar s d)_L \sum_{q=u,d,s}\alpha^{(n)}_q\,(\bar q q)_R,\qquad Q^{(2n)} = (\bar s\times d)_L \sum_{q=u,d,s}\alpha^{(n)}_q\,(\bar q\times q)_R,\qquad (n = 3, 4), \tag{2.2}$$
$$\alpha^{(1)}_q = (1, 0, 0),\qquad \alpha^{(2)}_q = \alpha^{(3)}_q = (1, 1, 1),\qquad \alpha^{(4)}_q = \alpha^{(5)}_q = \left(1, -\tfrac12, -\tfrac12\right), \tag{2.3}$$
where
$$(\bar s d)_{R/L} = \bar s\,\gamma_\mu\,(1 \pm \gamma_5)\,d \tag{2.4}$$
and × means the following contraction of the color indices:
$$Q^{(2)} = (\bar s\times d)_L\,(\bar u\times u)_L = (\bar s^a d^b)_L\,(\bar u^b u^a)_L. \tag{2.5}$$
We are interested in the parity odd part only, which contributes to the K → ππ decay amplitude:
$$Q^{(2n-1)}_{VA+AV} = -Q^{(2n-1)}_{VA} - Q^{(2n-1)}_{AV},\qquad Q^{(2n)}_{VA+AV} = -Q^{(2n)}_{VA} - Q^{(2n)}_{AV},\qquad (n = 1, 2, 5), \tag{2.6}$$
$$Q^{(2n-1)}_{VA-AV} = Q^{(2n-1)}_{VA} - Q^{(2n-1)}_{AV},\qquad Q^{(2n)}_{VA-AV} = Q^{(2n)}_{VA} - Q^{(2n)}_{AV},\qquad (n = 3, 4), \tag{2.7}$$
$$Q^{(2n-1)}_{VA} = (\bar s d)_V \sum_{q=u,d,s}\alpha^{(n)}_q\,(\bar q q)_A,\qquad Q^{(2n-1)}_{AV} = (\bar s d)_A \sum_{q=u,d,s}\alpha^{(n)}_q\,(\bar q q)_V, \tag{2.8}$$
$$Q^{(2n)}_{VA} = (\bar s\times d)_V \sum_{q=u,d,s}\alpha^{(n)}_q\,(\bar q\times q)_A,\qquad Q^{(2n)}_{AV} = (\bar s\times d)_A \sum_{q=u,d,s}\alpha^{(n)}_q\,(\bar q\times q)_V, \tag{2.9}$$
where the current-current vertex means
$$(\bar s d)_V\,(\bar q q)_A = \bar s\gamma_\mu d\;\bar q\gamma_\mu\gamma_5 q. \tag{2.10}$$
Renormalization factor in MS scheme
We renormalize the lattice bare operators Q^(k)_lat to get the renormalized operators Q^(k)_MS. We adopt the MS scheme with DRED or NDR. We notice there are two kinds of one loop corrections to the operators. One is given by gluon exchanging diagrams given in Ref. [2,4] for the ∆S = 2 operator and the other is the penguin diagrams given in Ref. [5] for the ∆S = 1 operators.
The renormalization of the operator is given by
$$Q^{(i)}_{\rm MS} = Z^g_{ij}\,Q^{(j)}_{\rm lat} + Z^{\rm pen}_i\,Q^{\rm pen}_{\rm lat} + Z^{\rm sub}_i\,O^{\rm sub}_{\rm lat}, \tag{3.1}$$
where Q^(j)_lat is the four quark operator on the lattice, Q^pen_lat is the QCD penguin operator and O^sub_lat is a lower dimensional operator to be subtracted. Z^g_ij comes from gluon exchanging diagrams. Z^pen_i is the contribution from the penguin diagram.
Gluon exchanging diagrams
For gluon exchanging diagram the one loop contributions are evaluated in terms of those to the quark bilinear operators by using the Fierz rearrangement and the charge conjugation [4]. Summing up contributions from three types of diagrams [4] the one loop correction to the four quark operators is given in a form
$$Q^{(i)}_{\rm one\text{-}loop} = T^{\rm lat}_{ij}\,Q^{(j)}_{\rm tree}, \tag{3.2}$$
where Q^(j)_tree = Q^(j)_{VA±AV} is a tree level operator. The correction factors are already evaluated for the improved action in Ref. [2] and are given as follows for our notation of the four quark operators:
$$T^{\rm lat}_{11} = T^{\rm lat}_{22} = T^{\rm lat}_{33} = T^{\rm lat}_{44} = T^{\rm lat}_{99} = T^{\rm lat}_{10,10} = \frac{g^2}{16\pi^2}\left[-\frac{N^2+2}{N}\ln(\lambda a)^2 + \frac{N^2-2}{2N}(V_V+V_A) + \frac{1}{2N}(V_S+V_P)\right], \tag{3.3}$$
$$T^{\rm lat}_{55} = T^{\rm lat}_{77} = \frac{g^2}{16\pi^2}\left[-\frac{N^2-4}{N}\ln(\lambda a)^2 + \frac{N}{2}(V_V+V_A) - \frac{1}{2N}(V_S+V_P)\right], \tag{3.4}$$
$$T^{\rm lat}_{66} = T^{\rm lat}_{88} = \frac{g^2}{16\pi^2}\left[-\frac{4(N^2-1)}{N}\ln(\lambda a)^2 + \frac{N^2-1}{2N}(V_S+V_P)\right], \tag{3.5}$$
$$T^{\rm lat}_{12} = T^{\rm lat}_{21} = T^{\rm lat}_{34} = T^{\rm lat}_{43} = T^{\rm lat}_{9,10} = T^{\rm lat}_{10,9} = \frac{g^2}{16\pi^2}\,\frac12\left[\,6\ln(\lambda a)^2 + V_V + V_A - V_S - V_P\right], \tag{3.6}$$
$$T^{\rm lat}_{56} = T^{\rm lat}_{78} = \frac{g^2}{16\pi^2}\,\frac12\left[-6\ln(\lambda a)^2 - V_V - V_A + V_S + V_P\right], \tag{3.7}$$
where λ is a gluon mass introduced as an infrared regularization and the number of colors is N = 3. V_Γ is the finite part of the one loop correction to the bilinear operator, which is evaluated in Ref. [3] for various gauge actions. The renormalization factor is given by taking the ratio of the quantum corrections to those in the MS scheme, multiplied by the quark wave function renormalization factor Z_2:
$$Z^g_{ii}(\mu a) = \frac{\left(Z^{\rm MS}_2\right)^2\left(1 + T^{\rm MS}_{ii}\right)}{\left(Z^{\rm lat}_2\right)^2\left(1 + T^{\rm lat}_{ii}\right)}, \tag{3.8}\qquad Z^g_{ij}(\mu a) = T^{\rm MS}_{ij} - T^{\rm lat}_{ij}\quad (i\ne j). \tag{3.9}$$
The correction factors in the DRED MS scheme are given by
$$T^{\rm MS}_{11} = T^{\rm MS}_{22} = T^{\rm MS}_{33} = T^{\rm MS}_{44} = T^{\rm MS}_{99} = T^{\rm MS}_{10,10} = \frac{N^2+2}{N}\,V^{\rm MS}, \tag{3.10}$$
$$T^{\rm MS}_{12} = T^{\rm MS}_{21} = T^{\rm MS}_{34} = T^{\rm MS}_{43} = T^{\rm MS}_{9,10} = T^{\rm MS}_{10,9} = -3\,V^{\rm MS}, \tag{3.11}$$
$$T^{\rm MS}_{55} = T^{\rm MS}_{77} = \frac{N^2-4}{N}\,V^{\rm MS}, \tag{3.12}$$
$$T^{\rm MS}_{56} = T^{\rm MS}_{78} = 3\,V^{\rm MS}, \tag{3.13}$$
$$T^{\rm MS}_{66} = T^{\rm MS}_{88} = \frac{4(N^2-1)}{N}\,V^{\rm MS}, \tag{3.14}$$
$$V^{\rm MS} = \frac{g^2}{16\pi^2}\left[\log\frac{\mu^2}{\lambda^2} + 1\right]. \tag{3.15}$$
The same infrared regularization with the gluon mass should be adopted. The quark wave function renormalization factor Z_2 is given in Ref. [3]. Substituting the above results we have
$$Z^g_{11}(\mu a) = Z^g_{22}(\mu a) = Z^g_{33}(\mu a) = Z^g_{44}(\mu a) = Z^g_{99}(\mu a) = Z^g_{10,10}(\mu a) = 1 + \frac{g^2}{16\pi^2}\left[\frac{3}{N}\ln(\mu a)^2 + z^g_{11}\right], \tag{3.16}$$
$$Z^g_{55}(\mu a) = Z^g_{77}(\mu a) = 1 + \frac{g^2}{16\pi^2}\left[-\frac{3}{N}\ln(\mu a)^2 + z^g_{55}\right], \tag{3.17}$$
$$Z^g_{66}(\mu a) = Z^g_{88}(\mu a) = 1 + \frac{g^2}{16\pi^2}\left[\frac{3(N^2-1)}{N}\ln(\mu a)^2 + z^g_{66}\right], \tag{3.18}$$
together with Z^g_12(µa) = Z^g_21(µa) = Z^g_34(µa) = Z^g_43(µa) = Z^g_9,10(µa) = Z^g_65(µa) = Z^g_87(µa) = (g²/16π²) z^g_65 = 0 (3.21). The numerical value of the finite part is given in table 1 for N = 3 as an expansion in c_SW, z^g_ij = z^g(0)_ij + c_SW z^g(1)_ij + c²_SW z^g(2)_ij. The finite part for the NDR scheme is given in table 2. We need to subtract the evanescent operators in the MS scheme, which come from the difference of the dimensionality of the gamma matrices in the operator vertex from four.
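As a small numerical illustration (an addition, not part of the original report), the diagonal factor Z^g_11 of (3.16) can be evaluated from the finite parts listed in Table 1 below, writing z^g_11 = z^(0) + c_SW z^(1) + c²_SW z^(2); the values of g², µa and c_SW used here are arbitrary placeholders.

```python
import math

def Z11_dred(g2, mu_a, c_sw, N=3, z=(-23.596, 3.119, 2.268)):
    """One-loop Z^g_11(mu a) of eq. (3.16), using the DRED finite part of Table 1 (N = 3)."""
    z11 = z[0] + c_sw * z[1] + c_sw ** 2 * z[2]
    return 1.0 + g2 / (16.0 * math.pi ** 2) * ((3.0 / N) * math.log(mu_a ** 2) + z11)

print(Z11_dred(g2=1.0, mu_a=1.0, c_sw=1.0))  # ~0.885 for these placeholder inputs
```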
Penguin diagrams
Contribution from the penguin diagram is evaluated with the same procedure as in Ref. [5] and the one loop correction to the four quark operators is given in the form
$$Q^{(i)}_{\rm one\text{-}loop} = T^{\rm pen}_i\big|_{\rm lat}\,Q^{\rm pen}_{\rm tree}, \tag{3.23}$$
where Q^pen_tree is the penguin operator at tree level,
$$Q^{\rm pen} = Q^{(4)}_{VA+AV} + Q^{(6)}_{VA-AV} - \frac{1}{N}\left(Q^{(3)}_{VA+AV} + Q^{(5)}_{VA-AV}\right). \tag{3.24}$$
The correction factor is given by
$$T^{\rm pen}_i\big|_{\rm lat} = \frac{g^2}{16\pi^2}\,\frac{C(Q_i)}{3}\left[\ln(a^2 p^2) + V^{\rm lat}_{\rm pen}\right] \tag{3.25}$$
with the operator dependent factors
$$C(Q_1) = 0,\quad C(Q_2) = 1,\quad C(Q_3) = 2, \tag{3.26}$$
$$C(Q_4) = C(Q_6) = \sum_{q=u,d,s} 1 = N_f, \tag{3.27}$$
$$C(Q_5) = C(Q_7) = 0, \tag{3.28}$$
$$C(Q_8) = C(Q_{10}) = \sum_{q=u,d,s}\alpha_q = N_u - \frac{N_d}{2}, \tag{3.29}$$
$$C(Q_9) = -1. \tag{3.30}$$
Here p is the momentum of the intermediate gluon propagator, given in terms of the external quark momentum, for which we set the on-shell condition. The finite part is expanded as
$$V^{\rm lat}_{\rm pen} = -1.7128 + c_{SW}\,(-1.0878). \tag{3.31}$$
The correction factor in the MS scheme is given in a similar form,
$$Q^{(i)}_{\rm one\text{-}loop} = T^{\rm pen}_i\big|_{\rm MS}\,Q^{\rm pen}_{\rm tree}, \tag{3.32}$$
$$T^{\rm pen}_i\big|_{\rm MS} = \frac{g^2}{16\pi^2}\,\frac{C(Q_i)}{3}\left[\ln\frac{p^2}{\mu^2} - \frac{5}{3} - c(Q_i)\right], \tag{3.33}$$
with the same infrared regulator p. The scheme dependent finite term is given by
$$c^{\rm (NDR)}(Q_2) = c^{\rm (NDR)}(Q_{2n-1}) = -1,\quad c^{\rm (NDR)}(Q_{2n}) = 0\quad (n\ge2), \tag{3.34}$$
$$c^{\rm (DRED)}(Q_2) = c^{\rm (DRED)}(Q_{2n-1}) = c^{\rm (DRED)}(Q_{2n}) = \frac14\quad (n\ge2). \tag{3.35}$$
Combining these two contributions, the renormalization factor for the penguin operator is given by
$$Z^{\rm pen}_i = T^{\rm pen}_i\big|_{\rm MS} - T^{\rm pen}_i\big|_{\rm lat} = \frac{g^2}{16\pi^2}\,\frac{C(Q_i)}{3}\left[-\ln(a^2\mu^2) + z^{\rm pen}_i\right], \tag{3.36}$$
$$z^{\rm pen}_i = -V^{\rm lat}_{\rm pen} - \frac{5}{3} - c_i. \tag{3.37}$$
The numerical value of the finite part is given in table 3.

Table 3: Finite part of the renormalization factor from the penguin diagram. Coefficients of the term c^k_SW (k = 0, 1) are given in the column marked as (k).

    (z^pen_i)^(0) (DRED)   (z^pen_2)^(0) (NDR)   (z^pen_{2n-1})^(0) (NDR)   (z^pen_{2n})^(0) (NDR)   (z^pen_i)^(1)
         -0.2039                 1.0462                  1.0462                    0.0461              1.0878
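The Table 3 entries follow from (3.31), (3.34), (3.35) and (3.37) by simple arithmetic; the following check (added here, not in the original) reproduces them.

```python
# Cross-check of Table 3: z_pen = -V_pen_lat - 5/3 - c, with V_pen_lat = V0 + c_SW*V1.
V0, V1 = -1.7128, -1.0878                     # eq. (3.31)
schemes = {"DRED": 0.25,                      # eq. (3.35)
           "NDR (Q2 and Q_{2n-1})": -1.0,     # eq. (3.34)
           "NDR (Q_{2n})": 0.0}               # eq. (3.34)
for name, c in schemes.items():
    z0 = -V0 - 5.0 / 3.0 - c                  # c_SW-independent part, column (0)
    z1 = -V1                                  # coefficient of c_SW, column (1)
    print(name, round(z0, 4), round(z1, 4))
# Compare with Table 3: -0.2039, 1.0462, 0.0461 (up to rounding) and 1.0878 for column (1).
```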
Mixing with lower dimensional operator
We shall evaluate the amputated quark bilinear vertex function. We consider a leading contribution to the vertex at tree level, which introduces mixing with lower dimensional operators. We immediately get
$$\delta_{ab}\,(\gamma_5)_{\alpha\beta}\left[I^{\rm(sub)}(m_d)-I^{\rm(sub)}(m_s)\right], \tag{3.40}$$
$$I^{\rm(sub)}(am)=\int\frac{d^4l}{(2\pi)^4}\,\frac{4\,W(l,am)}{\sin^2 l+W(l,am)^2}, \tag{3.41}$$
$$W(l,am)=am+\sum_\mu\big(1-\cos l_\mu\big), \tag{3.42}$$
which may be evaluated with an expansion in the quark mass (3.43). This contribution introduces a mixing with the lower dimensional bilinear operator (s γ5 d) multiplied by the mass difference (m_d − m_s). As is clear from (3.42), this is due to the chiral symmetry breaking effect of the Wilson fermion. It may be better not to expand in the quark mass, since the coefficient (3.45) is rather large and the d³/d(am)³ I^(sub)(0) term has an infrared divergence at m = 0. The subtraction factor vanishes for n = 1, 2, 5 (3.48).
Table 1: Finite part z^g_ij of the renormalization factor from gluon exchanging diagrams in the DRED scheme.

              z^g_11                      z^g_55                      z^g_66
       (0)      (1)      (2)       (0)      (1)      (2)       (0)      (1)      (2)
    -23.596    3.119    2.268    -25.183    5.420    2.923    -18.041   -4.933   -0.020

              z^g_12                      z^g_56
       (0)      (1)      (2)       (0)      (1)      (2)
     -2.381    0.451   -2.020      2.381   -0.451    2.020

Table 2: Finite part z^g_ij of the renormalization factor from gluon exchanging diagrams in the NDR scheme. c_SW dependent terms are the same as those in the DRED scheme (n = 1, 2).

    -24.096   -25.350   -19.708   -4.881   -1.120   -3
Conclusion
In this report we have calculated the one-loop contributions for the renormalization factors of parity odd four-quark operators, which contribute to the K → ππ decay amplitude, in the improved Wilson fermion with clover term and the Iwasaki gauge action. The operators are multiplicatively renormalizable without any mixing with wrong operators that have different chiral structures, except for the lower dimensional operator.

Acknowledgment
This work is done for a collaboration with K.-I. Ishikawa, N. Ishizuka, A. Ukawa and T. Yoshié. This work is supported in part by Grants-in-Aid of the Ministry of Education (Nos. 22540265, 23105701).
References
[1] A. Donini, V. Gimenez, G. Martinelli, M. Talevi and A. Vladikas, Eur. Phys. J. C 10 (1999) 121 [arXiv:hep-lat/9902030].
[2] M. Constantinou, P. Dimopoulos, R. Frezzotti, V. Lubicz, H. Panagopoulos, A. Skouroupathis and F. Stylianou, Phys. Rev. D 83 (2011) 074503 [arXiv:1011.6059 [hep-lat]].
[3] S. Aoki, K.-i. Nagai, Y. Taniguchi and A. Ukawa, Phys. Rev. D 58 (1998) 074505 [arXiv:hep-lat/9802034].
[4] G. Martinelli, Phys. Lett. B 141 (1984) 395.
[5] C. W. Bernard, A. Soni and T. Draper, Phys. Rev. D 36 (1987) 3224.
Finite and infinite speed of propagation for porous medium equations with nonlocal pressure

Diana Stan, Félix del Teso and Juan Luis Vázquez
Departamento de Matemáticas, Universidad Autónoma de Madrid, Campus de Cantoblanco, 28049 Madrid, Spain

June 15, 2015
arXiv:1506.04071 · doi:10.1016/j.jde.2015.09.023

Keywords: nonlinear fractional diffusion, fractional Laplacian, Riesz potential, existence of solutions, finite/infinite speed of propagation.
2000 Mathematics Subject Classification: 26A33, 35K65, 76S05.

We study a porous medium equation with fractional potential pressure,
$$u_t = \nabla\cdot\big(u^{m-1}\nabla p\big),\qquad p = (-\Delta)^{-s}u,$$
for m > 1, 0 < s < 1 and u(x, t) ≥ 0. The problem is posed for x ∈ R^N, N ≥ 1, and t > 0. The initial data u(x, 0) is assumed to be a bounded function with compact support or fast decay at infinity. We establish existence of a class of weak solutions for which we determine whether the property of compact support is conserved in time depending on the parameter m, starting from the result of finite propagation known for m = 2. We find that when m ∈ [1, 2) the problem has infinite speed of propagation, while for m ∈ [2, 3) it has finite speed of propagation. In other words, m = 2 is the critical exponent regarding propagation. The main results have been announced in the note [29].
Introduction
In this paper we study the following nonlocal evolution equation:
$$u_t(x, t) = \nabla\cdot\big(u^{m-1}\,\nabla p\big),\qquad p = (-\Delta)^{-s}u,\qquad \text{for } x\in\mathbb{R}^N,\ t>0, \tag{1.1}$$
$$u(x, 0) = u_0(x)\qquad \text{for } x\in\mathbb{R}^N,$$
for m > 1 and u(x, t) ≥ 0. The model formally resembles the classical Porous Medium Equation (PME) u t = ∆u m = ∇(mu m−1 ∇u) where the pressure p depends linearly on the density function u according to the Darcy Law. In this model the pressure p takes into consideration nonlocal effects through the Inverse Fractional Laplacian operator K s = (−∆) −s , that is the Riesz potential of order 2s. The problem is posed for x ∈ R N , N ≥ 1 and t > 0. The initial data u 0 : R N → [0, ∞) is bounded with compact support or fast decay at infinity.
As a motivating precedent, in the work [10] Caffarelli and Vázquez proposed the following model of porous medium equation with nonlocal diffusion effects (CV) ∂ t u = ∇ · (u∇p), p = (−∆) −s u.
The study of this model has been performed in a series of papers as follows. In [10], Caffarelli and Vázquez developed the theory of existence of bounded weak solutions that propagate with finite speed. In [11], the same authors proved the asymptotic time behaviour of the solutions. Self-similar non-negative solutions are obtained by solving an elliptic obstacle problem with fractional Laplacian for the pair pressure-density, called obstacle Barenblatt solutions. Finally, in [8], Caffarelli, Soria and Vázquez considered the regularity and the L 1 − L ∞ smoothing effect. The regularity for s = 1/2 has been recently done in [9]. The study of fine asymptotic behaviour (rates of convergence) for (CV) has been performed by Carrillo, Huang, Santos and Vázquez [12] in the one dimensional setting. Putting m = 2 in (1.1), we recover Problem (CV).
A main question in this kind of nonlocal nonlinear diffusion models is to decide whether compactly supported data produce compactly supported solutions, a property known as finite speed of propagation. Surprisingly, the answer was proved to be positive for m = 2 in paper [10], for m = 1 we get the linear fractional heat equation, that is explicitly solvable by convolution with a positive kernel, hence it has infinite speed of propagation. The main motivation of this paper is establishing the alternative finite/infinite speed of propagation for the solutions of Problem (1.1) depending on the parameter m. In the process we construct a theory of existence of solutions and derive the main properties. A modification of the numerical methods developed in [17,18] pointed to us to the possibility of having two different propagation properties.
Other related models. Equation (CV) with s = 1/2 in dimension N = 1 has been proposed by Head [20] to describe the dynamics of dislocation in crystals. The model is written in the integrated form as
$$v_t + |v_x|\,\big(-\partial^2/\partial x^2\big)^{1/2}v = 0.$$
The dislocation density is u = v x . This model has been recently studied by Biler, Karch and Monneau in [4], where they prove that the problem enjoys the properties of uniqueness and comparison of viscosity solutions. The relation between u and v is very interesting and will be used by us in the final sections.
Another possible generalization of the (CV) model is ∂ t u = ∇ · (u∇p), p = (−∆) −s (|u| m−2 u), that has been investigated by Biler, Imbert and Karch in [2,3]. They prove the existence of weak solutions and they find explicit self-similar solutions with compact support for all m > 1. The finite speed of propagation for every weak solution has been done in [22].
The second nonlocal version of the classical PME is the model u t = −(−∆) s u m , m > 0, known as the Fractional Porous Medium Equation (FPME). This model has infinite speed of propagation and the existence of fundamental solutions of self-similar type or Barenblatt solutions is known for m > (N − 2s ) + /N . We refer to the recent works [15,16,32,5]. The (FPME) model for m = 1, also called linear fractional Heat Equation, coincides with model (1.1) for s = 1 − s , m = 1.
Main results
We first propose a definition of solution and establish the existence and main properties of the solutions.
$$\int_0^T\!\!\int_{\mathbb{R}^N} u\,\phi_t\,dx\,dt-\int_0^T\!\!\int_{\mathbb{R}^N} u^{m-1}\,\nabla K_s(u)\cdot\nabla\phi\,dx\,dt+\int_{\mathbb{R}^N} u_0(x)\,\phi(x,0)\,dx=0 \tag{1.2}$$
holds for every test function φ in Q T such that ∇φ is continuous, φ has compact support in R N for all t ∈ (0, T ) and vanishes near t = T .
Before entering the discussion of finite versus infinite propagation, we study the question of existence. We have the following result for 1 < m < 2.
$$\frac12\int_{\mathbb{R}^N}\big|H_s[u(t)]\big|^2\,dx+\int_0^t\!\!\int_{\mathbb{R}^N}u^{m-1}\,\big|\nabla K_s[u]\big|^2\,dx\,d\tau\ \le\ \frac12\int_{\mathbb{R}^N}\big|H_s[u_0]\big|^2\,dx.$$
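At a formal level this estimate follows by testing the equation with the potential K_s[u]; the following computation is a sketch added here, assuming enough smoothness and decay to integrate by parts, in the spirit of the formal computations of Section 3:
$$\frac{d}{dt}\,\frac12\int_{\mathbb{R}^N}\big|H_s[u]\big|^2\,dx=\int_{\mathbb{R}^N}K_s[u]\,u_t\,dx=\int_{\mathbb{R}^N}K_s[u]\,\nabla\cdot\big(u^{m-1}\nabla K_s[u]\big)\,dx=-\int_{\mathbb{R}^N}u^{m-1}\big|\nabla K_s[u]\big|^2\,dx,$$
and integrating in time from 0 to t gives the inequality above (with equality at this formal level).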
The existence for m ≥ 2 is covered in the following result. We should have covered existence in the whole range m ≥ 2 where we want to prove finite speed of propagation for the constructed weak solutions, see Theorem 1.4. But the existence theory used in the previous theorem breaks down because of the negative exponents 3 − m that would appear in the first energy estimate for m > 3 (a logarithm would appear for m = 3). A new existence approach avoiding such estimate is needed, and this can be done but is not immediate. We have refrained from presenting such a study here because it would divert us too much from the main interest.
The following is our most important contribution, which deals with the property of finite propagation of the solutions depending on the value of m.
Theorem 1.4. a) Let N ≥ 1, m ∈ [2, 3), s ∈ (0, 1) and let u be a constructed weak solution to problem (1.1) as in Theorem 1.3 with compactly supported initial data u_0 ∈ L^1(R^N) ∩ L^∞(R^N). Then, u(·, t) is also compactly supported for any t > 0, i.e. the solution has finite speed of propagation. b) Let N = 1, m ∈ (1, 2), s ∈ (0, 1) and let u be a constructed solution as in Theorem 1.2. Then for any t > 0 and any R > 0, the set M_{R,t} = {x : |x| ≥ R, u(x, t) > 0} has positive measure even if u_0 is compactly supported. This is a weak form of infinite speed of propagation. If moreover u_0 is radially symmetric and monotone non-increasing in |x|, then we get a clearer result: u(x, t) > 0 for all x ∈ R and t > 0.
Remark
(i) By constructed weak solution we mean that it is the limit of the approximations process that produces the result of Theorem 1.3.
(ii) We point out that part (a) of the theorem would still be true when m ≥ 3 once we supply an existence theory based on approximations with solutions of regularized problems.
Organization of the proofs
• In Section 3 we derive useful energy estimates valid for all m > 1. Due to the differences in the computations, we will separate the cases m = 2, 3 and m = 3.
• In Section 4, 5 and 6 we prove the existence of a weak solution of Problem (1.1) as the limit of a sequence of solutions to suitable approximate problems. The range of exponents is 1 < m < 3.
• Section 7 deals with the property of finite speed of propagation for m ≥ 2. See Theorem 7.1.
• In Section 8 we prove the infinite speed of propagation for m ∈ (1, 2) in the one-dimensional case. This section introduces completely different tools. Indeed, we develop a theory of viscosity solutions for the integrated
equation v t +|v x | m−1 (−∆) 1−s v = 0,
where v x = u the solution of (1.1), and we prove infinite speed of propagation in the usual sense for the solution v of the integrated problem.
Though we do not get the same type of infinite propagation result for 1 < m < 2 in several spatial dimensions, the evidence (partial results and explicit solutions) points in that direction, see the comments in Section 10.
Functional setting
We will work with the following functional spaces (see [19]). Let s ∈ (0, 1). Let F denote the Fourier transform. We consider
H s (R N ) = u : L 2 (R N ) : R N (1 + |ξ| 2s )|Fu(ξ)| 2 dξ < +∞ with the norm u H s (R N ) = u L 2 (R N ) + R N |ξ| 2s |Fu(ξ)| 2 dξ.
For functions u ∈ H s (R N ), the Fractional Laplacian is defined by
(−∆) s u(x) = C N,s P.V. R N u(x) − u(y) |x − y| N +2s dy = CF −1 (|ξ| 2s (Fu)), where C N,s = π −(2s+N/2) Γ(N/2 + s)/Γ(−s). Then u H s (R N ) = u L 2 (R N ) + C (−∆) s/2 u L 2 (R N ) .
For functions u that are defined on a subset Ω ⊂ R N with u = 0 on the boundary ∂Ω, the fractional Laplacian and the H s (R N ) norm are computed by extending the function u to all R N with u = 0 in R N \ Ω. For technical reasons we will only consider the case s < 1/2 in N = 1 dimensional space.
The inverse operator (−∆) −s coincides with the Riesz potential of order 2s that will be denoted here by K s . It can be represented by convolution with the Riesz kernel K s :
K s [u] = K s * u, K s (x) = 1 c(N, s) |x| −(N −2s) ,
where c(N, s) = π N/2−2s Γ(s)/Γ((N − 2s)/2). The Riesz potential K s is a self-adjoint operator. The square root of K s is K s/2 , i.e. the Riesz potential of order s (up to a constant). We will denote it by H s := (K s ) 1/2 . Then H s can be represented by convolution with the kernel K s/2 . We will write K and H when s is fixed and known. We refer to [26] for the arguments of potential theory used throughout the paper.
The inverse fractional Laplacian K s [u] is well defined as an integral operator for all s ∈ (0, 1) in dimension N ≥ 2, and s ∈ (0, 1/2] in the one-dimensional case N = 1. We extend our result to the remaining case s ∈ (1/2, 1) by giving a suitable meaning to the combined operator (∇K s ). The details concerning this case will be given in Section 6.5.
For functions depending on x and t, convolution is applied for fixed t with respect to the spatial variables and we then write u(t) = u(·, t).
Functional inequalities related to the fractional Laplacian
We recall some functional inequalities related to the fractional Laplacian operator that we used throughout the paper. We refer to [16] for the proofs.
Lemma 2.1 (Stroock-Varopoulos Inequality). Let 0 < s < 1, q > 1. Then
(2.1) R N |v| q−2 v(−∆) s vdx ≥ 4(q − 1) q 2 R N (−∆) s/2 |v| q/2 2 dx for all v ∈ L q (R N ) such that (−∆) s v ∈ L q (R N ).
Lemma 2.2 (Generalized Stroock-Varopoulos Inequality). Let 0 < s < 1. Then
(2.2) R N ψ(v)(−∆) s vdx ≥ R N (−∆) s/2 Ψ(v) 2 dx whenever ψ = (Ψ ) 2 .
Theorem 2.3 (Sobolev Inequality). Let 0 < s < 1 (s < 1 2 if N = 1). Then
(2.3) f 2N N −2s ≤ S s (−∆) s/2 f 2 ,
where the best constant is given in [5] page 31.
Approximation of the Inverse Fractional Laplacian (−∆) −s
We consider an approximation K s as follows. Let K s (z) = c N,s |z| −(N −2s) the kernel of the Riesz potential
K s = (−∆) −s , 0 < s < 1 (0 < s < 1/2 if N = 1). Let ρ (x) = −N ρ(x/ )
, > 0 a standard mollifying sequence, where ρ is positive, radially symmetric and decreasing, ρ ∈ C ∞ c (R N ) and R N ρ dx = 1. We define the regularization of K s as K s = ρ K s . Then
(2.4) K s [u] = K s u
is an approximation of the Riesz potential K s = (−∆) −s . Moreover, K s and K s are self-adjoint operators with K s = (H s ) 2 , K s = (H s ) 2 . Also, ρ = σ * σ where σ has the same properties as ρ. Then, we can write H s as the operator with kernel K s/2 * σ . That is:
R N u K s [u]dx = R N |H s [u]| 2 dx.
Also H s commutes with the gradient:
∇H s [u] = H s [∇u].
Basic estimates
In what follows, we perform formal computations on the solution of Problem (1.1), for which we assume smoothness, integrability and fast decay as |x| → ∞. The useful computations for the theory of existence and propagation will be justified later by the approximation process. We fix s ∈ (0, 1) and m ≥ 1. Let u be the solution of Problem (1.1) with initial data u 0 ≥ 0. We assume u ≥ 0 for the beginning. This property will be proved later.
• Conservation of mass:
(3.1) d dt R N u(x, t)dx = R N u t dx = R N ∇ · (u m−1 ∇K s [u])dx = 0.
• First energy estimate: The estimates here are significantly different depending on the exponent m. Therefore, we consider the cases:
Case m = 3: d dt R N log u(x, t)dx = R N u t u dx = R N ∇u · ∇K s [u] = R N |∇H s [u]| 2 dx.
Therefore, by the conservation of mass (3.1) we obtain
(3.2) d dt R N (u − log u)dx = − R N |∇H s [u]| 2 dx. Case m = 3: d dt R N u 3−m (x, t)dx = (3 − m) R N u 2−m u t dx = (3 − m) R N u 2−m ∇(u m−1 ∇K s [u])dx = −(3 − m)(2 − m) R N ∇u · ∇K s [u]dx = −C R N |∇H s [u]| 2 dx.
Here C = (3 − m)(2 − m) is negative for m ∈ (2, 3) and positive otherwise.
If m > 3 or 1 < m < 2 then d dt R N u 3−m dx = −|C| R N |∇H s [u]| 2 dx. If 2 < m < 3 then d dt R N u 3−m dx = |C| R N |∇H s [u]| 2 dx, or equivalently d dt R N u − u 3−m dx = −|C| R N |∇H s [u]| 2 dx.
• Second energy estimate:
1 2 d dt R N |H s [u](x, t)| 2 dx = R N H s [u](H s [u]) t dx = R N K s [u]u t dx (3.3) = R N K s [u]∇ · (u m−1 ∇K s [u])dx = − R N u m−1 |∇K s [u]| 2 dx.
• L ∞ estimate: We prove that the L ∞ (R N ) norm does not increase in time. Indeed, at a point of maximum x 0 of u at time t = t 0 , we have
u t = (m − 1)u m−1 ∇u · ∇p + u m−1 ∆K s [u].
The first term is zero since ∇u(x 0 , t 0 ) = 0. For the second one we have
−∆K s = (−∆)(−∆) −s = (−∆) 1−s so that ∆K s [u](x 0 , t 0 ) = −(−∆) 1−s u(x 0 , t 0 ) = −c R N u(x 0 , t 0 ) − u(y, t 0 ) |x 0 − y| N −2(1−s) dy ≤ 0,
where c = c(s, N ) > 0. We conclude by the positivity of u that
u t (x 0 , t 0 ) = u m−1 (x 0 , t 0 )∆K s [u](x 0 , t 0 ) ≤ 0.
• Conservation of positivity: we prove that if u 0 ≥ 0 then u(t) ≥ 0 for all times. The argument is similar to the one above.
• L p estimates for 1 < p < ∞. The following computations are valid for all m ≥ 1, since p + m − 2 > 0:
d dt R N u p (x, t)dx = p R N u p−1 ∇ · (u m−1 ∇K s [u])dx = −p R N u m−1 ∇(u p−1 ) · ∇K s [u]dx = − p(p − 1) m + p − 2 R N ∇(u p+m−2 ) · ∇K s [u]dx = p(p − 1) m + p − 2 R N u p+m−2 ∆K s [u]dx = − p(p − 1) m + p − 2 R N u p+m−2 (−∆) 1−s u dx ≤ − 4p(p − 1) (m + p − 1) 2 R N (−∆) 1−s 2 u m+p−1 2 2 dx,
where we applied the Stroock-Varopoulos inequality (2.1) with r = m + p + 1. We obtain that
d dt R N u p (x, t)dx ≤ − 4p(p − 1) (m + p − 1) 2 S 2 1−s R N |u(x, t)| N (m+p−1) N −2+2s dx N −2+2s N ,
with the restriction of s > 1/2 if N = 1.
Existence of smooth approximate solutions for m ∈ (1, ∞)
Our aim is to solve the initial-value problem (1.1) posed in Q = R N × (0, ∞) or at least Q T = R N × (0, T ), with parameter 0 < s < 1. We will consider initial data u 0 ∈ L 1 (R N ). We assume for technical reasons that u 0 is bounded and we also impose decay conditions as |x| → ∞.
Approximate problem
We make an approach to problem (1.1) based on regularization, elimination of the degeneracy and reduction of the spatial domain. Once we have solved the approximate problems, we derive estimates that allow us to pass to the limit in all the steps one by one, to finally obtain the existence of a weak solution of the original problem (1.1). Specifically, for small , δ, µ ∈ (0, 1) and R > 0 we consider the following initial boundary value problem posed in
Q T,R = {x ∈ B R (0), t ∈ (0, T )} (P δµR ) (U 1 ) t = δ∆U 1 + ∇ · (d µ (U 1 )∇K s [U 1 ]) for (x, t) ∈ Q T,R , U 1 (x, 0) = u 0 (x) for x ∈ B R (0), U 1 (x, t) = 0 for x ∈ ∂B R (0), t ≥ 0.
The regularization tools that we use are as follows. u 0 = u 0, ,R is a nonnegative, smooth and bounded approximation of the initial data u 0 such that
u 0 ∞ ≤ u 0 ∞ for all > 0. For every µ > 0, d µ : [0, ∞) → [0, ∞) is a continuous function defined by (4.1) d µ (v) = (v + µ) m−1 .
The approximation of K s of K s = (−∆) −s is made as before in Section 2. The existence of a solution U 1 (x, t) to Problem (P δµR ) can be done by classical methods and the solution is smooth. See for instance [27] for similar arguments.
In the weak formulation we have
(4.2) T 0 B R U 1 (φ t − δ∆φ)dxdt − T 0 B R d µ (U 1 )∇K s [U 1 ]∇φdxdt + B R u 0 (x)φ(x, 0)dx = 0
valid for smooth test functions φ that vanish on the spatial boundary ∂B R and for large t. We use the notation
B R = B R (0).
Notations. The existence of a weak solution of problem (1.1) is done by passing to the limit step-by-step in the approximating problems as follows. We denote by U 1 the solution of the approximating problem (P δµR ) with parameters , δ, µ, R. Then we will obtain U 2 (x, t) = lim →0 U 1 (x, t). Thus U 2 will solve an approximating problem (P δµR ) with parameters δ, µ, R. Next, we take U 3 = lim R→∞ U 2 (x) that will be a solution of Problem (P µδ ), U 4 := lim µ→0 U 3 (x, t) solving Problem (P δ ). Finally we obtain u(x, t) = lim δ→0 U 4 (x, t) which solves problem (1.1).
A-priori estimates for the approximate problem
We derive suitable a-priori estimates for the solution U 1 (x, t) to Problem (P δµR ) depending on the parameters , δ, µ, R.
• Decay of total mass: Since U 1 ≥ 0 and U 1 = 0 in ∂B R , then ∂U 1 ∂n ≤ 0 and so, an easy computation gives us
d dt B R U 1 (x, t)dx = δ B R ∆U 1 dx + B R ∇ · (d µ (U 1 )∇K [U 1 ])dx = ∂B R ∂U 1 ∂n dσ + ∂B R d µ (U 1 ) ∂(K [U 1 ]) ∂n dσ ≤ 0. (4.3)
We conclude that
B R U 1 (x, t)dx ≤ B R u 0 (x) for all t > 0.
• Conservation of L ∞ bound: we prove that 0 ≤ U 1 (x, t) ≤ || u 0 || ∞ . The argument is as in the previous section, using also that at a minimum point ∆U 1 ≥ 0 and at a maximum point ∆U 1 ≤ 0. Also at this kind of points we have that ∇d µ (U 1 ) = d µ (U 1 )∇U 1 = 0.
• Conservation of non-negativity:
U 1 (x, t) ≥ 0 for all t > 0, x ∈ B R .
The proof is similar to the one in the previous section.
First energy estimate
We choose a function F µ such that
F µ (0) = F µ (0) = 0 and F µ (u) = 1/d µ (u).
Then, with these conditions one can see that F µ (z) > 0 for all z > 0. Also F µ (U 1 ) and F µ (U 1 ) vanish on ∂B r × [0, T ], therefore, after integrating by parts, we get
(4.4) d dt B R F µ (U 1 )dx = −δ B R |∇U 1 | 2 d µ (u) dx − B R |∇H s [U 1 ]| 2 dx,
where H = K 1/2 . This formula implies that for all 0 < t < T we have
(4.5) B R F µ (U 1 (t))dx + δ t 0 B R |∇U 1 | 2 d µ (U 1 ) dxdt + t 0 B R |∇H s [U 1 ]| 2 dxdt = B R F µ ( u 0 )dx.
This implies estimates for |∇H s (U 1 )| 2 and δ|∇U 1 | 2 /d µ (U 1 ) in L 1 (Q T,R ). We show how the upper bounds for such norms depend on the parameters , δ, R, µ and T .
The explicit formula for F µ is as follows:
F µ (U 1 ) = 1 (2 − m)(3 − m) [(U 1 + µ) 3−m − µ 3−m ] − 1 2 − m µ 2−m U 1 for m = 2, 3, − log (1 + (U 1 /µ)) + U 1 /µ, for m = 3, (U 1 + µ) log (1 + (U 1 /µ)) − U 1 , for m = 2.
From formula (4.4) we obtain that the quantity
B R F µ (U 1 (x, t))dx is non-increasing in time: 0 ≤ B R F µ (U 1 (x, t))dx ≤ B R F µ ( u 0 )dx, ∀t > 0.
Then, if we control the term B R F µ ( u 0 )dx, we will obtain uniform estimates independent of time t > 0 for the quantity
δ t 0 B R |∇U 1 | 2 d µ (U 1 ) dxdt + t 0 B R |∇H s [U 1 ]| 2 dxdt.
These estimate are different depending on the range of parameters m.
• Uniform bound in the case m ∈ (1, 2). We obtain uniform bounds in all parameters , R, δ, µ for the energy estimate (4.5), that allow us to pass to the limit and obtain a solution of the original problem (1.1). By the Mean Value Theorem
B R F µ ( u 0 )dx ≤ 1 (2 − m)(3 − m) B R [( u 0 + µ) 3−m − µ 3−m ]dx ≤ 1 2 − m B R ( u 0 + µ) 2−m u 0 dx ≤ 1 2 − m ( u 0 ∞ + 1) 2−m R N u 0 dx.
Our main estimate in the case m ∈ (1, 2) is:
(4.6) δ t 0 B R |∇U 1 | 2 d µ (U 1 ) dxdt + t 0 B R |∇H s [U 1 ]| 2 dxdt ≤ C 1 , where C 1 = C 1 (m, u 0 ) = 2 (2 − m) ( u 0 ∞ + 1) 2−m u 0 L 1 (R N )
. This is a bound independent of the parameters , δ, R and µ.
• Upper bound in the case m ∈ (2, 3).
B R F µ ( u 0 )dx = − 1 (m − 2)(3 − m) B R [( u 0 + µ) 3−m − µ 3−m ]dx + 1 m − 2 µ 2−m B R u 0 dx ≤ 1 m − 2 µ 2−m B R u 0 dx ≤ 1 m − 2 µ 2−m R N u 0 dx.
This upper bound will allow us to obtain compactness arguments in and R for fixed µ. We will be able to
control B R F µ ( u 0 )dx − B R F µ (U 1 (t)
)dx uniformly in µ, after passing to the limit as → 0 and R → ∞, due to a exponential decay result on the solution at time t ∈ [0, T ] that we will prove in Section 5 and the conservation of mass.
Remark. These techniques do not apply in the case m ≥ 3 because even an exponential decay on the solution is not enough to control the terms in the first energy estimate.
Second energy estimate
Similar computations to (3.3) yields to the following energy inequality
1 2 d dt B R |H s [U 1 ]| 2 dx ≤ −δ B R |∇H s [U 1 ]| 2 dx − B R (U 1 + µ) m−1 |∇K s [U 1 ]| 2 dx.
This implies that, for all 0 < t < T we have
(4.7) 1 2 B R |H s [U 1 (t)]| 2 dx + δ T 0 B R |∇H s [U 1 ]| 2 dx + T 0 B R (U 1 + µ) m−1 |∇K s [U 1 ]| 2 dx ≤ 1 2 B R |H s [ u 0 ]| 2 dx.
Note that the last integral is well defined as long as
u 0 ∈ L 1 (R N ) ∩ L ∞ (R N ).
5 Exponential tail control in the case m ≥ 2
In this section and the next one we will give the proof of Theorem 1.3. Weak solutions of the original problem are constructed by passing to the limit after a tail control step. We develop a comparison method with a suitable family of barrier functions, that in [10] received the name of true supersolutions.
Theorem 5.1. Let 0 < s < 1/2, m ≥ 2 and let U 1 be the solution of Problem (P δµR ). We assume that U 1 is bounded 0 ≤ U 1 (x, t) ≤ L and that u 0 lies below a function of the form
V 0 (x) = Ae −a|x| , A, a > 0.
If A is large, then there is a constant C > 0 that depends only on (N, s, a, L, A) such that for any T > 0 we will have the comparison
U 1 (x, t) ≤ Ae Ct−a|x| for all x ∈ R N , 0 < t ≤ T.
Proof. • Reduction. By scaling we may put a = L = 1. This is done by considering instead of U 1 , the functioñ U 1 defined as
(5.1) U 1 (x, t) = LŨ 1 (ax, bt), b = L m−1 a 2−2s ,
which satisfies the equation
(Ũ 1 ) t = δ 1 ∆Ũ 1 + ∇.(d µ L (Ũ 1 )∇K a s (Ũ 1 )), with δ 1 = a 2s δ/L m−1 . Note that thenŨ 1 (x, 0) ≤ A 1 e −|x| with A 1 = A/L. The corresponding bound forŨ 1 (x, t) will beŨ 1 (x, t) ≤ A/L e C1t−|x| with C 1 = C/b = C L m−1 a 2−2s −1 .
• Contact analysis. Therefore we assume that 0 ≤ U 1 (x, 0) ≤ 1 and also that
U 1 (x, 0) ≤ Ae −r , r = |x| > 0,
where A > 0 is a constant that will be chosen below, say larger than 2. Given constants C, and η > 0, we consider a radially symmetric candidate for the upper barrier function of the form
U (x, t) = Ae Ct−r + hAe ηt ,
and we take h small. Then C will be determined in terms of A to satisfy a true supersolution condition which is obtained by contradiction at the first point (x c , t c ) of possible contact of u and U .
The equation satisfied by u can be written in the form
(5.2) (U 1 ) t = δ∆U 1 + (m − 1)(u + µ) m−2 ∇U 1 · ∇p + (U 1 + µ) m−1 ∆p, p = K s [U 1 ].
We will obtain necessary conditions in order for equation (5.2) to hold at the contact point (x c , t c ). Then, we prove there is a suitable choice of parameters C, A, η, h, µ such that the contact can not hold.
Estimates on u and p at the first contact point. For 0 < s < 1/2, at the first contact point (x c , t c ) we have the estimates
∂ r U 1 = −Ae Ctc−rc , ∆U 1 ≤ Ae Ctc−rc , (U 1 ) t ≥ ACe Ctc−rc + hηAe ηtc .
Since we assumed our solution u is bounded by 0 ≤ u ≤ 1, then
(5.3) U 1 (x c , t c ) = Ae Ctc−rc + hAe ηtc ≤ 1.
Moreover, from [10] we have the following upper bounds for the pressure term at the contact point for 0 < s < 1/2:
(5.4) ∆p(x c , t c ) ≤ K 1 , (−∂ r p)(x c , t c ) ≤ K 2 .
Note that we are considering a regularized version of the p used in [10]. Of course the estimates still true (maybe with slightly bigger constants) since U 1 is regular.
Necessary conditions at the first contact point. Equation (5.2) at the contact point (x c , t c ) with r c = |x c |, implies that
ACe Ctc−rc + hηAe ηtc ≤ δAe Ctc−rc + (m − 1) (U 1 (x c , t c ) + µ) m−2 (−Ae Ctc−rc )(∂ r p)+ + (U 1 (x c , t c ) + µ) m−1 ∆p.
We denote ξ := r c +(η −C)t c . Using also (5.4) with K = max{K 1 , K 2 }, we obtain, after we simplify the previous inequality by Ae Ctc−rc ,
C + hηe ξ ≤ δ + (m − 1) (U 1 (x c , t c ) + µ) m−2 K + (U 1 (x c , t c ) + µ) m−2 (1 + he ξ + µ A e rc−Ctc )K,
and equivalently
C + ηe ξ ≤ δ + K (u(x c , t c ) + µ) m−2 m + he ξ + µ A e rc−Ctc .
We take C = η and µ A ≤ h. Then
C + hCe rc ≤ δ + K (U 1 (x c , t c ) + µ) m−2 m + he rc + he rc−Ctc .
Moreover,
C + hCe rc ≤ δ + K (U 1 (x c , t c ) + µ) m−2 (m + 2he rc ) . By (5.3) we have that µ < U 1 (x c , t c ) + µ < 1 + µ. Since m ≥ 2, then C + hCe rc ≤ δ + K (1 + µ) m−2 (m + 2he rc ) .
This is impossible for C large enough such that
(5.5) C ≥ δ + mK(1 + µ) m−2 and C ≥ 2K (1 + µ) m−2 .
Since µ < 1 and δ < 1, then we can choose C as constant, only depending on m and K.
Theorem 5.2. Let 1/2 ≤ s < 1, m ≥ 2.
Under the assumptions of the previous theorem, the stated tail estimate works locally in time. The global statement must be replaced by the following: there exists an increasing function C(t) such that
(5.6) u(x, t) ≤ Ae C(t)t−a|x| for all x ∈ R N and all 0 ≤ t ≤ T.
Proof. The proof of this result is similar to the one in [10] but with a technical adaptation to our model. When
N ≥ 2, 1/2 ≤ s < 1, the upper bound ∆p(x c , t c ) ≤ K 0 at the first contact point holds. Moreover, in [10], the following upper bound for (−∂ r p)(x c , t c ) is obtained, (−∂ r p)(x c , t c ) ≤ K 1 + K 2 ||U 1 (t)|| 1/q 1 ||U 1 (t)|| (q−1)/q ∞ ,
where 1 ≤ q < N/(2s − 1). We know that ||U 1 (t)|| ∞ ≤ 1 and before the first contact point we have that
U 1 (x, t) ≤ Ae ct e −|x| , hence ||U 1 (t)|| 1 ≤ K 3 Ae Ct . Therefore, if we consider K = max{K 0 , K 1 , K 2 K 1/q 3 } we have that (5.7) ∆p(x c , t c ) ≤ K, (−∂ r p)(x c , t c ) ≤ K + KA 1/q e Ctc/q .
Using this estimates in the equation we obtain
C + hηe ξ ≤ δ + K(m − 1) (u(x c , t c ) + µ) m−2 (1 + A 1/q e Ctc/q )+ + K(u(x c , t c ) + µ) m−2 (1 + he ξ + µ A e rc−Ctc ).
We put C = η, h = µ/A and use that µ < u(x c , t c ) + µ < 1 + µ to get
C + hCe rc ≤ δ + KA 1/q (m − 1) (1 + µ) m−2 e Ctc/q + K(1 + µ) m−2 (m + 2he rc ) .
We consider µ < 1. The contradiction argument works as before with the big difference that we must restrict the time so that e Ctc/q ≤ 2, which happens if
t c ≤ T 1 = (q log 2)/C. Then C + hCe rc ≤ δ + 2 m−1 KA 1/q (m − 1) + 2 m−2 Km + 2 m−2 Khe rc .
Since A > 1 and δ < 1, and hence 2
m−1 KA 1/q (m − 1) + 2 m−2 Km < 2 m−1 KA 1/q (2m − 1)
, we get a contradiction by choosing C such that:
C = 2 m KA 1/q m ≥ δ + 2 m−1 KA 1/q (2m − 1).
We have proved that there will be no contact with the barrier
B 1 (x, t) = Ae Ct−|x| for t < T 1 = c 1 A −1/q where c 1 = q log 2
Km2 m . We can repeat the argument for another time interval by considering the problem with initial value at time T 1 , that is,
U 1 (x, T 1 ) ≤ Ae CT1−|x| = A 1 e −|x| where A 1 = Ae CT1 , and we get U (x, t) ≤ e C1t−|x| for T 1 ≤ t < T 2 = c 1 A −1/q e −CT1/q where C 1 = Ce CT1/q .
In this way we could find an upper bound to a certain time for the solution depending on the initial data through A.
When N = 1, 1/2 ≤ s < 1, the operator ∂ r p and ∆p are considered in the sense given in Section 6.5.
6 Existence of weak solutions for m ∈ (1, 3)
6.1 Limit as → 0
We begin with the limit as → 0 in order to obtain a solution of the equation
(P δµR ) (U 2 ) t = δ∆U 2 + ∇ · (d µ (U 2 )∇K s [U 2 ]).
Let U 1 be the solution of (P δµR ). We fix δ, µ and R and we argue for close to 0. Then, by the energy formula (4.6) and the estimates from Section 4.2 we obtain that
(6.1) δ t 0 B R |∇U 1 | 2 (U 1 + µ) m−1 dxdt ≤ C(µ, m, u 0 ), t 0 B R |∇H s [U 1 ]| 2 dxdt ≤ C(µ, m, u 0 ), valid for all > 0. Since U 1 ∞ ≤ u 0 ∞ for all > 0, then t 0 B R |∇U 1 | 2 dxdt ≤ C(µ, m, u 0 )( u 0 ∞ + 1) m−1 , ∀ > 0.
We recall that in the case m ∈ (1, 2) the constant C is independent of µ, that is C = C(m, u 0 ).
I. Convergence as → 0. We perform an analysis of the family of approximate solutions (U 1 ) in order to derive a compactness property in suitable functional spaces.
• Uniform boundedness: U 1 ∈ L ∞ (Q T,R ), and the bound ||U
1 (t)|| L ∞ (R N ) ≤ ||u 0 || L ∞ (R N ) is independent of , δ, µ and R for all t > 0. Moreover ||U 1 (t)|| L 1 (R N ) ≤ ||u 0 || L 1 (R N ) for all t > 0.
• Gradient estimates. From the energy formula (6.1) we derive
U 1 ∈ L 2 ([0, T ] : H 1 0 (B R )), ∇H s [U 1 ] ∈ L 2 ([0, T ] : L 2 (R N )) uniformly bounded for > 0. Since ∇H s [U 1 ] is "a derivative of order 1 − s of U 1 ", we conclude that (6.2) U 1 ∈ L 2 ([0, T ], H 1−s (R N )).
• Estimates on the time derivative (U 1 ) t : we use the equation (P δµR ) to obtain that
(6.3) (U 1 ) t ∈ L 2 ([0, T ] : H −1 (R N ))
as follows:
(a) Since U 1 ∈ L 2 ([0, T ] : H 1 0 (B R )) we obtain that ∆U 1 ∈ L 2 ([0, T ] : H −1 (R N )). (b)
As a consequence of the Second Energy Estimate and the fact that
U 1 ∈ L ∞ (Q T ), we have that d µ (U 1 )∇K s [U 1 ] ∈ L 2 ([0, T ] : L 2 (R N )), therefore ∇ · (d µ (U 1 )∇K s [U 1 ])) ∈ L 2 ([0, T ] : H −1 (R N )). Now, since ||(U 1 ) t || L 1 t ([0,T ]:H −1+s (R N )) ≤ T 1/2 ||(U 1 ) t || L 2 t ([0,T ]:H −1+s (R N ))
, expressions (6.2) and (6.3), allow us to apply the compactness criteria of Simon, see Lemma 9.3 in the Appendix, in the context of
H 1−s (R N ) ⊂ L 2 (R N ) ⊂ H −1 (R N ),
and we conclude that the family of approximate solutions (U 1 ) is relatively compact in L 2 ([0, T ] : L 2 (R N )). Therefore, there exists a limit (
U 1 ) ,δ,µ,R → (U 2 ) δ,µ,R as → 0 in L 2 ([0, T ] : L 2 (R N ))
, up to subsequences. Note that, since (U 1 ) is a family of positive functions defined on B R and extended to 0 in R N \ B R , then the limit U 2 = 0 a.e. on R N \ B R . We obtain that
(6.4) U 1 →0 −→ U 2 in L 2 ([0, T ] : L 2 (B R )) = L 2 (B R × [0, T ]).
II. The limit is a solution of the new problem (P δµR ). More exactly, we pass to the limit as → 0 in the definition (4.2) of a weak solution of Problem (P δµR ) and prove that the limit U 2 (x, t) of the solutions U 1 (x, t) is a solution of Problem (P δµR ). The convergence of the first integral in (4.2) is justified by (6.4) since
T 0 B R (U 1 − U 2 )(φ t − δ∆φ)dxdt ≤ ||U 1 − U 2 || L 2 (B R ×[0,T ]) ||φ t − δ∆φ|| L 2 (B R ×[0,T ]) .
Convergence of the second integral in (4.2) is consequence of the second energy estimate (4.7) as we show now. First we note that
||(U 1 + µ) m−1 2 ∇K s [U 1 ]|| L 2 (B R ×(0,T )) ≤ C
for some constant C > 0 independent of . Then, Banach-Alaoglu ensures that there exists a subsequence such that
(U 1 + µ) m−1 2 ∇K s [U 1 ] →0 −→ v in L 2 (B R × (0, T )) weakly . Moreover, it is trivial to show that (U 1 + µ) − m−1 2 →0 −→ (U 2 + µ) − m−1 2 in L 2 (B R × (0, T )). Then ∇K s [U 1 ] = (U 1 + µ) m−1 2 (U 1 + µ) m−1 2 ∇K s [U 1 ] →0 −→ v (U 2 + µ) m−1 2 in L 1 (B R × (0, T )).
In particular we get that there exists a limit of ∇K s [U 1 ] as → 0 in any L p (B R × (0, T )) with 1 ≤ p ≤ ∞. Now we need to identify this limit. The following Lemma shows that ∇K s [
U 1 ] →0 −→ ∇K s [U 2 ]
in distributions, and so we can conclude convergence in L 2 (B R × (0, T )).
Lemma 6.1. Let s ∈ (0, 1) (0 < s < 1/2 if N = 1). Then (1) K s [U 1 ] →0 −→ K s [U 2 ] in L 1 (B R × (0, T )). (2) T 0 B R K s [U 1 ]∇ψ dxdt →0 −→ T 0 B R K s [U 2 ]∇ψ dxdt for every ψ ∈ C ∞ c (Q T ).
Proof. For the first part of the Lemma, we split the integral as follows,
T 0 B R (K s [U 1 ] − K s [U 2 ])dxdt = T 0 B R (K s [U 1 ] − K s [U 1 ])dxdt + T 0 B R (K s [U 1 ] − K s [U 2 ])dxdt. Note that K s [U 1 ] = K s * U 1 with K s ∈ L 1 loc (R N ) and K s [U 1 ] = K s * U 1 with K s = ρ * K s where ρ is a standard mollifier.
Then the first integral on the right hand side goes to zero as → 0. The second integral goes to zero with as consequence of (6.4).
The second part of the Lemma is just a corollary of the first part.
T 0 B R (K s [U 1 ] − K s [U 2 ])∇ψdxdt ≤ ||∇ψ|| ∞ ||K s [U 1 ] − K s [U 2 ]|| L 1 (B R ×(0,T )) .
The remaining case N = 1, s ∈ (1/2, 1) will be explained in Section 6.5. We conclude that,
T 0 B R d µ (U 1 )∇K s [U 1 ]∇φdxdt → T 0 B R d µ (U 2 )∇K s [U 2 ]∇φdxdt, as → 0.
Note that we can obtain also that ∇H s [U 1 ]
→0
−→ ∇H s [U 2 ] in L 2 (B R × (0, T )) using the same argument. This allows us to pass to the limit in the energy estimates.
The conclusion of this step is that we have obtained a weak solution of the initial value problem (P δµR ) posed in B R × [0, T ] with homogeneous Dirichlet boundary conditions. The regularity of U 2 , H s [U 2 ] and K s [U 2 ] is as stated before. We also have the energy formulas
(6.5) B R F µ (U 2 (t))dx + δ t 0 B R |∇U 2 | 2 d µ (U 2 ) dxdt + t 0 B R |∇H s [U 2 ]| 2 dxdt = B R F µ (u 0 )dx. 1 2 B R |H s [U 2 (t)]| 2 dx+δ t 0 B R |∇H s [U 2 ]| 2 dx dt + t 0 B R (U 2 + µ) m−1 |∇K s [U 2 ]| 2 dx dt ≤ 1 2 B R |H s [ u 0 ]| 2 dx.
We do not pass now to the limit as δ → 0, because we lose H 1 estimates for U 2 and we deal with the problem caused by the boundary data. Therefore, we keep the term δ∆U 2 .
Limit as R → ∞
We will now pass to the limit as R → ∞. The estimates used in the limit on in Section 4.2 are also independent on R. Then the same technique may be applied here in order to pass to the limit as R → ∞. Indeed, we get that U 3 = lim R→∞ U 2 in L 2 (R N × (0, T )) is a weak solution of the problem in the whole space
(P µδ ) (U 3 ) t = δ∆U 3 + ∇ · (U 3 + µ) m−1 ∇K s [U 3 ] x ∈ R N , t > 0.
This problem satisfies the property of conservation of mass, that we prove next.
Lemma 6.2. Let u 0 ∈ L 1 (R N )∩L ∞ (R N )
. Then the constructed non-negative solution of Problem (P µδ ) satisfies
(6.6) R N U 3 (x, t)dx = R N u 0 (x)dx for all t > 0.
Proof. Let ϕ ∈ C ∞ 0 (R N ) a cutoff test function supported in the ball B 2R and such that ϕ ≡ 1 for |x| ≤ R, we recall the construction in the Appendix 9.2. We get
B 2R (U 3 ) t ϕdx = δ B 2R U 3 ∆ϕdx − B 2R (U 3 + µ) m−1 ∇K s [U 3 ] · ∇ϕdx = I 1 + I 2 .
Since U 3 (t) ∈ L 1 (R N ) for any t ≥ 0, we estimate the first integral as I 1 = O(R −2 ) and then I 1 → 0 as R → ∞. For the second integral we have
I 2 = B 2R K s [U 3 ]∇ · (U 3 + µ) m−1 ∇ϕ dx, I 2 = (m − 1) B 2R K s [U 3 ](U 3 + µ) m−2 ∇u · ∇ϕdx + B 2R K s [U 3 ](U 3 + µ) m−1 ∆ϕdx = I 21 + I 22 .
Since ∇U 3 ∈ L 2 (R N ) and U 3 ∈ L ∞ (R N ),
|I 21 | ≤ C||(U 3 + µ) m−2 || ∞ B 2R |∇U 3 | 2 dx 1/2 B 2R |K s [U 3 ]| 2 |∇ϕ| 2 dx 1/2 . Now ∇ϕ = O(R −1 ), ∇ϕ ∈ L p with p > N , so we need K s [U 3 ] ∈ L q for q < 2 1 1 − 1 N/2 = 2N N − 2 which is true since K[U 3
] ∈ L q for q > q 0 = N/(N − 2s), and q 0 < 2N/(N − 2) if 4s < N + 2. So, since p > N ,
|I 21 | ≤ C B 2R |∇K s [U 3 ]| q dx 1/q B 2R |∇ϕ| p dx 1/p ≤ C B 2R R −p dx 1/p ≤ CR N −p p R→∞ −→ 0.
For I 22 , we will use the same trick of the previous section,
I 22 = B 2R K s [U 3 ] (U 3 + µ) m−1 − µ m−1 ∆ϕdx + µ m−1 B 2R K s [U 3 ]∆ϕdx = I 221 + I 222 .
Now,
I 222 = µ m−1 B 2R U 3 K s [∆ϕ]dx = µ m−1 ||U 3 || 1 O(R −2+2s ) R→∞ −→ 0,
where we use the fact that K∆ has homogeneity 2 − 2s > 0 as a differential operator. Also,
I 221 = B 2R f (ξ)U 3 K s (U 3 )∆ϕdx,
where f (s) = s m−1 and ξ ∈ [µ, µ + U 3 (x)]. Again, since U 3 ∈ L ∞ , there exists a global bound for f (ξ), that is, f (ξ) ≤ (m − 1) max{µ m−2 , (µ + ||U 3 || ∞ ) m−2 } and so integral I 221 → 0 as R → ∞ (details could be found in [10]).
In the limit R → ∞, ϕ ≡ 1 and we get (6.6).
Consequence. The estimates done in Section 4.2 can be improved passing to the limit R → ∞, since the conservation of mass (6.6) eliminates some of the integrals that presented difficulties when trying to obtain upper bounds independent of µ. Therefore, we compute the following terms in the energy estimate (6.5).
For m = 2, 3 we have
B R F µ (u 0 )dx − B R F µ (U 2 )dx = = C B R [(u 0 + µ) 3−m − µ 3−m ]dx − 1 2 − m µ 2−m B R u 0 dx − C B R [(U 2 + µ) 3−m − µ 3−m ]dx + 1 2 − m µ 2−m B R U 2 dx −→ C R N [(u 0 + µ) 3−m − µ 3−m ]dx − C R N [(U 3 + µ) 3−m − µ 3−m ]dx, (6.7)
as R → ∞. We use the notation C =
1 (2−m)(3−m) . For m = 3 we have B R F µ (u 0 )dx − B R F µ (U 2 )dx = = − B R log 1 + u 0 µ dx + 1 µ B R u 0 dx + B R log 1 + U 2 µ dx − 1 µ B R U 2 dx −→ R N log 1 + U 3 µ dx − R N log 1 + u 0 µ dx as R → ∞. (6.8)
The following theorem summarizes the results obtained until now.
R N U 3 (x, t)dx = R N u 0 (x)dx
and ||U 3 (·, t)|| ∞ ≤ ||u 0 || ∞ . The following energy estimates also hold:
(i) First energy estimate:
• If m = 3, δ t 0 R N |∇U 3 | 2 (U 3 + µ) 2 dxdt + t 0 R N |∇H s [U 3 ]| 2 dxdt + R N log u 0 µ + 1 dx (6.9) ≤ R N log U 3 (t) µ + 1 dx.
• If m = 2, 3 and
δ t 0 R N |∇U 3 | 2 (U 3 + µ) m−1 dxdt + t 0 R N |∇H s [U 3 ]| 2 dxdt + (6.10) +C R N (U 3 (t) + µ) 3−m − µ 3−m dx ≤ C R N (u 0 + µ) 3−m − µ 3−m dx where C = C(m) = 1 (2−m)(3−m) .
(ii) Second energy estimate:
1 2 R N |H s [U 3 (T )]| 2 dx+δ T 0 R N |∇H s [U 3 ]| 2 dx dt + T 0 R N (U 3 + µ) m−1 |∇K s [U 3 ]| 2 dx dt ≤ 1 2 R N |H s [u 0 ]| 2 dx.
Limit as µ → 0
Similarly to the previous limits we can prove that
U 4 = lim µ→0 U 3 in L 2 (R N × (0, T )) when m ∈ (1, 3). Then U 4
will be a solution of problem
(P δ ) (U 4 ) t = δ∆U 4 + ∇ · U m−1 4 ∇K s [U 4 ] x ∈ R N , t > 0.
In order to pass to the limit, we need to find uniform bounds on µ > 0 for terms 3 and 4 of the energy estimates (6.9) and (6.10).
Uniform upper bounds
• Case m ∈ (1, 2). By the Mean Value Theorem,
1 (m − 2)(3 − m) R N (u 0 + µ) 3−m − µ 3−m dx ≤ 1 (m − 2) R N (u 0 + µ) 2−m u 0 dx ≤ (||u 0 || ∞ + 1) 2−m m − 2 R N u 0 dx.
This bound is independent of µ.
• Case m ∈ (2, 3). The function f (ζ) = ζ 3−m is concave and so f (U 3 + µ) ≤ f (µ) + f (U 3 ). In this way,
1 (2 − m)(3 − m) R N (U 3 (t) + µ) 3−m − µ 3−m dx ≤ 1 (2 − m)(3 − m) R N U 3 (t) 3−m dx.
The last integral is finite due to the exponential decay for U 3 that we proved in Section 5. In this way, the last estimate is uniform in µ.
The limit is a solution of the new problem (P δ ). The argument from Section 6.1 does not apply for the limit
(6.11) T 0 R N (U 3 + µ) m−1 ∇K s [U 3 ]∇φdxdt µ→0 −→ T 0 R N U m−1 4 ∇K s [U 4 ]∇φdxdt.
In order to show that this convergence holds, we note that from the first energy estimate we get that ] as µ → 0 in L 2 (Ω). Then we have the convergence (6.11) since U 3 ∈ L ∞ (R N ) and φ is compactly supported.
Remarks. • In the case m = 2 the corresponding term is R N U 3 log − (U 3 + µ)dx which is uniformly bounded if U 3 has an exponential tail. This has been proved by Caffarelli and Vázquez in [10]. We do not repeat the proof here.
• The case m ≥ 3 is more difficult since we can not find uniform estimates in µ > 0 for the energy estimates that allow us to pass to the limit.
Limit as δ → 0
We will prove that there exists a limit u = lim δ→0 U 4 in L 2 (R N × (0, T )) and that u(x, t) is a weak solution to Problem (1.1). Thus, we conclude the proof of Theorem 1.2 stated in the introduction of this chapter.
We comment on the differences that appear in this case. From the first energy estimate we have that
δ T 0 R N |∇U 4 | 2 U m−1 4 dxdt ≤ C(m, u 0 ),
which gives us that δ∇U 4 ∈ L 2 (Q T ) since U 4 ∈ L ∞ (Q T ). Then, as in Section 6.1, we have that
δ∆U 4 ∈ H −1 (R N ) uniformly in δ. Also ∇(U m−1 4 ∇K s [U 4 ]) ∈ H −1 (R N ) as before. Then (U 4 ) t ∈ H −1 (R N )
independently on δ. Therefore we use the compactness argument of Simon to obtain that there exists a limit
U 4 (x, t) → u(x, t) L 2 ((0, T ) × R N ).
Now we show that u is the weak solution of Problem (1.1). It is trivial that
δ T 0 R N U 4 ∆φ → 0 as δ → 0. On the other hand, ∇K s [U 4 ] = H s [∇H s [U 4 ]] ∈ L 2 loc (Q T ) uniformly on δ > 0 since ∇H s [U 4
] ∈ L 2 (Q T ) uniformly on δ > 0. In this way, ∇K s [U 4 ] has a weak limit in L 2 loc (Q T ). As in Lemma 6.1 (2) we can identify this limit and so on,
∇K s [U 4 ] → ∇K s [u] weakly in L 2 loc (Q T ) as δ → 0 and therefore T 0 R N U m−1 4 ∇K s [U 4 ]∇φdxdt δ→0 −→ T 0 R N u m−1 ∇K s [u]∇φdxdt. since U m−1 4 → u m−1 in L 2 loc (Q T ) as δ → 0.
6.5 Dealing with the case N = 1 and 1/2 < s < 1
As we have commented before, the operator K s is not well defined when N = 1 and 1/2 < s < 1 since the kernel |x| 1−2s does not decay at infinity, indeed it grows. It makes no sense to think of equation (6.12) in terms of a pressure as before. This is maybe not very convenient, but it is not an essential problem, since equation (1.1) can be considered in the following sense:
(6.12) u t (t, x) = ∇ · u m−1 (∇K s )[u] for x ∈ R N , t > 0,
where the combined operator (∇K s ) is defined as the convolution operator
(∇K s )[u] := (∇K s ) * u with K s (x) = c s |x| 1−2s .
Other authors that dealt with N = 1 have considered operator (∇K s ) before. They use the notation ∇ 2s−1 to refer to it. Note that ∇K s (x) = (−1 + 2s)c s x |x| 3−2s , and so, ∇K s ∈ L 1 loc (R) for N = 1 and 1/2 < s < 1. Moreover, (∇K s ) is an integral operator in this range. As in Subsection 2. since the operator H s is well defined for any s ∈ (0, 1) even in dimension N = 1.
In this way, almost all the arguments from Section 6 apply by replacing ∇(K s (u)) for (∇K s )(u). The only exception is Lemma 6.1 where the weak L 2 (R) limit of ∇K s [U 1 ] is identified. This argument is replaced by the following Lemma:
Lemma 6.4. Let N=1 and 1/2 < s < 1. Then
T 0 B R U 1 (∇K s ) [ψ] dxdt →0 −→ T 0 B R U 2 (∇K s )[ψ] dxdt ∀ψ ∈ C ∞ c (Q T ).
Proof.
T 0 B R U 1 (∇K s ) [ψ] − U 2 (∇K s )[ψ] dxdt = T 0 B R (U 1 − U 2 )(∇K s ) [ψ] dxdt + T 0 B R U 2 (∇K s ) [ψ] − (∇K s )[ψ] dxdt.
The first integral on the right hand side goes to zero with since ||(∇K s ) [ψ]|| L ∞ (R) ≤ K for some positive constant K which does not depend on and U 1 → U 2 as → 0 in L 2 (B R × (0, T )). The second integral also goes to zero as consequence of (6.13) and the fact that U 2 ∈ L ∞ (R) uniformly on .
Finite propagation property for m ∈ [2, 3)
In this section we will prove that compactly supported initial data u 0 (x) determine the solutions u(x, t) that have the same property for all positive times. for some a, b > 0, with support in the ball B b (0), then there is a constant C large enough, such that
u(x, t) ≤ a(Ct − (|x| − b)) 2 .
Actually, we can take C(L, a) = C(1, 1)L m− 3 2 +s a 1 2 −s . For 1/2 ≤ s < 1 a similar conclusion is true, but C = C(t) is an increasing function of t and we do not obtain a scaling dependence of L and a.
Proof. The method is similar to the tail control section. We assume u(x, t) ≥ 0 has bounded initial data u 0 (x) = u(x, t 0 ) ≤ L, and also that u 0 is below the parabola U 0 (x) = a(|x| − b) 2 , a, b > 0. Moreover the support of U 0 is the ball of radius b and the graphs of u 0 and U 0 are strictly separated in that ball. We take as comparison function U (x, t) = a(Ct − (|x| − b)) 2 and argue at the first point in space and time where u(x, t) touches U from below. The fact that such a first contact point happens for t > 0 and x = ∞ is justified by regularization, as before. We put r = |x|. Note that since u ≤ 1 we must have |h| ≤ 1. Assuming that u is also C 2 smooth, since we deal with a first contact point (x c , t c ), we have that u
= U , ∇(u − U ) = 0, ∆(u − U ) ≤ 0, (u − U ) t ≥ 0, that is u(x c , t c ) = h 2 , u r = −2h, ∆u ≤ 2N, u t ≥ 2Ch.
For p = K s (u) and using the equation u t = (m − 1)u m−2 ∇u · ∇p + u m−1 ∆p, we get the inequality
(7.1) 2Ch ≤ 2(m − 1)h 2m−3 −p r + h 2 ∆p ,
where p r and ∆p are the values of p r and ∆p at the point (x c , t c ). In order to get a contradiction, we will use estimates for the values of p r and ∆p already proved in [10] (see Theorem 5.1. of [10])
(7.2) − p r ≤ K 1 + K 2 h 1+2s + K 3 h, ∆p ≤ K 4 .
Therefore, inequality (7.1) combined with the estimates (7.2) implies that
(7.3) 2C ≤ 2(m − 1)h 2m−4 K 1 + K 2 h 1+2s + Kh ,
which is impossible for C large (independent of h), since m > 2 and |h| ≤ 1. Therefore, there cannot be a contact point with h = 0. In this way we get a minimal constant C = C(N, s) for which such contact does not take place.
Remark: For m < 2, we do not obtain a contradiction in the estimate (7.3), since the term K 1 h 2m−4 can be very large for small values of |h|.
• Reduction. Dependence on L and a. The equation is invariant under the scaling
(7.4) u(x, t) = Au(Bx, T t)
with parameters A, B, T > 0 such that T = A m−1 B 2−2s .
Step I. We prove that if u has height 0 ≤ u(x, t) ≤ 1 and initially satisfies u(
x, 0) = u 0 (x) ≤ (|x| − b) 2 then u(x, t) ≤ U (x, t) = (Ct − (|x| − b)) 2 for all t > 0.
Step II. We search for parameters A, B, T for which the function u is defined by (7.4) satisfies
0 ≤ u(x, t) ≤ L, u(x, 0) ≤ a(|x| − b) 2 .
An easy computation gives us A = L, AB 2 = a, b = b/B. • Case 1/2 ≤ s < 1. The proof relies on estimating the term ∂ r p at a possible contact point. This is independent on m and it was done in [10].
Moreover, by the relation between
Lemma 7.2. Under the assumptions of Theorem 7.1 there is no contact between u(x, t) and the parabola U (x, t), in the sense that strict separation of u and U holds for all t > 0 if C is large enough.
Proof. We want to eliminate the possible contact of the supports at the lower part of the parabola, that is the minimum |x| = Ct + b. Instead of analyzing the possible contact point, we proceed by a change in the test function that we replace by
U (x, t) = (Ct − (|x| − b)) 2 + (1 + Dt) for |x| ≤ b + Ct, (1 + Dt), for |x| ≥ b + Ct.
The function U is constructed from the parabola U by a vertical translation (1 + Dt) and a lower truncation with 1 + Dt outside the ball {|x| ≤ b + Ct}. Here 0 < < 1 is a small constant and D > 0 will be suitable chosen.
We assume that the solution u(x, t) starts as u(x, 0) = u 0 (x) and touches for the first time the parabola U at t = t c and spatial coordinate x c . The contact point can not be a ball {|x| ≤ b + Ct} since U is a parabola here and this case was eliminated in the previous Theorem 7.1. Consider now the case when the first contact point between u(x, t) and U (x, t) is when |x c | ≥ b + Ct c . At the contact point we have that u = U , ∇(u − U ) = 0, ∆(u − U ) ≤ 0, (u − U ) t ≥ 0. In this region the spatial derivatives of U are zero, hence the equation gives us
D = ( (1 + Dt c )) m−1 ∆p,
where ∆p is the value of ∆p = (−∆) 1−s u at the point (x c , t c ). Since is small we get that the bound u(x, t) ≤ U 1 (x, t) is true for all |x| ≤ R N . This allows us to prove that that ∆p is bounded by a constant K. We obtain that D ≤ ( (1 + Dt c )) m−1 K. Since m ≥ 2 and < 1, this implies that
D ≤ (1 + Dt c ) m−1 K.
We obtain a contradiction for large D, for example D = 2K, and for
t c < T c = 1 2K 2 1/(m−1) − 1 .
Therefore, we proved that a contact point between u and U is not posible for t < T c , and thus u(x, t) ≤ U (x, t) for t < T c . The estimate on t c is uniform in and we obtain in the limit → 0 that
u(x, t) ≤ U (x, t) = (Ct − (|x| − b)) for t < 1 2K 2 1/(m−1) − 1 .
As a consequence, the support of u(x, t) is bounded by the line |x| = Ct + b in the time interval [0, T c ). The comparison for all times can be proved with an iteration process in time.
• Regularity requirements. Using the smooth solutions of the approximate equations, the previous conclusions hold for any constructed weak solution.
Remark. The following result about the free boundary is valid only for s < 1/2 and for solutions with bounded and compactly supported initial data. The result is a direct consequence of the parabolic barrier study done in the previous section. Since that barrier does not depend explicitly on m if m ≥ 2, the proof presented in [10] is valid here. By free boundary FB(u) we mean, the topological boundary of the support of the solution
S(u) = {(x, t) : u(x, t) > 0}.
Persistence of positivity
This property is also interesting in the sense that avoids the possibility of degeneracy points for the solutions.
In particular, assuming that the solutions are continuous, it implies the non-shrinking of the support. Due to the nonlocal character of the operator, the following theorem can be proved only for a certain class of solutions.
Lemma 7.4. Let u be a weak solution as constructed in Theorem 1.3 and assume that the initial data u 0 (x) is radially symmetric and non-increasing in |x|. Then u(x, t) is also radially symmetric and non-increasing in |x|.
Proof. The operators in the approximate problem (P δµR ) are invariant under rotation in the space variable.
Since the solution of problem (P δµR ) is unique, then we obtain that u(x, t) is radially symmetric.
Theorem 7.5. Let u be a weak solution as constructed in Theorem 1.3 and assume that it is a radial function of the space variable u(|x|, t) and is non-increasing in |x|. If u 0 (x) is positive in a neighborhood of a point x 0 , then u(x 0 , t) is positive for all times t > 0.
Proof. A similar technique as the one presented in the tail analysis is used for this proof, but with what we call true subsolutions. Assume u 0 (x) ≥ c > 0 in a ball B R (x 0 ). By translation and scaling we can also assume c = R = 1 and x 0 = 0. Again, we will study a possible first contact point with a barrier that shrinks quickly in time, like
(7.5) U (x, t) = e −at F (|x|),
with F : R ≥0 −→ R ≥0 to be chosen later and a > 0 large enough. Choose F (0) = 1/2, F (r) = 0 for r ≥ 1/2 and F (r) ≤ 0 for all r ∈ R ≥0 . The contact point (x c , t c ) is sought in B 1/2 (0) × (0, ∞). By approximation we can assume that u is positive everywhere so there are no contact points at the parabolic border. At the possible contact point (x c , t c ) we have
u(x c , t c ) = U (x c , t c ), u t (x c , t c ) ≤ U t (x c , t c ) = −aU (x c , t c ), ∇u(x c , t c ) = ∇U (x c , t c ) = e −atc F (|x c |)e r , e r = x c /|x c |.
We recall the equation
u t = (m − 1)u m−2 ∇u∇p + u m−1 ∆p.
Then at the contact point (x c , t c ) we have
−aU = U t ≥ u t = (m − 1)U m−2 ∇U ∇p + U m−1 ∆p,
where ∆p = ∆p(x c , t c ). Then According to [10] we know that the term F (|x|) p r ≥ 0 and ∆p is bounded uniformly. Therefore −ae −atc F (|x c |) ≥ e −a(m−1)tc F (|x c |) m−1 ∆p.
Simplifying and using that m ≥ 2, ∆p is bounded uniformly and also F is bounded, we obtain
a ≤ −e −a(m−2)tc F (|x c |) m−2 ∆p ≤ Ke −a(m−2)tc ≤ K.
This is not true if a > K and we arrive at a contradiction.
Remark. There exist counterexamples on the persistence of positivity property when the hypothesis of Theorem 7.5 are not satisfied. In [10] (Theorem 6.2) the authors construct an explicit counterexample by taking an initial data with not connected support.
8 Infinite propagation speed in the case 1 < m < 2 and N = 1
In this section we will consider model (1.1)
(8.1) ∂ t u = ∂ x · (u m−1 ∂ x p), p = (−∆) −s u,
for x ∈ R, t > 0 and s ∈ (0, 1). We take compactly supported initial data u 0 ≥ 0 such that u 0 ∈ L 1 loc (R). We want to prove infinite speed of propagation of the positivity set for this problem. This is not easy, hence we introduce the integrated solution v, given by (8.2) v(x, t) =
x −∞ u(y, t) dy ≥ 0 for t > 0, x ∈ R.
Therefore v x = u and v(x, t) is a solution of the equation
(8.3) ∂ t v = −|v x | m−1 (−∆) α v,
in some sense that we will make precise. The exponents α and s are related by α = 1 − s. The technique of the integrated solution has been extensively used in the standard Laplacian case to relate the porous medium equation with its integrated version, which is the p-Laplacian equation, always in 1D, with interesting results, see e. g. [25]. The use of this tool in [4] for fractional Laplacians in the case m = 2 was novel and very fruitful. We consider equation (8.3) with initial data
(8.4) v(x, 0) = v 0 (x) := x −∞ u 0 (x) dx for all x ∈ R.
Note that v(x, t) is a non-decreasing function in the space variable x. Moreover, since u(x, t) enjoys the property of conservation of mass, then v(x, t) satisfies (see Figure 1)
lim x→−∞ v(x, t) = 0, lim x→+∞ v(x, t) = M
for all t ≥ 0. We devote a separate study to the solution v of the integrated problem (8.3) in Section 8.3. The validity of the maximum principle for equation (8.3) allows to prove a clean propagation theorem for v. The use of the integrated function is what forces us to work in one space dimension. The result continues the theory of the porous medium equation with potential pressure, by proving that model (8.1) has different propagation properties depending on the exponent m by the ranges m ≥ 2 and 1 < m < 2. Such a behaviour is well known to be typical for the classical Porous Medium Equation u t = ∆u m , recovered formally for s = 0, which has finite propagation for m > 1 and infinite propagation for m ≤ 1. Therefore, our result is unexpected, since it shows that for the fractional diffusion model the separation between finite and infinite propagation is moved to m = 2.
Proof of Theorem 1.4, part b). This weaker result follows immediately. In fact, in Theorem 8.1 we prove that v(x, t) defined by (8.2) is positive for every t > 0 if x ∈ R. Therefore for every t > 0 there exist points x arbitrary far from the origin such that u(x, t) > 0.
If moreover, u 0 is radially symmetric and non-increasing in |x| and u inherits the symmetry and monotonicity properties of the initial data as proved in Lemma 7.4. This ensures that u can not take zero values for any x ∈ R and t > 0.
Study of the integrated problem
• Connection between Model (8.1) and Model (8.3)
We explain how the properties of the Model (8.1) with N = 1 can be obtained via a study of the properties of the integrated equation (8.3). We consider equation (8.1) with compactly supported initial data u 0 such that u 0 ≥ 0. Let us say that supp u 0 ⊂ [−R, R], where R > 0. Therefore, the corresponding initial data to be considered for the integrated problem is v 0 (x) =
x −∞ u 0 (y)dy, for all x ∈ R. Then v 0 : R → [0, ∞) and has the properties
(8.5) v 0 (x) = 0 for x < −R, v 0 (x) = M for x > R, v 0 (x) ≥ 0 for x ∈ (−R, R),
where R > 0 is fixed from the beginning and M = R u 0 (x)dx is the total mass. Proof. I. Preliminary estimates.
Regularity
Since v x (x, t) = u(x, t), where u is the solution of Problem (1.1), then by the estimates of Section 6.1 we have the following:
• v x = u ∈ L ∞ ([0, T ] : L ∞ (R)), therefore v ∈ L ∞ ([0, T ] : Lip(R)), where Lip(R) is the space of Lipschitz continuous functions on R. In particular, v ∈ L ∞ ([0, T ] : Lip(B R )) for every B R ⊂ R.
• We have (v t ) x = u t = ∂ x (u m−1 ∂ x (−∆) −s u) in the sense of distributions.
Then v t ∈ L 2 ([0, T ] : L 2 (B)) for every set B ⊂ R, with |B| < +∞. The proof is as follows. The first equality holds in the distributions sense, that is
T 0 R v t ϕ x dx dt := − T 0 R v(ϕ x ) t dx dt = T 0 R u m−1 ∂ x (−∆) −s u ϕ x dx dt, ∀ϕ ∈ C ∞ 0 (R × [0, T ]).
This implies that v t = u m−1 ∂ x (−∆) −s u a.e. in R. Then, using the second energy estimate (3.3), we obtain Then
|v(x 0 , t 1 ) − v(x 0 , t 0 )| ≤ |v(x 0 , t 1 ) − v(x, t 1 )| + |v(x, t 0 ) − v(x 0 , t 0 )| + |v(x, t 1 ) − v(x, t 0 )|.
We know v ∈ Lip x (R); let L the corresponding Lipschitz constant. Then
|v(x 0 , t 1 ) − v(x 0 , t 0 )| ≤ 2Lh + 1 h x x0 |v(y, t 1 ) − v(y, t 0 )|dy ≤ 2Lh + 1 h x x0 t1 t0 v t dt dy ≤ 2Lh 2 + x x0 t1 t0 |v t | dydt ≤ 2Lh + 1 h |B| 1/2 |t 1 − t 0 | 1/2 v t 2 L 2 ([0,T ]:L 2 (B) = 2Lh + |t 1 − t 0 | 1/2 h 1/2 v t 2 L 2 ([0,T ]:L 2 (B) .
Optimizing, we choose h ∼ |t1−t0| 1/2
h 1/2
, that is h ∼ (t 1 − t 0 ) 3/2 , and we obtain that
|v(x 0 , t 1 ) − v(x 0 , t 0 )| ≤ K|t 1 − t 0 | 1/3 .
This estimate holds uniformly in x ∈ R and it proves that v(x, t) is Hölder continuous in time. In particular v ∈ C([0, T ] : C(R)).
Viscosity solutions
Notion of solution. We define the notions of viscosity sub-solution, super-solution and solution in the sense of Crandall-Lions [13]. The definition will be adapted to our problem by considering the time dependency and also the nonlocal character of the Fractional Laplacian operator. For a presentation of the theory of viscosity solutions to more general integro-differential equations we refer to Barles and Imbert [1].
It will be useful to make the notations: Let v ∈ USC(R × (0, ∞)) (resp. v ∈ LSC(R × (0, ∞)) ). We say that v is a viscosity subsolution (resp. super-solution) of equation (8.3) on R × (0, ∞) if for any point (x 0 , t 0 ) with t 0 > 0 and any τ ∈ (0, t 0 ) and any test function ϕ ∈ C 2 (R × (0, ∞)) ∩ L ∞ (R × (0, ∞)) such that v − ϕ attains a global maximum (minimum) at the point (x 0 , t 0 ) on
USCQ τ = R × (t 0 − τ, t 0 ] we have that ∂ t ϕ(x 0 , t 0 ) + |ϕ x (x 0 , t 0 )| m−1 ((−∆) α ϕ(·, t 0 ))(x 0 ) ≤ 0 (≥ 0).
Since equation (8.3) is invariant under translation, the test function ϕ in the above definition can be taken such that ϕ touches v from above in the sub-solution case, resp. ϕ touches v from below in the super-solution case.
We say that v is a viscosity sub-solution (resp. super-solution) of the initial-value problem (8.
3)-(8.4) on R × (0, ∞) if it satisfies moreover at t = 0 v(x, 0) ≤ lim sup y→x, t→0 v(y, t) (resp. v(x, 0) ≥ lim inf y→x, t→0
v(y, t)).
We say that v ∈ C(R × (0, ∞)) is a viscosity solution if v is a viscosity sub-solution and a viscosity supersolution on R × (0, ∞).
(v ) t = δ∆(v ) + |(v ) x | m−1 (−∆) 1−s v .
Since u → u, then we get that v → v as → 0 (and similarly with respect to the other parameters). The final argument is to prove that a limit of viscosity solutions is a viscosity solution of Problem (8.3)-(8.4).
The standard comparison principle for viscosity solutions holds true. We refer to Imbert, Monneau and Rouy [23] where they treat the case m = 2 and α = 1/2. Also, we mention Jakobsen and Karlsen [24] for the elliptic case.
If w(x, 0) ≤ v 0 ≤ W (x, 0), then w ≤ W in R × (0, ∞).
We give now our extended version of parabolic comparison principle, which represents an important instrument when using barrier methods. This type of result is motivated by the nonlocal character of the problem and the construction of lower barriers in a desired region Ω ⊂ R possibly unbounded. This determines the parabolic boundary of a domain of the form Ω
× [0, T ] to be (R \ Ω) × [0, T ] ∪ R × {0},
where Ω ⊂ R. A similar parabolic comparison has been proved in [6] and has been used for instance in [6,31].
Proposition 8.6. Let m > 1, α ∈ (0, 1). Let v be a viscosity solution of Problem (8.3)-(8.4). Let Φ : R×[0, ∞) → R such that Φ ∈ C 2 (Ω × (0, T )). Assume that • Φ t + |Φ x | m−1 (−∆) α Φ < 0 for x ∈ Ω, t ∈ [0, T ];
• Φ(x, 0) < v(x, 0) for all x ∈ R (comparison at initial time);
• Φ(x, t) < v(x, t) for all x ∈ R \ Ω and t ∈ (0, T ) (comparison on the parabolic boundary).
Then Φ(x, t) ≤ v(x, t) for all x ∈ R, t ∈ (0, T ).
Proof. The proof relies on the study of the difference Φ − v : R × [0, ∞) → R. At the initial time t = 0 we have by hypothesis that Φ(x, 0) − v(x, 0) < 0 for all x ∈ R. Now, we argue by contradiction. We assume that the function Φ − v has a first contact point (x c , t c ) where x c ∈ Ω and t c ∈ (0, T ). That is, (Φ − v)(x c , t c ) = 0 and (Φ − v)(x, t) < 0 for all 0 < t < t c , x ∈ R, by regularity assumptions. Therefore, (Φ − v) has a global maximum point at (x c , t c ) on R × (0, t c ]. Therefore, v − Φ attains a global minimum at (x c , t c ).
Since v is a viscosity solution and Φ is an admissible test function then by definition
Φ t (x c , t c ) + |Φ x (x c , t c )| m−1 (−∆) α Φ(x c , t c ) ≥ 0,
which is a contradiction since this value is negative by hypothesis.
Self-Similar Solutions. Formal approach
Self-similar solutions are the key tool in describing the asymptotic behaviour of the solution to certain parabolic problems. We perform here a formal computation of a type of self-similar solution to equation (8.3), being motivated by the construction of suitable lower barriers.
Let m ∈ (1, 2) and α ∈ (0, 1). We search for self-similar solutions to equation (8.3) of the form
U (x, t) = Φ(|y|t −b )
which solve equation (8.3) in R × (0, ∞). After a formal computation, it follows that the exponent b > 0 is given by b = 1/(m − 1 + 2α) and the profile function Φ is a solution of the equation
byΦ (y) − |Φ (y)| m−1 (−∆) α Φ(y) = 0.
We deduce that any possible behaviour of the form Φ(y) = c|y| −γ with γ > 1 is given by
(8.6) γ = 2α + m 2 − m .
The value of the self-similarity exponent will be used in the next section for the construction of a lower barrier. A further analysis of self-similar solutions is beyond the purpose of this paper and can be the subject of a new work. We mention that in the case m = 2, the profile function Φ has been computed explicitly by Biler, Karch and Monneau in [4].
Construction of the lower barrier
In this section we present a class of sub-solutions of equation (8.3) which represent an important tool in the proof of the infinite speed of propagation. For a suitable choice of parameters this type of sub-solution will give us a lower bound for v in the corresponding domain. This motivates us to refer to this function as a lower barrier. We mention that a similar lower barrier has been constructed in [31].
Let γ = m + 2α 2 − m and b = 1 m − 1 + 2α
be the exponents deduced in Section 8.4.
We fix x 0 < 0. In the sequel we will use as an important tool a function G : R → R such that, given any two constants C 1 > 0 and C 2 > 0, we have that • (G1) G is compactly supported in the interval (−x 0 , ∞);
• (G2) G(x) ≤ C 1 for all x ∈ R; • (G3) (−∆) s G(x) ≤ −C 2 |x| −(1+2s) for all x < x 0 .
This technical result will be proven in Lemma 9.1 of Section 9 (Appendix).
Lemma 8.7 (Lower Barrier). Let x 0 < 0, > 0 and ξ > 0. Also, let G be a function with the properties (G1),(G2) and (G3). We consider the barrier
(8.7) Φ (x, t) = (t + τ ) bγ (|x| + ξ) −γ + G(x) − , t ≥ 0, x ∈ R.
Then for a suitable choice of the parameter C 2 > 0, the function Φ satisfies
(8.8) (Φ ) t + |(Φ ) x | m−1 (−∆) α Φ ≤ 0 for x < x 0 , t > 0.
Moreover, C 1 is a free parameter and C 2 = C 2 (N, m, α, τ ).
Proof. We start by checking under which conditions Φ satisfies (8.8), that is, Φ is a classical sub-solution of equation (8.3) in Q. To this aim, we have that
(Φ ) t + |(Φ ) x | m−1 (−∆) α Φ = bγ (t + τ ) bγ−1 (|x| + ξ) γ + γ m−1 (t + τ ) bγ(m−1) (|x| + ξ) (γ+1)(m−1) (−∆) α Φ (x, t) = bγ (t + τ ) bγ−1 (|x| + ξ) γ + γ m−1 (t + τ ) bγ(m−1) (|x| + ξ) (γ+1)(m−1) (t + τ ) bγ (−∆) α [(|x| + ξ) −γ ] + (−∆) α G .
Now, by Lemma 9.2 we get the estimate (−∆) α ((|x| + ξ) −γ ) ≤ C 3 |x| −(1+2α) for all |x| ≥ |x 0 |, with positive constant C 3 = C 3 (N, m, α). At this step, we choose the parameter C 2 in the assumption (G2) to be at least C 2 > C 3 . The precise choice will be deduced later. Since γ = (γ + 1)(m − 1) + 1 + 2α, we continue as follows:
(Φ ) t + |(Φ ) x | m−1 (−∆) α Φ ≤ bγ (t + τ ) bγ−1 (|x| + ξ) γ + γ m−1 (t + τ ) bγm (|x| + ξ) (γ+1)(m−1) (C 3 − C 2 )|x| −(1+2α) = (|x| + ξ) −(γ+1)(m−1) · · bγ(t + τ ) bγ−1 (|x| + ξ) −(1+2α) + γ m−1 (t + τ ) bγm (C 3 − C 2 )|x| −(1+2α) ≤ (|x| + ξ) −(γ+1)(m−1) |x| −(1+2α) bγ(t + τ ) bγ−1 + γ m−1 (t + τ ) bγm (C 3 − C 2 )
which is negative for all (x, t) ∈ Q, if we ensure that C 2 is such that:
(8.9) C 2 > C 3 + bγ 2−m τ bγ(1−m)−1 .
This choice of C 2 is independent on the parameters ξ, .
From now on, we will take τ = 1, which will be enough for our purpose. We can now prove the main result for the model (8.3) which in particular implies the infinite speed of propagation of model (1.1) for 1 < m < 2 in dimension N = 1.
Proof Theorem 8.1
Let x 0 < 0 fixed. We prove that v(x, t) > 0 for all t > 0 and x < x 0 . By scaling arguments, the initial data v 0 with properties (8.5), satisfies
(8.10) v 0 (x) ≥ H x0 (x) = 0, x < x 0 , 1, x > x 0 .
We will prove that v(x, t) ≥ Φ (x, t) in the parabolic domain Q T = {x < x 0 , t ∈ [0, T ]} by using as an essential tool the Parabolic Comparison Principle established in Proposition 8.6. We describe the proof in the graphics below, where the barrier function is represented, for simplicity, without the modification caused by the function G(·) (Figure 3).
To this aim we check the required conditions in order to apply the above mentioned comparison result.
• Comparison on the parabolic boundary. This will be done in two steps.
(a) Comparison at the initial time. The initial data (8.10) naturally impose the following conditions on Φ . At time t = 0 we have Φ (x 0 , 0) < 0, which holds only if ξ satisfies
(8.11) ξ > x 0 + − 1 γ . Therefore Φ (x 0 , 0) < v 0 (x 0 ) since v 0 (x 0 ) > 0.
(b) Comparison on the lateral boundary. Let k 1 := min{v(x, t) :
x ≥ x 0 , 0 < t ≤ T } with k 1 > 0. This results follows from the continuity v ∈ C([0, T ] : C(R)) since v 0 (x 0 ) = 1. We impose the condition
Φ (x, t) < v(x, t) for all x ≥ x 0 , t ∈ [0, T ].
It is sufficient to have (T + 1) bγ (ξ −γ + C 1 ) < k 1 .
The maximum value of T for which this inequality holds is
(8.12) T < k 1 ξ −γ + C 1 1/bγ − 1.
We need to impose a compatibility condition on the parameters in order to have T > 0, that is:
(8.13) ξ > (k 1 − C 1 ) − 1 γ .
The remaining parameter C 1 from assumption (G2) is chosen here such that: C 1 < k 1 .
By Proposition 8.6 we obtain the desired comparison
v(x, t) ≥ Φ (x, t) for all (x, t) ∈ Q T .
• Infinite speed of propagation. Let x 1 < x 0 and t 1 ∈ (0, T ) where T is given by (8.12). We prove there exists a suitable choice of ξ and such that Φ (x 1 , t 1 ) > 0. This is equivalent to impose the following upper bound on ξ:
(8.14) ξ < x 1 + (t 1 + 1) b − 1 γ .
We now need to check that there exists ε > 0 such that condition (8.14) is compatible with conditions (8.11) and (8.13). For the compatibility of conditions (8.11) and (8.14) we need
x_0 + ε^{−1/γ} < ξ < x_1 + (t_1 + 1)^{b} ε^{−1/γ},
that is,
(8.15)  ε < ( ((t_1 + 1)^{b} − 1) / (x_0 − x_1) )^{γ}.
For conditions (8.13) and (8.14) we need
(k_1 − C_1)^{−1/γ} ≤ ξ < x_1 + (t_1 + 1)^{b} ε^{−1/γ},
which is equivalent to
(8.16)  ε < ( (t_1 + 1)^{b} / ((k_1 − C_1)^{−1/γ} − x_1) )^{γ}.
Both upper bounds (8.15) and (8.16) make sense since 0 > x_0 > x_1 and k_1 > C_1.
Summary. The proof was performed in a constructive manner and we summarize it as follows: choose C_1 < k_1 and T given by (8.12); then, taking ε below the minimum of the bounds (8.15)-(8.16) and ξ satisfying (8.11), (8.13) and (8.14), we obtain that Φ_ε(x_1, t_1) > 0.
This proves that v(x_1, t_1) > 0 for any t_1 ∈ (0, T).
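The constructive selection of parameters can also be traced numerically. In the sketch below, b and γ are treated as given exponents of the barrier, and all numerical inputs (x_0, x_1, t_1, k_1, C_1) are illustrative rather than taken from the paper; the barrier is evaluated, as in the proof, to the left of the support of G, where G vanishes.

```python
# Minimal sketch of the constructive parameter selection in the proof of Theorem 8.1.
# All numerical inputs are illustrative.
import numpy as np

b, g = 0.5, 4.2                  # exponents of the barrier (gamma > 1)
x0, x1, t1 = -1.0, -5.0, 0.3     # 0 > x0 > x1, time t1 > 0
k1, C1 = 1.0, 0.5                # lateral lower bound and the constant of (G2), with C1 < k1

# (8.15)-(8.16): admissible upper bounds for eps
eps_bound_1 = (((t1 + 1)**b - 1) / (x0 - x1))**g
eps_bound_2 = ((t1 + 1)**b / ((k1 - C1)**(-1 / g) - x1))**g
eps = 0.5 * min(eps_bound_1, eps_bound_2)

# (8.11), (8.13): lower bounds for xi; (8.14): upper bound for xi
xi_lo = max(x0 + eps**(-1 / g), (k1 - C1)**(-1 / g))
xi_hi = x1 + (t1 + 1)**b * eps**(-1 / g)
assert xi_lo < xi_hi, "empty window for xi"
xi = 0.5 * (xi_lo + xi_hi)

# (8.12): in the proof T is fixed first and t1 is then taken inside (0, T)
T = (k1 / (xi**(-g) + C1))**(1 / (b * g)) - 1
print(f"eps = {eps:.3e}, xi = {xi:.3f}, T = {T:.3f} (t1 = {t1})")

# Value of the barrier at (x1, t1), away from supp(G):
# Phi_eps = (t1+1)^(b*g) * (|x1|+xi)^(-g) - eps
Phi = (t1 + 1)**(b * g) * (abs(x1) + xi)**(-g) - eps
print(f"Phi_eps(x1, t1) = {Phi:.3e}  (positive)")
assert Phi > 0
```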
Remark. The parameter ξ of the barrier depends on ε through (8.11) and (8.14), and therefore ξ → ∞ as ε → 0. Consequently Φ_ε(x, t) → 0 as ε → 0 for every (x, t) ∈ Q_T, and we cannot derive a lower parabolic estimate for v(x, t) in Q_T.
In this section we are interested in estimating the fractional Laplacian of given functions. We recall the definition of the fractional Laplacian operator
(−Δ)^s u(x) = σ_{N,s} P.V. ∫_{R^N} (u(x) − u(y)) / |x − y|^{N+2s} dy,  0 < s < 1,
where σ_{N,s} is a normalization constant given by
σ_{N,s} = 2^{2s} Γ((N + 2s)/2) / ( π^{N/2} |Γ(−s)| ).
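As a small illustration, the constant can be evaluated directly from this closed form; the following sketch assumes the expression above and uses scipy.

```python
# Minimal sketch: evaluate the normalization constant sigma_{N,s} of the
# fractional Laplacian for a few sample values of N and s, assuming the closed
# form sigma_{N,s} = 2^{2s} Gamma((N+2s)/2) / (pi^{N/2} |Gamma(-s)|).
import numpy as np
from scipy.special import gamma

def sigma(N, s):
    return 2**(2 * s) * gamma((N + 2 * s) / 2) / (np.pi**(N / 2) * abs(gamma(-s)))

for N in (1, 2, 3):
    print(", ".join(f"sigma({N},{s}) = {sigma(N, s):.6f}" for s in (0.25, 0.5, 0.75)))
```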
First, given the expression of the fractional Laplacian, we construct a function with the desired properties.
Lemma 9.1. Given two arbitrary constants C_1, C_2 > 0, there exists a function G : R → [0, +∞) with the following properties:
1. G is compactly supported;
2. G(x) ≤ C_1 for all x ∈ R;
3. (−Δ)^s G(x) ≤ −C_2 |x|^{−(1+2s)} for all x ∈ R with d(x, supp(G)) ≥ 1.
Proof. Let R be an arbitrary positive number to be chosen later. We consider a smooth function G_1 : R → [0, +∞) such that G_1(x) ≤ C_1 for all x ∈ R, supported in the interval [−1, 1].
We define G_R(x) = G_1(x/R). Therefore ‖G_R‖_{L^1(R)} = R ‖G_1‖_{L^1(R)}, G_R ≤ C_1 and G_R is supported in the interval [−R, R]. Then for |x| ≥ R + 1 we have that
(−Δ)^s G_R(x) = σ_s ∫_R (G_R(x) − G_R(y)) / |x − y|^{1+2s} dy = −σ_s ∫_{−R}^{R} G_R(y) / |x − y|^{1+2s} dy
  ≤ −σ_s ∫_{−R}^{R} G_R(y) / (|x| + R)^{1+2s} dy = −σ_s (|x| + R)^{−(1+2s)} ‖G_R‖_{L^1(R)}
  ≤ −σ_s 2^{−(1+2s)} ‖G_R‖_{L^1(R)} |x|^{−(1+2s)} = −σ_s 2^{−(1+2s)} R ‖G_1‖_{L^1(R)} |x|^{−(1+2s)}.
It is enough to choose R ≥ 2^{1+2s} C_2 / (σ_s ‖G_1‖_{L^1(R)}) to get (−Δ)^s G_R(x) ≤ −C_2 |x|^{−(1+2s)}. Note that R implicitly depends on C_1, since ‖G_1‖_{L^1(R)} ≤ 2C_1.
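The construction can be checked numerically. In the sketch below the bump G_1, the constants C_1, C_2 and the exponent s are illustrative choices; property 3 is verified by direct quadrature, which is unproblematic here because the evaluation points are at distance at least 1 from supp(G_R).

```python
# Minimal numerical sketch of Lemma 9.1 in dimension N = 1: build G_R from a
# smooth bump G_1 supported in [-1, 1] and check property 3 by quadrature.
# The bump profile and all sample values (C_1, C_2, s) are illustrative only.
import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

s = 0.25
C1, C2 = 1.0, 2.0
sigma_s = 2**(2*s) * gamma((1 + 2*s)/2) / (np.pi**0.5 * abs(gamma(-s)))

G1 = lambda x: np.where(np.abs(x) < 1,
                        C1 * np.exp(1 - 1/(1 - np.minimum(x**2, 1 - 1e-12))), 0.0)
norm_G1, _ = quad(G1, -1, 1)

R  = 2**(1 + 2*s) * C2 / (sigma_s * norm_G1)   # choice made in the proof
GR = lambda x: G1(x / R)

def frac_lap_GR(x):
    # For dist(x, supp G_R) >= 1 the kernel is not singular and G_R(x) = 0, so
    # (-Delta)^s G_R(x) = -sigma_s * int_{-R}^{R} G_R(y) |x - y|^(-(1+2s)) dy.
    val, _ = quad(lambda y: GR(y) / np.abs(x - y)**(1 + 2*s), -R, R)
    return -sigma_s * val

for x in (R + 1, 2*R, 5*R, 20*R):
    lhs, rhs = frac_lap_GR(x), -C2 * abs(x)**(-(1 + 2*s))
    print(f"x = {x:8.2f}: (-Delta)^s G_R = {lhs:+.3e} <= -C2|x|^(-1-2s) = {rhs:+.3e}: {lhs <= rhs}")
```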
Secondly, we need to estimate the fractional Laplacian of a negative power function. The following result is similar to one proven by Bonforte and Vázquez in Lemma 2.1 from [5], with the main difference that our function is C^2 away from the origin. We make a brief adaptation of their proof to our situation.

Lemma 9.2. Let ϕ : R → (0, ∞), ϕ(x) = (|x| + ξ)^{−γ}, where γ > 1 and ξ > 0. Then, for all |x| ≥ |x_0| > 1, we have that
(9.1)  |(−Δ)^s ϕ(x)| ≤ C |x|^{−(1+2s)},
with a positive constant C > 0 that depends only on γ, ξ and s.

Proof. Let us first estimate the L^1 norm of ϕ:
∫_R ϕ(x) dx = ∫_{|x|<1} ϕ(x) dx + ∫_{|x|>1} ϕ(x) dx ≤ ∫_{|x|<1} ξ^{−γ} dx + ∫_{|x|>1} |x|^{−γ} dx ≤ 2ξ^{−γ} + 2 ∫_1^∞ r^{−γ} dr = 2ξ^{−γ} + 2/(γ − 1) = C(γ, ξ).
Following the ideas of [5, Lemma 2.1], the computation of (−Δ)^s ϕ(x) is based on splitting the integral over the regions R_1 = {y : |y| > 3|x|/2}, R_3 = {y : |x − y| < |x|/2}, R_4 = {y : |y| < |x|/2} and the remaining intermediate region R_2, and estimating the corresponding integrals I, II, III, IV. For IV we use that when |y| < |x|/2, then |y − x| ≥ |x|/2 and |y| < |x|, which implies ϕ(y) > ϕ(x); we obtain IV ≤ K_4 |x|^{−(1+2s)}, with K_4 = K_4(γ, s, ξ).
Since γ > 1, we can conclude that
|(−Δ)^s ϕ(x)| ≤ |I| + |II| + |III| + |IV| ≤ K_5 |x|^{−γ−2s} + K_4 |x|^{−1−2s} ≤ K_6 |x|^{−1−2s}  for all |x| ≥ |x_0| > 1.
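The bound (9.1) can likewise be checked by quadrature, using the symmetrized (second-difference) form of the one-dimensional fractional Laplacian, which avoids the principal value. The parameter values below are illustrative.

```python
# Minimal numerical sketch of the bound (9.1): approximate (-Delta)^s phi for
# phi(x) = (|x| + xi)^(-gamma) and compare with |x|^(-(1+2s)).
import numpy as np
from scipy.special import gamma as Gamma
from scipy.integrate import quad

s, gam, xi = 0.4, 3.0, 1.0
sigma_s = 2**(2*s) * Gamma((1 + 2*s)/2) / (np.pi**0.5 * abs(Gamma(-s)))
phi = lambda x: (np.abs(x) + xi)**(-gam)

def frac_lap_phi(x):
    # (-Delta)^s phi(x) = sigma_s * int_0^inf [2 phi(x) - phi(x+z) - phi(x-z)] / z^(1+2s) dz;
    # the bracket is O(z^2) near z = 0, so no principal value is needed.
    integrand = lambda z: (2*phi(x) - phi(x + z) - phi(x - z)) / z**(1 + 2*s)
    val, _ = quad(integrand, 0, np.inf, limit=400)
    return sigma_s * val

for x in (2.0, 5.0, 10.0, 50.0):
    ratio = abs(frac_lap_phi(x)) * x**(1 + 2*s)
    print(f"x = {x:5.1f}: |(-Delta)^s phi(x)| * x^(1+2s) = {ratio:.4f}")
# The printed quantity stays bounded as x grows, as predicted by Lemma 9.2.
```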
Reminder on cut-off functions
We recall the construction of cut-off functions. Let
f(x) = { e^{−1/x}, x > 0;  0, x ≤ 0 }.
Then f ∈ C^∞(R). Let
F(x) = f(x) / ( f(x) + f(1 − x) ),  x ∈ R.
Then F(x) = 0 for x ≤ 0, F(x) = 1 for x ≥ 1 and F(x) ∈ (0, 1) for x ∈ (0, 1). We now construct the cut-off function ϕ : R^N → [0, 1] by ϕ(x) = F(2 − |x|), x ∈ R^N.
Then ϕ ∈ C ∞ (R N ), ϕ(x) = 1 for |x| ≤ 1, ϕ(x) = 0 for |x| ≥ 2 and ϕ(x) ∈ (0, 1) for |x| ∈ (1, 2). The cut-off function for B R is obtained by ϕ R (x) = ϕ(x/R).
Thus ϕ R ∈ C ∞ (R N ), ϕ R (x) = 1 for |x| ≤ R, ϕ R (x) = 0 for |x| ≥ 2R and ϕ R (x) ∈ (0, 1) for |x| ∈ (R, 2R). Also, we have that ∇(ϕ R ) = O(R −1 ), ∆(ϕ R ) = O(R −2 ).
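For completeness, the following sketch implements this construction (in one dimension for the finite-difference check) and verifies the scalings ∇ϕ_R = O(R^{−1}) and Δϕ_R = O(R^{−2}) numerically.

```python
# Minimal sketch of the cut-off construction above (radial profile, checked in 1D).
import numpy as np

def f(x):
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, np.exp(-1.0 / np.where(x > 0, x, 1.0)), 0.0)

def F(x):
    return f(x) / (f(x) + f(1.0 - x))

def phi(x):                 # phi(x) = F(2 - |x|): 1 for |x| <= 1, 0 for |x| >= 2
    return F(2.0 - np.abs(x))

def phi_R(x, R):
    return phi(x / R)

for R in (1.0, 4.0, 16.0, 64.0):
    x = np.linspace(-3 * R, 3 * R, 20001)
    grad = np.gradient(phi_R(x, R), x)
    lap = np.gradient(grad, x)
    print(f"R = {R:6.1f}: R*max|grad| = {R*np.max(np.abs(grad)):.3f},"
          f"  R^2*max|lap| = {R**2*np.max(np.abs(lap)):.3f}")
# The rescaled quantities stay essentially constant, i.e. grad(phi_R) = O(1/R)
# and Delta(phi_R) = O(1/R^2).
```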
9.3 Compact sets in the space L^p(0, T; B)
Necessary and sufficient conditions for compactness in the spaces L^p(0, T; B) are given by Simon in [28]. We now recall their applications to evolution problems. We consider spaces X ⊂ B ⊂ Y with compact embedding X → B.
Comments and open problems
• Case m ≥ 3. In this range of exponents the first energy estimate does not hold anymore. Therefore, we lose the compactness result needed to pass to the limit in the approximations to obtain a weak solution of the original problem. The second energy estimate is still true and gives partial compactness results. In our opinion, a suitable tool to replace the first energy estimate would be to prove the decay of some L^p norm. In that case we would also need a Stroock-Varopoulos type inequality for some approximation L^s of the fractional Laplacian. The technique of regularizing the kernel by convolution that we have used throughout this paper does not allow us to prove such an inequality. The idea, however, is to use a different approximation of the pressure term that is well suited to Stroock-Varopoulos type inequalities. Let us mention [14], where this kind of inequality is proved for a wider class of nonlocal operators including L^s. The technical details are involved and the new approximation may have independent interest, hence we think it deserves a separate study.
• Infinite propagation in higher dimensions for self similar solutions. In [30] we proved a transformation formula between self-similar solutions of the model (1.1) with 1 < m < 2 and the fractional porous medium equation u t + (−∆) s u m = 0. This way we obtain infinite propagation for self similar solutions of the form U (x, t) = t −α F (|x|t −α/N ) in R N . This is a partial confirmation that the property of the infinite speed of propagation holds in higher dimensions for every solution of (1.1) with 1 < m < 2.
• Explicit solutions. Y. Huang reports [21] the explicit expression of the Barenblatt solution for the special value of m, m ex = (N + 6s − 2)/(N + 2s). The profile is given by
F M (y) = λ (R 2 + |y| 2 ) −(N +2s)/2 ,
where the two constants λ and R are determined by the total mass M of the solution and the parameter β.
Note that for s = 1/2 we have m_ex = 1, and the solution corresponds to the linear case u_t + (−Δ)^{1/2} u = 0, with profile F_{1/2}(r) = C(a^2 + r^2)^{−(N+1)/2}.
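The exponent relation and the reported profile are easy to inspect numerically; in the sketch below λ and R are arbitrary placeholders (in [21] they are fixed by the mass M and the parameter β).

```python
# Quick sketch: the special exponent m_ex = (N + 6s - 2)/(N + 2s) and the profile
# F_M(y) = lam * (R^2 + |y|^2)^(-(N+2s)/2) reported by Huang [21].
import numpy as np

def m_ex(N, s):
    return (N + 6*s - 2) / (N + 2*s)

for N in (1, 2, 3):
    print(f"N={N}: " + ", ".join(f"m_ex(s={s})={m_ex(N, s):.3f}" for s in (0.25, 0.5, 0.75)))
# For s = 1/2, m_ex = 1 for every N, matching the linear case noted above.

lam, R, N, s = 1.0, 1.0, 1, 0.5        # placeholder constants
F = lambda y: lam * (R**2 + np.abs(y)**2) ** (-(N + 2*s) / 2)
print(F(np.array([0.0, 1.0, 10.0])))   # the profile decays like |y|^(-(N+2s))
```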
• Different generalizations of model (CV) are worth studying:
(i) Changing-sign solutions for the problem ∂ t u = ∇ · (|u|∇p), p = (−∆) −s u.
(ii) Starting from the Problem (CV), an alternative is to consider the problem u t = ∇ · (|u|∇(−∆) −s (|u| m−2 u)), x ∈ R N , t > 0, with m > 1. This problem has been studied by Biler, Imbert and Karch in [3]. They construct explicit compactly supported self-similar solutions which generalize the Barenblatt profiles of the PME. In a later work by Imbert [22], finite speed of propagation is proved for general solutions.
(iii) We should consider combining the above models into ∂ t u = ∇(|u| m−1 ∇p), p = (−∆) −s u. When s = 0 and m = 2 we obtain the signed porous medium equation ∂ t u = ∆(|u| m−1 u).
Definition 1.1. Let m > 1. We say that u is a weak solution of (1.1) in Q_T = R^N × (0, T) with nonnegative initial data u_0 ∈ L^1(R^N) if (i) u ∈ L^1(Q_T), (ii) ∇K_s[u] ∈ L^1([0, T) : L^1_loc(R^N)), (iii) u^{m−1}∇K_s[u] ∈ L^1(Q_T), and (iv) …
Theorem 1.2. Let m ∈ (1, 2), N ≥ 1, and let u_0 ∈ L^1(R^N) ∩ L^∞(R^N). Then there exists a weak solution u of equation (1.1) with initial data u_0 such that u ∈ L^1(Q_T) ∩ L^∞(Q_T) and ∇H_s[u] ∈ L^2(Q_T). Moreover, u has the following properties: 1. (Conservation of mass) For all t > 0 we have … 2. (L^∞ estimate) For all t > 0 we have ‖u(·, t)‖_∞ ≤ ‖u_0‖_∞. 3. (First energy estimate) For all t > 0, …, with C = (2 − m)(3 − m) > 0. 4. (Second energy estimate) For all t > 0, …
Theorem 1.3. Let m ∈ [2, 3) and let u_0 ∈ L^1(R^N) ∩ L^∞(R^N) be such that (1.3) 0 ≤ u_0(x) ≤ A e^{−a|x|} for some A, a > 0. Then there exists a weak solution u of equation (1.1) with initial data u_0 such that u ∈ L^1(Q_T) ∩ L^∞(Q_T), ∇H_s[u] ∈ L^2(Q_T) and u satisfies properties 1, 2, 4 of Theorem 1.2. Moreover, the solution decays exponentially in |x| and the first energy estimate holds in the form |C| ∫_0^t ∫_{R^N} |∇H_s[u]|^2 dx dt + … (·, t)^{3−m} dx …, where C = C(m) = (2 − m)(3 − m).
increasing in time. Moreover, by Sobolev Inequality (2.3) applied to the function f = u (m+p−1)/2 , we obtain that
Theorem 6.3. Let m > 1 and u_0 ∈ L^1(R^N) ∩ L^∞(R^N) be non-negative. Then there exists a weak solution U_3 of Problem (P_μδ) posed in R^N × (0, T) with initial data u_0. Moreover, U_3 ∈ L^∞([0, ∞) : L^1(R^N)), and for all t > 0 we have …
∇H s [U 3 ]
3∈ L 2 ((0, T ) : L 2 (R N )) uniformly on µ. Then ∇K s [U 3 ] = H s [∇H s [U 3 ]] ∈ L 2 ((0, T ) : H s (R N )). Since for any bounded domain Ω, H s (Ω) is compactly embedded in L 2 (Ω) then ∇K s [U 3 ] → ∇K s [U 4
2, the operator (∇K s ) is approximated by (∇K s ) defined as(∇K s ) [u] = (∇K s ) * u where (∇K s ) = ρ * (∇K s). Note that (6.13) (∇K s ) →0 −→ ∇K s in L 1 loc (R), since ∇K s ∈ L 1 loc (R). It is still true that (∇K s )[u] = H s [∇H s [u]] ,
Theorem 7.1. Let m ≥ 2. Assume u is a bounded solution, 0 ≤ u ≤ L, of Equation (1.1) with K = (−Δ)^{−s}, 0 < s < 1 (0 < s < 1/2 if N = 1), as constructed in Theorem 1.3. Assume that u_0 has compact support. Then u(·, t) is compactly supported for all t > 0. More precisely, if 0 < s < 1/2 and u_0 is below the "parabola-like" function U_0(x) = a(|x| − b)^2, …
By scaling we may put a = L = 1. We denote by (x c , t c ) this contact point where we have u(x c , t c ) = U (x c , t c ) = (b + Ct c − |x c |) 2 . The contact can not be at the vanishing point |x f (t c )| := b + Ct c of the barrier and this will be proved in Lemma 7.2. We consider that x c lies at a distance h > 0 from |x f (t c )| = b + Ct c (the boundary of the support of the parabola U (x, t) at time t c ), that is b + Ct c − |x c | = h > 0.
A, B and T we obtain A = L, B = ( a/L) 1/2 and then T = L m−2+s a 1−s . Then u(x, t) is below the upper barrier U (x, t) = a( Ct − (|x| − b)) 2 where the new speed is given byC = CA m−1 B 1−2s = CL m−
Corollary 7.3 (Growth estimates of the support). Let u_0 be bounded with u_0(x) = 0 for |x| > R for some R > 0. If (x, t) ∈ FB(u) then x ≤ R + C t^{1/(2−2s)}, where C = C(‖u_0‖_∞, N, s).
−
ae −atc F (|x c |) ≥ (m − 1)e −a(m−2)tc F (|x|) m−2 e −atc F (|x c |)p r + e −a(m−1)tc F (|x c |) m−1 ∆p.
Theorem 8.1 (Infinite speed of propagation). Let v be the solution of Problem (8.3)-(8.4), and assume that u_0 ≥ 0 is compactly supported. Then 0 < v(x, t) < M for all t > 0 and x ∈ R.
Figure 1: Typical compactly supported initial data for models (8.1) and (8.3).
Proposition 8.2. The solution v : [0, T] × R → [0, ∞) of Problem (8.3)-(8.4), given by v(x, t) = ∫_{−∞}^{x} u(y, t) dy, is continuous in space and time.
|∂ x (−∆) −s u| 2 dxdt < +∞. II. Continuity in time. Let (x 0 , t 0 ) ∈ R × [0, T ]. Let (x, t 1 ) ∈ R × [0, T ] and h := x − x 0 . Let B = [x 0 , x 1 ].
(Q) = {upper semi-continuous functions u : Q → R}, LSC(Q) = {lower semi-continuous functions u : Q → R}, C(Q) = {continuous functions u : Q → R}. Definition 8.3.
Proposition 8.4 (Existence of viscosity solutions). Let u be a weak solution for Problem (1.1). Then v defined by the formula v(x, t) = ∫_{−∞}^{x} u(y, t) dy is a viscosity solution for Problem (8.3)-(8.4). Proof. By Proposition 8.2 we know that v ∈ C([0, T] : C(R)). The idea is to obtain a viscosity solution by an approximation process. Let v_ε be defined by v_ε(x, t) = ∫_{−∞}^{x} u_ε(y, t) dy, where u_ε is the approximation of u as in Section 4. Then v_ε is a classical solution, in particular a viscosity solution, to the problem …
Proposition 8.5 (Comparison Principle). Let m ∈ (1, 2), α ∈ (0, 1), N = 1. Let w be a sub-solution and W be a super-solution in the viscosity sense of equation (8.3).
Figure 2: Comparison with the barrier at time t. Figure 3: Comparison with the barrier at time t.
I = ∫_{|y|>3|x|/2} (ϕ(x) − ϕ(y)) / |x − y|^{1+2s} dy ≤ ω_d ϕ(x) ∫_{3|x|/2}^{∞} r^{−(1+2s)} dr = K_1 |x|^{−(γ+2s)}, with K_1 = K_1(γ, s), and III = ∫_{R_3} (ϕ(x) − ϕ(y)) / |x − y|^{1+2s} dy ≤ ‖ϕ‖_{L^∞(B_{|x|/2}(x))} … ≤ K_3 |x|^{−(γ+2s)}, with K_3 = K_3(γ, s).
Lemma 9.3. Let F be a bounded family of functions in L^p(0, T; X), where 1 ≤ p < ∞, and let ∂F/∂t = {∂f/∂t : f ∈ F} be bounded in L^1(0, T; Y). Then the family F is relatively compact in L^p(0, T; B).
Lemma 9.4. Let F be a bounded family of functions in L^∞(0, T; X) and ∂F/∂t be bounded in L^r(0, T; Y), where r > 1. Then the family F is relatively compact in C(0, T; B).
Acknowledgments. The authors were partially supported by the Spanish project MTM2011-24696. The second author is also supported by an FPU grant from MECD, Spain.
Second-order elliptic integro-differential equations: viscosity solutions' theory revisited. G Barles, C Imbert, Ann. Inst. H. Poincaré Anal. Non Linéaire. 25G. Barles and C. Imbert, Second-order elliptic integro-differential equations: viscosity solutions' theory revisited, Ann. Inst. H. Poincaré Anal. Non Linéaire, 25 (2008), 567-585.
Barenblatt profiles for a nonlocal porous medium equation. P Biler, C Imbert, G Karch, C. R. Math. Acad. Sci. 349P. Biler, C. Imbert, and G. Karch, Barenblatt profiles for a nonlocal porous medium equation, C. R. Math. Acad. Sci. Paris, 349 (2011), 641-645.
The nonlocal porous medium equation: Barenblatt profiles and other weak solutions. P Biler, C Imbert, G Karch, Arch. Ration. Mech. Anal. 215P. Biler, C. Imbert, and G. Karch,The nonlocal porous medium equation: Barenblatt profiles and other weak solutions, Arch. Ration. Mech. Anal., 215 (2015), 497-529.
Nonlinear diffusion of dislocation density and self-similar solutions. P Biler, G Karch, R Monneau, Comm. Math. Phys. 294P. Biler, G. Karch, and R. Monneau, Nonlinear diffusion of dislocation density and self-similar solutions, Comm. Math. Phys., 294 (2010), 145-168.
Quantitative local and global a priori estimates for fractional nonlinear diffusion equations. M Bonforte, J Vázquez, Adv. Math. 250M. Bonforte and J. Vázquez, Quantitative local and global a priori estimates for fractional nonlinear diffusion equations, Adv. Math., 250 (2014), 242-284.
Front propagation in Fisher-KPP equations with fractional diffusion. X Cabré, J M Roquejoffre, Comm. Math. Phys. 320X. Cabré and J. M. Roquejoffre, Front propagation in Fisher-KPP equations with fractional diffusion, Comm. Math. Phys., 320 (2013), 679-722.
An extension problem related to the fractional Laplacian. L Caffarelli, L Silvestre, Comm. Partial Differential Equations. 327-9L. Caffarelli and L. Silvestre. An extension problem related to the fractional Laplacian. Comm. Partial Differential Equations, 32 (2007), no.7-9:1245-1260.
Regularity of solutions of the fractional porous medium flow. L Caffarelli, F Soria, J L Vázquez, J. Eur. Math. Soc. 15JEMS)L. Caffarelli, F. Soria and J. L. Vázquez, Regularity of solutions of the fractional porous medium flow, J. Eur. Math. Soc. (JEMS), 15 (2013), 1701-1746.
Regularity of solutions of the fractional porous medium flow with exponent 1/2, Algebra i Analiz. L Caffarelli, J Vázquez, arXiv:1409.8190St. Petersburg Mathematical Journal]. 273to appearL. Caffarelli and J. Vázquez, Regularity of solutions of the fractional porous medium flow with exponent 1/2, Algebra i Analiz [St. Petersburg Mathematical Journal], 27 (2015), no. 3 (volumen in honor of Nina Uraltseva), to appear. arXiv:1409.8190.
Nonlinear porous medium flow with fractional potential pressure. L Caffarelli, J L Vazquez, Arch. Ration. Mech. Anal. 202L. Caffarelli and J. L. Vazquez, Nonlinear porous medium flow with fractional potential pressure, Arch. Ration. Mech. Anal., 202 (2011), 537-565.
Asymptotic behaviour of a porous medium equation with fractional diffusion. L A Caffarelli, J L Vázquez, Discrete Contin. Dyn. Syst. 29L. A. Caffarelli and J. L. Vázquez, Asymptotic behaviour of a porous medium equation with fractional diffusion, Discrete Contin. Dyn. Syst., 29 (2011), 1393-1404.
Exponential convergence towards stationary states for the 1D porous medium equation with fractional pressure. J A Carrillo, Y Huang, M C Santos, J L Vázquez, J. Differential Equations. 258J. A. Carrillo, Y. Huang, M. C. Santos, and J. L. Vázquez, Exponential convergence towards stationary states for the 1D porous medium equation with fractional pressure, J. Differential Equations, 258 (2015), 736-763.
User's guide to viscosity solutions of second order partial differential equations. M Crandall, H Ishii, P Lions, Bull. Amer. Math. Soc. (N.S.). 27M. Crandall, H. Ishii, and P. Lions, User's guide to viscosity solutions of second order partial differ- ential equations, Bull. Amer. Math. Soc. (N.S.), 27 (1992)), 1-67.
Uniqueness and existence for very general nonlocal equations of porous medium type. J Endal, E R Jakobsen, F Del Teso, in preparationJ. Endal, E. R. Jakobsen and F. del Teso, Uniqueness and existence for very general nonlocal equa- tions of porous medium type, in preparation.
A fractional porous medium equation. A De Pablo, F Quirós, A Rodríguez, J Vázquez, Adv. Math. 226A. de Pablo, F. Quirós, A. Rodríguez, and J. Vázquez, A fractional porous medium equation, Adv. Math., 226 (2011), 1378-1409.
A general fractional porous medium equation. A De Pablo, F Quirós, A Rodríguez, J Vázquez, Comm. Pure Appl. Math. 65A. de Pablo, F. Quirós, A. Rodríguez, and J. Vázquez, A general fractional porous medium equation, Comm. Pure Appl. Math., 65 (2012), 1242-1284.
Finite difference method for a fractional porous medium equation, Calcolo. F Teso, F. del Teso, Finite difference method for a fractional porous medium equation, Calcolo, 51 (2014), 615- 638.
Finite difference method for a general fractional porous medium equation. F Teso, J L Vázquez, arXiv:1307.2474F. del Teso and J. L. Vázquez, Finite difference method for a general fractional porous medium equation, arXiv:1307.2474., (2013).
Hitchhiker's guide to the fractional Sobolev spaces. E Di Nezza, G Palatucci, E Valdinoci, Bull. Sci. Math. 136E. Di Nezza, G. Palatucci, and E. Valdinoci, Hitchhiker's guide to the fractional Sobolev spaces, Bull. Sci. Math., 136 (2012), 521-573.
Dislocation group dynamics iii. Similarity solutions of the continuum approximation, Philosophical Magazine. A Head, 26A. Head, Dislocation group dynamics iii. Similarity solutions of the continuum approximation, Philosoph- ical Magazine, 26 (1972), 65-72.
Explicit Barenblatt profiles for fractional porous medium equations. Y Huang, Bull. Lond. Math. Soc. 46Y. Huang, Explicit Barenblatt profiles for fractional porous medium equations, Bull. Lond. Math. Soc., 46 (2014), 857-869.
Finite speed of propagation for a non-local porous medium equation, preprint. C Imbert, C. Imbert, Finite speed of propagation for a non-local porous medium equation, preprint, http://arxiv.org/abs/1411.4752.
Homogenization of first order equations with (u/ )-periodic Hamiltonians. II. Application to dislocations dynamics. C Imbert, R Monneau, E Rouy, Comm. Partial Differential Equations. 33C. Imbert, R. Monneau, and E. Rouy, Homogenization of first order equations with (u/ )-periodic Hamiltonians. II. Application to dislocations dynamics, Comm. Partial Differential Equations, 33 (2008), 479-516.
A "maximum principle for semicontinuous functions" applicable to integro-partial differential equations. E R Jakobsen, K H Karlsen, NoDEA Nonlinear Differential Equations Appl. 13E. R. Jakobsen and K. H. Karlsen, A "maximum principle for semicontinuous functions" applicable to integro-partial differential equations, NoDEA Nonlinear Differential Equations Appl., 13 (2006), 137-165.
Asymptotic behaviour of solutions of the porous medium equation with changing sign. S Kamin, J L Vázquez, SIAM J. Math. Anal. 22S. Kamin and J. L. Vázquez, Asymptotic behaviour of solutions of the porous medium equation with changing sign, SIAM J. Math. Anal., 22 (1991), 34-45.
Translated from the Russian by A. P. Doohovskoy, Die Grundlehren der mathematischen Wissenschaften. N S Landkof, Springer-Verlag180New York-HeidelbergFoundations of modern potential theoryN. S. Landkof, Foundations of modern potential theory, Springer-Verlag, New York-Heidelberg, 1972. Translated from the Russian by A. P. Doohovskoy, Die Grundlehren der mathematischen Wissenschaften, Band 180.
Une méthode particulaire déterministe pour deséquations diffusives non linéaires. P Lions, S Mas-Gallic, C. R. Acad. Sci. Paris Sér. I Math. 332P. Lions and S. Mas-Gallic, Une méthode particulaire déterministe pour deséquations diffusives non linéaires, C. R. Acad. Sci. Paris Sér. I Math., 332 (2001), 369-376.
Compact sets in the space L p (0, T ; B). J Simon, Ann. Mat. Pura Appl. 146J. Simon, Compact sets in the space L p (0, T ; B), Ann. Mat. Pura Appl., 146 (1987), 65-96.
Finite and infinite speed of propagation for porous medium equations with fractional pressure. D Stan, F Teso, J L Vázquez, C. R. Math. Acad. Sci. 352D. Stan, F. del Teso, and J. L. Vázquez, Finite and infinite speed of propagation for porous medium equations with fractional pressure, C. R. Math. Acad. Sci. Paris, 352 (2014), 123-128.
Transformations of self-similar solutions for porous medium equations of fractional type. D Stan, F D Teso, J L Vázquez, Nonlinear Anal. 119D. Stan, F. d. Teso, and J. L. Vázquez, Transformations of self-similar solutions for porous medium equations of fractional type, Nonlinear Anal., 119 (2015), 62-73.
The Fisher-KPP Equation with Nonlinear Fractional Diffusion. D Stan, J L Vázquez, SIAM J. Math. Anal. 46D. Stan and J. L. Vázquez, The Fisher-KPP Equation with Nonlinear Fractional Diffusion, SIAM J. Math. Anal., 46 (2014), 3241-3276.
Barenblatt solutions and asymptotic behaviour for a nonlinear fractional heat equation of porous medium type. J L Vázquez, J. Eur. Math. Soc. 16JEMS)J. L. Vázquez, Barenblatt solutions and asymptotic behaviour for a nonlinear fractional heat equation of porous medium type, J. Eur. Math. Soc. (JEMS), 16 (2014), 769-803.
|
[] |
[
"Optimizing directed self-assembled morphology",
"Optimizing directed self-assembled morphology"
] |
[
"Jian Qin \nInstitute for Molecular Engineering\nJames Franck Institute\nUniversity of Chicago\n60637ChicagoIL\n",
"Gurdaman S Khaira \nInstitute for Molecular Engineering\nJames Franck Institute\nUniversity of Chicago\n60637ChicagoIL\n",
"Yongrui Su \nInstitute for Molecular Engineering\nJames Franck Institute\nUniversity of Chicago\n60637ChicagoIL\n",
"Grant P Garner \nInstitute for Molecular Engineering\nJames Franck Institute\nUniversity of Chicago\n60637ChicagoIL\n",
"Marc Miskin \nJuan J. de Pablo Institute for Molecular Engineering\nUniversity of Chicago\nUniversity of Chicago\n60637, 60637ChicagoIL, IL\n",
"Heinrich M Jaeger \nJuan J. de Pablo Institute for Molecular Engineering\nUniversity of Chicago\nUniversity of Chicago\n60637, 60637ChicagoIL, IL\n"
] |
[
"Institute for Molecular Engineering\nJames Franck Institute\nUniversity of Chicago\n60637ChicagoIL",
"Institute for Molecular Engineering\nJames Franck Institute\nUniversity of Chicago\n60637ChicagoIL",
"Institute for Molecular Engineering\nJames Franck Institute\nUniversity of Chicago\n60637ChicagoIL",
"Institute for Molecular Engineering\nJames Franck Institute\nUniversity of Chicago\n60637ChicagoIL",
"Juan J. de Pablo Institute for Molecular Engineering\nUniversity of Chicago\nUniversity of Chicago\n60637, 60637ChicagoIL, IL",
"Juan J. de Pablo Institute for Molecular Engineering\nUniversity of Chicago\nUniversity of Chicago\n60637, 60637ChicagoIL, IL"
] |
[] |
Directed assembly of block polymers is rapidly becoming a viable strategy for lithographic patterning of nanoscopic features. One of the key attributes of directed assembly is that an underlying chemical or topographic substrate pattern used to direct assembly need not exhibit a direct correspondence with the sought after block polymer morphology, and past work has largely relied on trial-and-error approaches to design appropriate patterns. In this work, a computational evolutionary strategy is proposed to solve this optimization problem. By combining the Cahn-Hilliard equation, which is used to find the equilibrium morphology, and the covariance-matrix evolutionary strategy, which is used to optimize the combined outcome of particular substrate-copolymer combinations, we arrive at an efficient method for design of substrates leading to non-trivial, desirable outcomes.
| null |
[
"https://arxiv.org/pdf/1308.0622v1.pdf"
] | 118,601,933 |
1308.0622
|
edfbda61493661036f2a78c21bf41dae0d39253b
|
Optimizing directed self-assembled morphology
2 Aug 2013
Jian Qin
Institute for Molecular Engineering
James Franck Institute
University of Chicago
60637ChicagoIL
Gurdaman S Khaira
Institute for Molecular Engineering
James Franck Institute
University of Chicago
60637ChicagoIL
Yongrui Su
Institute for Molecular Engineering
James Franck Institute
University of Chicago
60637ChicagoIL
Grant P Garner
Institute for Molecular Engineering
James Franck Institute
University of Chicago
60637ChicagoIL
Marc Miskin
Juan J. de Pablo Institute for Molecular Engineering
University of Chicago
University of Chicago
60637, 60637ChicagoIL, IL
Heinrich M Jaeger
Juan J. de Pablo Institute for Molecular Engineering
University of Chicago
University of Chicago
60637, 60637ChicagoIL, IL
Optimizing directed self-assembled morphology
2 Aug 2013
Directed assembly of block polymers is rapidly becoming a viable strategy for lithographic patterning of nanoscopic features. One of the key attributes of directed assembly is that an underlying chemical or topographic substrate pattern used to direct assembly need not exhibit a direct correspondence with the sought after block polymer morphology, and past work has largely relied on trial-and-error approaches to design appropriate patterns. In this work, a computational evolutionary strategy is proposed to solve this optimization problem. By combining the Cahn-Hilliard equation, which is used to find the equilibrium morphology, and the covariance-matrix evolutionary strategy, which is used to optimize the combined outcome of particular substrate-copolymer combinations, we arrive at an efficient method for design of substrates leading to non-trivial, desirable outcomes.
Introduction
Lithography represents one of the key fabrication steps for nanoscopic devices, ranging from electronic circuits to storage media. [1] As critical dimensions continue to shrink, alternative patterning strategies and materials are being sought to circumvent some of the patterning challenges that arise at small length scales. These include roughness, pattern collapse, and defectivity. In recent years, directed assembly of block copolymers on topographic or chemical patterns has received considerable attention as a viable and promising patterning approach for lithographic patterning of ultra-small features. Block polymers are known to spontaneously self-assemble into a wide range of ordered morphologies, including lamellar, cylindrical, or spherical structures. [1,2] In thin films, that self-assembly can be guided through the use of chemical or topographic patterns on the underlying substrate. Past work has shown that it is possible to direct the assembly of simple diblock copolymers and their blends with homopolymers into all of the canonical features that arise in integrated circuits, such as lines, bends, jogs, and spots. An important concept in directed self-assembly is that of pattern interpolation, in which only a subset of any desirable features appears on the substrate, and the block copolymer is used to fill-in the rest, thereby adding information into the fabrication process. For example, to produce a dense array of parallel lines, one need only use a surface pattern that includes a fraction of the lines (e.g. one fourth). Such lines then serve as guiding stripes for a lamellar forming block copolymer having a period that is four times smaller than the spacing between the surface lines.
Dense, periodic arrays of lines or spots are of considerable interest for applications in dense storage media. [3][4][5] For more complex layouts, such as those encountered in logic devices, a central challenge is to guide the materials to assemble into aperiodic, more versatile, and more complicated morphologies or geometries. Within the spirit of density interpolation, the underlying pattern used to guide the assembly need not have a one-to-one correspondence with the geometry of interest; the question that arises then is, for a target morphology, how can one design an optimal sparse pattern to direct block copolymer self-assembly?
One could of course adopt a traditional inverse Monte Carlo algorithm, and rely on a random search of suitable patterns to find a plausible solution. This amounts to a computational trial-and-error search, akin to that performed in experiments. Such an approach has been proposed recently, [6] with good results, in the context of topographically directed assembly. In that work, a mean-field theory in two dimensions (SCFT) [7] was combined with a random search algorithm to identify topographic features leading to suitable morphologies. The disadvantage of such an approach, however, is that the search is blind and, for large parameter spaces, it can rapidly become intractable.
In this work, we propose to use a state-of-the-art optimization technique, namely the covariance-matrix adaptation evolutionary strategy (CMA-ES), [8] in combination with a mean field copolymer model, to identify optimal chemical patterns for assembly of block polymers into non-regular morphologies.
Methodology
For concreteness, the strategy presented in this work is described in the context of a surface pattern consisting of circular spots. Extensions to other types of patterns are trivial. The goal is to use the minimal number of spots on the surface to direct the assembly of a lamellar forming diblock copolymer into a target morphology. To determine the equilibrium copolymer morphology for a given placement of the surface spots, we use Ginzburg-Landau (GL) free energy functional, and we evolve morphology using the Cahn-Hilliard (CH) equation. [9] One could use other approaches, including more elaborate mean field theories, theoretically-informed approaches, or full-blown molecular simulations, but a generic GL-CH approach is simple and allows us to demonstrate the general principles put forth in this work without loss of generality. Furthermore, it facilitates studies of large systems, and enables simulations of the evolution dynamics of any given morphology. [10,11] In order to evolve a particular combination of spots and morphology in parameter space, we introduce an objective or "fitness" function that quantifies the difference between the equilibrium, instantaneous morphology and the target morphology. That function depends on the spot positions, with the number of spots held constant. The fitness function is then minimized with a covariance matrix adaptation evolutionary strategy (CMA-ES). [8] CMA-ES is based on the idea of "natural evolution", and has proven to be particularly efficient for optimization of complicated functions when little is known about the underlying landscape. [8]
Cahn-Hilliard equation for assembled morphology
For simplicity, we consider a system of pure diblock copolymers composed of A and B blocks. A and B type monomers have the same reference volume and statistical segment length. The total number of beads and the volume fraction of A blocks are denoted N and f , respectively. The excess free energy cost of creating an A/B contact is quantified by the Flory-Huggins parameter χ. The product χN controls the degree of phase separation; the higher its value, the stronger the tendency of A and B blocks are to segregate.
Following Shi et al., [12,13] we use the free energy form developed by Ohta and Kawasaki [14] to characterize the system morphology. This formalism is valid in the strong segregation regime (large χN ); it is expressed as the sum of three terms
F[φ] = F_GL[φ] + F_non-local[φ] + ∫ dr H_ext(r) φ(r).  (1)
Here φ(r) is the order parameter field quantifying the extent of phase separation, defined as the monomer volume fraction difference, φ(r) ≡ φ A (r)−φ B (r). H ext (r) is the external potential representing the interaction between the guiding spots and the copolymer; we use the hyperbolic tangent function introduced in ref. [12] ,
H_ext(r) = −(1/2) V_0 [ tanh((σ − |r − R|)/λ) + 1 ],
where R is the position of the spot center, V 0 and σ are the strength and range of the potential, and λ controls the steepness of the potential's decay.
The Ginzburg-Landau free energy F GL [φ] can be written as
F GL [φ] = dr 1 2 (∇φ) 2 + W (φ) .(2)
The gradient term represents the free energy cost associated with spatial inhomogeneities. The W(φ) term is the local free energy density that drives the phase separation. It depends on the Flory-Huggins χ-parameter, and contains only even powers of φ (this is true since exchanging A and B block labels has no physical consequence). W is generally assumed to be of the form (1/2)φ^2 + (g/4)φ^4. In this work, following Shi et al. [12], we set W = −A ln cosh(φ) + φ^2/2. Here the parameter A controls the degree of phase separation. W has one minimum at φ = 0 for A < 1 and has two minima for A > 1. The shapes of W at A = 0.5, 1.0, and 1.3 are shown in Fig. 1. The term F_non-local is the chain stretching energy describing the chain connectivity. It can be written as [12,14]
F_non-local = (α/2) ∫ dr ∫ dr′ δφ(r) G(r − r′) δφ(r′),  (3)
where δφ(r) ≡ φ(r) − φ̄ is the deviation from the homogeneous value, and φ̄ = 2f − 1. G(r, r′) satisfies −∇^2 G(r, r′) = δ(r − r′). For simplicity, we consider a two-dimensional representation here, where G(r, r′) = −ln(|r − r′|)/2π. [14] Equations (2) and (3) are purely phenomenological. The mean field free energy for a diblock copolymer melt may be mapped onto this form by using the explicit expressions in Ref. [14] (Eqs. (4.5-7)). By inspection, one can arrive at the following mapping rules: (1) the length scales in Eqs. (2) and (3) are in units of ξ_0, where ξ_0^2 is defined by R_g^2/(4f(1 − f)χ_s N) and R_g is the radius of gyration of the diblock copolymer; (2) the value of A equals χ/χ_s, where χ_s is the value of the mean field spinodal (χ_s = 10.5/N for f = 0.5); (3) the value of α equals 3/(16 f^3 (1 − f)^3 (χ_s N)^2). The extent of the phase separation is controlled by the value of χ or A. The equilibrium domain spacing is proportional to N^{2/3}(A − 1)^{1/6}. [14] To find the equilibrium morphology, we use the Cahn-Hilliard equation to evolve the φ field, which is appropriate for conserved order parameters and has been widely used to study the material structural evolution in phase field models. [15] The Cahn-Hilliard equation has the form
∂φ(r, t)/∂t = M ∇^2 ( δF[φ]/δφ(r, t) ),  (4)
where M is the effective mobility coefficient and is set to unity. Substituting the free energy expression into the Cahn-Hilliard equation, we get [12]
∂φ(r, t)/∂t = ∇^2 [ −∇^2 φ(r, t) − A tanh(φ(r, t)) + φ(r, t) ] + ∇^2 H_ext(r) − α δφ(r, t).  (5)
Since the Cahn-Hilliard equation is essentially a diffusion equation, the total monomer content in the system remains constant as time evolves, i.e., ∫ dr φ(r, t) = φ̄V, where V is the volume of the system. For a given block volume fraction f and given spot positions, the values of A, α, and H_ext(r) are fixed. The effect of f is implicit in the δφ term. For random initial field values satisfying the stoichiometric constraint, after a sufficiently long time, the Cahn-Hilliard equation will typically evolve the system into a local equilibrium state.
CMA-ES optimization
Let the target morphology be described by φ̂(r), and the equilibrium morphology under a given set of spot constraints be φ(r; {R_i}), where the dependence on the spot position vectors {R_i} is explicitly shown. The difference between φ and φ̂ can be quantified by
Ω({R_i}) ≡ ∫ dr [ φ(r; {R_i}) − φ̂(r) ]^2.  (6)
Our goal is to optimize spot positions by minimizing Ω.
We resort to the covariance matrix adaptation evolutionary strategy or CMA-ES to minimize Ω. CMA-ES belongs to a family of evolutionary optimization algorithms that mimic the principle of biological evolution. [8] Recently, it has been used with considerable success in the context of materials research for optimization of packing problems [16] and for crystal structure prediction [17] (the use of different variants of evolutionary algorithm have also been reported [18,19] ). It is iterative, stochastic, and does not require that the derivative of the objective function be evaluated. At each iteration stage or generation, a finite number (λ) of samples derived from the previous generation is allowed to mutate and recombine following a prescribed protocol; these offspring are then ranked according to the objective function, and the "best" µ offspring are used for the next generation iteration.
The key to implementing such an algorithm is designing an efficient protocol for mutation and recombination, which on the one hand maintains the population diversity, so that the system is not trapped into local extrema, and on the other hand ensures fast convergence in the neighborhood of the optimal extremum. In a naive random search, the older population is perturbed by independently distributed Gaussian random numbers. In CMA-ES, the correlation among different searching directions, as measured by the covariance matrix, is explicitly considered, and the covariance matrix adapted at each iteration step by "learning" from the fitness of the entire population. The idea is analogous to the approximation of the Hessian matrix in the quasi-Newton method in deterministic optimization. Figure 2: Convergence behavior of CMA-ES compared to a Gaussian random search, tested on the 12dimensional Rosenbrock function, which has a global minimum at zero when all its arguments equal unity. The range for the Gaussian random search is varied using the "1/5" acceptance rate rule [8] . The CMA-ES uses a evolution population of 28 members, out of which 4 best members are selected at each generation. In both cases, the initial point is generated at random.
Our implementation of the CMA-ES is based on the improved algorithm discussed in Ref. [20] . Most parameters required by the algorithm have been set on the basis of heuristic arguments. For our problem, the population size λ and the number of offspring used to generate new populations µ are 28 and 4, respectively (the ratio of the two was recommended to be 7 [20] ). To examine the convergence behavior of CMA-ES, in Fig. 2, we compared the optimization results obtained from CMA-ES and the Gaussian random search for the 12-dimensional Rosenbrock function. Although the exact shape of the convergence curve depends slightly on the location of the starting point, the curves in the figure are representative of the typical efficiency for both methods. It is apparent that the Gaussian random search is frequently trapped in local minima, whereas the CMA-ES is able to find the global minimum. Furthermore, the convergence rate is exponential after a sufficient number iterations.
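To illustrate the structure of the optimization loop, the following sketch implements a stripped-down isotropic (μ, λ) evolution strategy, a stand-in for full CMA-ES with no covariance-matrix adaptation, in which the Cahn-Hilliard solver is replaced by a cheap placeholder morphology so that the example is self-contained. The population sizes mirror the values quoted above; everything else (spot count, grid, bump widths) is an illustrative assumption.

```python
# Minimal sketch of the evolutionary loop used for the pattern design problem.
# An isotropic (mu, lambda) evolution strategy is used instead of full CMA-ES,
# and the CH solver is replaced by a sum of Gaussian bumps centred on the spots.
import numpy as np

N, box = 50, 50.0
n_spots, lam, mu, n_gen = 4, 28, 4, 150
rng = np.random.default_rng(1)
xx, yy = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")

def morphology(spots):
    """Placeholder for the equilibrium CH morphology given spot positions."""
    phi = np.zeros((N, N))
    for (rx, ry) in spots:
        phi += np.exp(-((xx - rx)**2 + (yy - ry)**2) / 20.0)
    return phi

target_spots = np.array([[15.0, 15.0], [15.0, 35.0], [35.0, 15.0], [35.0, 35.0]])
phi_target = morphology(target_spots)

def objective(flat):
    # discrete version of the fitness function Eq. (6)
    return np.sum((morphology(flat.reshape(n_spots, 2)) - phi_target)**2)

mean, sigma = rng.uniform(0, box, 2 * n_spots), 8.0
for gen in range(n_gen):
    pop = np.clip(mean + sigma * rng.standard_normal((lam, 2 * n_spots)), 0, box)
    fit = np.array([objective(p) for p in pop])
    mean = pop[np.argsort(fit)[:mu]].mean(axis=0)   # recombine the mu best offspring
    sigma *= 0.97                                   # crude step-size schedule
print("final objective:", objective(mean))
```

In the real problem, CMA-ES additionally adapts the full covariance matrix of the sampling distribution from the ranked offspring, which is what gives the fast local convergence seen in Fig. 2.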
Results
For the problem of interest here, solving the CH equation is the most computationally demanding step. With that issue in mind, in Sec. 3.1 we focus on the algorithm and optimize the parameters used to solve the CH equation. Then, in Sec. 3.3, we present the optimization results obtained using the CMA-ES algorithm.
Evolving Cahn-Hilliard equation
To solve the CH equation, we discretize the square-shaped simulation cell into an N-by-N grid, and approximate the gradient term in Eq. (5) using central differences. The composition field φ(r, t) is propagated in time using the forward Euler method, with δt as the time step (effectively, the mobility factor M can be absorbed into the definition of δt). The algorithm complexity scales with N^2, as confirmed by Fig. 3, in which the time spent on a fixed number of CH equation iterations is plotted versus the number of grid points N, on a logarithmic scale; the results can be fit with a straight line of slope 2.
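A minimal implementation of this scheme is sketched below: forward-Euler time stepping of Eq. (5) on a periodic N × N grid with a five-point Laplacian. The spot positions and the potential parameters V_0, σ and λ are illustrative placeholders; A, α, f and δt follow the values quoted in the text.

```python
# Minimal sketch of the numerical scheme: forward-Euler stepping of Eq. (5) on an
# N x N periodic grid with a central-difference (five-point) Laplacian.
import numpy as np

N, dx, dt, nsteps = 50, 1.0, 0.02, 4000
A, alpha, f = 1.3, 0.002, 0.5
V0, sig, lam = 1.0, 2.0, 1.0        # illustrative spot-potential parameters
phi_bar = 2.0 * f - 1.0

def lap(u):                          # periodic five-point Laplacian
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
            np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / dx**2

# external potential H_ext from a set of anchoring spots (here two spots)
x = np.arange(N) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
H = np.zeros((N, N))
for (rx, ry) in [(12.0, 25.0), (37.0, 25.0)]:
    d = np.sqrt((X - rx)**2 + (Y - ry)**2)
    H += -0.5 * V0 * (np.tanh((sig - d) / lam) + 1.0)

rng = np.random.default_rng(0)
phi = phi_bar + 0.01 * rng.standard_normal((N, N))
phi -= phi.mean() - phi_bar          # enforce the stoichiometric constraint

for step in range(nsteps):
    mu = -lap(phi) - A * np.tanh(phi) + phi      # local part of dF/dphi
    phi = phi + dt * (lap(mu) + lap(H) - alpha * (phi - phi_bar))

print("mean phi =", phi.mean(), "(conserved up to round-off)")
print("phi range:", phi.min(), phi.max())
```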
The other parameter to be optimized is the number of iteration steps. In our study, we want this number to be sufficiently large to allow the system to reach the equilibrium morphology for a given arrangement of the spots; however, we also want to avoid spending time on equilibrated morphologies. Fig. 5 shows the morphologies at different times along the same trajectory. It is clear that after at least 1000 iteration steps, which correspond to t = 20, the equilibrium morphology has been found. For the results presented in the next section, we used an iteration number of 4000, which ensures that local equilibrium morphologies are found.
Phase diagram in A − f plane
Before optimizing the spot positions using the evolutionary algorithm, we explored the effects of various controlling parameters in the generic CH equation. Fig. 6 shows the typical morphologies obtained for various values of A and f , and at a fixed value of α. As discussed in Sec. 2, f is the block volume fraction that controls the symmetry of the morphology, and A is analogous to the χ parameter, which controls the strength of block incompatibility. The results in Fig. 6 are consistent with the physical meaning of A and α. For A ≤ 1.0, homogeneous morphologies are found for all f values. For A > 1.0, the lamellar patterns are found at compositions close to f = 0.5, and the hexagonally packed cylindrical patterns are found at asymmetric compositions, even though in both cases the presence of defects is apparent.
In what follows, we focus on systems forming a lamellar morphology, and used the following parameters: A = 1.3, f = 0.5, α = 0.002. This set of parameter gives distinct lamellae having a natural periodicity of L 0 ≃ 20.
Optimization using evolutionary algorithm
We now present results obtained using the CMA-ES optimization. As mentioned above, the population size is λ = 28, and the number of fitting samples used to spawn new trajectories is µ = 4. The first target morphology defined here is a pattern mimicking the letter "I", shown in Fig. 7(b). The number of anchoring spots representing the chemical pattern is 9. Initial spot positions are generated at random, and the initial morphology is calculated by evolving the CH equation, as shown in the inset of 7(a). At each iteration step, the spots are repositioned using the CMA-ES algorithm, and the equilibrium morphology is generated by solving the CH equation. The values of the objective function are calculated using Eq. (6) and are plotted in Fig. 7(a), as a function of iteration number (also see the inset for a plot on a logarithmic scale). The results suggest that the magnitude of the objective function decays nearly exponentially, and that there exist two convergence rate regimes. The first (below 150 iterations) has a smaller slope; the second regime (above 150 iterations) has a greater slope. The existence of these two regimes mimics the behavior shown in Fig. 2, implying that the spot positions are first optimized globally, and then locally. The results also show that the optimal spot positions are identified within 250 evolution iterations, and that the residual value of the objective function drops to the level of 10 −8 . The final configuration and the corresponding spot positions are shown in Fig. 7(c). To verify that the solution identified by CMA-ES is indeed at least a local optimal, we performed the following test: we first place the spots at ideal positions that are likely to generate the "I" pattern, and then use the solution of the CH equation as the target morphology and re-iterate from random state. Since now the target morphology is a solution of the CH equation, it is also a well-defined minimum of the objective function Eq. (6), and ideally the minimum should be bracketed by the CMA-ES algorithm. This is indeed confirmed by our results. On the other hand, in general, the exact spot positions obtained from CMA-ES optimization depend slightly on the initial configuration. One way to reduce this dependence is to conduct multiple optimizations, and use the average. target target evolution evolution Figure 8: Target and optimal morphologies for the "M" and "E" patterns. The parameter set is the same as Fig. 7.
To further test the efficiency of the CMA-ES algorithm, we used several other nontrivial patterns. Two sets of target and optimized morphologies are shown in Fig. 8, which mimic the letters "M" and "E", respectively. Both of these two patterns are generated using the same parameter set as the pattern "I", and the convergence behaviors are similar.
Summary
We have presented a methodology to solve the pattern design problem by using a Cahn-Hilliard equation to find the equilibrium morphology of diblock copolymers and by using the CMA-ES algorithm to optimize the underlying chemical pattern. The applicability and usefulness of the proposed strategy were demonstrated for lamellar forming diblock copolymers, and three nontrivial target morphologies.
The size of the systems considered here was modest, about 2L 0 × 2L 0 , and the overall calculation time required to generate an optimal solution was approximately 8 hours on a single processor. Extension of the methodology to larger systems and different morphologies is straightforward. the computational efficiency of the proposed approach could be easily increased by using parallel algorithms: (1) As shown in Fig. 3, the numerical complexity for solving the CH equation scales with the system size (N 2 ). This step involves essentially matrix-vector products, and can be readily parallelized. (2) The CMA-ES essentially involves a set of independent populations, which can also be parallelized in a trivial way.
The objective function used in this work is the simplest that one can think of. More elaborate versions could of course be used. For instance, instead of calculating the difference in real space, one may consider the difference in Fourier mode coefficients. Assigning different weights to long and short wavelength modes may lead to more efficient optimization behavior.
The Cahn-Hilliard equation was used in this work to resolve the composition profile. As a generic framework, the equation also enables us to study assembly dynamics, and can be adapted to study more complex systems, including polymer blends. [13] These possibilities will be addressed in future work.
Figure 1: Local demixing free energy as a function of order parameter. Yellow: A = 0.5; red: A = 1.0; blue: A = 1.3.
Figure 3: Numerical complexity of the algorithm for the CH equation. The abscissa corresponds to the number of grid points per edge N. The ordinate axis corresponds to the time spent on 2 × 10^5 iteration steps. The dashed line has a slope of 2. Parameters: A = 1.3, α = 0.002, f = 0.5, and δt = 0.02.
Figure 4: Effects of δt on equilibrium morphology. Equilibrium morphologies obtained by propagating the CH equation with three different values of time step: 0.0002, 0.002, and 0.02. Parameters: A = 1.3.
Figure 5: Time dependence of the morphology evolution. Morphologies at different times along the same evolution trajectory using δt = 0.02. Parameters: A = 1.3, α = 0.002, N = 50.
Figure 6: 2-D phase diagram at varying values of A and f, for α = 0.02. Morphologies are obtained by evolving the CH equation for 2 × 10^8 steps from a random initial configuration (δt = 0.01). The grid size: 100 × 100.
Figure 7: Evolutionary results of the "I" pattern. (a) Evolution of the objective function; (b) the target morphology; (c) the optimal morphology and the spot positions. Parameters: A = 1.3, α = 0.002, N = 50.
Acknowledgement
Notes and references
. Y.-C Tseng, S B Darling, Polymers. 2Y.-C. Tseng and S. B. Darling, Polymers, 2010, 2, 470-489.
. M P Stoykovich, H Kang, K C Daoulas, G Liu, C.-C Liu, J J De Pablo, M Müller, P F Nealey, ACS Nano. 1M. P. Stoykovich, H. Kang, K. C. Daoulas, G. Liu, C.-C. Liu, J. J. de Pablo, M. Müller and P. F. Nealey, ACS Nano, 2007, 1, 168-175.
. J Y Cheng, C A Ross, V Z Chan, E L Thomas, R G H Lammertink, G J Vancso, Advanced Materials. 13J. Y. Cheng, C. A. Ross, V. Z.-H. Chan, E. L. Thomas, R. G. H. Lammertink and G. J. Vancso, Advanced Materials, 2001, 13, 1174-1178.
. K Naito, H Hieda, M Sakurai, Y Kamata, IEEE Trans. Magn. 38K. Naito, H. Hieda, M. Sakurai and Y. Kamata, IEEE Trans. Magn., 2002, 38, 1949-1951.
K W Guarini, C T Black, Y Zhang, I V Babich, E M Sikorski, L M Gignac, IEEE Tech. Dig. -Int. Electron Devices Meeting. K. W. Guarini, C. T. Black, Y. Zhang, I. V. Babich, E. M. Sikorski and L. M. Gignac, IEEE Tech. Dig. -Int. Electron Devices Meeting 2003, 2003, 541-544.
. A F Hannon, K W Gotrik, C A Ross, A Alexander-Katz, ACS Macro Letters. 2A. F. Hannon, K. W. Gotrik, C. A. Ross and A. Alexander-Katz, ACS Macro Letters, 2013, 2, 251-255.
. M W Matsen, J. Phys.: Condens. Matter. 21M. W. Matsen, J. Phys.: Condens. Matter, 2002, 14, R21.
A E Eiben, J E Smith, Introduction to Evolutionary Computing. SpringerA. E. Eiben and J. E. Smith, Introduction to Evolutionary Computing, Springer, 2003.
. J W Cahn, J E Hilliard, J. Chem. Phys. 28J. W. Cahn and J. E. Hilliard, J. Chem. Phys., 1958, 28, 258-267.
. S Qi, Z.-G Wang, Journal of Chemical Physics. 111S. Qi and Z.-G. Wang, Journal of Chemical Physics, 1999, 111, 10681-10688.
. K Yamada, M Nonomura, T Ohta, Macromolecules. 37K. Yamada, M. Nonomura and T. Ohta, Macromolecules, 2004, 37, 5762-5777.
. W Li, F Qiu, Y Yang, A.-C Shi, Macromolecules. 43W. Li, F. Qiu, Y. Yang and A.-C. Shi, Macromolecules, 2010, 43, 1644-1650.
. N Xie, W Li, F Qiu, A.-C Shi, Soft Matter. 9N. Xie, W. Li, F. Qiu and A.-C. Shi, Soft Matter, 2013, 9, 536-542.
. T Ohta, K Kawasaki, Macromolecules. 19T. Ohta and K. Kawasaki, Macromolecules, 1986, 19, 2621-2632.
. L.-Q Chen, Annu. Rev. Mater. Res. 32L.-Q. Chen, Annu. Rev. Mater. Res., 2002, 32, 113-140.
. M Z Miskin, H M Jaeger, Nature Materials. 12M. Z. Miskin and H. M. Jaeger, Nature Materials, 2013, 12, 326-331.
. A R Oganov, A O Lyakhov, M Valle, Accounts of chemical research. 44A. R. Oganov, A. O. Lyakhov and M. Valle, Accounts of chemical research, 2010, 44, 227-237.
. E Bianchi, G Doppelbauer, L Filion, M Dijkstra, G Kahl, J. Chem. Phys. 214102E. Bianchi, G. Doppelbauer, L. Filion, M. Dijkstra and G. Kahl, J. Chem. Phys., 2012, 134, 214102.
. M Dennison, K Milinković, M Dijkstra, J. Chem. Phys. 44507M. Dennison, K. Milinković and M. Dijkstra, J. Chem. Phys., 2012, 137, 044507.
. N Hansen, S D Müller, P Koumoutsakos, Evolutionary Computation. 11N. Hansen, S. D. Müller and P. Koumoutsakos, Evolutionary Computation, 2003, 11, 1-18.
|
[] |
[
"Magnetic Inhibition of Convection and the Fundamental Properties of Low-Mass Stars III. A Consistent 10 Myr Age for the Upper Scorpius OB Association",
"Magnetic Inhibition of Convection and the Fundamental Properties of Low-Mass Stars III. A Consistent 10 Myr Age for the Upper Scorpius OB Association"
] |
[
"Gregory A Feiden [email protected] \nDepartment of Physics & Astronomy\nUppsala University\nBox 516SE-751 20UppsalaSweden\n\nDepartment of Physics\nUniversity of North Georgia\n82 College Circle30597DahlonegaGAUSA\n"
] |
[
"Department of Physics & Astronomy\nUppsala University\nBox 516SE-751 20UppsalaSweden",
"Department of Physics\nUniversity of North Georgia\n82 College Circle30597DahlonegaGAUSA"
] |
[] |
When determining absolute ages of identifiably young stellar populations, results strongly depend on which stars are studied. Cooler (K, M) stars typically yield ages that are systematically younger than warmer (A, F, G) stars by a factor of two. I explore the possibility that these age discrepancies are the result of magnetic inhibition of convection in cool young stars by using magnetic stellar evolution isochrones to determine the age of the Upper Scorpius subgroup of the Scorpius-Centaurus OB Association. A median age of 10 Myr consistent across spectral types A through M is found, except for a subset of F-type stars that appear significantly older. Agreement is shown for ages derived from the Hertzsprung-Russell diagram and from the empirical mass-radius relationship defined by eclipsing multiple-star systems. Surface magnetic field strengths required to produce agreement are of order 2.5 kG and are predicted from a priori estimates of equipartition values. A region in the HR diagram is identified that plausibly connects stars whose structures are weakly influenced by the presence of magnetic fields with those whose structures are strongly influenced by magnetic fields. The models suggest this region is characterized by stars with rapidly thinning outer convective envelopes where the radiative core mass is greater than 75% of the total stellar mass. Furthermore, depletion of lithium predicted from magnetic models appears in better agreement with observed lithium equivalent widths than predictions from non-magnetic models. These results suggest that magnetic inhibition of convection plays an important role in the early evolution of low-mass stars and that it may be responsible for noted age discrepancies in young stellar populations.
|
10.1051/0004-6361/201527613
|
[
"https://arxiv.org/pdf/1604.08036v2.pdf"
] | 119,161,621 |
1604.08036
|
7c83014d4fd44991be20f7927e7a6643b3bce2b9
|
Magnetic Inhibition of Convection and the Fundamental Properties of Low-Mass Stars III. A Consistent 10 Myr Age for the Upper Scorpius OB Association
27 Jun 2016 June 28, 2016
Gregory A Feiden [email protected]
Department of Physics & Astronomy
Uppsala University
Box 516SE-751 20UppsalaSweden
Department of Physics
University of North Georgia
82 College Circle30597DahlonegaGAUSA
Magnetic Inhibition of Convection and the Fundamental Properties of Low-Mass Stars III. A Consistent 10 Myr Age for the Upper Scorpius OB Association
27 Jun 2016 June 28, 2016Submitted: 21 October 2015; Accepted: 27 June 2016.Astronomy & Astrophysics manuscript no. ms c ESO 2016stars: evolution -stars: low-mass -stars: magnetic fields -stars: pre-main-sequence -open clusters and associations: individual: Upper Scorpius
When determining absolute ages of identifiably young stellar populations, results strongly depend on which stars are studied. Cooler (K, M) stars typically yield ages that are systematically younger than warmer (A, F, G) stars by a factor of two. I explore the possibility that these age discrepancies are the result of magnetic inhibition of convection in cool young stars by using magnetic stellar evolution isochrones to determine the age of the Upper Scorpius subgroup of the Scorpius-Centaurus OB Association. A median age of 10 Myr consistent across spectral types A through M is found, except for a subset of F-type stars that appear significantly older. Agreement is shown for ages derived from the Hertzsprung-Russell diagram and from the empirical mass-radius relationship defined by eclipsing multiple-star systems. Surface magnetic field strengths required to produce agreement are of order 2.5 kG and are predicted from a priori estimates of equipartition values. A region in the HR diagram is identified that plausibly connects stars whose structures are weakly influenced by the presence of magnetic fields with those whose structures are strongly influenced by magnetic fields. The models suggest this region is characterized by stars with rapidly thinning outer convective envelopes where the radiative core mass is greater than 75% of the total stellar mass. Furthermore, depletion of lithium predicted from magnetic models appears in better agreement with observed lithium equivalent widths than predictions from non-magnetic models. These results suggest that magnetic inhibition of convection plays an important role in the early evolution of low-mass stars and that it may be responsible for noted age discrepancies in young stellar populations.
Introduction
The Upper Scorpius subgroup of the larger Scorpius-Centaurus OB association is estimated to have a median age between 5 and 11 Myr. Prior to 2012, there was a broad consensus that Upper Scorpius is roughly 5 Myr old based on age determinations from stellar evolution models (de Geus 1992; Preibisch et al. 2002; Slesnick et al. 2008). Crucially, Preibisch et al. (2002) showed that this age was consistent across the entire theoretical Hertzsprung-Russell (HR) diagram. However, it was quickly suggested that early-type stars in Upper Scorpius supported an older 8 - 10 Myr age (Sartori et al. 2003). This revision was confirmed by Pecaut et al. (2012), who performed a comprehensive re-analysis of the B-, A-, F-, and G-type members of Upper Scorpius and found that HR diagram ages of each stellar spectral type are consistent with an overall age of approximately 11 ± 3 Myr for the subgroup. Despite the older median age inferred from high-mass members of Upper Scorpius, ages for cooler K- and M-type members remain steadfastly younger at 5 - 7 Myr (Rizzuto et al. 2015b,a), incompatible with a proposed age of 11 Myr.
This problem is not unique to Upper Scorpius and is characteristic of broader issues endemic to age dating young stellar populations with stellar evolution isochrones (e.g., Naylor 2009; Bell et al. 2012). HR diagram analyses by Malo et al. (2014) and others show that age estimates of mid- to late-M stars in nearby young stellar moving groups are typically younger by a factor of two as compared to early-M and late-K stars. On the other hand, early- to late-K stars appear to have ages that are roughly consistent with each other. However, these ages are another factor of two younger than ages typically inferred from stars with spectral type G or earlier.
This picture is consistent with findings from studies of open cluster color-magnitude diagrams. Pre-main-sequence stars typically appear a factor of 2 - 5 younger than main-sequence stars in the same population (Naylor 2009; Bell et al. 2012, 2013). Age determinations appear to be a sensitive function of the stellar effective temperature range adopted in the analysis. The question is whether this is intrinsically real or an artifact of observational or modeling errors.
While other young stellar populations exhibit an age gradient as a function of effective temperature, Upper Scorpius poses a particularly interesting hurdle. Previous studies of this problem are confined to the theoretical HR diagram and suffer from significant uncertainties in the accuracy of inferred stellar properties. However, observations of Upper Scorpius by Kepler/K2 during Campaign 2 provide accurate photometric lightcurves of eclipsing multi-star systems (Alonso et al. 2015; David et al. 2016; Lodieu et al. 2015). This opens up new territory for assessing hypotheses as to the origin of the noted age discordance. Models can now be constrained in the HR diagram with added constraints from the eclipsing binary mass-radius relationship. The empirical mass-radius relationship is more reliable than the HR diagram, as masses and radii can be measured directly (e.g., Andersen 1991; Torres et al. 2010). Compare this to conversions from spectral types and photometric colors to T eff s and luminosities, which are subject to significant uncertainties, especially for stars with gravities intermediate between dwarf and giant scales (see, e.g., Pecaut & Mamajek 2013). Empirical mass-radius relationships should therefore provide stringent tests of stellar model accuracy.

Kraus et al. (2015) examined stellar model agreement using the properties of a low-mass eclipsing binary, UScoCTIO5, that has two roughly 0.3 M ⊙ stars with precisely measured radii. What they found was that UScoCTIO5 has an estimated age of about 5 Myr if inferred from the HR diagram, consistent with the age of other low-mass stars in Upper Scorpius (Preibisch et al. 2002; Slesnick et al. 2008). Surprisingly, the age inferred for UScoCTIO5 from the mass-radius relationship implies UScoCTIO5 is about 8 Myr old, more in line with Pecaut et al. (2012). There is some disagreement about the precise values for the masses and radii of UScoCTIO5 (David et al. 2016), but the mass-radius relationship nevertheless suggests an age of 7 - 8 Myr. Stellar ages are not only dependent on the effective temperature of stars used in the analysis, but on the specific properties adopted. It appears that stellar evolution models are unable to provide consistent ages on multiple levels.
Unfortunately, a single eclipsing binary system with two near-equal-mass components only gives a narrow view of model reliability. More than one system is needed to draw more definitive conclusions. A second system was supplied by the characterization of the triply eclipsing hierarchical triple HD 144548. HD 144548 provides three additional points that are located in a separate region of the mass-radius and HR diagrams from UScoCTIO5 (two near 1.0 M ⊙ and one near 1.4 M ⊙ ). Characterization of HD 144548 in combination with UScoCTIO5 allows for stricter tests of model consistency, assessing not just the ability of models to reproduce individual binary stars, but the more general slope of the mass-radius relationship concurrently with HR diagram fitting.
Here, I test the hypothesis that magnetic fields may inhibit convection in low-mass pre-main-sequence stars slowing their contraction along the Hayashi track. Slower contraction times for low-mass stars would give them larger radii at a given age, pushing inferred ages to older values (e.g., MacDonald & Mullan 2010;Malo et al. 2014). In Sect. 2, I describe the model setup and method used to predict stellar surface magnetic field strengths a priori. Section 3 presents the data adopted for use in HR and mass-radius diagrams. Results about the performance of standard and magnetic stellar evolution models are given in Sect. 4, followed by a discussion of potential uncertainties and weaknesses, as well as further evidence in support of the conclusions in Sect. 5. Finally, the main conclusions of the paper are summarized in Section 6.
Stellar Evolution Models
Models adopted in the analysis are magnetic Dartmouth stellar evolution models (Feiden & Chaboyer 2012). Basic physics used in these models are similar to those included in the original Dartmouth Stellar Evolution Program (DSEP; Dotter et al. 2008). There are noted improvements that allow for a more accurate treatment of young star evolution (see, e.g., Malo et al. 2014). Key improvements are the explicit treatment of deuterium burning in the nuclear reaction network and the prescription of surface boundary conditions at a location in the stellar envelope where the optical depth τ 5000 = 10.
Specification of surface boundary conditions requires the determination of the gas pressure P gas and gas temperature T gas at a given optical depth as a function of log g, T eff , and [m/H]. This is accomplished by extracting relevant quantities from stellar model atmosphere structures that are computed with the same solar abundance distribution and (hopefully) opacity data as the interior structure models. Here, model atmosphere structures are those from a custom PHOENIX AMES-COND grid (Hauschildt et al. 1999a,b; Dotter et al. 2008) that adopts the solar composition of Grevesse & Sauval (1998) and low-temperature opacities of Ferguson et al. (2005).
Effects related to the interaction of magnetic fields and convection, as well as the influence of magnetic fields on the plasma equation of state, are included as described by Feiden & Chaboyer (2012). Radial magnetic field strength profiles are prescribed using a "Gaussian magnetic field strength profile" of the form
B(r) = B(R_src) exp[ −(1/2) ((R_src − r) / σ_g)^2 ]    (1)
where R src is the location of the inner convection zone boundary in fractional radii and σ g defines the width of the Gaussian (Feiden & Chaboyer 2013). The adopted radial magnetic field strength profile has a negligible impact on the results provided the peak magnetic energy density remains small compared to the thermal energy density (Feiden & Chaboyer 2013). Peak interior magnetic field strengths are defined at R src = 0.5 R ⋆ for fully convective models and at the maximum of R src = 0.5 R ⋆ and R src = R bcz (convection zone boundary) for partially convective stars. This latter condition allows magnetic models to transition smoothly from a fully convective to a partially convective configuration. The width of the Gaussian controls the magnitude of the peak magnetic field strength at the inner convection zone boundary. It was defined by Feiden & Chaboyer (2013) as
σ_g = 0.2264 − 0.1776 (R_src / R_⋆)    (2)
Feiden & Chaboyer (2013) provided this expression for σ g to keep the Gaussian localized (sharply peaked) in stars with thin convective envelopes and to make a more distributed (broadly peaked) magnetic field in fully convective stars. For stars with very thin convective envelopes or in stars where the convective envelope gradually disappears, σ g becomes increasingly small and produces large peak magnetic field strengths at the convection zone boundary. As a result, large magnetic pressures and magnetic energy densities are present in tenuous layers, dominating over the gas pressure in the plasma equation of state and leading to numerical convergence errors. Models with initial masses above about 1.15 M ⊙ are prevented from evolving through the final approach to the zero-age main sequence because of this problem. To mitigate the problem, a lower limit of σ g = 0.05 is enforced.

The interior magnetic field strength profile requires a surface boundary condition: the average surface magnetic field strength ⟨Bf⟩. In previous studies, a range of values was explored and values that reproduce a given stellar fundamental property were eventually selected. Here, a different approach is used. Instead of adjusting models to find suitable magnetic field strengths, equipartition estimates of the maximum allowed average surface magnetic field strengths are adopted. The aim is to test the predictive abilities of magnetic stellar evolution models. Equipartition between the surface magnetic field pressure and the local gas pressure is assumed, meaning

⟨Bf⟩_eq,surf = (8π P_gas,τ=1)^(1/2)    (3)
Surface gas pressures are defined using the same model atmosphere structures used to specify surface boundary conditions. The "surface" is defined as the optical photosphere where τ 5000 = 1. Values of the surface equipartition magnetic field strengths are listed in Table 1 for stars at an age of 10 Myr. Since surface gas pressures are dependent on log g and T eff , equipartition magnetic field strengths evolve toward greater values for stars undergoing quasi-hydrostatic contraction. This assumption breaks down for stars with masses above 1.55 M ⊙ , whose outer convection zones disappear within about 10 Myr. Magnetic field effects are switched on between the ages of 0.1 and 0.5 Myr, perturbing an initial zero-magnetic-field-strength model. The precise age at which the magnetic field is switched on has no impact on the results (Feiden & Chaboyer 2012), provided there is a buffer between the perturbation age and ages under investigation (here, ages greater than 1 Myr). This buffer allows models to relax to a stable configuration after the perturbation. Typical peak magnetic field strengths in the models are of order 50 kG for models with ⟨Bf⟩_eq,surf ∼ 2.5 kG. Conversely, magnetic field effects are turned off during a model computation once the outer convective envelope disappears. This eventually occurs for stars with M ≳ 1.15 M ⊙ .
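Equations (1)-(3) amount to a few lines of arithmetic. The sketch below is an illustrative implementation only, not code from the Dartmouth program: the function names are placeholders, and the example gas pressure is an assumed value chosen to land near the field strengths quoted in Table 1.

```python
import math

def sigma_g(r_src_frac):
    """Width of the Gaussian profile, Eq. (2), with the 0.05 floor that
    prevents runaway peak field strengths in very thin convective envelopes."""
    return max(0.2264 - 0.1776 * r_src_frac, 0.05)

def b_profile(r_frac, b_src, r_src_frac):
    """Gaussian radial magnetic field strength profile, Eq. (1).
    r_frac and r_src_frac are fractional stellar radii; b_src is B(R_src)."""
    width = sigma_g(r_src_frac)
    return b_src * math.exp(-0.5 * ((r_src_frac - r_frac) / width) ** 2)

def b_equipartition(p_gas_tau1):
    """Equipartition surface field strength, Eq. (3), in gauss, for a
    photospheric (tau_5000 = 1) gas pressure given in dyn cm^-2."""
    return math.sqrt(8.0 * math.pi * p_gas_tau1)

# Illustrative numbers only: a photospheric gas pressure of ~2.5e5 dyn cm^-2
# corresponds to an equipartition field of ~2.5 kG, the order of magnitude
# listed in Table 1 for low-mass stars at 10 Myr.
print(round(b_equipartition(2.5e5) / 1.0e3, 2), "kG")
```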
Models for this study are computed using solar metallicity, [m/H] = 0. Values of the initial helium and heavy element mass fractions (Y 0 and Z 0 , respectively) are taken from a solar-calibrated model. Similarly, a solar-calibrated mixing length parameter α MLT is adopted for all models. In this way, modifications to convection are only introduced through the addition of magnetic perturbations. Solar-calibrated values are determined by finding a combination of Y 0 , Z 0 , and α MLT that reproduces the solar radius, luminosity, and surface (Z/X) at the solar age for a model of 1.0 M ⊙ without a magnetic perturbation. Adopted solar quantities are given in Table 2. Final calibration parameters are Y 0 = 0.2755, Z 0 = 0.01876, and α MLT = 1.884.
Properties of Upper Scorpius Members
Empirical data compared to stellar evolution models are taken from literature sources. Properties of stars with spectral type A, F, and G are drawn from Pecaut et al. (2012). These data were kinematically selected to be highly probable members of Upper Scorpius (de Zeeuw et al. 1999). In addition, data for K and M stars are drawn from the low-mass samples presented by Preibisch & Zinnecker (1999) and Preibisch et al. (2002). Data span logarithmic effective temperatures from about 3.5 (3 000 K) to 4.0 (10 000 K). From these data, median empirical stellar loci were defined using a moving median approach. A moving median of logarithmic luminosities as a function of log(T eff ) was performed by subdividing the temperature domain into 85 bins of equal width ∆ log(T eff )= 0.025 dex. This width ensured at least five stars were in each bin across the full temperature domain. Bins with fewer than three stars were rejected. A median log(L/L ⊙ ) value was estimated for each bin by finding the 50th percentile from the cumulative distribution of median values computed from a total of 10 000 bootstrap samples. For each bootstrap sample, the log(T eff ) and log(L/L ⊙ ) value for each data point was randomly perturbed by drawing values from a normal distribution centered on the measured value with a standard deviation equal to the quoted error. Thus, points were allowed to scatter in and out of neighboring bins according to their measured uncertainty. Points without uncertainties were assigned an optimistic uncertainty of 0.015 dex in both log(L/L ⊙ ) and log(T eff ). Different values for the number of bins, bin widths, and number of bootstrap samples were tested and found to not significantly impact the resulting median stellar locus. The resulting median locus and corresponding 99% confidence interval for the median value are shown in Figure 1.
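To make the locus construction concrete, the following sketch reproduces the bootstrap moving-median procedure just described under stated assumptions (overlapping bins of width 0.025 dex, a minimum of three stars per bin, 0.015 dex default uncertainties, and the 50th percentile of the bootstrap medians). It is not the analysis code used for this work, and the array names are placeholders.

```python
import numpy as np

def empirical_locus(logT, logL, logT_err=None, logL_err=None,
                    bin_width=0.025, n_bins=85, n_boot=10000,
                    min_stars=3, default_err=0.015, seed=0):
    """Bootstrap moving median of log L as a function of log Teff."""
    rng = np.random.default_rng(seed)
    logT = np.asarray(logT, dtype=float)
    logL = np.asarray(logL, dtype=float)
    # Points without quoted uncertainties get an optimistic 0.015 dex error.
    sT = np.full_like(logT, default_err) if logT_err is None else np.asarray(logT_err, dtype=float)
    sL = np.full_like(logL, default_err) if logL_err is None else np.asarray(logL_err, dtype=float)

    centers = np.linspace(logT.min(), logT.max(), n_bins)
    medians = np.full(n_bins, np.nan)
    for i, c in enumerate(centers):
        boot_medians = []
        for _ in range(n_boot):
            # Perturb every point by its quoted error so that stars can
            # scatter in and out of neighbouring bins.
            pT = logT + rng.normal(0.0, sT)
            pL = logL + rng.normal(0.0, sL)
            in_bin = np.abs(pT - c) < 0.5 * bin_width
            if in_bin.sum() >= min_stars:
                boot_medians.append(np.median(pL[in_bin]))
        if boot_medians:
            # Locus value: 50th percentile of the bootstrap medians; the
            # 0.5 and 99.5 percentiles would give the 99% confidence band.
            medians[i] = np.percentile(boot_medians, 50.0)
    return centers, medians
```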
In addition to the photometric samples, properties of two eclipsing multiple-star systems recently observed by Kepler/K2 are included. The first of these systems, UScoCTIO5, is a low-mass eclipsing binary (P orb = 34 days) with a M A = 0.329 ± 0.002 M ⊙ primary and a M B = 0.317 ± 0.002 M ⊙ secondary. Radii were also measured and found to be R A = 0.834 ± 0.006 R ⊙ and R B = 0.810 ± 0.006 R ⊙ for the primary and secondary, respectively. Extracting reliable effective temperatures for components of eclipsing binary systems is notoriously difficult. However, the similarity between the two stars in UScoCTIO5 (F B /F A ≈ T B /T A ≈ 1) allows Kraus et al. (2015)
to apply several different techniques for determining component effective temperatures, including directly determining the total system bolometric flux [F bol = (2.02 ± 0.10) × 10 −10 erg s −1 ]. Assuming a distance of 145 ± 15 pc (de Zeeuw et al. 1999), they estimate that T eff,A,B = 3200 ± 75 K. Properties determined by David et al. (2016) will be discussed in Sect. 4.
The second system, HD 144548 (also, HIP 78977), was recently revealed to be a triply eclipsing hierarchical triple comprised of a solar-mass close binary system orbiting an F-type tertiary . The inner binary has an orbital period P orb = 1.63 days, compared to P orb = 33.9 days for the orbit of the inner binary with the tertiary. Through a combination of photodynamical, radial velocity, and observed minus calculated (O− C) modeling, Alonso et al. (2015) were able to measure masses and radii for all three stars. They find M A = 1.44 ± 0.04M ⊙ , M Ba = 0.984 ± 0.007M ⊙ , and M Bb = 0.944 ± 0.017M ⊙ for the tertiary and two close binary components, respectively. Radii for the stars were measured to be R A = 2.41 ± 0.03R ⊙ , R Ba = 1.319 ± 0.010R ⊙ , and R Bb = 1.330 ± 0.010R ⊙ . While they were unable to estimate temperatures for individual stars, they were able to estimate luminosity ratios L Bb /L Ba = 0.97 ± 0.05 and L Ba+Bb /L A = 0.0593 ± 0.0012. In addition, they quote an apsidal motion rateω = 0.0235 ± 0.002 deg day −1 , which can potentially be used as an independent age estimator (Feiden & Dotter 2013). Pecaut et al. (2012) estimated the temperature and luminosity of HD 144548 as though it were a single star. For consistency, their values of log(T eff ) = 3.788 ± 0.007 and log(L/L ⊙ ) = 0.83 ± 0.08 are adopted as the temperature and luminosity of HD 144548 A. A 6% correction was applied to the luminosity estimate to remove contribution from the close binary, as established by the luminosity ratio from photodynamical modeling . Complications arising with this assignment of log(T eff ) and log(L/L ⊙ ) are discussed in Section 5.3.
Fundamental properties for the eclipsing binary EPIC 203710387 (David et al. 2016; Lodieu et al. 2015) are not included in the main analysis because the component masses are uncertain at the 0.02 M ⊙ level. Discussion with respect to model predictions in light of the two fundamental property determinations will be presented in Sect. 5.
Results
Standard Models
An HR diagram for Upper Scorpius is shown in Fig. 2a. Empirical data are represented by the median stellar locus (light-gray shaded region) and are compared against three standard stellar evolution isochrones with ages of 5, 10, and 15 Myr (black dashed lines). Focusing on temperature regimes corresponding to A-, F-, and G-type stars (5 750 < T eff /K < 10 000), standard stellar isochrones predict ages between 5 and 15 Myr, with a median age estimate of approximately 10 Myr.
Careful analysis of Fig. 2b reveals that the empirical sequence closely matches model predictions at about 9 Myr where T eff > 8 000 K. Late A-type and early F-type stars with 7 500 < T eff /K < 8 000 agree better with a 9 - 10 Myr isochrone, as does the early-G-star sequence around T eff ∼ 5 750 K. In contrast, F-type stars with 6 000 < T eff /K < 7 500 appear marginally older and are best fit by an isochrone between 10 - 15 Myr. These same features were noted by Pecaut et al. (2012) in their re-analysis of Upper Scorpius. Nevertheless, a 10 Myr isochrone lies almost entirely within the 99% confidence interval for the median value across the entire range, suggesting an overall age of 10 Myr is reasonable. One can also see that HD 144548 A lies close to the 10 Myr isochrone, suggesting that, at the very least, HD 144548 is approximately 10 Myr old.
However, the median age inferred from low-mass stars is about 4 - 5 Myr. M-type stars with T eff ≲ 4 000 K suggest an age of 5 Myr is appropriate. The empirical stellar locus plotted in Figs. 2a and 2c encompasses a 5 Myr standard stellar model isochrone in the M-star sequence down to T eff ∼ 3 100 K. Disagreement at the lowest temperatures is likely due to a bias in the computation of the empirical stellar locus caused by a steepening of the HR diagram (see Figure 1).
K-type stars with 4 000 < T eff /K < 5 250 suggest a younger age of around 4 Myr. Due to a lack of quoted uncertainties (see Sect. 3), it is difficult to rigorously assess the significance of the disagreement between the K- and M-star ages. The position of UScoCTIO5 in the HR diagram suggests an age that is slightly younger than 5 Myr, in line with the result of Kraus et al. (2015). Both points for UScoCTIO5 lie just above the standard stellar model isochrone at 5 Myr, but within the empirical stellar locus (Preibisch & Zinnecker 1999; Preibisch et al. 2002). Its age is thus consistent with the 4 - 5 Myr age inferred from K- and M-type stars.
A mass-radius diagram with the two eclipsing systems is shown in Fig. 3. Errorbars are plotted, but are generally smaller than the individual data points. It is apparent that ages inferred for the individual binary systems are not consistent with those from an HR diagram analysis. The age of HD 144548 from the Ba/Bb components is suggested to be around 5.5 Myr according to standard stellar evolution isochrones. While an age of 5.5 Myr agrees with quoted ages for the K- and M-type stars, it is in tension with the 10 Myr age for component A from the HR diagram. Curiously, HD 144548 A lies significantly above the model mass-radius relation. Although concerning, further discussion of HD 144548 A's radius is deferred until Sect. 5.3. The mass-radius diagram also suggests that UScoCTIO5 is 7.5 Myr old. This age is intermediate between the median age of the higher mass population and other low-mass stars, but it is important to note that this age is inconsistent with the HR diagram age of 5 Myr for UScoCTIO5.
Both eclipsing systems exhibit a disagreement between their HR and mass-radius diagram ages. However, the disagreements do not immediately appear to be systematic. HD 144548 appears younger in the mass-radius plane than it does in the HR diagram by roughly a factor of two. In contrast, UScoCTIO5 appears older in the mass-radius plane than it does in the HR diagram, also by about a factor of two. Slopes of the theoretical T eff -luminosity and mass-radius relationships must be altered in different directions to produce an overall agreement.
Magnetic Models
Introducing a magnetic perturbation as described in Sect. 2, a 10 Myr isochrone is computed and overlaid in the HR and mass-radius diagrams in Figs. 2 and 3 (thick yellow solid line). An age of 10 Myr was chosen as a starting point based on the approximate median age of the A-, F-, and G-type stars determined from standard stellar evolution models.
A 10 Myr magnetic isochrone naturally explains the factor of two age difference observed between high-and low-mass stars, provided the high-mass stars are not appreciably affected by additional non-standard physics (e.g., rotation). Figures 2a and 2c (HR diagrams) show that a 10 Myr isochrone computed with equipartition magnetic fields is shifted toward cooler temperatures and higher luminosities compared to a 10 Myr nonmagnetic isochrone. Inhibition of convection by magnetic fields cools the stellar surface temperature thereby slowing the contraction rate of young stars. Stars have a larger radius and a higher luminosity at a given age, as a result. The combination of cooler surface temperatures and higher luminosities makes a 10 Myr magnetic isochrone look nearly identical to a 5 Myr nonmagnetic isochrone for stars with effective temperatures below about 5 000 K. The magnetic stellar model isochrone lies on top of the 5 Myr standard model isochrone in Figs. 2a and 2c and entirely within the empirical stellar locus for low-mass stars in Upper Scorpius.
The magnetic model isochrone also correctly converges toward the 10 Myr standard model isochrone at warmer effective temperatures. To some degree, this convergence reproduces an observed transition in the empirical HR diagram where the age inferred from standard model isochrones shifts from 5 Myr to 10 Myr. Below T eff ∼ 5 000 K, the magnetic isochrone closely matches predictions from a 5 Myr standard model isochrone. Compare this to the small segment of the magnetic isochrone above T eff ∼ 6 000 K. Here, the magnetic isochrone traces the 10 Myr standard model isochrone. Although the model predicted transition appears to occur over approximately the correct effective temperature domain, Fig. 2a shows a sharper transition from a magnetic to a non-magnetic sequence between 4 500 < T eff /K < 6 000 compared to model predictions. This is more clearly seen in Fig. 1, where the empirical median stellar locus exhibits a knee around T eff ∼ 4 500 K that roughly corresponds to the beginning of the transition region in Fig. 2a. A distinct knee feature is absent from the magnetic model isochrone.
Moreover, inspection of the mass-radius diagram in Figure 3 shows a 10 Myr magnetic isochrone provides reasonable agreement with the two low-mass eclipsing binary systems. It is apparent that magnetic inhibition of convection leads to more significant radius "inflation" at higher masses compared to at lower masses. This effect leads to a change in slope of the model predicted mass-radius relationship that is consistent with observations. Although agreement is good, it is not perfect. Depending on whether mass and radius estimates are adopted from Kraus et al. (2015) or David et al. (2016), at 10 Myr the radii of the primary and secondary in UScoCTIO5 are either 3.2% and 2.1% (Kraus et al.) larger than model predictions or 5.7% and 6.6% larger (David et al.). Similarly, models at 10 Myr predict radii that are 1.4% too large and 1.5% too small for the Ba and Bb components of HD 144548, respectively. An age between 9.0 and 9.5 Myr is preferred for UScoCTIO5, while 10 Myr is the best fit age for HD 144548 Ba/Bb. This age spread could be indicative of an intrinsic age spread, but it is also possible to attribute this spread to errors in the mass and radius determinations of the eclipsing binary systems or to the fact that the low-mass stars in UScoCTIO5 may require marginally stronger magnetic field strengths than are predicted from equipartition arguments. Nevertheless, it is encouraging that models provide this level of agreement with an age and surface magnetic field strengths determined a priori.
Discussion
An age of 9 -10 Myr gives broad agreement across the HR and mass-radius diagrams. No stellar population in Upper Scorpius studied here robustly supports the possibility that the median age of the association is 5 Myr. There are, however, still age discrepancies that must be addressed. As noted above, the median age inferred from models for F-type stars appears to be a few million years older than the age inferred from models for stars of other spectral types. This discrepancy is perplexing and deserves further investigation, especially considering that physics potentially missing from the models would likely increase the inferred age for F-stars, widening the age discrepancy. It should be noted that most F-type stars in Upper Scorpius appear to be at a stage of evolution where models predict stellar radii increase and subsequently decrease over a period of roughly 4 -8 Myr. The timescale for this process is similar to the observed age discrepancy (see Fig. 5 in Sect. 5.3) and may be indicative of potential shortcomings in existing model physics such as radiative opacities or nuclear reaction cross-sections.
Age discrepancies also exist with two B-type members of Upper Scorpius: τ Sco and ω Sco. They exhibit ages between 2 and 5 Myr depending on whether rotation is accounted for in stellar evolution models (Pecaut et al. 2012). Those authors show that there are four other B-type stars that have inferred ages consistent with the 10 Myr median age proposed here, if rotation is taken into account. Notably, the four stars that appear older are known binary systems while the two seemingly younger stars do not have a detected companion (Pecaut et al. 2012). The presence of two seemingly young B-type stars may be leveraged to suggest there exists a young population of stars with ages around 5 Myr, thus supporting the younger age for the low-mass stellar population and thereby doing away with the need for a magnetic field explanation. However, that scenario requires that the low-mass stars form almost exclusively in a later star formation episode than the higher mass population. Although we cannot definitively rule out this scenario, we find it unlikely and maintain that the median age for the low-mass stars is too young compared to the rest of the stellar population in the absence of magnetic inhibition of convection.

Magnetic Field Evolution & Radiative Core Growth

Figure 2 shows that a 10 Myr magnetic model isochrone agrees with predictions from a 5 Myr standard model isochrone for effective temperatures below about 5 000 K, but traces a 10 Myr standard model isochrone above 6 000 K. This suggests that magnetic inhibition of convection plays an important role in dictating the structure of cool low-mass stars, but the structure of warmer high-mass stars is relatively insensitive to the influence of magnetic fields. In between these two regimes, models predict a transition in which stars appear to exhibit a decreasing sensitivity to magnetic inhibition of convection as effective temperature increases.
As was noted in Sect. 4, Figs. 1 and 2a indicate that a similar transition occurs over roughly the same effective temperature domain in the empirical HR diagram of Upper Scorpius. Early- to mid-G stars in Upper Scorpius (T eff ∼ 6 000 K) are best described by a 10 Myr standard stellar model isochrone. Instead of continuing to follow a 10 Myr standard model isochrone, late-G stars appear to be better represented by an isochrone intermediate between 5 and 10 Myr. This causes the empirical sequence to exhibit a shallower slope than standard model predictions between 5 750 > T eff /K > 4 500. That the shallower slope in the empirical HR diagram and the transition predicted by magnetic stellar models are coincident suggests that the two phenomena may be connected.
The transition exhibited by theoretical magnetic models can be explained by a rapid thinning of outer convective envelopes in stars with masses above about 1.2 M ⊙ (e.g., Iben 1965). Once a star reaches an effective temperature of 6 000 K, it has a very small or non-existent convective envelope, as shown in Fig. 4(b). Figures 4(a) and (b) show this is independent of age and mass, a fact arising as a result of the hydrogen ionization zone becoming increasingly localized near the optical photosphere with increasing effective temperature. In the absence of a convective envelope, magnetic inhibition of convection is irrelevant. Therefore, any differences observed between magnetic and standard stellar model isochrones at this effective temperature are the result of the magnetic field's influence on stellar structure at earlier evolutionary phases (i.e., delayed contraction). This provides a natural explanation for the convergence of the 10 Myr magnetic and standard model isochrones that occurs near this effective temperature in Fig. 2(b). It then seems reasonable to posit that the shallow slope of the empirical HR diagram and the observed transition in the magnetic models trace the demise of outer convective envelopes and the growth of radiative cores in young, pre-mainsequence stars. While the demise of outer convection zones explains the final convergence of magnetic and standard models, it doesn't necessarily explain the beginning of the transition that occurs between T eff = 4 500 K and 5 000 K. In this transition region, magnetic models have masses between 1.26 M ⊙ (T eff = 4 500 K) and 1.40 M ⊙ (T eff =5 000 K). Magnetic models predict that these stars have convective envelope masses between 24% and 12% of their total stellar mass, respectively (see Figure 4a). The 1.40 M ⊙ model is of particular note, as Figure 3 suggests that 1.40 M ⊙ stars are entering a brief period of radius inflation that accompanies the increasing energy generation from the p-p chain and CN-cycle burning prior to the onset of core convection (e.g., Iben 1965;Bodenheimer et al. 1965). Photospheric gas pressures rapidly drop during this phase of evolution bringing about a rapid drop in surface equipartition magnetic field strengths. As expected, a rapid decrease in equipartition magnetic field strengths is observed among the data in Table 1 beginning around 1.40 M ⊙ . Stars above 1.40 M ⊙ become less sensitive to magnetic inhibition of convection because their magnetic fields are intrinsically weaker.
If the transition region begins around 4 500 K, as one might expect from the knee in the empirical HR diagram (Fig. 1), a rapid decrease in surface magnetic field strengths cannot fully explain the start of the transition. Note that Fig. 4(c) suggests that the mass difference between convective envelopes in magnetic and standard stellar models occurs around T eff ∼ 4 500 K,¹ suggesting there is something relatively unique occurring at that effective temperature. However, Fig. 2 shows no noticeable change in morphology at T eff ∼ 4 500 K. Another mechanism must be responsible for the failure of magnetic models to reproduce the sharp transition.

¹ The slope in the convective envelope mass difference below 3 500 K is due to the fact that models predict stars are fully convective below that temperature, but cooling of the stellar photosphere by magnetic inhibition of convection means magnetic models have a larger mass at a given T eff than standard models.
The sharp knee at T eff = 4 500 K in Fig. 1 may be the result of a global change in magnetic field topology. Evidence for this comes from Gregory et al. (2012), who noted that largescale magnetic field topologies observed on pre-main-sequence stars appear to correlate with their predicted radiative core mass, or alternatively, their convective envelope mass. Specifically, Gregory et al. (2012) noted a shift from predominantly dipolar axisymmetric magnetic field topologies to multi-polar non-axisymmetric field topologies when the estimated radiative core mass M rad. core > 0.4 M ⋆ . It was noted above that at T eff = 4 500 K, magnetic models predict a star with M ⋆ = 1.26 M ⊙ and M conv. env. ≈ 0.24 M ⋆ (M rad. core ≈ 0.76 M ⋆ ), well below the predicted convective envelope mass where stars are expected to shift toward multi-polar non-axisymmetric field topologies.
However, Gregory et al. (2012) used non-magnetic models (Siess et al. 2000) to establish a relationship between magnetic field topology and stellar interior structure. Hypothetically, if they were to observe a star in Upper Scorpius with T eff = 4 500 K, their HR diagram analysis would suggest that the star is approximately 5 Myr old. Figures 4(a) and (b) reveal that a star with T eff = 4 500 K at 5 Myr is expected to have a mass of M ⋆ = 1.10 M ⊙ with M conv. env. ≈ 0.67 M ⊙ and thus M rad. core ≈ 0.43 M ⊙ (see Fig. 4(a)). This radiative core mass corresponds to 39% of the total stellar mass and is consistent with the boundary identified by Gregory et al. (2012).
The implications of this result are that: (1) the knee observed in Fig. 1 may be the result of a change in magnetic field topology and (2) the shift in magnetic field topology from a predominantly axisymmetric configuration to a non-axisymmetric configuration may occur when the radiative core mass is much larger, and the convective envelope mass much smaller, than initially suggested. Magnetic models suggest the shift in global magnetic field topology may occur when M conv. env /M ⋆ ≈ 0.25 (M rad. core /M ⋆ ≈ 0.75) as compared to M conv. env. /M ⋆ ≈ 0.60 (M rad. core /M ⋆ ≈ 0.40; Gregory et al. 2012).

This discussion is premised on the reality of the apparent transition at T eff = 4 500 K and the demise of convective envelopes producing the shallower slope in the empirical HR diagram. The shallower slope may be an artifact of how the empirical locus was computed. The empirical HR diagram is somewhat under-populated in the immediate vicinity of T eff = 5 500 K, raising the possibility that the running median has artificially created a smooth transition between warmer and cooler stars. However, the HR diagram is well-populated at slightly warmer and slightly cooler temperatures, suggesting the two regions must be somehow connected. While the precise morphology of the HR diagram in this region may be subject to alteration with additional data, the general trend should be robust against errors in how the empirical locus is computed.
Errors transforming from photometric colors or spectral types to effective temperatures may provide a plausible explanation for the shallower slope of the HR diagram in the vicinity of 5 000 K. Below this temperature, stellar atmospheric opacity becomes increasingly dominated by molecular species, particularly TiO in the optical and H 2 O in the infrared, which poses problems when attempting to derive a robust effective temperature scale. This is particularly problematic for the HR diagram shown in Fig. 1 because stars above and below T eff ∼ 5 000 K were transformed from the observational to the theoretical plane by different authors (Pecaut et al. 2012 at warmer temperatures and Preibisch & Zinnecker 1999 at cooler temperatures). If the effective temperature scale adopted by Preibisch & Zinnecker (1999) for cool stars is too cool by about 200 K, the observed transition might disappear and low-mass K- and M-type stars would fall closer to the 10 Myr standard model isochrone. Preibisch & Zinnecker (1999) constructed a spectral type to effective temperature transformation at a gravity intermediate between dwarf and giant scales using the transformation derived for luminosity class IV objects by de Jager & Nieuwenhuijzen (1987). This temperature scale is consistent with a more recent scale derived using PHOENIX BT-Settl model atmospheres (Allard et al. 2011) to infer effective temperatures from photometric colors. Agreement between empirical methods and theoretical predictions is reassuring. However, comparing these results to the 5 - 30 Myr pre-main-sequence star temperature scale proposed by Pecaut & Mamajek (2013), one finds that the temperature scale adopted by Preibisch & Zinnecker is about 100 K cooler for early K-type stars, but the two temperature scales agree around spectral type M0. For early- to late-M-type stars, Preibisch & Zinnecker find temperatures about 100 K warmer than Pecaut & Mamajek (2013), a reversal from the comparison for late-K-type stars. Considering these differences, the sharp knee feature in the HR diagram may be accentuated by the adoption of an erroneous temperature scale, but it does not appear sufficient to fully explain the shallower slope observed in the HR diagram.
The observed knee and shallower slope in Fig. 1 appear to be genuine features. General agreement between the behavior of the magnetic model isochrone and the observed stellar locus provides evidence that magnetic inhibition of convection is the mechanism that is producing these effects. This means that magnetic fields may be responsible for the age discrepancy between the high- and low-mass stellar populations in young associations. However, it is clear that this statement only holds with respect to the plausible identification of a general mechanism, as the models fail to reproduce the precise morphology at the "magnetic sequence turn-off." This is perhaps because magnetic models do not account for shifts in global magnetic field topology. If the age difference between warmer and cooler stars in Upper Scorpius is the result of magnetic inhibition of convection, the transition region between 4 500 < T eff /K < 6 000 provides an excellent laboratory for studying magnetic dynamo evolution as a function of stellar surface convection zone properties.
Initial Mass Function
The presence of magnetic and non-magnetic sequences has implications for the (sub)stellar initial mass function of Upper Scorpius (Ardila et al. 2000; Preibisch et al. 2002). Section 4 described how magnetic inhibition of convection predominantly shifts stars of a given mass to cooler temperatures. As a result, stars with a given effective temperature and luminosity have a higher mass when magnetic inhibition of convection is considered. Furthermore, the presence of a transition region characterized by the gradually declining influence of stellar magnetic fields that links the magnetic and non-magnetic sequences will tend to spread stars with similar masses across a larger effective temperature domain.
For example, consider the predicted mass difference between a star with T eff = 6 000 K and one with T eff = 4 500 K. Nonmagnetic models predict a mass difference of 0.85 M ⊙ and 0.50 M ⊙ at 5 Myr and 10 Myr, respectively. However, magnetic models suggest that the mass difference is 0.25 M ⊙ at 10 Myr. This is at least a factor of two smaller than standard model predictions.
Neglecting magnetic inhibition of convection would produce stellar mass distributions that show a paucity of stars at higher masses compared to standard predictions for field and cluster initial mass function (e.g., Salpeter 1955;Kroupa 2002;Chabrier 2003). Preibisch et al. (2002) note a possible excess of lowmass stars compared to field initial mass functions (Scalo 1998;Kroupa 2002), but it's not immediately clear whether this potential excess of low-mass stars is fully consistent with predictions from magnetic model predictions. Assessing the impact of magnetic inhibition of convection on the (sub)stellar initial mass function of Upper Scorpius is reserved for future work. Figure 3 reveals that the radius of HD 144548 A is significantly larger than is expected from standard stellar evolutionary predictions. This poses a problem to claims of consistent ages across stellar populations in Upper Scorpius. However, it is not immediately clear that the observed radius error is a true astrophysical problem. For instance, adopting the effective temperature and luminosity estimate from Pecaut et al. (2012), one derives a radius of R = 2.2 ± 0.2R ⊙ . This is formally consistent with the radius derived by , but the mean value is more consistent with estimates from stellar evolution isochrones around 10 Myr. Stellar models show an increase in stellar radius for stars with masses in the vicinity of M = 1.5M ⊙ , illustrated as a function of stellar mass and age in Fig. 5. As described in Sect. 5.1, this radius increase is due to the convective envelope responding to increased energy generation from CN cycle ignition.
Radius of HD 144548 A
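The R ≈ 2.2 R ⊙ figure quoted above follows directly from the Stefan-Boltzmann relation applied to the adopted temperature and the binary-corrected luminosity. The short sketch below reproduces that arithmetic; it is only a consistency check, and the nominal solar effective temperature used for scaling is an assumption of the example.

```python
import math

# HR-diagram values for HD 144548 A adopted in Sect. 3:
# log Teff = 3.788 and log (L/Lsun) = 0.83, with a 6% reduction applied to
# remove the light contributed by the close binary.
teff = 10.0 ** 3.788          # ~6140 K
lum = 0.94 * 10.0 ** 0.83     # ~6.4 Lsun after the 6% correction

# Scaling form of the Stefan-Boltzmann law: R/Rsun = sqrt(L/Lsun) * (Tsun/Teff)^2
T_SUN = 5772.0                # K, assumed nominal solar effective temperature
radius = math.sqrt(lum) * (T_SUN / teff) ** 2

print(f"R = {radius:.2f} Rsun")   # ~2.2 Rsun, consistent with the value quoted above
```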
Assessing in detail whether the larger radius of HD 144548 A is real is beyond the scope of this work. However, if it requires revision, the result is unlikely to affect the inferred masses and radii derived for the close binary. Those largely depend on photodynamical modeling of the Kepler/K2 lightcurve during events where Ba and Bb eclipse one another, not where the pair eclipse the tertiary component. Results in Sect. 4 should be robust. Still, confirmation of the star's radius would be beneficial for future investigations to ensure there are no adverse effects on the radii of the B components. HD 144548 appears to be in a fairly rapid phase of evolution given either the radius estimate from Pecaut et al. (2012) or Alonso et al. (2015). This phase of evolution, described in Section 5.1, is sensitive to radiative opacities, individual abundances of carbon and nitrogen, and nuclear reaction cross-sections. Radiative opacities and p-p chain reaction cross sections determine when radius inflation begins, while CN-cycle reactions largely control when radius inflation ends. The maximum radius reached by a star during this phase of evolution is sensitive to the adopted CN-cycle cross sections and individual abundances of carbon and nitrogen. Therefore, it may be possible to use HD 144548 A to constrain these physics or to constrain the C/N ratio in Upper Scorpius. This requires that the properties of HD 144548 A, and several other F-type stars, be accurately determined.
Perhaps there are other physics missing for HD 144548 A that result in the radius discrepancy. Magnetic inhibition of convection does not appear to play a significant role in governing the radius at this phase of evolution. Convective envelopes contain very little mass (M cz < 10 −3 M ⊙ ) and models indicate that, during the brief period of radius inflation, magnetic and non-magnetic mass tracks tend to converge (see Fig. 3). However, the exact role that magnetic fields may play in governing the pre-main-sequence contraction of young A-, F-, and G-type stars remains relatively unexplored. While magnetic fields are included in models of these stars, the equipartition magnetic field strength is fixed to the predicted value at 10 Myr. Stronger surface magnetic fields may exist at younger ages, which might delay contraction and lead to larger radii at a given age. Initial tests suggest this is likely not a significant factor in the observed radius discrepancy between models and HD 144548 A.
Effects of strong magnetic fields on convection in stellar cores are also neglected. Strong magnetic fields may be generated by the interaction of a fossil field and vigorous dynamo action (e.g., Featherstone et al. 2009), a scenario supported by the observation of suppressed asteroseismic dipolar and quadrupolar modes in the cores of a fraction of evolved intermediate mass stars (Fuller et al. 2015; Stello et al. 2016; Cantiello et al. 2016). Core magnetic fields in stellar evolution models used in this work are below 1 G, which is insufficient to influence core convection. The impact of significantly stronger magnetic fields on core convection, especially the onset of core convection, and the subsequent effect on the properties of young intermediate mass stars is reserved for a future investigation.
On Starspots
Starspots have been suggested as an alternative mechanism to slow the contraction of young low-mass stars (Jackson et al. 2009; MacDonald & Mullan 2010; Jackson & Jeffries 2014b; Somers & Pinsonneault 2015). Assuming starspots are analogous to sunspots, spots can be understood to be local manifestations of convective inhibition near the optical photosphere (Biermann 1941; Deinzer 1965). In other words, the two phenomena are closely related. However, the primary difference between theoretical treatments of starspots and a more general magnetic inhibition of convection in stellar evolution models is that spots are assumed to be strong localizations covering some fraction of the stellar surface, whereas magnetic inhibition of convection is assumed (for simplicity) to be globally pervasive. Starspots have qualitatively and quantitatively similar effects on stellar fundamental properties (radius, T eff , luminosity; Spruit 1982; Spruit & Weiss 1986; Somers & Pinsonneault 2015), but differ in their impact on photometric properties. Colors of spotted stars are the result of a superposition of surface regions of different effective temperatures (Spruit & Weiss 1986), whereas a global inhibition of convection still assumes a single-temperature optical photosphere (Jackson & Jeffries 2014b).
Agreement between ages and stellar fundamental properties of low-mass stars predicted by magnetic stellar evolution isochrones presented in this work with ages of higher mass stars in Upper Scorpius may indicate that a global inhibition of convection accurately captures relevant physics involved in halting contraction of low-mass stars. If starspots are also to be considered, one might expect to find that young stars in Upper Scorpius have a high coverage fraction of spots ( f ∼ 1). In this limit, a global inhibition of convection and spotted models (e.g., Somers & Pinsonneault 2015) should provide equivalent explanations for observed HR and mass-radius diagram discrepancies.
Photometric brightness modulation is observed in lightcurves of UScoCTIO5 and HD 144548, suggesting spots are present on their optical photospheres. However, it may be that a large fraction of the surface is covered by a non-zero magnetic field of near-equipartition value, accompanied by smaller regions of strong local magnetic flux concentration, such as observed on the young star AD Leo (Shulyak et al. 2014). The latter can lead to the appearance of cooler spots, although only covering a small fraction of the stellar surface. If spots have a small areal coverage, it is more likely that they will have a negligible impact on stellar structure as compared to global inhibition of convection from the more ubiquitous non-zero background magnetic field.
Lithium Depletion
An approach to validating magnetic stellar models is the determination of lithium abundances (MacDonald & Mullan 2015), or observation of lithium equivalent widths. Magnetic inhibition of convection produces a cooler temperature structure in fully convective pre-main-sequence stars, thereby delaying lithium burning and preserving lithium to older ages (Ventura et al. 1998; D'Antona et al. 2000; MacDonald & Mullan 2010; Feiden & Chaboyer 2013; Malo et al. 2014; MacDonald & Mullan 2015). Starspots have a similar effect on lithium burning timescales (Jackson & Jeffries 2014a; Somers & Pinsonneault 2015), which is perhaps not surprising given the deep connection between starspots and magnetic inhibition of convection (Biermann 1941; Deinzer 1965). Nevertheless, if magnetic fields are directly influencing the structure of young stars, they should leave an imprint on the observed lithium abundance distribution in young stellar populations.

Figure 6 demonstrates the impact that magnetic inhibition of convection has on predicted lithium abundances for stars in Upper Scorpius at 10 Myr. The lithium depletion boundary at cooler temperatures remains largely unchanged. However, lithium depletion is not expected to occur in warmer stars with effective temperatures exceeding about 4500 K, assuming they possess a strong magnetic field. Therefore, K-stars are not expected to exhibit signatures of lithium depletion, in contrast to standard model predictions, which suggest stars with spectral types later than G are expected to show signatures of lithium depletion.
Stars with effective temperatures around 3550 K are predicted to exhibit the greatest amount of lithium depletion, as compared to a prediction of 3750 K for standard models. Some depletion is also expected to occur for stars with 3300 < T eff /K < 3500, which standard models suggest should not be observed. This prediction shows a strong similarity to trends in lithium equivalent widths for M-stars in Upper Scorpius (Rizzuto et al. 2015b). The observed lithium equivalent width minimum is located between T eff = 3300 -3400 K (Rizzuto et al. 2015b). While comparing lithium equivalent widths and predictions of lithium abundances is not completely correct, within the narrow range of temperatures surrounding the lithium abundance minimum, equivalent widths should accurately reflect trends in overall lithium abundances. Accounting for uncertainties in the SpT-T eff conversion, observed lithium equivalent widths support the validity of magnetic models and an inferred age of 10 Myr for the low-mass stars in Upper Scorpius.
Summary & Conclusions

Stellar evolution models and literature data are used to estimate median ages for stars with spectral types A through M in Upper Scorpius. A median HR diagram age of 9 - 10 Myr is found for A-, F-, and G-type stars, with the exception of a population of F-type stars that appear roughly 4 - 5 Myr older. These results agree with the revised 11 ± 3 Myr age proposed by Pecaut et al. (2012). At the same time, an HR diagram age of 4 - 5 Myr is confirmed for K- and M-type stars using standard stellar evolution isochrones (Preibisch 2012; Slesnick et al. 2008; Herczeg & Hillenbrand 2015). Notably, isochronal ages of K- and M-type stars in eclipsing multiple-star systems observed by Kepler/K2 (Kraus et al. 2015; Alonso et al. 2015; David et al. 2016) are not consistent with ages inferred from the empirical mass-radius relationship for the same stars.

Using stellar evolution models that account for magnetic inhibition of convection (Feiden & Chaboyer 2012, 2013), an age of 9 - 10 Myr is found for K- and M-type stars in Upper Scorpius. Magnetic inhibition of convection consistently explains:

1. the age discrepancy between high- and low-mass stars in the HR diagram (Herczeg & Hillenbrand 2015),
2. the observed slope of the low-mass stellar mass-radius relationship,
3. stellar age differences observed between HR diagram and mass-radius relationship determinations (Kraus et al. 2015),
4. and the lithium depletion pattern inferred from observed lithium equivalent widths (Rizzuto et al. 2015b).

Critically, average surface magnetic field strengths are determined a priori via equipartition arguments, highlighting the potential predictive power of magnetic stellar models for young stars.

Results from this work and Pecaut et al. (2012) strongly support the adoption of an approximately 10 Myr median age for the Upper Scorpius subgroup of the Scorpius-Centaurus OB association. There is no population of stars in Upper Scorpius that unambiguously supports an age of 5 Myr. Nevertheless, additional eclipsing binary systems are needed to better populate the mass-radius relationship to solidify Upper Scorpius as a 10 Myr old stellar association and to verify predictions from magnetic stellar evolution models across the full mass spectrum.

Further work is needed to explore whether magnetic inhibition of convection serves as a robust solution for noted age gradients as a function of stellar spectral type (e.g., Naylor 2009). Previous work determining ages for the β-Pictoris moving group from magnetic stellar models suggests the solution is robust (MacDonald & Mullan 2010; Malo et al. 2014; Binks & Jeffries 2016), but additional confirmation using stellar populations with a wider variety of ages is required. These results also highlight the need to explore how the inclusion of magnetic inhibition of convection affects the (sub)stellar initial mass function for young stellar populations.

In the meantime, the use of magnetic stellar models as a component of isochrone analyses for young low-mass stars is recommended. Consistency across the HR and mass-radius diagrams exhibited by magnetic stellar evolution isochrones suggests that magnetic isochrones will provide a more accurate characterization of transiting exoplanet host stars in young stellar populations, especially those observed by Kepler/K2 in Upper Scorpius. By adopting an incorrect age, a reasonable estimate of stellar radii can be found, but stellar masses will be systematically underestimated by roughly 30%. Similar arguments apply to the characterization of young low-mass stars hosting directly imaged giant planets or brown dwarfs. Ages inferred from standard model isochrones may be systematically too young by a factor of two, resulting in a systematic underestimation of giant planet and brown dwarf masses by roughly 35% (Baraffe et al. 2003). To facilitate their adoption, all models used in this study are available online. A small grid of solar-metallicity magnetic stellar mass tracks and isochrones is also under construction, as is a web-interface to compute custom magnetic models.
Fig. 1. Hertzsprung-Russell (HR) diagram for Upper Scorpius. A-type through M-type members of Upper Scorpius (Preibisch & Zinnecker 1999; Preibisch et al. 2002; Pecaut et al. 2012) are shown as gray points. A median empirical locus for the population is shown as a black solid line with the 99% confidence interval for the median value shown as a light-gray shaded region. Errorbars shown for the majority of stars with T eff < 5 500 K are prescribed as described in Sect. 3.

Fig. 2. (a) Comparison of theoretical Hertzsprung-Russell (HR) diagram predictions to the observational locus of Upper Scorpius. The median empirical locus calculated in Section 3 is shown by the light gray shaded region. Black points signify HR diagram positions of eclipsing binary members for which masses and radii are known. Over-plotted are predictions from Dartmouth stellar evolution isochrones. Standard (i.e., non-magnetic) isochrones at 5 Myr, 10 Myr, and 15 Myr are shown as short-dashed, dash-dotted, and long-dashed lines, respectively. A single magnetic isochrone at 10 Myr is plotted as a solid yellow line. Values in parentheses along the magnetic isochrone are predicted radiative core mass fractions (M rad. core /M ⋆ ) at the circled points. All models are computed with a solar metallicity. (b) Zoom-in on the high-mass region of the HR diagram. (c) Zoom-in on the low-mass region.

Fig. 4. Mass fraction of the stellar convective envelopes as a function of (a) total stellar mass and (b) effective temperature. (c) Difference in convective envelope mass between magnetic and standard stellar evolution models as a function of effective temperature. The difference is shown as the convective envelope mass of a magnetic model minus the convective envelope mass of a standard stellar model at the same effective temperature, (M conv. env., mag − M conv. env., std ), shown for two ages of standard models (5 Myr, 10 Myr).
Table 1. Equipartition magnetic field strength predictions at 10 Myr.

Mass (M ⊙)   T eff (K)   log g (dex)   ⟨Bf⟩ eq,surf (kG)
0.1          3060        4.16          2.64
0.2          3261        4.19          2.51
0.3          3396        4.20          2.44
0.4          3517        4.22          2.41
0.5          3639        4.24          2.38
0.6          3760        4.26          2.34
0.7          3888        4.28          2.29
0.8          4031        4.29          2.22
0.9          4195        4.30          2.14
1.0          4397        4.30          2.04
1.1          4641        4.30          1.92
1.2          4910        4.28          1.83
1.3          5214        4.25          1.73
1.4          5569        4.19          1.58
1.5          5995        4.06          1.29
1.6          6618        3.96          0.94
1.7          7403        4.15          0.76
Table 2. Adopted solar calibration properties.

Parameter        Adopted            Model                       Ref
Age (Gyr)        4.567              ···                         1
M ⊙ (g)          1.9891 × 10^33     ···                         2
R ⊙ (cm)         6.9598 × 10^10     log(R/R ⊙) = −6 × 10^−5     3, 1
L ⊙ (erg s^−1)   3.8418 × 10^33     log(L/L ⊙) = −1 × 10^−4     1
R bcz /R ⊙       0.713 ± 0.001      0.715                       4, 5
(Z/X) surf       0.0231             0.0231                      6
Y ⊙, surf        0.2485 ± 0.0034    0.2460                      7

References. (1) Bahcall et al. (2005); (2) IAU 2009 (a); (3) Neckel (1995); (4) Basu & Antia (1997); (5) Basu (1998); (6) Grevesse & Sauval (1998); (7) Basu & Antia (2004).
(a) http://maia.usno.navy.mil/NSFA/NSFA_cbe.html
Fig. 3. Mass-radius diagram for stars in eclipsing systems. Black points are from Kraus et al. (2015) and Alonso et al. (2015), while gray points are taken from Lodieu et al. (2015) and David et al. (2016). Model predictions for magnetic and non-magnetic isochrones are the same as in Fig. 2.

Fig. 5. Brief period of radius inflation described in Sect. 5.3 for stars with masses of 1.60 M ⊙ (blue solid), 1.55 M ⊙ (green short-dashed), 1.50 M ⊙ (blue dash-dotted), and 1.45 M ⊙ (yellow long-dashed). Mass tracks are computed without a magnetic field (i.e., non-magnetic configuration). Effective temperatures at these masses are characteristic of F stars at 10 Myr.
Fig. [caption not recovered; panel: Upper Scorpius, [m/H] = 0.0; axes: A(7Li) vs. Effective Temperature (K); curves: 10 Myr non-magnetic, 5 Myr non-magnetic, 10 Myr ⟨Bf⟩ = B_eq.]
Acknowledgements. G.A.F. thanks Alexis Lavail, Eric Mamajek, James Silvester, and Eric Stempels for reading and commenting on an early version of this manuscript as well as Bengt Gustafsson and Thomas Nordlander for interesting discussions that helped improve the manuscript. G.A.F. also thanks the anonymous reviewer for posing interesting questions that led to significant improvements in the manuscript. The magnetic Dartmouth stellar evolution code was originally developed with support of National Science Foundation (NSF) grant AST-0908345. This work made use of NASA's Astrophysics Data System (ADS), the VizieR catalogue access tool by CDS in Strasbourg, France (Ochsenbein et al. 2000), and IPython with Jupyter notebooks (Pérez & Granger 2007). Figures in this manuscript were produced with Gnuplot 5 (Williams et al. 2015).
Summary & Conclusions

Stellar evolution models and literature data are used to estimate median ages for stars with spectral types A through M in Upper Scorpius. A median HR diagram age of 9-10 Myr is found for A-, F-, and G-type stars, with the exception of a population of F-type stars that appear roughly 4-5 Myr older. These results agree with the revised 11 ± 3 Myr age proposed by Pecaut et al. (2012). At the same time, an HR diagram age of 4-5 Myr is confirmed for K- and M-type stars using standard stellar evolution isochrones (Preibisch 2012; Slesnick et al. 2008; Herczeg & Hillenbrand 2015). Notably, isochronal ages of K- and M-type stars in eclipsing multiple-star systems observed by Kepler/K2 (Kraus et al. 2015; Alonso et al. 2015; David et al.
[...] stars. Using stellar evolution models that account for magnetic inhibition of convection (Feiden & Chaboyer 2012, 2013), an age of 9-10 Myr is found for K- and M-type stars in Upper Scorpius. Magnetic inhibition of convection consistently explains:
1. the age discrepancy between high- and low-mass stars in the HR diagram (Herczeg & Hillenbrand 2015),
2. the observed slope of the low-mass stellar mass-radius relationship,
3. stellar age differences observed between HR diagram and mass-radius relationship determinations (Kraus et al. 2015),
4. and the lithium depletion pattern inferred from observed lithium equivalent widths (Rizzuto et al. 2015b).
F Allard, D Homeier, B Freytag, 16th Cambridge Workshop on Cool Stars, Stellar Systems, and the Sun. C. Johns-Krull, M. K. Browning, & A. A. West44891Allard, F., Homeier, D., & Freytag, B. 2011, in Astronomical Society of the Pa- cific Conference Series, Vol. 448, 16th Cambridge Workshop on Cool Stars, Stellar Systems, and the Sun, ed. C. Johns-Krull, M. K. Browning, & A. A. West, 91
. R Alonso, H J Deeg, S Hoyer, A&A. 5848Alonso, R., Deeg, H. J., Hoyer, S., et al. 2015, A&A, 584, L8
. J Andersen, A&A Rev. 391Andersen, J. 1991, A&A Rev., 3, 91
. D Ardila, E Martín, G Basri, AJ. 120479Ardila, D., Martín, E., & Basri, G. 2000, AJ, 120, 479
. J N Bahcall, A M Serenelli, S Basu, ApJ. 62185Bahcall, J. N., Serenelli, A. M., & Basu, S. 2005, ApJ, 621, L85
. I Baraffe, G Chabrier, T S Barman, F Allard, P H Hauschildt, A&A. 402701Baraffe, I., Chabrier, G., Barman, T. S., Allard, F., & Hauschildt, P. H. 2003, A&A, 402, 701
. S Basu, MNRAS. 298719Basu, S. 1998, MNRAS, 298, 719
. S Basu, H M Antia, MNRAS. 287189Basu, S. & Antia, H. M. 1997, MNRAS, 287, 189
. S Basu, H M Antia, ApJ. 60685Basu, S. & Antia, H. M. 2004, ApJ, 606, L85
. C P M Bell, T Naylor, N J Mayne, R D Jeffries, S P Littlefair, MNRAS. 4243178Bell, C. P. M., Naylor, T., Mayne, N. J., Jeffries, R. D., & Littlefair, S. P. 2012, MNRAS, 424, 3178
. C P M Bell, T Naylor, N J Mayne, R D Jeffries, S P Littlefair, MNRAS. 434806Bell, C. P. M., Naylor, T., Mayne, N. J., Jeffries, R. D., & Littlefair, S. P. 2013, MNRAS, 434, 806
. L Biermann, Vierteljahresschrift der Astronomischen Gesellschaft. 76194Biermann, L. 1941, Vierteljahresschrift der Astronomischen Gesellschaft, 76, 194
. A S Binks, R D Jeffries, MNRAS. 4553345Binks, A. S. & Jeffries, R. D. 2016, MNRAS, 455, 3345
. P Bodenheimer, J E Forbes, N L Gould, L G Henyey, ApJ. 1411019Bodenheimer, P., Forbes, J. E., Gould, N. L., & Henyey, L. G. 1965, ApJ, 141, 1019
. M Cantiello, J Fuller, L Bildsten, arXiv:1602.03056Cantiello, M., Fuller, J., & Bildsten, L. 2016, arXiv: 1602.03056
. G Chabrier, PASP. 115763Chabrier, G. 2003, PASP, 115, 763
. F D'antona, P Ventura, I Mazzitelli, ApJ. 54377D'Antona, F., Ventura, P., & Mazzitelli, I. 2000, ApJ, 543, L77
. T J David, L A Hillenbrand, A M Cody, J M Carpenter, A W Howard, ApJ. 81621David, T. J., Hillenbrand, L. A., Cody, A. M., Carpenter, J. M., & Howard, A. W. 2016, ApJ, 816, 21
. E J De Geus, A&A. 262258de Geus, E. J. 1992, A&A, 262, 258
. C De Jager, H Nieuwenhuijzen, A&A. 177217de Jager, C. & Nieuwenhuijzen, H. 1987, A&A, 177, 217
. P T De Zeeuw, R Hoogerwerf, J H J De Bruijne, A G A Brown, A Blaauw, AJ. 117354de Zeeuw, P. T., Hoogerwerf, R., de Bruijne, J. H. J., Brown, A. G. A., & Blaauw, A. 1999, AJ, 117, 354
. W Deinzer, ApJ. 141548Deinzer, W. 1965, ApJ, 141, 548
. A Dotter, B Chaboyer, D Jevremović, ApJS. 17889Dotter, A., Chaboyer, B., Jevremović, D., et al. 2008, ApJS, 178, 89
. N A Featherstone, M K Browning, A S Brun, J Toomre, ApJ. 7051000Featherstone, N. A., Browning, M. K., Brun, A. S., & Toomre, J. 2009, ApJ, 705, 1000
. G A Feiden, B Chaboyer, ApJ. 76130Feiden, G. A. & Chaboyer, B. 2012, ApJ, 761, 30
. G A Feiden, B Chaboyer, ApJ. 779183Feiden, G. A. & Chaboyer, B. 2013, ApJ, 779, 183
. G A Feiden, B Chaboyer, ApJ. 78653Feiden, G. A. & Chaboyer, B. 2014a, ApJ, 786, 53
. G A Feiden, B Chaboyer, A&A. 57170Feiden, G. A. & Chaboyer, B. 2014b, A&A, 571, A70
. G A Feiden, A Dotter, ApJ. 76586Feiden, G. A. & Dotter, A. 2013, ApJ, 765, 86
. J W Ferguson, D R Alexander, F Allard, ApJ. 623585Ferguson, J. W., Alexander, D. R., Allard, F., et al. 2005, ApJ, 623, 585
. J Fuller, M Cantiello, D Stello, R A Garcia, L Bildsten, Science. 350423Fuller, J., Cantiello, M., Stello, D., Garcia, R. A., & Bildsten, L. 2015, Science, 350, 423
. S G Gregory, J.-F Donati, J Morin, ApJ. 75597Gregory, S. G., Donati, J.-F., Morin, J., et al. 2012, ApJ, 755, 97
Grevesse, N. & Sauval, A. J. 1998, Space Sci. Rev., 85, 161
2 https://github.com/gfeiden/MagneticUpperSco/
Hauschildt, P. H., Allard, F., & Baron, E. 1999a, ApJ, 512, 377
. P H Hauschildt, F Allard, J Ferguson, E Baron, D R Alexander, ApJ. 525871Hauschildt, P. H., Allard, F., Ferguson, J., Baron, E., & Alexander, D. R. 1999b, ApJ, 525, 871
. G J Herczeg, L A Hillenbrand, ApJ. 80823Herczeg, G. J. & Hillenbrand, L. A. 2015, ApJ, 808, 23
L A Hillenbrand, A Bauermeister, R J White, 14th Cambridge Workshop on Cool Stars, Stellar Systems, and the Sun. Belle384200Astronomical Society of the Pacific Conference SeriesHillenbrand, L. A., Bauermeister, A., & White, R. J. 2008, in Astronomical So- ciety of the Pacific Conference Series, Vol. 384, 14th Cambridge Workshop on Cool Stars, Stellar Systems, and the Sun, ed. G. van Belle, 200
. Jr Iben, I , ApJ. 141993Iben, Jr., I. 1965, ApJ, 141, 993
. R J Jackson, R D Jeffries, MNRAS. 4454306Jackson, R. J. & Jeffries, R. D. 2014a, MNRAS, 445, 4306
. R J Jackson, R D Jeffries, MNRAS. 4412111Jackson, R. J. & Jeffries, R. D. 2014b, MNRAS, 441, 2111
. R J Jackson, R D Jeffries, P F Maxted, MNRAS. 33989Jackson, R. J., Jeffries, R. D., & Maxted, P. F. L. 2009, MNRAS, 339, L89
. A L Kraus, A M Cody, K R Covey, ApJ. 8073Kraus, A. L., Cody, A. M., Covey, K. R., et al. 2015, ApJ, 807, 3
. P Kroupa, Science. 29582Kroupa, P. 2002, Science (New York, N.Y.), 295, 82
. N Lodieu, R Alonso, J I González Hernández, A&A. 584128Lodieu, N., Alonso, R., González Hernández, J. I., et al. 2015, A&A, 584, A128
. J Macdonald, D J Mullan, ApJ. 7231599MacDonald, J. & Mullan, D. J. 2010, ApJ, 723, 1599
. J Macdonald, D J Mullan, MNRAS. 448MacDonald, J. & Mullan, D. J. 2015, MNRAS, 448, 2019
. L Malo, R Doyon, G A Feiden, ApJ. 79237Malo, L., Doyon, R., Feiden, G. A., et al. 2014, ApJ, 792, 37
. T Naylor, MNRAS. 399432Naylor, T. 2009, MNRAS, 399, 432
. H Neckel, Sol. Phys. 1567Neckel, H. 1995, Sol. Phys., 156, 7
. F Ochsenbein, P Bauer, J Marcout, A&AS. 14323Ochsenbein, F., Bauer, P., & Marcout, J. 2000, A&AS, 143, 23
. M J Pecaut, E E Mamajek, ApJS. 2089Pecaut, M. J. & Mamajek, E. E. 2013, ApJS, 208, 9
. M J Pecaut, E E Mamajek, E J Bubar, ApJ. 746154Pecaut, M. J., Mamajek, E. E., & Bubar, E. J. 2012, ApJ, 746, 154
. F Pérez, B E Granger, Computing in Science and Engineering. 921Pérez, F. & Granger, B. E. 2007, Computing in Science and Engineering, 9, 21
. T Preibisch, Research in Astronomy and Astrophysics. 121Preibisch, T. 2012, Research in Astronomy and Astrophysics, 12, 1
. T Preibisch, A G A Brown, T Bridges, E Guenther, H Zinnecker, AJ. 124404Preibisch, T., Brown, A. G. A., Bridges, T., Guenther, E., & Zinnecker, H. 2002, AJ, 124, 404
. T Preibisch, H Zinnecker, AJ. 1172381Preibisch, T. & Zinnecker, H. 1999, AJ, 117, 2381
. A C Rizzuto, M J Ireland, T J Dupuy, A L Kraus, arXiv:1512.05371Rizzuto, A. C., Ireland, M. J., Dupuy, T. J., & Kraus, A. L. 2015a, arXiv: 1512.05371
. A C Rizzuto, M J Ireland, A L Kraus, MNRAS. 4482737Rizzuto, A. C., Ireland, M. J., & Kraus, A. L. 2015b, MNRAS, 448, 2737
. E E Salpeter, ApJ. 121161Salpeter, E. E. 1955, ApJ, 121, 161
. M J Sartori, J R D Lépine, W S Dias, A&A. 404913Sartori, M. J., Lépine, J. R. D., & Dias, W. S. 2003, A&A, 404, 913
J Scalo, The Stellar Initial Mass Function (38th Herstmonceux Conference). G. Gilmore & D. Howell142201Astronomical Society of the Pacific Conference SeriesScalo, J. 1998, in Astronomical Society of the Pacific Conference Series, Vol. 142, The Stellar Initial Mass Function (38th Herstmonceux Conference), ed. G. Gilmore & D. Howell, 201
. D Shulyak, A Reiners, U Seemann, O Kochukhov, N Piskunov, A&A. 56335Shulyak, D., Reiners, A., Seemann, U., Kochukhov, O., & Piskunov, N. 2014, A&A, 563, A35
. L Siess, E Dufour, M Forestini, A&A. 358593Siess, L., Dufour, E., & Forestini, M. 2000, A&A, 358, 593
. C L Slesnick, L A Hillenbrand, J M Carpenter, ApJ. 688377Slesnick, C. L., Hillenbrand, L. A., & Carpenter, J. M. 2008, ApJ, 688, 377
. G Somers, M H Pinsonneault, ApJ. 807174Somers, G. & Pinsonneault, M. H. 2015, ApJ, 807, 174
. H C Spruit, A&A. 108348Spruit, H. C. 1982, A&A, 108, 348
. H C Spruit, A Weiss, A&A. 166167Spruit, H. C. & Weiss, A. 1986, A&A, 166, 167
. D Stello, M Cantiello, J Fuller, Nature. 529364Stello, D., Cantiello, M., Fuller, J., et al. 2016, Nature, 529, 364
. G Torres, J Andersen, A Giménez, A&A Rev. 1867Torres, G., Andersen, J., & Giménez, A. 2010, A&A Rev., 18, 67
. P Ventura, A Zeppieri, I Mazzitelli, F & D'antona, A&A. 3311011Ventura, P., Zeppieri, A., Mazzitelli, I., & D'Antona, F. 1998, A&A, 331, 1011
T Williams, C Kelley, Gnuplot 5.0: an interactive plotting program. Williams, T., Kelley, C., & et al. 2015, Gnuplot 5.0: an interactive plotting pro- gram, http://www.gnuplot.info/
|
[
"https://github.com/gfeiden/MagneticUpperSco/"
] |
[
"Symmetry constraints on phonon dispersion in graphene",
"Symmetry constraints on phonon dispersion in graphene"
] |
[
"L A Falkovsky \nL.D. Landau Institute for Theoretical Physics\n117334MoscowRussia\n\nInstitute of the High Pressure Physics\n142190TroitskRussia\n"
] |
[
"L.D. Landau Institute for Theoretical Physics\n117334MoscowRussia",
"Institute of the High Pressure Physics\n142190TroitskRussia"
] |
[] |
Taking into account the constraints imposed by the lattice symmetry, we calculate the phonon dispersion for graphene with interactions between the first, second, and third nearest neighbors in the framework of the Born-von Karman model. Analytical expressions obtained for the dispersion of the out-of-plane (bending) modes give the nonzero sound velocity. The dispersion of four in-plane modes is determined by coupled equations. Values of the force constants are found in fitting with frequencies at critical points and with elastic constants measured on graphite.
|
10.1016/j.physleta.2008.05.085
|
[
"https://arxiv.org/pdf/0802.0912v1.pdf"
] | 15,345,698 |
0802.0912
|
8662651de28f96177e8309a72255801d637fc822
|
Symmetry constraints on phonon dispersion in graphene
7 Feb 2008
L A Falkovsky
L.D. Landau Institute for Theoretical Physics
117334MoscowRussia
Institute of the High Pressure Physics
142190TroitskRussia
Symmetry constraints on phonon dispersion in graphene
7 Feb 2008
PACS numbers: 63.20.Dj, 81.05.Uw, 71.15.Mb
Taking into account the constraints imposed by the lattice symmetry, we calculate the phonon dispersion for graphene with interactions between the first, second, and third nearest neighbors in the framework of the Born-von Karman model. Analytical expressions obtained for the dispersion of the out-of-plane (bending) modes give the nonzero sound velocity. The dispersion of four in-plane modes is determined by coupled equations. Values of the force constants are found in fitting with frequencies at critical points and with elastic constants measured on graphite.
I. INTRODUCTION
Since the pioneering experiments on graphene (a single atomic layer of graphite) 1,2 , main attention has been devoted to its electronic properties. More recently, Raman spectroscopy 3 extends to investigations of the lattice dynamics of graphene. It was found that the frequency (≈ 1590 cm −1 ) of the Raman mode in graphene agrees with its value in graphite. Also, the overtone of the D mode visible almost in all carbon-consisting materials was observed at about 2600 cm −1 . However, this information is very meagre and does not provide a way to describe the lattice dynamics. The detailed knowledge of the lattice dynamics and electron-phonon interactions 4 is needed for interpretations of the Raman scattering as well as of the transport phenomena.
Several models 5,6,7,8,9,10,11 have been proposed to predict the phonon dispersion in graphene and bulk graphite from empirical force-constant calculations. The simplest approach assumes the diagonal form of the force-constant matrix, which contains three constants for the interaction of an atom with all its nth-nearest neighbors. Thus, we meet 12 constants for graphene in the popular 4th-nearest neighbor approach or 15 constants in the 5th-nearest neighbor one 12 . The number of constants could be diminished if model interactions are used 13,14,15 or if the phonon dispersion is considered only for the distinctive directions in the Brillouin zone 16 .
On the other hand, we can use the most recent results 12,17,18,19,20,21 of the first-principles calculations for the phonon dispersion in graphene and graphite. Comparison of those results for the high-frequency modes (see Table 1) shows disagreements as large as 50 cm −1 between the various approaches. The discrepancies could come either from an assumption that the force-constant matrix for the atom-neighbor interaction has a diagonal form or from an overestimation of the low-frequency modes. It is evident that atoms move more freely in the out-of-plane direction in graphene than in graphite. Therefore, the frequencies of the out-of-plane mode in graphene should be less than the corresponding frequencies in graphite. Moreover, if the stiffness of the graphene layer is neglected, the dispersion of the acoustic out-of-plane mode becomes quadratic as seen from the equation of elasticity (see also Ref. 22 ). The interaction between layers in graphite can be estimated from the splitting of the low-frequency ZA and ZO ′ modes in graphite. One can see, for instance, from Ref. 20 that the value of the splitting is as much as 130 cm −1 . It means that (i) the effect of graphene stiffness cannot be larger than that interaction and (ii) the agreement between the theory for graphene and the experimental low-frequency data for graphite cannot be better than about 130 cm −1 .
Here we present an analytical description of the phonon dispersion in graphene. This is done within the framework of the Born-von Karman model for the honeycomb graphene lattice, including interactions only with first, second, and third nearest neighbors and taking the constraints imposed by the lattice symmetry into account. We show that the out-of-plane (bending) and in-plane modes are decoupled from each other. The out-of-plane modes are described by three force constants determined in fitting with the Raman frequency and the smallest elastic constant C 44 . In the narrow wave-vector interval near the Γ point, the acoustic out-of-plane mode has a linear dispersion with a nonzero sound velocity. This means that a single graphene layer possesses a small but finite stiffness, in contradiction with the results of Ref. 22. We should emphasize that the quadratic dispersion of the acoustic mode leads to a large contribution (proportional to the sample size squared) of the long-range fluctuations, which is much stronger than the logarithmic function in the case of the linear dispersion. Six force constants describing the in-plane modes are found in fitting with their frequencies at the critical points and the elastic constants C 11 and C 12 of graphite.
II. PHONON DYNAMICS IN NEAREST NEIGHBOR APPROXIMATION
The equations of motion in the harmonic approximation are written in the well-known form
$$\sum_{j,m,\kappa'} \Phi^{\kappa\kappa'}_{ij}(\mathbf{a}_n - \mathbf{a}_m)\, u^{\kappa'}_{j}(\mathbf{a}_m) = \omega^2 u^{\kappa}_{i}(\mathbf{a}_n), \qquad (1)$$
where the vectors a_n enumerate the lattice cells, the superscripts κ, κ′ denote the two sublattices A and B, and the subscripts i, j = x, y, z take three values corresponding to the space coordinates. Since the potential energy is a quadratic function of the atomic displacements u^A_i(a_n) and u^B_i(a_n), the force-constant matrix can be taken in the symmetric form, Φ^{AB}_{ij}(a_n) = Φ^{BA}_{ji}(−a_n), and its Fourier transform, i.e. the dynamical matrix, is a Hermitian matrix.
Each atom, for instance, A_0 (see Fig. 1) has three first neighbors in the other sublattice, i.e. B, with the relative vectors B_1 = a(1, 0), B_{2,3} = a(−1, ±√3)/2, where a = 1.42 Å is the carbon-carbon distance. The second neighbors are in the same sublattice A at distances √3 a with the relative vectors A_{1,4} = ±a(0, √3), A_{2,5} = ±a(−3, √3)/2, A_{3,6} = ∓a(3, √3)/2. The distance 2a to the third neighbors B′_1 = a(2, 0), B′_{2,3} = a(1, ∓√3) is slightly larger. The distance to the fourth neighbors is √7 a = 2.65a. So, the difference between the distances to the third and fourth neighbors is nearly the same as the difference between the distances to the first and second ones. We will see that the force constants become smaller by a factor of 5 while going from the first to the second neighbors (see Table 3). Therefore, we do not include the fourth neighbors into consideration.
For the first and third neighbors (in the B sublattice), the dynamical matrix has the form
$$\phi^{AB}_{ij}(\mathbf{q}) = \sum_{\kappa=1}^{3} \Phi^{AB}_{ij}(\mathbf{B}_\kappa)\exp(i\mathbf{q}\cdot\mathbf{B}_\kappa) + \sum_{\kappa=1}^{3} \Phi^{AB}_{ij}(\mathbf{B}'_\kappa)\exp(i\mathbf{q}\cdot\mathbf{B}'_\kappa), \qquad (2)$$
and for the second neighbors (in the A sublattice)
$$\phi^{AA}_{ij}(\mathbf{q}) = \Phi^{AA}_{ij}(\mathbf{A}_0) + \sum_{\kappa=1}^{6} \Phi^{AA}_{ij}(\mathbf{A}_\kappa)\exp(i\mathbf{q}\cdot\mathbf{A}_\kappa), \qquad (3)$$
where A_0 denotes the atom chosen at the center of the coordinate system in the A sublattice and the wave vector q is taken in units of 1/a. The point group D_6h of the honeycomb lattice is generated by {C_6, σ_v, σ_z}, where σ_z is a reflection z → −z by the plane that contains the graphene layer, C_6 is a rotation by π/3 around the z axis, and σ_v is a reflection by the xz plane. The transformations of the group impose constraints on the dynamical matrix. To obtain them, we introduce variables ξ, η = x ± iy transforming under the rotation C_3 around the z-axis (taken at the A_0 atom) as follows: (ξ, η) → (ξ, η) exp(±2πi/3). In the rotation, the atoms change their positions B_1 → B_2 → B_3, A_1 → A_3 → A_5, and A_2 → A_4 → A_6. Therefore, all the force constants Φ^{AB}_{ξη}(B_κ) with different κ (as well as Φ^{AB}_{zz}(B_κ)) are equal to one another, but the force constants with coincident subscripts ξ or η transform as covariant variables. For instance, Φ^{AB}_{ξξ}(B_1) = Φ^{AB}_{ξξ}(B_2) exp(2πi/3) = Φ^{AB}_{ξξ}(B_3) exp(−2πi/3). The relation between Φ^{AA}_{ξξ}(A_κ) at the points A_1, A_3, A_5 (and also between A_4, A_2, A_6) has the same form. The constants α_z = Φ^{AB}_{zz}(B_1), γ_z = Φ^{AA}_{zz}(A_1), α′_z = Φ^{AB}_{zz}(B′_1), α = Φ^{AB}_{ξη}(B_1), α′ = Φ^{AB}_{ξη}(B′_1), and γ = Φ^{AA}_{ξη}(A_1) are evidently real. The constant β = Φ^{AB}_{ξξ}(B_1) (as well as β′ = Φ^{AB}_{ξξ}(B′_1)) is real because the reflection (x, y) → (x, −y) with B_1 → B_1, B′_1 → B′_1 belongs to the symmetry group. Besides, we have one complex force constant δ = Φ^{AA}_{ξξ}(A_1). Two force constants Φ^{AA}_{zz}(A_0) and Φ^{AA}_{ξη}(A_0) for the atom A_0 can be excluded in the ordinary way with the help of conditions imposed by invariance with respect to translations of the layer as a whole in the x/z directions. Using the equations of motion (1) and Eqs. (2), (3), we find the stability condition Φ^{AA}_{ξη}(A_0) + 6Φ^{AA}_{ξη}(A_1) + 3Φ^{AB}_{ξη}(B_1) + 3Φ^{AB}_{ξη}(B′_1) = 0 and the similar form for the zz components.
A. Dispersion of bending out-of-plane modes
The out-of-plane vibrations u^A_z, u^B_z in the z direction are not coupled with the in-plane modes because the force constants of type Φ_{xz} or Φ_{yz} equal zero due to the reflection z → −z. The corresponding dynamical matrix for the out-of-plane modes has the form
$$\begin{pmatrix} \phi^{AA}_{zz}(\mathbf{q}) & \phi^{AB}_{zz}(\mathbf{q}) \\ \phi^{AB}_{zz}(\mathbf{q})^{*} & \phi^{AA}_{zz}(\mathbf{q}) \end{pmatrix}, \qquad (4)$$
where
$$\begin{aligned} \phi^{AA}_{zz}(\mathbf{q}) &= -3(\alpha_z + \alpha'_z) + 2\gamma_z\left[\cos(\sqrt{3}\,q_y) + 2\cos(3q_x/2)\cos(\sqrt{3}\,q_y/2) - 3\right],\\ \phi^{AB}_{zz}(\mathbf{q}) &= \alpha_z\left[\exp(iq_x) + 2\exp(-iq_x/2)\cos(\sqrt{3}\,q_y/2)\right] + \alpha'_z\left[\exp(-2iq_x) + 2\exp(iq_x)\cos(\sqrt{3}\,q_y)\right]. \end{aligned} \qquad (5)$$
The phonon dispersion for the out-of-plane modes is found
$$\omega^{2}_{ZO,ZA}(\mathbf{q}) = \phi^{AA}_{zz}(\mathbf{q}) \pm |\phi^{AB}_{zz}(\mathbf{q})|. \qquad (6)$$
The equations allow us to express the phonon frequencies of the out-of-plane branches at the critical points Γ, K, and M in terms of the force constants:
$$\begin{aligned} \omega_{ZO}(\Gamma) &= \left[-6(\alpha_z + \alpha'_z)\right]^{1/2},\\ \omega_{ZO,ZA}(K) &= \left[-3(\alpha_z + \alpha'_z) - 9\gamma_z\right]^{1/2},\\ \omega_{ZO}(M) &= \left[-4\alpha_z - 8\gamma_z\right]^{1/2},\\ \omega_{ZA}(M) &= \left[-2\alpha_z - 6\alpha'_z - 8\gamma_z\right]^{1/2}. \end{aligned} \qquad (7)$$
Expanding Eq. (6) in powers of the wave vector q, we find the velocity of the acoustic out-of-plane mode propagating in the layer
$$s_z = a\left[-0.75\,\alpha_z - 3\alpha'_z - 4.5\,\gamma_z\right]^{1/2} = \sqrt{C_{44}/\rho}, \qquad (8)$$
where we use the well-known formula for the velocity of the acoustic z-mode propagating in the x-direction in terms of the elastic constant C 44 and density ρ of a hexagonal crystal. Because the interaction between the layers in graphite is weak, we can correspond the values of C 44 and ρ to graphite.
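To make the out-of-plane expressions concrete, the short numerical sketch below (ours, not part of the original paper) evaluates Eqs. (5)-(8) with the force constants of Table I. The unit conventions are our assumptions: the constants are read as squared wavenumbers in units of 10^5 cm^-2, a = 1.42 Å, and a·ω (in cm^-1) is converted to a physical velocity through the factor 2πc.

```python
# Minimal check of Eqs. (5)-(8) with the Table I out-of-plane force constants.
import numpy as np

a_cm = 1.42e-8                          # carbon-carbon distance (cm)
two_pi_c = 2.0 * np.pi * 2.998e10       # cm/s, converts wavenumber to angular frequency

alpha_z, gamma_z, alpha_z_p = -1.415e5, 0.171e5, 0.085e5   # Table I values (cm^-2)

def phi_zz(qx, qy):
    """Dynamical-matrix elements of Eq. (5); q is measured in units of 1/a."""
    phi_aa = (-3.0 * (alpha_z + alpha_z_p)
              + 2.0 * gamma_z * (np.cos(np.sqrt(3.0) * qy)
                                 + 2.0 * np.cos(1.5 * qx) * np.cos(np.sqrt(3.0) * qy / 2.0)
                                 - 3.0))
    phi_ab = (alpha_z * (np.exp(1j * qx) + 2.0 * np.exp(-0.5j * qx) * np.cos(np.sqrt(3.0) * qy / 2.0))
              + alpha_z_p * (np.exp(-2j * qx) + 2.0 * np.exp(1j * qx) * np.cos(np.sqrt(3.0) * qy)))
    return phi_aa, phi_ab

def omega_zo_za(qx, qy):
    """ZO and ZA frequencies in cm^-1 from Eq. (6)."""
    phi_aa, phi_ab = phi_zz(qx, qy)
    return (np.sqrt(phi_aa + abs(phi_ab)),
            np.sqrt(phi_aa - abs(phi_ab) + 0j).real)   # +0j guards against rounding at Gamma

critical_points = {"Gamma": (0.0, 0.0),
                   "M": (np.pi / 3.0, np.pi / np.sqrt(3.0)),
                   "K": (0.0, 4.0 * np.pi / (3.0 * np.sqrt(3.0)))}
for name, (qx, qy) in critical_points.items():
    zo, za = omega_zo_za(qx, qy)
    print(f"{name:5s} ZO = {zo:6.1f} cm^-1, ZA = {za:6.1f} cm^-1")

# Bending sound velocity, Eq. (8)
s_z = a_cm * np.sqrt(-0.75 * alpha_z - 3.0 * alpha_z_p - 4.5 * gamma_z) * two_pi_c
print(f"s_z = {s_z / 1e5:.2f} km/s")
```

With these conventions the bending sound velocity comes out close to the 1.59 km/s quoted in Sect. III, and the ZO/ZA branches are degenerate at K, as required by symmetry.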
B. Dispersion of in-plane modes
The dynamical matrix for the in-plane vibrations has the form similar to that for the in-plane mode (4), but instead of the functions φ AA zz (q) and φ AB zz (q) we have to substitute correspondingly the 2 × 2 matrices
$$\begin{pmatrix} \phi^{AA}_{\xi\eta}(\mathbf{q}) & \phi^{AA}_{\xi\xi}(\mathbf{q}) \\ \phi^{AA}_{\xi\xi}(\mathbf{q})^{*} & \phi^{AA}_{\xi\eta}(\mathbf{q}) \end{pmatrix}, \qquad \begin{pmatrix} \phi^{AB}_{\xi\eta}(\mathbf{q}) & \phi^{AB}_{\xi\xi}(\mathbf{q}) \\ \phi^{AB}_{\eta\eta}(\mathbf{q}) & \phi^{AB}_{\xi\eta}(\mathbf{q}) \end{pmatrix}. \qquad (9)$$
The matrix elements φ^{AA}_{ξη}(q) and φ^{AB}_{ξη}(q) are obtained from φ^{AA}_{zz}(q) and φ^{AB}_{zz}(q), Eqs. (5), correspondingly, with the substitutions γ, α, and α′ instead of γ_z, α_z, and α′_z. The off-diagonal elements are given by
$$\begin{aligned} \phi^{AA}_{\xi\xi}(\mathbf{q}) &= \delta\left[\exp(i\sqrt{3}\,q_y) + 2\cos(3q_x/2 + 2\pi/3)\exp(-i\sqrt{3}\,q_y/2)\right] + \delta^{*}\left[\exp(-i\sqrt{3}\,q_y) + 2\cos(3q_x/2 - 2\pi/3)\exp(i\sqrt{3}\,q_y/2)\right],\\ \phi^{AB}_{\xi\xi}(\mathbf{q}) &= \beta\left[\exp(iq_x) + 2\exp(-iq_x/2)\cos(\sqrt{3}\,q_y/2 - 2\pi/3)\right] + \beta'\left[\exp(-2iq_x) + 2\exp(iq_x)\cos(\sqrt{3}\,q_y + 2\pi/3)\right]. \end{aligned}$$
The matrix elements for the B sublattice can be obtained from that for the A sublattice by C 2 rotation (x, y) → −(x, y) of the graphene symmetry group. The optical phonon frequencies for the in-plane branches at Γ and K are found
$$\begin{aligned} \omega^{\text{in-pl}}_{1,2}(\Gamma) &= \left[-6(\alpha + \alpha')\right]^{1/2}, \quad \text{doublet},\\ \omega^{\text{in-pl}}_{1,2}(K) &= \left[-3(\alpha + \alpha') - 9\gamma\right]^{1/2}, \quad \text{doublet},\\ \omega^{\text{in-pl}}_{3,4}(K) &= \left[-3(\alpha + \alpha') - 9\gamma \pm 3(\beta + \beta')\right]^{1/2}. \end{aligned} \qquad (10)$$
An algebraic equation of the fourth order has to be solved in order to find the phonon frequencies at the M point as well as at points of general position.
The in-plane vibrations make a contribution to the elastic constants C 11 and C 12 . The corresponding relation between the dynamical matrix elements and the elastic constants can be deduced by taking the long-wavelength limit (q → 0) in the matrices (9). In this limit, separating the acoustic vibrations u_ac from the optical modes, we obtain the equation of motion in the matrix form
$$\left[\left(\phi^{AA} + \phi^{AB} + \phi^{BB} + \phi^{BA}\right)/2 + \phi^{AB}_{1}\left(\phi^{AB}_{0}\right)^{-1}\phi^{AB}_{1} - \omega^{2}\right] u^{ac} = 0, \qquad (11)$$
where the subscripts 0 and 1 mean that the terms of the zero and first order in q should, correspondingly, be kept in the matrices (9), but the expansion to the second order is used in other terms.

TABLE III: Phonon frequencies at critical points in cm^-1; z and ∥ stand for the out-of-plane and in-plane branches, respectively. [Column headings: Γ [0 0]: ω^∥, ω^z; M [1 √3]π/3a: ω^∥_1, ω^∥_2, ω^∥_3, ω^∥_4, ω^z_1, ω^z_2; K [0 1]4π/3√3a: ω^∥_1, ω^∥_{2,3}, ω^∥_4, ω^z_1; the tabulated values were not recovered.]

We find the matrix factor of u^{ac}

$$\begin{pmatrix} s_1 q^2 - \omega^2 & s_2 q_+^2 \\ s_2 q_-^2 & s_1 q^2 - \omega^2 \end{pmatrix}, \qquad (12)$$
where
$$s_1 = -\tfrac{9}{2}\gamma - \tfrac{3}{4}\alpha - 3\alpha' + \tfrac{3}{8}\,(\beta - 2\beta')^{2}/(\alpha + \alpha'), \qquad s_2 = \tfrac{9}{4}\,\mathrm{Re}(\delta) - \tfrac{3}{8}\beta - \tfrac{3}{2}\beta'.$$
With the help of Eq. (12), we obtain the velocities of longitudinal and transverse acoustic in-plane modes
$$s_{LA} = a\sqrt{s_1 + s_2} = \sqrt{C_{11}/\rho}, \qquad s_{TA} = a\sqrt{s_1 - s_2} = \sqrt{(C_{11} - C_{12})/2\rho}, \qquad (13)$$
corresponding them to the elastic constants C 11 , C 12 and density ρ of graphite.
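A companion sketch (again ours) evaluates the in-plane formulas: the critical-point frequencies of Eq. (10) and the acoustic velocities of Eqs. (12)-(13), using the Table I constants with δ taken real (as in the fit discussed below) and the same unit assumptions as in the out-of-plane example above.

```python
import numpy as np

a_cm, two_pi_c = 1.42e-8, 2.0 * np.pi * 2.998e10   # cm and cm/s

# In-plane force constants from Table I (cm^-2)
alpha, beta, gamma, delta = -4.095e5, -1.645e5, -0.209e5, 0.690e5
alpha_p, beta_p = -0.072e5, 0.375e5

# Optical frequencies at Gamma and K, Eq. (10); the Gamma doublet lands near the
# observed Raman mode of roughly 1590 cm^-1 mentioned in the Introduction.
w_gamma = np.sqrt(-6.0 * (alpha + alpha_p))                      # LO/TO doublet at Gamma
w_k_12 = np.sqrt(-3.0 * (alpha + alpha_p) - 9.0 * gamma)         # doublet at K
w_k_3 = np.sqrt(-3.0 * (alpha + alpha_p) - 9.0 * gamma + 3.0 * (beta + beta_p))
w_k_4 = np.sqrt(-3.0 * (alpha + alpha_p) - 9.0 * gamma - 3.0 * (beta + beta_p))
print(f"Gamma: {w_gamma:.0f} cm^-1 (doublet);  K: {w_k_12:.0f}, {w_k_3:.0f}, {w_k_4:.0f} cm^-1")

# Long-wavelength coefficients entering Eq. (12)
s1 = -4.5 * gamma - 0.75 * alpha - 3.0 * alpha_p + 0.375 * (beta - 2.0 * beta_p) ** 2 / (alpha + alpha_p)
s2 = 2.25 * delta - 0.375 * beta - 1.5 * beta_p

# Acoustic velocities, Eq. (13); reproduce the 19.5 and 12.2 km/s of Table II
s_la = a_cm * np.sqrt(s1 + s2) * two_pi_c / 1e5
s_ta = a_cm * np.sqrt(s1 - s2) * two_pi_c / 1e5
print(f"s_LA = {s_la:.1f} km/s, s_TA = {s_ta:.1f} km/s")
```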
III. RESULTS AND DISCUSSIONS
The calculated phonon dispersion is shown in Fig. 2. Notice, first, that the sound velocities (for long waves, q → Γ) are isotropic in the xy plane, as is appropriate for the symmetry of graphene. Second, the in-plane LO/TO modes at Γ, the in-plane LO/LA modes at K, and the out-of-plane ZA/ZO modes at K are doubly degenerate because graphene is a non-polar crystal and the C 3v symmetry of these points in the Brillouin zone admits the two-fold representation (observation of splitting of these modes in graphene would display symmetry breaking of the crystal).
Because of the lack of information on graphene, we compare the present theory with experiments on graphite. Thus, we have only three force constants α z , γ z , and α ′ z to fit four frequencies of the out-of-plane modes at the critical points Γ, M, and K. We must keep in mind that the frequencies in graphene for the out-of-plane branches could be less than their values in graphite since the atoms are more free to move in the z direction in graphene compared with graphite. It is evident that the adjacent layers in graphite affect the low frequencies more strongly. The interaction of the adjacent layers can be estimated from the ZA - ZO ′ splitting of about 130 cm −1 given, for instance, in Ref. 20 . These modes become degenerate when the inter-layer interaction is switched off. Therefore, the lowest frequencies of out-of-plane modes calculated at the M and K points are considerably less than the corresponding frequencies observed in graphite (see Table 3).
Furthermore, the force constants determine the velocity s z , Eq. (8), of the acoustic out-of-plane mode along with the elastic constant C 44 . We see that the velocity has the nonzero value unless a definite condition is satisfied for the force constants. Using the values of force constants obtained in fitting with the experimental data (see Table 1), we find the value of the sound velocity s z = 1.59 km/s for the out-of-plane mode. This result is contradictory to the statement of Ref. 22 that the acoustic out-of-plane mode has a quadratic dispersion. The fact that the sound velocity s z is very sensitive to the small variation of γ z indicates that graphene is nearly unstable with respect to transformation into a phase of the lower symmetry group at Γ.
For the in-plane modes, we have to fit eight frequencies at the critical points and two elastic constants. Equations (10) and (13) can be used as a starting point. Fitting of the in-plane branches is insensitive to the imaginary part of the constant δ. Therefore, it is taken as a real parameter. Results of the fit are presented in Fig. 2 and the Tables. Notice that the extent of agreement of the present theory with the data obtained for graphite corresponds to the comparison level between the first-principles calculations for graphite in Ref. 19 and their experimental data (see Table 3). The largest disagreement of 5% between our calculations and experiments on graphite for the highest phonon mode occurs at the K point. This is a result of the Kohn anomaly due to the electron-phonon interaction 27 , which reduces the phonon frequency at K. The same reason explains some overbending probably observed in graphite along the Γ − M direction.
IV. CONCLUSIONS
We calculate the phonon dispersion in graphene using the Born-von Karman model with the first-, second-, and third-neighbor interactions imposed by the symmetry constraints. The bending (out-of-plane) modes are not coupled with the in-plane branches and indicate the latent instability of graphene with respect to transformation into a lower-symmetry phase. The acoustic ZA mode has the linear dispersion in a small wave-vector interval near the Γ point. The optical frequencies of these modes are less than the corresponding values in graphite. For the higher in-plane modes, the fit shows good agreement between the experimental and calculated values of optical frequencies, elastic constants, and acoustic velocities.
FIG. 1: First, second, and third neighbors in the graphene lattice.
FIG. 2: Calculated phonon dispersion for graphene; the force constants, elastic constants, and phonon frequencies at critical points are listed in Tables 1, 2, and 3 correspondingly.
TABLE I: Force constants in 10^5 cm^-2: α, β, and α_z for the first neighbors; γ, δ, and γ_z for the second neighbors; α′, α′_z, and β′ for the third neighbors.

   α        β        γ       δ       α′       β′      α_z      γ_z     α′_z
 -4.095   -1.645   -0.209   0.690   -0.072   0.375   -1.415   0.171   0.085
TABLE II: Elastic constants (in 10 GPa) and the sound velocities (in km/s) calculated (theo) and observed (exp).

        C11         C12        C44            sLA      sTA    sz
theo    86          18         0.57           19.5     12.2   1.59
exp     106 ± 2 a   18 ± 2 a   0.45 ± .05 a   ≈ 24 b   14 b

a Reference 23 , b Reference 24,25
TABLE
AcknowledgmentsThe work was supported by the Russian Foundation for Basic Research (grant No.07-02-00571).
. K S Novoselov, A K Geim, S V Morozov, Science. 306666K.S. Novoselov, A.K. Geim, S.V. Morozov et al., Science, 306, 666 (2004);
. K S Novoselov, Nature. 438197K.S. Novoselov et al., Nature, 438, 197 (2005).
. Y Zhang, J P Small, M E S Amory, P Kim, Phys. Rev. Lett. 94176803Y. Zhang, J.P. Small, M.E.S. Amory, and P.Kim, Phys. Rev. Lett. 94, 176803 (2005).
. C C Ferari, J C Meyer, V Scardaci, C Caseraghi, M Lazzeri, F Mauri, S Piscanec, D Jiang, K S Novoselov, S Roth, A K Geim, Phys. Rev. Lett. 97187401C.C. Ferari, J.C. Meyer, V. Scardaci, C. Caseraghi, M. Lazzeri, F. Mauri, S. Piscanec, D. Jiang, K.S. Novoselov, S. Roth, and A.K. Geim, Phys. Rev. Lett. 97, 187401 (2006).
. A H Castro Neto, F Guinea, Phys. Rev. B. 7545404A.H. Castro Neto, F. Guinea, Phys. Rev. B 75, 045404 (2007).
. J De Launay, Solid State Phys. 3203J. De Launay, Solid State Phys. 3, 203 (1957).
. R Nicklow, W Wakabayashi, H G Smith, Phys. Rev. B. 54951R. Nicklow, W. Wakabayashi, and H.G. Smith, Phys. Rev. B 5, 4951 (1972).
. A A Ahmadieh, H A Rafizadeh, Phys. Rev. B. 74527A.A. Ahmadieh and H.A. Rafizadeh, Phys. Rev. B 7, 4527 (1973).
. A P P Nicholson, D J Bacon, J. Phys. C. 102295A.P.P. Nicholson and D.J. Bacon, J. Phys. C 10, 2295 (1977).
. M Maeda, Y Kuramoto, C Horie, J. Phys. Soc. Jpn. Lett. 47337M. Maeda, Y. Kuramoto, and C. Horie, J. Phys. Soc. Jpn. Lett. 47, 337 (1979).
. R Al-Jishi, G Dresselhaus, Phys. Rev. B. 264514R. Al-Jishi and G. Dresselhaus, Phys. Rev. B 26, 4514 (1982).
. H Gupta, J Malhotra, N Rani, B Tripathi, Phys. Rev. B. 337285H. Gupta, J. Malhotra, N. Rani, and B. Tripathi, Phys. Rev. B 33, 7285 (1986).
. M Mohr, J Maultzsch, E Dobardzić, I Milosević, M Damnjanović, A Bosak, M Krish, C Thomsen, Phys. Rev. B. 7635439M. Mohr, J. Maultzsch, E. Dobardzić, I. Milosević, M. Damnjanović, A. Bosak, M. Krish, and C. Thomsen, Phys. Rev. B 76, 035439 (2007).
. L Lang, S Doyen-Lang, A Charlier, M F Charlier, Phys. Rev. B. 495672L. Lang, S. Doyen-Lang, A. Charlier, and M.F. Charlier, Phys. Rev. B 49, 5672 (1994).
. G Benedek, G Onida, Phys. Rev. B. 4716471G. Benedek and G. Onida, Phys. Rev. B 47, 16471 (1992).
. C Mapelli, C Castiglioni, G Zerbi, K Müllen, Phys. Rev. B. 6012710C. Mapelli, C. Castiglioni, G. Zerbi, and K. Müllen, Phys. Rev. B 60, 12710 (1999).
. T Aizava, R Souda, S Otani, Y Ishizava, C Oshima, Y Samiyosh, Phys. Rev. B. 4211469T. Aizava, R. Souda, S.Otani, Y. Ishizava, and C. Os- hima,Y. Samiyosh, Phys. Rev. B 42, 11469 (1990).
. O Dubay, G Kresse, Phys. Rev. B. 6735401O. Dubay and G. Kresse, Phys. Rev. B 67, 035401 (2003).
. L Wirtz, A Rubio, Solid State Commun. 131141L. Wirtz and A. Rubio, Solid State Commun. 131, 141 (2004).
. J Maultzsch, S Reich, C Thomsen, H Reequardt, P Ordejon, Phys. Rev. Lett. 9275501J. Maultzsch, S. Reich, C. Thomsen, H. Reequardt, and P. Ordejon, Phys. Rev. Lett. 92, 075501 (2004).
. N Mounet, N Marzari, Phys. Rev. B. 71205214N. Mounet and N. Marzari, Phys. Rev. B 71, 205214 (2005).
. V N Popov, P Lambin, Phys. Rev. B. 7385407V.N. Popov and P. Lambin, Phys. Rev. B 73, 085407 (2006).
Physical Properties of Carbon Nanotubes. R Saito, G Dresselhaus, M S Dresselhaus, Imperial College Press170LondonR. Saito, G. Dresselhaus, and M.S. Dresselhaus, Physical Properties of Carbon Nanotubes, p. 170, (Imperial College Press, London, 2003).
Graphite and Precursors. P. Delhaes (Gordon and Breach, AustraliaGraphite and Precursors, ed. by P. Delhaes (Gordon and Breach, Australia, 2001), Chap. 6.
. D Sánchez-Portal, E Artacho, J M Soler, A Rubio, P Ordejón, Phys. Rev. B. 5912678D. Sánchez-Portal, E. Artacho, J.M. Soler, A.Rubio, and P. Ordejón, Phys. Rev. B 59, 12678 (1999).
. C Oshima, T Aizava, R Souda, Y Ishizava, Y Samiyosh, Solid. State. Commun. 651601C. Oshima, T. Aizava, R. Souda, Y. Ishizava, and Y. Samiyosh, Solid. State. Commun. 65, 1601 (1988).
. H Yanagisawa, T Tanaka, Y Ishida, M Matsue, E Rokuta, S Otani, C Oshima, Surf. Interface Anal. 37133H. Yanagisawa, T. Tanaka, Y. Ishida, M. Matsue, E. Rokuta, S. Otani, and C. Oshima, Surf. Interface Anal. 37, 133 (2005).
. S Piscanec, M Lazzeri, F Mauri, A C Ferrari, J Robertson, Phys. Rev. Lett. 93185503S. Piscanec, M. Lazzeri, F. Mauri, A.C. Ferrari, and J. Robertson, Phys. Rev. Lett. 93, 185503 (2004).
|
[] |
[
"Speaker Recognition in the Wild",
"Speaker Recognition in the Wild"
] |
[
"Neeraj Chhimwal \nThoughtworks\n\n",
"Anirudh Gupta \nThoughtworks\n\n",
"Rishabh Gaur \nThoughtworks\n\n",
"Harveen Singh Chadha \nThoughtworks\n\n",
"Priyanshi Shah \nThoughtworks\n\n",
"Ankur Dhuriya [email protected] \nThoughtworks\n\n",
"Vivek Raghavan \nEkstep Foundation\n\n"
] |
[
"Thoughtworks\n",
"Thoughtworks\n",
"Thoughtworks\n",
"Thoughtworks\n",
"Thoughtworks\n",
"Thoughtworks\n",
"Ekstep Foundation\n"
] |
[] |
In this paper, we propose a pipeline to find the number of speakers, as well as audios belonging to each of these now identified speakers in a source of audio data where number of speakers or speaker labels are not known a priori. We used this approach as a part of our Data Preparation pipeline for Speech Recognition in Indic Languages.1To understand and evaluate the accuracy of our proposed pipeline, we introduce two metrics-Cluster Purity, and Cluster Uniqueness. Cluster Purity quantifies how "pure" a cluster is. Cluster Uniqueness, on the other hand, quantifies what percentage of clusters belong only to a single dominant speaker. We discuss more on these metrics in section 4.Since we develop this utility to aid us in identifying data based on speaker IDs before training an Automatic Speech Recognition (ASR) model, and since most of this data takes considerable effort to scrape, we also conclude that 98% of data gets mapped to the top 80% of clusters (computed by removing any clusters with less than a fixed number of utterances-we do this to get rid of some very small clusters and use this threshold as 30), in the test set chosen.
|
10.48550/arxiv.2205.02475
|
[
"https://arxiv.org/pdf/2205.02475v1.pdf"
] | 248,524,769 |
2205.02475
|
ffcacfc8921eabba35af655a2e2f6ea4d6598700
|
Speaker Recognition in the Wild
Neeraj Chhimwal
Thoughtworks
Anirudh Gupta
Thoughtworks
Rishabh Gaur
Thoughtworks
Harveen Singh Chadha
Thoughtworks
Priyanshi Shah
Thoughtworks
Ankur Dhuriya [email protected]
Thoughtworks
Vivek Raghavan
Ekstep Foundation
Speaker Recognition in the Wild
Index Terms: speaker clusteringspeaker recognitionhuman- computer interactioncomputational paralinguistics
In this paper, we propose a pipeline to find the number of speakers, as well as audios belonging to each of these now identified speakers in a source of audio data where number of speakers or speaker labels are not known a priori. We used this approach as a part of our Data Preparation pipeline for Speech Recognition in Indic Languages.1To understand and evaluate the accuracy of our proposed pipeline, we introduce two metrics-Cluster Purity, and Cluster Uniqueness. Cluster Purity quantifies how "pure" a cluster is. Cluster Uniqueness, on the other hand, quantifies what percentage of clusters belong only to a single dominant speaker. We discuss more on these metrics in section 4.Since we develop this utility to aid us in identifying data based on speaker IDs before training an Automatic Speech Recognition (ASR) model, and since most of this data takes considerable effort to scrape, we also conclude that 98% of data gets mapped to the top 80% of clusters (computed by removing any clusters with less than a fixed number of utterances-we do this to get rid of some very small clusters and use this threshold as 30), in the test set chosen.
Introduction
Given a source of audio data, with no prior knowledge of the number of speakers or speaker labels, our goal is to find the number of speakers and a mapping of these identified speaker labels to audio utterances in a corpus. The supervised setting, where the goal is to either recognise or verify a speaker among a group of pre-enrolled speakers, is also an active area of research, where deep learning techniques have become the new state-of-the-art for these tasks [1]. Common tasks in the supervised approach can be grouped into two major categories: Speaker Recognition and Speaker Verification. Speaker Recognition is the identification of a person from characteristics of voices and is used to answer the question "Who is speaking?". Speaker verification is the verification of a speaker's claim of their identity. It is used to answer the question "Is this really the mentioned speaker speaking?" However, in an unsupervised setting, which is what we're dealing with in our use case, both of the above mentioned tasks aren't possible.
In our case, the data is scraped from the web with almost no metadata about speakers. We're talking about unlabeled audios belonging to unknown, previously unidentified speakers. We'll still be solving a speaker recognition task, but completely unsupervised. This is formally known as Speaker Clustering. Speaker clustering is the task of identifying the unique speakers in a set of audio recordings without knowing who and how many speakers are present in the entire data. We assume that each of these audio recordings (referred to as utterance in the paper) belongs to exactly one speaker. We achieve this by using Voice Activity Detection on the full audios to break them into smaller chunks or utterances, as a part of our data preparation steps. This ensures that the audio is short enough to accommodate only one unique speaker, as it is cut from one silence to the next. We use an open-sourced pre-trained Neural Network 2 as an embedding technique and generate deep embeddings for every utterance. These embeddings are then fed to our clustering stage, where we primarily use HDBSCAN [2] with some nuances to tackle the large number of tunable hyper-parameters; this is done to reduce effort in finding optimal parameters for every new source of audio data we scrape. All the steps involved will be explained in detail in section 2.
Pipeline
Deep Embeddings
We use an open-sourced pre-trained Neural Network as an embedding technique 2 , which means that, given an audio file of speech, this network creates a summary vector of 256 values (also known as an "embedding") that summarizes the characteristics of the voice spoken. This model is speaker-discriminative and has been trained on a particular text-independent speaker verification task, optimizing the Generalized End to End loss for Speaker Verification [3].
The inputs to this model are 40-channel log-mel spectrograms with a 25ms window width and a 10ms step. The output is the L2-normalized hidden state of the last layer, which is a vector of 256 elements. However, the data used to train this model belongs to only one language, English, while our corpus belongs to Indic languages. This model was trained using more than 1000 hours of English data from the sources LibriSpeech-other, VoxCeleb1, and VoxCeleb2, and contains audios belonging to 1.8 thousand speakers. To ascertain that the embeddings are also able to encode speaker information for Hindi, we did the following experiment: we used Resemblyzer's trained voice encoder model to generate embeddings for all audios in our test dataset comprising 20 hours of audios belonging to 80 speakers. On plotting the distribution of the cosine similarities between embeddings belonging to the same speakers, and those belonging to different speakers, it's clear that the model trained on English data is still able to encode speaker information for Hindi.
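A minimal sketch of this embedding step using Resemblyzer's public VoiceEncoder API is shown below; the wav file names are placeholders for utterances from our corpus. Since embed_utterance() returns L2-normalised 256-dimensional vectors, the cosine similarity reduces to a dot product.

```python
from pathlib import Path
import numpy as np
from resemblyzer import VoiceEncoder, preprocess_wav

encoder = VoiceEncoder()   # pretrained speaker-discriminative encoder

def embed_files(paths):
    """Return an (N, 256) array with one embedding per utterance."""
    return np.stack([encoder.embed_utterance(preprocess_wav(Path(p))) for p in paths])

embeds = embed_files(["spk1_utt1.wav", "spk1_utt2.wav", "spk2_utt1.wav"])
print("same speaker :", float(embeds[0] @ embeds[1]))
print("diff speakers:", float(embeds[0] @ embeds[2]))
```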
Clustering algorithm
We use Hierarchical Density-Based Spatial Clustering of Applications with Noise [2] (HDBSCAN) as the main clustering algorithm.
HDBSCAN supports a special metric called "precomputed". If we create the clusterer with the metric set to 'precomputed', then the clusterer will assume that, rather than being handed a vector of points in a vector space, it is receiving an all-pairs distance matrix. This is the approach we use. So essentially, instead of handling a 256-dimensional vector for each utterance, we calculate pairwise cosine distances between all pairs of embeddings and use this pre-computed distance matrix (which is a square matrix) as our input to the clusterer.
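A sketch of this step is shown below: the all-pairs cosine distance matrix is computed once and handed to HDBSCAN with metric='precomputed'. The helper name and the random stand-in embeddings are illustrative only; the hyper-parameter values are the defaults listed in the Hyperparameters section below.

```python
import numpy as np
import hdbscan
from sklearn.metrics.pairwise import cosine_distances

def cluster_embeddings(embeddings, min_cluster_size=4, min_samples=1):
    dist = cosine_distances(embeddings).astype(np.float64)   # square (N, N) distance matrix
    clusterer = hdbscan.HDBSCAN(metric="precomputed",
                                min_cluster_size=min_cluster_size,
                                min_samples=min_samples)
    return clusterer.fit_predict(dist)                        # label -1 marks "noise" points

labels = cluster_embeddings(np.random.rand(100, 256))         # stand-in embeddings
print("clusters:", len(set(labels) - {-1}), "| noise points:", int(np.sum(labels == -1)))
```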
We found that for sources with more than 15-20 hours of data, this was not ideal as the clustering algorithm runs out of memory while trying to fit these data points. Another issue was that some "big" clusters contained majority of data, whereas some clusters, although belonging to the same original speaker were present as different "small" clusters. So we devised a strategy to tackle all of these issues.
1. Since many of our sources contained data in hundreds of hours, we decided to implement a partial set strategy where clusters will be computed for this partial set (roughly corresponding to 20 hours of data at once, this number can be customized).
2. Once all the clusters from these partial sets have been found, we apply repetitive merging of clusters over a range of cosine similarities with decay (default merging starts from 96% and goes down to 90% at 1% intervals, to accommodate clusters that can be merged after every iteration), where smaller homogeneous clusters belonging to the same speaker but from the same or different partial sets can merge to form bigger clusters for single speakers. The way cosine similarities work in this case is that we compute a mean cluster embedding for each cluster, and then compute similarities between each pair of clusters as if each were just one centroid point of 256 dimensions (see the sketch after this list).
3. After this step, we compute the mean cluster size and define a method to identify "big" clusters based on variance from this mean cluster size. These clusters, in our experiments, were usually the ones with a large number of speakers grouped as one (could be some speakers belonging to the same gender). But this was not always the case, as sometimes some speakers actually had a lot of data, and we would not like to break these well formed groups. To tackle this issue, we run clustering again on these big clusters but with a slight difference: we change the cluster selection method, which determines how flat clusters are selected from the cluster tree hierarchy. The default method is 'eom' for Excess of Mass, which is not always the most desirable approach to cluster selection. In cases where we are more interested in having small homogeneous clusters, we may find that Excess of Mass has a tendency to pick one or two large clusters and then a number of small extra clusters. We choose 'leaf' as the cluster selection method while splitting these big clusters, to allow the algorithm to select leaf nodes from the tree, producing many small homogeneous clusters. In our experiments, we verify that big clusters with just one speaker were not split using this method, which allows some mistakes in identifying non-homogeneous big clusters since big homogeneous clusters seem to be immune to splitting.
4. After step 3, we have a list of clusters that were unchanged plus the new list of leaf clusters. At this step, we again do the same repetitive merging to let these new smaller clusters combine to form better groupings for individual speakers.
5. HDBSCAN also takes into account "noise", meaning not every point is allocated to some cluster. At every clustering step, some points get classified as noise and we keep a track of these points. In this step, we allow these noise points to be merged with the clusters they are closest to, as long as this similarity is greater than the "fit noise point on similarity" parameter. This "noise" is not to be confused with environmental noise or any audio-specific noise we may encounter; it strictly refers to data points that could not be fit into any cluster.
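A simplified sketch of the repetitive merging in steps 2 and 4 is given below: each cluster is summarised by its re-normalised mean embedding, and any pair whose centroid cosine similarity exceeds the current threshold is merged, sweeping the threshold from 0.96 down to 0.90. The greedy pairwise strategy is our assumption; only the threshold range and decay follow the description above.

```python
import numpy as np

def merge_clusters(clusters, start=0.96, stop=0.90, step=0.01):
    """clusters: list of (n_i, 256) embedding arrays; returns the merged list."""
    threshold = start
    while threshold >= stop - 1e-9:
        merged = True
        while merged:
            merged = False
            centroids = [c.mean(axis=0) for c in clusters]
            centroids = [c / np.linalg.norm(c) for c in centroids]
            for i in range(len(clusters)):
                for j in range(i + 1, len(clusters)):
                    if float(centroids[i] @ centroids[j]) >= threshold:
                        clusters[i] = np.vstack([clusters[i], clusters[j]])
                        del clusters[j]       # restart the scan with fresh centroids
                        merged = True
                        break
                if merged:
                    break
        threshold -= step
    return clusters
```

Recomputing centroids after every merge keeps later decisions consistent with the growing clusters, at the cost of extra passes over the pairwise similarities.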
Hyperparameters
We keep these parameters fixed in order to automate our pipeline with many moving parts (such as gender identification and language identification for each utterance). But you can play around if you know what kind of data you are expecting in terms of a rough idea about average speaker duration, etc.
1. partial set size (default=10,000): if the number of utterances in the source is greater than partial set size, clustering is done over these partial sets for step 1. We use 10,000 as it roughly corresponds to 20 hours of data for us.
2. min cluster size (default=4): this is the smallest size grouping that you wish to consider a cluster.
3. min samples (default=1): this is the number of points required in the neighbourhood of a point to be considered a core point. min samples value can range from 1 to min cluster size. Smaller values of min samples have an effect of lowering the number of points classified as noise.
4. metric (default='precomputed'): we use 'precomputed' since we pass a distance matrix as input to HDBSCAN.
5. fit noise on similarity (default=0.8): since all the clustering steps will lead to some points not being mapped to any cluster, we try to fit these in clusters with a similarity score higher than the value of fit noise on similarity, in order to reduce data loss.
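As a sketch of how these defaults combine with the 'leaf' re-clustering of big clusters described in the previous section, one can re-run HDBSCAN on the embeddings of a single over-large cluster. The rule used here to flag a cluster as "big" (mean plus two standard deviations of the cluster sizes) is our assumption, since the exact criterion is not spelled out.

```python
import numpy as np
import hdbscan
from sklearn.metrics.pairwise import cosine_distances

def split_if_big(cluster_embeddings, all_cluster_sizes, min_cluster_size=4, min_samples=1):
    sizes = np.asarray(all_cluster_sizes, dtype=float)
    if len(cluster_embeddings) <= sizes.mean() + 2.0 * sizes.std():
        return None   # not flagged as big: keep the cluster as it is
    dist = cosine_distances(cluster_embeddings).astype(np.float64)
    clusterer = hdbscan.HDBSCAN(metric="precomputed",
                                min_cluster_size=min_cluster_size,
                                min_samples=min_samples,
                                cluster_selection_method="leaf")  # many small homogeneous leaves
    return clusterer.fit_predict(dist)
```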
Metrics proposed
We propose two metrics to evaluate the clusters formed which can help you in getting a sense of the speaker clusters identified.
Cluster Purity
Cluster Purity quantifies how "pure" a cluster is. We define Cluster Purity as the ratio of number of audio utterances belonging to the dominant speaker in a cluster to the total number of audio utterances present in the same cluster. For every speaker cluster:

CP = (num utterances belonging to dominant speaker) / (total utterances in the cluster)    (1)
Cluster Uniqueness
Cluster Uniqueness quantifies what percentage of clusters belong only to single dominant speakers. In our experiments, we identified that some clusters even when belonging to the same speaker ID, were not able to merge because these two clusters had a low cosine similarity score. This may happen if there is noise in the background for some audios or if the audio isn't representative enough (has less content). Since some speakers may be present in many clusters at once, this metric helps in identifying what percentage of the original true speakers were grouped in their respective clusters only.
CU = (num speakers with only one dominant cluster) / (total number of clusters)    (2)
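A sketch of both metrics is given below, computed from per-utterance cluster assignments and ground-truth speaker labels (available only for a labelled test set such as the one in the Results section). Noise points (label -1) are ignored here, which is our choice rather than something specified above.

```python
from collections import Counter, defaultdict

def purity_and_uniqueness(cluster_labels, speaker_labels):
    clusters = defaultdict(list)
    for c, s in zip(cluster_labels, speaker_labels):
        if c != -1:
            clusters[c].append(s)

    # Eq. (1): fraction of each cluster owned by its dominant speaker, then averaged
    dominant, purities = {}, []
    for c, members in clusters.items():
        speaker, count = Counter(members).most_common(1)[0]
        dominant[c] = speaker
        purities.append(count / len(members))
    avg_purity = sum(purities) / len(purities)

    # Eq. (2): speakers that dominate exactly one cluster, over the total number of clusters
    per_speaker = Counter(dominant.values())
    uniqueness = sum(1 for n in per_speaker.values() if n == 1) / len(clusters)
    return avg_purity, uniqueness
```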
Results
Below are the results on our private test set of 80 speakers containing 20 hours of audio data. This dataset has a 50-50 gender split, meaning 40 voices are male and 40 are female. On running any source through our clustering, there are some very small clusters that we manually remove. For this source, the number of clusters identified was 104, but on removing clusters with less than 30 utterances (average length of an utterance is 6 seconds), we get cluster count as 79. This is one part where manual effort can be used to find a better threshold for minimum cluster size allowed, depending on the task.
Discussion
In this paper, we have identified a way to leverage an opensourced speaker-discriminative and text-independent Speaker Verification based neural network, trained for English, as a Voice Encoder to use for the downstream task of clustering Hindi audios. But we can run into many issues with the clusters formed, depending on the audio quality of the source-as this is something we didn't explore. Our use case allowed us to cap the per-speaker audio data at 90 minutes per speaker identified. In our experiments, we found that this approach worked well for most of the sources, meaning all the components were mostly working as hypothesized.
Conclusion & Future work
In this direction, an area that can be directly explored is the effect on speaker separation when all the speakers in a source belong to one gender only-male or female voices. This is important since in our experiments, some clusters contained two female speakers with similar voices. This similarity can be explored further to mitigate this effect. In this approach, we derive our input from a deep neural network and then use a classical unsupervised clustering algorithm with added nuances. Our clustering algorithm is model-free and works on a pre-defined distance measure that we provide as a distance matrix between our high dimensional embeddings. Future work can explore both the voice encoder training step for Indic languages as well as deep clustering approaches to minimize the dependency on clusterer related hyper parameters. A detailed look into the various classical clustering algorithms [4,5,6,7,8] and loss functions [3,9,10], will also help in identifying the best approach possible, with respect to compute and time constraints as well as accuracy.
Figure 1: System overview of GE2E extracted from (Wan et al., 2017). Different colors indicate utterances/embeddings from different speakers.
Figure 2: Cosine Similarity between same vs different speaker embeddings
Table 1: Results on test set

Metric                                      Result
num clusters identified                     79
average cluster purity                      96.00 %
num speakers present in only one cluster    67
cluster uniqueness                          84.81 %
utterances classified as noise              1.35 %
https://github.com/Open-Speech-EkStep/vakyansh-wav2vec2-experimentation
https://github.com/resemble-ai/Resemblyzer
AcknowledgmentAll authors gratefully acknowledge Ekstep Foundation for supporting this project financially and providing infrastructure. A special thanks to Dr. Vivek Raghavan for constant support, guidance and fruitful discussions. We also thank Nikita Tiwari, Ankit Katiyar, Heera Ballabh, Niresh Kumar R, Sreejith V, Soujyo Sen, Amulya Ahuja and Rajat Singhal for helping out when needed and extending infrastructure support for data processing and model training.
Deep learning methods in speaker recognition: a review. D Sztahó, G Szaszák, A Beke, arXiv:1911.06615arXiv preprintD. Sztahó, G. Szaszák, and A. Beke, "Deep learning methods in speaker recognition: a review," arXiv preprint arXiv:1911.06615, 2019.
Density-based clustering based on hierarchical density estimates. R J Campello, D Moulavi, J Sander, Pacific-Asia conference on knowledge discovery and data mining. SpringerR. J. Campello, D. Moulavi, and J. Sander, "Density-based clus- tering based on hierarchical density estimates," in Pacific-Asia conference on knowledge discovery and data mining. Springer, 2013, pp. 160-172.
Generalized end-to-end loss for speaker verification. L Wan, Q Wang, A Papir, I L Moreno, 2018 IEEE International Conference on Acoustics, Speech and Signal Processing. L. Wan, Q. Wang, A. Papir, and I. L. Moreno, "Generalized end-to-end loss for speaker verification," in 2018 IEEE Interna- tional Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018, pp. 4879-4883.
Discriminative neural clustering for speaker diarisation. Q Li, F L Kreyssig, C Zhang, P C Woodland, 2021 IEEE Spoken Language Technology Workshop (SLT). IEEEQ. Li, F. L. Kreyssig, C. Zhang, and P. C. Woodland, "Discrim- inative neural clustering for speaker diarisation," in 2021 IEEE Spoken Language Technology Workshop (SLT). IEEE, 2021, pp. 574-581.
Ward's hierarchical agglomerative clustering method: which algorithms implement ward's criterion?. F Murtagh, P Legendre, Journal of classification. 313F. Murtagh and P. Legendre, "Ward's hierarchical agglomera- tive clustering method: which algorithms implement ward's cri- terion?" Journal of classification, vol. 31, no. 3, pp. 274-295, 2014.
A tutorial on spectral clustering. U , Von Luxburg, Statistics and computing. 174U. Von Luxburg, "A tutorial on spectral clustering," Statistics and computing, vol. 17, no. 4, pp. 395-416, 2007.
A comparison of spectral clustering algorithms. D Verma, M Meila, University of Washington Tech Rep UWCSE030501. 1D. Verma and M. Meila, "A comparison of spectral clustering al- gorithms," University of Washington Tech Rep UWCSE030501, vol. 1, pp. 1-18, 2003.
Unsupervised methods for speaker diarization: An integrated and iterative approach. S H Shum, N Dehak, R Dehak, J R Glass, IEEE Transactions on Audio, Speech, and Language Processing. 2110S. H. Shum, N. Dehak, R. Dehak, and J. R. Glass, "Unsuper- vised methods for speaker diarization: An integrated and iterative approach," IEEE Transactions on Audio, Speech, and Language Processing, vol. 21, no. 10, pp. 2015-2028, 2013.
Deep clustering: Discriminative embeddings for segmentation and separation. J R Hershey, Z Chen, J Le Roux, S Watanabe, 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEEJ. R. Hershey, Z. Chen, J. Le Roux, and S. Watanabe, "Deep clustering: Discriminative embeddings for segmentation and sep- aration," in 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2016, pp. 31-35.
Robust and discriminative speaker embedding via intra-class distance variance regularization. N Le, J.-M Odobez, Interspeech. N. Le and J.-M. Odobez, "Robust and discriminative speaker em- bedding via intra-class distance variance regularization." in Inter- speech, 2018, pp. 2257-2261.
|
[
"https://github.com/Open-Speech-EkStep/",
"https://github.com/resemble-ai/Resemblyzer"
] |
[
"The self-tuned sensitivity of circadian clocks",
"The self-tuned sensitivity of circadian clocks"
] |
[
"Kabir Husain \nDepartment of Physics\nJames Franck Institute\nUniversity of Chicago\nChicago ILUSA\n",
"Weerapat Pittayakanchit \nDepartment of Physics\nJames Franck Institute\nUniversity of Chicago\nChicago ILUSA\n",
"Gopal Pattanayak \nDepartment of Molecular Genetics and Cell Biology\nUniversity of Chicago\nChicagoILUSA\n",
"Michael J Rust \nDepartment of Molecular Genetics and Cell Biology\nUniversity of Chicago\nChicagoILUSA\n",
"Arvind Murugan \nDepartment of Physics\nJames Franck Institute\nUniversity of Chicago\nChicago ILUSA\n"
] |
[
"Department of Physics\nJames Franck Institute\nUniversity of Chicago\nChicago ILUSA",
"Department of Physics\nJames Franck Institute\nUniversity of Chicago\nChicago ILUSA",
"Department of Molecular Genetics and Cell Biology\nUniversity of Chicago\nChicagoILUSA",
"Department of Molecular Genetics and Cell Biology\nUniversity of Chicago\nChicagoILUSA",
"Department of Physics\nJames Franck Institute\nUniversity of Chicago\nChicago ILUSA"
] |
[] |
Living organisms need to be sensitive to a changing environment while also ignoring uninformative environmental fluctuations. Here, we show that the circadian clock in Synechococcus elongatus can naturally tune its environmental sensitivity, through a clock-metabolism coupling quantified in recent experiments. The metabolic coupling can detect mismatch between clock predictions and the day-night light cycle, and temporarily raise the clocks sensitivity to light changes and thus entrain faster. We also analyze analogous behaviour in recent experiments on switching between slow and fast osmotic stress response pathways in yeast. In both cases, cells can raise their sensitivity to new external information in epochs of frequent challenging stress, much like a Kalman filter with adaptive gain in signal processing. Our work suggests a new class of experiments that probe the history-dependence of environmental sensitivity in biophysical sensing mechanisms.Living organisms do not perceive their environment in an objective manner but often in the context of prior expectations or predictions of what the environment might be. Many examples of such prior expectations -i.e., internal models of the external world -are found in neuroscience [1], but can also be found in metabolic dynamics of yeast [2] and bacteria [3], the rhythms generated by free-running circadian clocks [4], receptor signalling cascades, and the immune system[5].Combining predictions with measurements requires care, as both data might be unreliable. In the 1960s, Kalman [6] introduced a simple iterative scheme to optimally update predictions with measurements that has found applications from Apollo 11 [7] to particle tracking in microscopy [8] and synthetic genetic circuits in living cells[9]. While the exact mathematics of Kalman filters is unlikely to apply to biology, the Bayesian idea at the heart of Kalman filtering is broadly applicablei.e. predictions must be updated by measurements using an iteratively computed weight that reflects their respective unreliabilities. However, it is not clear whether a Kalman-like adaptive sensitivity to new external information can be easily implemented at the cellular level. Indeed, unlike routine feedback regulation [10], the quantity of physiological interest -e.g., osmotic pressure, circadian time -is not itself regulated in a Kalman strategy, but rather the rate at which that quantity is updated by new information.Here, by analyzing two disparate systems, we argue that the ingredients needed for self-tuned sensitivity to new environmental information are readily found in biology. We first analyze recent quantitative experiments on the interaction between circadian clocks and metabolism in the photosynthetic cyanobacterium Synechococcus elongatus[11]. Here, the free running KaiABC-protein based circadian clock serves as an unreliable internal model of the external 24 hour day-night cycle of light on earth, 'entrained' by periodic changes in * [email protected] external light.We interpret recent experiments to argue the sensitivity of the clock to light is tunable, since this sensitivity is controlled by the cell's metabolic state, in particular the availability of energy storage compounds such as glycogen. We further demonstrate that, since glycogen metabolism is controlled by the clock, the metabolicallycoupled clock effectively tunes its own sensitivity, reaching values appropriate for different environmental conditions.We then discuss similar behavior in stress response pathways in yeast. 
Recent experiments show how information from fast and slow osmolarity sensing pathways are combined to show the high speed of the fast pathway but retain the low error of the slow pathway[12]. We find that this behavior can be explained if the balance between these two pathways switches in a time-dependent manner.We conclude with general results on when Kalman-like tunable sensitivity is biologically advantageous. We show that self-tuned sensitivity can break speed-accuracy (or gain-bandwidth) trade-offs in sufficiently heterogeneous environments, e.g., when the circadian clock switches between distinct epochs of high and low noise. Each distinct epoch needs to persist long enough to allow selftuning mechanisms such as metabolic feedback or osmolarity mismatch to raise or lower the sensitivity as needed. Taken together, our results suggest new kinds of experiments that can reveal the phenotypic adaptation of sensitivity to new information in biophysical sensing.
| null |
[
"https://arxiv.org/pdf/1903.07103v1.pdf"
] | 81,976,955 |
1903.07103
|
247a3c95bef93802ee80c0ab745dd10d19f471a9
|
The self-tuned sensitivity of circadian clocks
Kabir Husain
Department of Physics
James Franck Institute
University of Chicago
Chicago ILUSA
Weerapat Pittayakanchit
Department of Physics
James Franck Institute
University of Chicago
Chicago ILUSA
Gopal Pattanayak
Department of Molecular Genetics and Cell Biology
University of Chicago
ChicagoILUSA
Michael J Rust
Department of Molecular Genetics and Cell Biology
University of Chicago
ChicagoILUSA
Arvind Murugan
Department of Physics
James Franck Institute
University of Chicago
Chicago ILUSA
The self-tuned sensitivity of circadian clocks
Living organisms need to be sensitive to a changing environment while also ignoring uninformative environmental fluctuations. Here, we show that the circadian clock in Synechococcus elongatus can naturally tune its environmental sensitivity, through a clock-metabolism coupling quantified in recent experiments. The metabolic coupling can detect mismatch between clock predictions and the day-night light cycle, and temporarily raise the clocks sensitivity to light changes and thus entrain faster. We also analyze analogous behaviour in recent experiments on switching between slow and fast osmotic stress response pathways in yeast. In both cases, cells can raise their sensitivity to new external information in epochs of frequent challenging stress, much like a Kalman filter with adaptive gain in signal processing. Our work suggests a new class of experiments that probe the history-dependence of environmental sensitivity in biophysical sensing mechanisms.Living organisms do not perceive their environment in an objective manner but often in the context of prior expectations or predictions of what the environment might be. Many examples of such prior expectations -i.e., internal models of the external world -are found in neuroscience [1], but can also be found in metabolic dynamics of yeast [2] and bacteria [3], the rhythms generated by free-running circadian clocks [4], receptor signalling cascades, and the immune system[5].Combining predictions with measurements requires care, as both data might be unreliable. In the 1960s, Kalman [6] introduced a simple iterative scheme to optimally update predictions with measurements that has found applications from Apollo 11 [7] to particle tracking in microscopy [8] and synthetic genetic circuits in living cells[9]. While the exact mathematics of Kalman filters is unlikely to apply to biology, the Bayesian idea at the heart of Kalman filtering is broadly applicablei.e. predictions must be updated by measurements using an iteratively computed weight that reflects their respective unreliabilities. However, it is not clear whether a Kalman-like adaptive sensitivity to new external information can be easily implemented at the cellular level. Indeed, unlike routine feedback regulation [10], the quantity of physiological interest -e.g., osmotic pressure, circadian time -is not itself regulated in a Kalman strategy, but rather the rate at which that quantity is updated by new information.Here, by analyzing two disparate systems, we argue that the ingredients needed for self-tuned sensitivity to new environmental information are readily found in biology. We first analyze recent quantitative experiments on the interaction between circadian clocks and metabolism in the photosynthetic cyanobacterium Synechococcus elongatus[11]. Here, the free running KaiABC-protein based circadian clock serves as an unreliable internal model of the external 24 hour day-night cycle of light on earth, 'entrained' by periodic changes in * [email protected] external light.We interpret recent experiments to argue the sensitivity of the clock to light is tunable, since this sensitivity is controlled by the cell's metabolic state, in particular the availability of energy storage compounds such as glycogen. We further demonstrate that, since glycogen metabolism is controlled by the clock, the metabolicallycoupled clock effectively tunes its own sensitivity, reaching values appropriate for different environmental conditions.We then discuss similar behavior in stress response pathways in yeast. 
Recent experiments show how information from fast and slow osmolarity sensing pathways are combined to show the high speed of the fast pathway but retain the low error of the slow pathway[12]. We find that this behavior can be explained if the balance between these two pathways switches in a time-dependent manner.We conclude with general results on when Kalman-like tunable sensitivity is biologically advantageous. We show that self-tuned sensitivity can break speed-accuracy (or gain-bandwidth) trade-offs in sufficiently heterogeneous environments, e.g., when the circadian clock switches between distinct epochs of high and low noise. Each distinct epoch needs to persist long enough to allow selftuning mechanisms such as metabolic feedback or osmolarity mismatch to raise or lower the sensitivity as needed. Taken together, our results suggest new kinds of experiments that can reveal the phenotypic adaptation of sensitivity to new information in biophysical sensing.
Living organisms need to be sensitive to a changing environment while also ignoring uninformative environmental fluctuations. Here, we show that the circadian clock in Synechococcus elongatus can naturally tune its environmental sensitivity, through a clock-metabolism coupling quantified in recent experiments. The metabolic coupling can detect mismatch between clock predictions and the day-night light cycle, and temporarily raise the clocks sensitivity to light changes and thus entrain faster. We also analyze analogous behaviour in recent experiments on switching between slow and fast osmotic stress response pathways in yeast. In both cases, cells can raise their sensitivity to new external information in epochs of frequent challenging stress, much like a Kalman filter with adaptive gain in signal processing. Our work suggests a new class of experiments that probe the history-dependence of environmental sensitivity in biophysical sensing mechanisms.
Living organisms do not perceive their environment in an objective manner but often in the context of prior expectations or predictions of what the environment might be. Many examples of such prior expectations -i.e., internal models of the external world -are found in neuroscience [1], but can also be found in metabolic dynamics of yeast [2] and bacteria [3], the rhythms generated by free-running circadian clocks [4], receptor signalling cascades, and the immune system [5].
Combining predictions with measurements requires care, as both data might be unreliable. In the 1960s, Kalman [6] introduced a simple iterative scheme to optimally update predictions with measurements that has found applications from Apollo 11 [7] to particle tracking in microscopy [8] and synthetic genetic circuits in living cells [9]. While the exact mathematics of Kalman filters is unlikely to apply to biology, the Bayesian idea at the heart of Kalman filtering is broadly applicable, i.e., predictions must be updated by measurements using an iteratively computed weight that reflects their respective unreliabilities. However, it is not clear whether a Kalman-like adaptive sensitivity to new external information can be easily implemented at the cellular level. Indeed, unlike routine feedback regulation [10], the quantity of physiological interest -e.g., osmotic pressure, circadian time -is not itself regulated in a Kalman strategy, but rather the rate at which that quantity is updated by new information.
Here, by analyzing two disparate systems, we argue that the ingredients needed for self-tuned sensitivity to new environmental information are readily found in biology. We first analyze recent quantitative experiments on the interaction between circadian clocks and metabolism in the photosynthetic cyanobacterium Synechococcus elongatus [11]. Here, the free running KaiABC-protein based circadian clock serves as an unreliable internal model of the external 24 hour day-night cycle of light on earth, 'entrained' by periodic changes in external light.
We interpret recent experiments to argue that the sensitivity of the clock to light is tunable, since this sensitivity is controlled by the cell's metabolic state, in particular the availability of energy storage compounds such as glycogen. We further demonstrate that, since glycogen metabolism is controlled by the clock, the metabolically coupled clock effectively tunes its own sensitivity, reaching values appropriate for different environmental conditions.
We then discuss similar behavior in stress response pathways in yeast. Recent experiments show how information from fast and slow osmolarity sensing pathways are combined to show the high speed of the fast pathway but retain the low error of the slow pathway [12]. We find that this behavior can be explained if the balance between these two pathways switches in a time-dependent manner.
We conclude with general results on when Kalman-like tunable sensitivity is biologically advantageous. We show that self-tuned sensitivity can break speed-accuracy (or gain-bandwidth) trade-offs in sufficiently heterogeneous environments, e.g., when the circadian clock switches between distinct epochs of high and low noise. Each distinct epoch needs to persist long enough to allow self-tuning mechanisms such as metabolic feedback or osmolarity mismatch to raise or lower the sensitivity as needed. Taken together, our results suggest new kinds of experiments that can reveal the phenotypic adaptation of sensitivity to new information in biophysical sensing.
I. MISMATCH SENSING THROUGH METABOLIC COUPLING
We consider free-running circadian clocks entrained to diurnal changes in light. Free running clocks show sustained periodic rhythms even in the absence of external periodic light or temperature signals and can be very complex, involving dozens of proteins and genes as in the case of the mammalian clock [13].
No matter how complex the clock, we can define an effective parameter -'sensitivity' γ -that quantifies the coupling of the clock to an external entraining signal such as light. For example, γ can be experimentally defined as the height of the 'phase response curve', i.e., as the largest clock phase change in response to a single dark pulse administered at different times of the day [4]. For simplicity, we consider light to be the only entraining signal for the clock. While the sensitivity is usually thought of as a fixed parameter, recent experiments on S. elongatus have identified components that suggest a dependence on the recent history of clock performance. In particular, the sensitivity is set by a metabolic variable, glycogen, which itself is regulated by the history of clock accuracy.
We quantify the link between clock and metabolism by analysing data from [11]. Both in vivo and in vitro data suggest that the difference between day and night time ATP levels sets γ ( Figure 1b); that is,
γ ∝ ∆ATP = ATP_day − ATP_night
While day-time ATP levels are set by the rate of photosynthesis, and thus light intensity, night time ATP is produced from the cell's intracellular storage form of glucose, glycogen [11]. Fig 1c shows in vivo data for the dependence of ∆ATP on glycogen: increased glycogen levels increases ATP night and thus reduces sensitivity γ.
Critically, glycogen levels are in turn affected by clockenvironment mismatch. Data shown in Fig .1c from S. elongatus [11] grown in constant light shows that glycogen is produced only when it is both objectively and subjectively day, and degraded otherwise. We model these facts using,
τ_γ dGly/dt = −λ Gly + α Θ[θ(t)] s(t)    (1)
where Θ[θ(t)] = 1 if the clock state θ(t) corresponds to subjective day and Θ[θ(t)] = 0 otherwise and the external light s(t) = 1 during the day and s(t) = 0 otherwise. Thus the production term is present only when it is objectively day (s = 1) and also subjectively day (Θ(θ) = 1). If the clock is out of phase with the external day-night signal, the hours of sunlight when the clock is in the night state are wasted in terms of glycogen production. Thereby, clock-environment mismatch raises the sensitivity γ, Fig. 1d.
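To make the gating in Eq. (1) concrete, here is a minimal numerical sketch (a simple Euler integration in Python; the parameter values, the initial glycogen level, and the fixed 6-hour offset used to emulate mismatch are illustrative assumptions rather than fitted quantities). It shows that glycogen settles to a higher level when the clock's subjective day overlaps the light phase than when the two are persistently offset.

import numpy as np

# Minimal sketch of Eq. (1): clock-gated glycogen production (illustrative parameters).
dt = 0.01                              # time step, in days
lam, alpha, tau_gly = 0.4, 16.0, 1.0   # assumed degradation/production rates and timescale

def subjective_day(theta):
    # Theta[theta(t)]: 1 during the first half of each clock cycle (subjective day), else 0
    return 1.0 if (theta % (2 * np.pi)) < np.pi else 0.0

def light(t, offset=0.0):
    # s(t): 1 during objective day, 0 at night; a nonzero offset mimics clock-environment mismatch
    return 1.0 if ((t + offset) % 1.0) < 0.5 else 0.0

def final_glycogen(offset, days=20):
    gly, theta = 20.0, 0.0
    for step in range(int(days / dt)):
        t = step * dt
        theta += 2 * np.pi * dt                              # free-running clock with a 24 h period
        gate = subjective_day(theta) * light(t, offset)
        gly += dt / tau_gly * (-lam * gly + alpha * gate)    # Eq. (1)
    return gly

print("glycogen, clock in phase with light: %.1f" % final_glycogen(0.0))
print("glycogen, clock offset by 6 h:       %.1f" % final_glycogen(0.25))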
What are the benefits of self-tuned sensitivity in a circadian clock? We explored this in a minimal model of a generic circadian clock, consisting only of a phase oscillator θ(t) entrained by the external light signal s(t). First, we characterised fixed sensitivity clocks subject to internal fluctuations, modeled by discrete events that shift the clock phase by an amount σ int (Fig 2a). Such fluctuations can result from various forms of stress, such as periods of rapid cell division [14].
As shown in Fig. 2b,c, a large γ re-entrains the clock to external light quickly after a phase perturbation, but also renders the clock sensitive to external fluctuations in light (say, due to weather patterns [15]). Conversely, a low γ clock is robust against light fluctuations but is slow to entrain. The resultant trade-off is a manifestation of speed-accuracy trade-offs seen in such disparate fields as photoreceptor signal transduction [16], neural decision making [17,18], cellular concentration sensing [19,20], immunology [21], and control theory (e.g., the gain-bandwidth tradeoff [22]). We reasoned that a dynamic sensitivity could overcome this tradeoff.
Inspired by the metabolic coupling in S. elongatus, we augmented the model with a dynamic γ(t) set by clock-environment mismatch: that is, γ(t) is raised when clock phase θ(t) and the measured time of day s(t) differ significantly (see SI). Simulations show that this mismatch feedback lets (Fig. 2b) γ(t) idle at low sensitivity when well-entrained but transiently raises γ to re-entrain the clock when needed. In this way, modulating sensitivity γ by a memory of recent clock performance overcomes trade-offs inherent to fixed-γ clocks (Fig. 2c).
To understand the conditions under which the metabolic feedback in S. elongatus modulates clock gain, we fit the data in Fig. 1c to construct a minimal model of the Kai oscillator with the measured glycogen feedback and dynamics (SI). We keep as a free parameter the decay rate λ in Eq. 1, whose value sets the resting glycogen level in well-entrained cells. Subjecting simulated cells to transient periods of high internal fluctuations lasting time τ_env, we find that repeated phase shifts compromise glycogen storage, Fig. 2d. Measuring the phase response curve of our simulated cells, we find that the clock sensitivity correspondingly rises to significantly higher values (∼ 2-fold increase in PRC height), if these periods of repeated stress last long enough (large τ_env) and are intense enough (large σ_int) so as to significantly change glycogen levels away from their resting values; see Fig. 2e.

Thus, while the phase response curve and sensitivity are usually thought of as fixed properties of a circadian clock [4], here we find that they can be tuned by the recent history of clock performance. Our framework generates testable predictions: while the perturbations in Fig. 2 represent internal fluctuations, they could also represent 'jet lag', i.e., jumps in the phase of an artificial light signal in the lab. In the Discussion, we describe experimental protocols to detect such history-dependent sensitivity. In either case, whether it be internal fluctuations or an irregular external signal, the organism would benefit from a higher sensitivity γ and faster entrainment rates during such epochs.
II. SELF-TUNED SENSITIVITY IN OSMOLARITY RESPONSE
Self-tuned sensitivity to new environmental information is broadly applicable beyond metabolically-coupled clocks. Here we model recent experiments showing similarly tuned sensitivity in the osmolarity regulation pathway in the budding yeast, S. cerevisiae.
Sudden external changes in osmolyte concentration can lead to physical rupture of cells if not rapidly counteracted [23]. S. cerevisiae reacts to an osmotic shock by producing intracellular glycerol in response [24]. Interestingly, the signaling between membrane receptors and glycerol production occurs via two distinct upstream branches that converge on the MAP kinase Hog1 in a Y-shaped motif [25][26][27]. In isolation, one of the pathways -the two-component Sln-SSk1 phospho-transfer -leads to a fast but inaccurate response, while the other pathway -the Sho-Ste11 kinase cascade -is slow but accurate in restoring osmotic equilibrium [12]. Strikingly, the wild-type, which combines information from both pathways, manages to show the speed of the fast pathway but the error of the slow pathway [12].
Representing the signaling activity of each branch at time t by M sln (t) and M sho (t), we model the joint regulation of glycerol as,
∂gly/∂t = α(t) γ_sln M_sln(t) + (1 − α(t)) γ_sho M_sho(t)    (2)
where γ_sln and γ_sho are the response speeds of each pathway, with γ_sho = γ_sln/2 as in the experiments of [27]. Here, the weight factor 0 < α < 1 prescribes the influence of each upstream pathway (Fig. 3a). One can consider more complex non-linear models of joint regulation; our results below only depend on whether the relative importance of the two pathways is static or dynamic.
Simulating the model, we reproduced the speed and accuracy behaviors of each branch in isolation by fixing α = 0 and 1 to emulate the Sln and Sho knockouts respectively; see Fig. 3c, d. We then explored joint, but static regulation of glycerol: a fixed α(t) = const. leads to a trade-off between speed and accuracy, just as with either pathway in isolation and unlike the experiment [12]; see Fig.3d and e. In the SI we argue that this limit corresponds to a single upstream pathway with a fixed, effective response speed between γ sln and γ sho .
We can explain the breaking of the trade-off by the wild-type if the information from the two pathways is integrated instead with a dynamic weight α(t) (Fig. 3c, d), that is raised by an osmotic pressure imbalance ∆P = P ext − P int (i.e., mismatch; Fig. 3c) and kept low otherwise (Fig. 3d). Thus, only by dynamically weighting inputs from each upstream pathway does the wild type leverage the desirable features of both.
This tunable speed of response in the yeast system is reminiscent of the circadian clock presented above. Unlike with the clock-glycogen coupling, however, the exact molecular mechanism responsible for tuning α(t) is currently unknown. Our model regulates α(t) according to the mismatch P_ext − P_int. The model in [12] explains experimental data using mutual inhibition between the two arms; such inhibition effectively implements a time-varying factor α(t) as well. Independent of the detailed molecular mechanism, the experimental data of [12], replotted in Fig. 3e, on speed and error for the wild type compared to knockouts presents a convincing case of self-tuned sensitivity.
III. DISCUSSION
We have presented experimentally constrained quantitative models of two biological systems that navigate a trade-off between speed and accuracy by self-regulating their sensitivity. We now present a general framework for self-tuned sensitivity, based on Kalman filtering, that encompasses both systems. We use this simplified general framework to demonstrate what the important effective parameters are, on what timescales these self-tuned mechanisms are useful and what kinds of experiments can reveal them.
Kalman filtering is an iterative Bayesian approach to combine uncertain measurements of the environmental state with uncertain internal predictions (or expectations) of what the environmental state should be. Kalman filters are usually presented as a prediction-measurement-update cycle. For simplicity, consider a (discrete-time) Kalman filter for tracking a particle moving in one dimension with average velocity v whose position is only periodically measured every δt seconds. Between these measurements, we can estimate (or predict) the particle position to be x_P(t) = x̂(t − δt) + vδt. Such predictions are assumed to be unreliable with variance σ²_int, e.g., because of fluctuations in particle velocity. At the end of this δt interval, the particle is measured to be at x_M(t) with uncertainty σ²_ext relative to the real position. Since measurements and predictions are both unreliable, predictions must be corrected by this measurement with a finite sensitivity γ,
x̂(t) = (1 − γ) x_P(t) (prediction) + γ x_M(t) (measurement).
The process then repeats with the corrected estimate, x̂(t).
Here, γ reflects sensitivity to new external information; large γ rapidly updates the internal state when internal and measured values disagree. Kalman's key idea was to iteratively update γ over time so as to reflect the relative unreliabilities of measurements x M and internal predictions x P . The literature contains numerous ways in which γ can be updated over time. Motivated by our biological examples, we focus on feedback based on mismatch (also called a generalized or adaptive Kalman filter [28]),
τ_γ dγ/dt = κ |x̂(t) − x_M(t)| − γ    (3)
With this general simplified setup, we investigate when such self-tuned sensitivity can provide an advantage. We compute the average tracking error Var(x̂(t) − x(t)) in a heterogeneous environment where predictions transiently have high error σ_int for periods of length τ_env. As shown in Fig. 4b, the adaptive strategy initially idles at low γ but, when predictions become noisy, γ starts to rise towards a high value γ_hi, thus lowering error. However, if τ_env is too short, γ cannot reach γ_hi before the epoch ends. We find that the mismatch-mediated feedback is only effective when (see SI):
τ_γ / τ_env < 1 + κ σ_int / (2 √π γ_hi^{3/2})    (4)
Thus, only when τ env is sufficiently long, and the stressful environment sufficiently adverse (high σ int ), does the adaptive Kalman filter leverage the benefits of a dynamic γ in the noisy environment, as seen in Figure 4(c).
The circadian clock and yeast stress response can be seen as examples of such generalized Kalman models. The clock corresponds to a model where x is periodic and v ∼ 1/24 hours. The epoch of high σ int could correspond to epochs of high internal fluctuations in clock phase (e.g., epochs of rapid growth [14]) that would benefit from fast and frequent re-entrainment. Osmolarity signaling corresponds to models with v = 0, i.e., the internal model assumes osmolarity is not changing in order to reject high frequency fluctuations in external pressure. Here, epochs of frequent real changes in external osmotic pressure are mathematically captured by epochs of high σ int in the Kalman framework. Finally, the experiments in [11] suggest τ γ of several days for clocks, while experiments in [12] suggest a fast τ γ for yeast that provides a benefit even for a single step change in osmolarity.
Our results here suggest how to design experiments to reveal self-tuned sensitivity mechanisms -experiments need to measure sensitivity after a period of priming τ env that lasts longer than the feedback timescale τ γ ; further, the intensity of perturbations σ int during this interval need to be strong enough.
In the context of the clock, experiments could, for example, measure the phase response curve after a period of priming. The priming protocol could use light-dark cycles, each an average of 12 hours, but where night falls at an unexpected time, i.e., not at subjective dusk. The difference between subjective dusk and arrival of dark sets σ int , while the total length of the protocol sets τ env . Our theory predicts that the measured sensitivity will be significantly greater after priming, if τ env > τ γ and for large enough σ int .
Feedback regulation is ubiquitous in biology. However, most known examples involve control or homeostasis problems where the quantity of physiological interest -e.g., osmotic pressure -is itself directly under feedback regulation; such regulation has been compared to PI controllers [10]. The Kalman-inspired feedback regulation of sensitivity discussed here is fundamentally distinct from such examples of control. Here, the sensitivity to new information (often called gain) is under feedback regulation and the quantity of interest such as osmotic pressure is regulated based on such information. Further, our work shows how self-regulation of sensitivity can naturally arise from inevitable couplings in the cell -in S. elongatus, the metabolic state is affected by clock performance and the metabolic state is, in turn, a globally relevant variable that affects clock sensitivity. We hope our work here will inspire experiments to test the history-dependence of sensitivity to new external information in diverse biophysical sensing pathways.

I. ADAPTIVE GAIN IN CIRCADIAN OSCILLATORS

Pertaining to Section I and Fig. 2 of the main text.

For pedagogical reasons, we begin our analysis with a generic model of a driven phase oscillator augmented with an adaptive gain circuit. The underlying equation of motion for the oscillator phase θ(t) is:
∂θ(t)/∂t = ω_0 + γ cos θ s(t)    (1)
Here, ω 0 is the intrinsic frequency of the oscillator. We measure time in units of days, and set ω 0 = 2π. The parameter γ, as discussed in the main text, is the gain or, alternatively, the magnitude of the infinitesmal phase response curve whose shape is cos θ. The external signal s(t) is decomposed into a regular signal and noise: s(t) = s 0 (t) + η(t). We take the regular signal s 0 (t) to be sinusoidal with frequency ω 0 : s 0 (t) = sin ω 0 t, and the corrupting noise signal to be white with variance σ 2 ext :
⟨η(t)⟩ = 0,   ⟨η(t)η(t′)⟩ = σ²_ext δ(t − t′)    (2)
We implement an adaptive gain by the following dynamics for γ:
τ_γ ∂γ/∂t = −(γ − γ_0) + K_mismatch M(θ, s(t)),   with M(θ, s(t)) = 1/2 − s(t) sin θ    (3)
The parameter K mismatch quantifies the influence of the mismatch circuit on the gain dynamics. The form of the mismatch term M(θ, s(t)) is chosen such that, for sufficiently long τ γ , τγ 0 dt M(θ, s(t)) evaluates to 0 when s(t) and θ(t) are in-phase, and > 0 otherwise. γ 0 is the resting value of γ in the absence of mismatch feedback. For the panels presented in Figure 2 in the main text, we use the parameters: σ ext = 0.5, γ 0 = 0.3, τ γ = 10 days, and K mismatch = 3. The equations are solved by Euler's method with a fixed time step dt = 10 −2 .
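A compact Python sketch of this model, using the Euler scheme and the parameter values quoted above, might look as follows (the 6-hour phase perturbation applied at day 20, the random seed, and the single-trajectory output are illustrative assumptions; the published simulations use an ensemble of oscillators rather than the single trajectory shown here).

import numpy as np

# Sketch of SI Eqs. (1)-(3): a phase oscillator with mismatch-driven adaptive gain.
dt, omega0 = 1e-2, 2 * np.pi                         # time measured in days
sigma_ext, gamma0, tau_g, K_mismatch = 0.5, 0.3, 10.0, 3.0

rng = np.random.default_rng(0)
theta, gamma = 0.0, gamma0
for step in range(int(40 / dt)):
    t = step * dt
    if abs(t - 20.0) < dt / 2:
        theta += np.pi / 2                           # sudden 6-hour phase shift (illustrative)
    # discretized white noise with <eta(t) eta(t')> = sigma_ext^2 delta(t - t')
    s = np.sin(omega0 * t) + sigma_ext * rng.standard_normal() / np.sqrt(dt)
    theta += (omega0 + gamma * np.cos(theta) * s) * dt                                      # Eq. (1)
    gamma += dt / tau_g * (-(gamma - gamma0) + K_mismatch * (0.5 - s * np.sin(theta)))      # Eq. (3)
    if step % int(5 / dt) == 0:
        err_hours = np.arccos(np.cos(theta - omega0 * t)) * 24 / (2 * np.pi)
        print("t = %4.1f d   gamma = %.2f   clock error = %4.1f h" % (t, gamma, err_hours))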
The response time and error of the clock is assessed in the following simulation. N phase oscillators are allowed to entrain to the signal s(t), each being exposed to an independent realisation of the external noise η(t). The resting error of the clock is quantified by the difference between the internal and external time. Denoting by φ(t) = ω 0 t the phase of s(t), this is written:
Error: ⟨arccos(cos θ cos φ + sin θ sin φ)⟩    (4)
where the average is over the ensemble of N phase oscillators. The resultant error in radians is converted to hours. The recovery time of the phase oscillator is measured as follows. An initially entrained population of oscillators is perturbed at t = 0 by a phase shift ∆θ, chosen to be a uniformly distributed value in [−φ_s, φ_s]. For the simulations presented in the main text, φ_s = 12 hours. The system is then evolved under the external signal s(t) until the population variability, defined as 1 − (⟨cos θ⟩² + ⟨sin θ⟩²), falls to within 10% of its resting value (indicating that the population is once again entrained).
II. METABOLIC FEEDBACK IN THE S. ELONGATUS CLOCK
Pertaining to Section I and Fig. 1 of the main text.
A. Model
To mimic the limit cycle clock of S. elongatus, we consider a Stuart-Landau oscillator of radius R around r 0 = (x 0 , y 0 ):
∂x/∂t = −ω_0 (y − y_0) + α [1 − ((x − x_0)² + (y − y_0)²)/R²] (x − x_0)
∂y/∂t = ω_0 (x − x_0) + α [1 − ((x − x_0)² + (y − y_0)²)/R²] (y − y_0)    (5)
When α > 0, and r 0 is constant, the system settles into a circular limit cycle with time period 2π/ω 0 . Once again, we measure time in units of days and set ω 0 = 2π. We fix the parameter α = 10.
The oscillator couples to the external signal s(t) through r 0 (t). We choose coordinates by setting y 0 = 0 and:
x_0(t) = γ (1 − s(t))    (6)
The external signal s(t) is taken to be a square wave (a '12-12 LD' cycle), with value 1 during the day and 0 during the night. Under these conditions, an entrained oscillator has y > 0 during the day and y < 0 during the night.
In S. elongatus, the Kai oscillator does not sense light directly but instead through metabolic intermediaries. We therefore take gain γ to be a function of the difference between day and night intracellular ATP levels:
γ = γ(∆ATP)    (7)
The level of ATP during the day is fixed by photosynthesis, and therefore by light levels; however, ATP levels at night come from intracellular energy storage in glycogen. We therefore have:
∆ATP = ∆ATP(glycogen)    (8)
Finally, glycogen itself is a dynamic quantity: made during the day and consumed during the night. As we describe in the main text, previous work has suggested that glycogen production occurs only when it is both subjectively and objectively day (i.e., s(t) = 1 and y > 0). We therefore write for its dynamics:
∂glycogen/∂t = λ_g α(t) − (1 − α(t)) k_g × glycogen,   with α(t) = Θ(s(t) × y(t))    (9)
Here, λ g is the production rate of glycogen during the day, and k g is the degradation rate of glycogen during the night. The indicator variable α(t) is 1 when both s(t) and y(t) are positive, and 0 otherwise. First, we directly extract the functional form of ∆ATP(glycogen), finding:
∆ATP = 72% − 1.6% × glycogen    (10)
where ∆ATP is measured in percentage and glycogen in µg per µg chlorophyll.
To extract the relationship Eq. 7, we need to know how PRC height varies as a function of γ in our model. We therefore perform simulated PRC 'experiments' with Eq. 5, mimicking the experimental protocol (i.e., LL interrupted by a five hour dark pulse [1]).
Matching values between the simulated PRC heights and the experimentally measured ones in Fig. 1, we compute a corresponding value of γ for each ∆ATP. We find that the relationship is roughly linear, and can be captured by:
γ = 0.26 + 1.2 × 10⁻² × ∆ATP    (11)
Finally, we fix the dynamics of glycogen by measuring the gain in glycogen levels over a single day. A linear fit gives us λ g ≈ 16. We estimate the degradation rate by choosing one such that the average level of glycogen for a healthy, entrained cell is ∼ 40 µg per µg chlorophyll, i.e., near the upper range probed experimentally (see Fig. 1b); we choose k g ≈ 0.4.
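Putting the pieces of this section together, the following Python sketch integrates the Stuart-Landau clock (Eq. 5) with the glycogen feedback (Eqs. 9-11) under a 12-12 LD cycle. The limit-cycle radius R, the initial conditions, and the 30-day run length are illustrative assumptions (R is not quoted in the text); the constants λ_g ≈ 16, k_g ≈ 0.4 and the linear relations of Eqs. 10-11 are taken from above.

import numpy as np

# Sketch of the metabolically coupled clock model (SI Eqs. 5-11); R and initial state assumed.
dt, omega0, a, R = 1e-3, 2 * np.pi, 10.0, 1.0
lam_g, k_g = 16.0, 0.4

def gain_from_glycogen(gly):
    d_atp = 72.0 - 1.6 * gly            # Eq. (10), Delta-ATP in percent
    return 0.26 + 1.2e-2 * d_atp        # Eq. (11)

x, y, gly = 1.0, 0.0, 40.0
for step in range(int(30 / dt)):
    t = step * dt
    s = 1.0 if (t % 1.0) < 0.5 else 0.0               # 12-12 LD square wave
    gamma = gain_from_glycogen(gly)
    x0 = gamma * (1.0 - s)                            # light couples through the cycle centre, Eq. (6)
    r2 = ((x - x0) ** 2 + y ** 2) / R ** 2
    dx = -omega0 * y + a * (1.0 - r2) * (x - x0)      # Eq. (5), with y0 = 0
    dy = omega0 * (x - x0) + a * (1.0 - r2) * y
    x, y = x + dx * dt, y + dy * dt
    gate = 1.0 if (s > 0.0 and y > 0.0) else 0.0      # subjectively AND objectively day
    gly += dt * (lam_g * gate - (1.0 - gate) * k_g * gly)   # Eq. (9)

print("resting glycogen ~ %.1f ug per ug chlorophyll, gain gamma ~ %.2f" % (gly, gain_from_glycogen(gly)))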
With parameters fixed, we go on to expose our simulated cells to periods of stress. We use the Euler method to solve the equations of motion, with a time step dt = 10 −3 . N = 2000 cells are first entrained to a clean 12-12 LD signal s(t). Then, at t = 0, each cell is exposed to a stochastic protocol of stress events. Each stress event corresponds to a phase shift of magnitude ∆φ hours, uniformly distributed in the interval ∆φ ∈ [−σ int , σ int ]. An exponentially distributed waiting time of mean value 1 day separates stress events; the protocol lasts for time τ env . The external signal s(t) continues as a 12-12 square wave during this time.
At the end of the stress period, the cells are shifted to LL (i.e., the signal s(t) = 1). Each cell undergoes a standard PRC protocol (i.e., a 5 hour dark pulse) as described above, and the PRC height is normalised to the resting PRC height (i.e., that computed from a cell that has not undergone the stress procedure).
III. THE OSMOTIC CIRCUIT
Pertaining to Section II and Fig. 3 of the main text.
To model the regulation of glycerol, g(t), by the external osmotic pressure P ext , we first consider the simplest possible regulatory motif:
∂g(t)/∂t = −(1/τ) (g(t) − P_ext(t))    (12)
Here, the timescale τ is the response time of the pathway, with the response speed γ generically defined to be 1/τ . Solving,
g(t) = ∫ dt′ γ e^{−γ(t−t′)} P_ext(t′)    (13)
A large γ corresponds to a rapid tracking of P ext (t) by g(t). However, if the external signal contains noise, P ext (t) → P ext (t) + η(t), a large γ (small τ ) is less able to average out the noise than a small γ circuit [2].
As discussed in the main text, glycerol receives inputs from both the fast and slow signalling pathways; the simplest model consistent with this is:
∂g(t)/∂t = −α(t) (1/τ_f) (g(t) − P_ext(t)) − (1 − α(t)) (1/τ_s) (g(t) − P_ext(t))    (14)
Here, τ f is the response time of the fast pathway and τ s the response time of the slow pathway. From experiments in [3], we estimate these to be 3s and 6s, respectively. The indicator variable α(t), which takes values between 0 and 1, decides which pathway regulates glycerol production. Finally, the external pressure is separated into a slowly changing component and rapid fluctuations, P ext (t) = P 0 (t) + η(t). For these simulations, we take the noise η(t) to be white, with variance σ 2 ext = 1. In Figure 3 in the main text, we contrast a static α (i.e., α(t) = const) circuit against a dynamic α(t) one. The former is equivalent to a single pathway regulating glycerol production, with an effective response timescale between τ f and τ s . The latter is implemented by writing α in terms of an auxiliary dynamical variable β, which measures the mismatch between internal and external pressure:
α(t) = β^n / (K_β^n + β^n),   ∂β(t)/∂t = −(1/τ_γ) [β(t) − (P_ext(t) − g(t))]    (15)
The form of α's dependence on β is chosen to resemble a switch, such that for much of the time α ≈ 0 or 1; we choose the Hill co-efficient to be n = 4. The other parameters are set as K β = 1 and τ γ = 6s.
To measure resting error and response time, we simulate the response of N = 3000 cells (with a timestep of dt = 0.01s) to a jump in external pressure from P 0 (t) = 10 to P 0 (t) = 20. The recovery time is measured as the average time taken for glycerol to come within 20% of its new value. The resting error is quantified as the population variance in glycerol levels just prior to the pressure jump.
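A minimal single-cell version of this protocol in Python is sketched below (one noisy trajectory rather than the N = 3000-cell ensemble, and negative mismatch is clipped to zero before evaluating the Hill switch; both simplifications are assumptions of the sketch).

import numpy as np

# Sketch of SI Eqs. (14)-(15): glycerol driven by fast/slow pathways with a dynamic weight alpha(t).
dt = 0.01                                   # seconds
tau_f, tau_s, tau_gain = 3.0, 6.0, 6.0
K_beta, n_hill, sigma_ext = 1.0, 4, 1.0

rng = np.random.default_rng(1)
g, beta = 10.0, 0.0
for step in range(int(120 / dt)):
    t = step * dt
    P0 = 10.0 if t < 60.0 else 20.0                          # step increase in osmotic pressure at t = 60 s
    P_ext = P0 + sigma_ext * rng.standard_normal()           # rapid external fluctuations
    b = max(beta, 0.0)                                       # clip negative mismatch (assumption)
    alpha = b ** n_hill / (K_beta ** n_hill + b ** n_hill)
    g += -dt * (alpha / tau_f + (1.0 - alpha) / tau_s) * (g - P_ext)   # Eq. (14)
    beta += -dt / tau_gain * (beta - (P_ext - g))                      # Eq. (15)
    if step % int(20 / dt) == 0:
        print("t = %5.1f s   glycerol = %5.2f   alpha = %.2f" % (t, g, alpha))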
IV. ADAPTIVE KALMAN FILTER
Pertaining to Section III and Fig. 4 of the main text.
We implement a discrete time Kalman filter for a particle with constant velocity v as:
Prediction: x_t^(P) = x̂_{t−1} + vΔt + η_int(t)
Measurement: x_t^(M) = x_t + η_ext(t)
Update: x̂_t = x_t^(P) + γ_t (x_t^(M) − x_t^(P))    (16)

Here, x_t is the actual position of the particle at time t, x_t^(M) is the measured position and x_t^(P) is the predicted position. These are combined with the gain γ_t to obtain the best estimate x̂_t. η_int and η_ext represent the error in prediction and measurement, respectively; here, we take them to be normally distributed with mean 0 and variances σ²_int and σ²_ext.
We prescribe for the gain γ t the following dynamics:
γ_{t+1} = γ_t + (Δt/τ_γ) [ κ |x̂_t − x_t^(M)| − γ_t ]    (17)
In the limit of large τ γ this is approximated by the differential equation:
τ_γ dγ/dt = κ |x̂_t − x_t^(M)| − γ    (18)
which is shown in the main text. We propagate the Kalman filter, Eqs. 16 and 17, numerically, with parameters v = 0.1, ∆t = 1, κ = 0.2 and σ ext = 0.5. The filter is first equilibrated in a 'clean' environment with low internal noise, σ int = 0.01; then, σ int is raised to a high value, σ int = 1, for a time τ env .
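For concreteness, a single-replicate Python sketch of this protocol is given below (the values of τ_γ, τ_env, the epoch placement, and the initial gain are illustrative choices; the text averages over N = 600 replicates and scans τ_γ and τ_env rather than using one fixed pair).

import numpy as np

# Sketch of the adaptive-gain Kalman filter of SI Eqs. (16)-(17), one replicate.
v, dt, kappa, sigma_ext = 0.1, 1.0, 0.2, 0.5
tau_gamma, tau_env, epoch_start = 50.0, 400, 300     # tau_env and epoch placement assumed

rng = np.random.default_rng(2)
x_true, x_hat, gamma = 0.0, 0.0, 0.1
sq_err = []
for step in range(1000):
    sigma_int = 1.0 if epoch_start <= step < epoch_start + tau_env else 0.01
    x_true += v * dt                                              # particle with constant velocity
    x_pred = x_hat + v * dt + sigma_int * rng.standard_normal()   # prediction, Eq. (16)
    x_meas = x_true + sigma_ext * rng.standard_normal()           # measurement, Eq. (16)
    x_hat = x_pred + gamma * (x_meas - x_pred)                    # update, Eq. (16)
    gamma += dt / tau_gamma * (kappa * abs(x_hat - x_meas) - gamma)   # adaptive gain, Eq. (17)
    sq_err.append((x_hat - x_true) ** 2)

print("rms tracking error = %.2f, gain at end of run = %.2f" % (np.sqrt(np.mean(sq_err)), gamma))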
Defining the instantaneous error σ²(t) ≡ (x̂_t − x_t)², we vary τ_γ and τ_env and compute the time-averaged error:
Tracking error:
(1/τ_env) ∫₀^{τ_env} dt σ²(t) ≡ (1/τ_env) ∫₀^{τ_env} dt (x̂_t − x_t)²    (19)
For each value of τ γ and τ env , the average is taken over an ensemble of N = 600 replicates. The resultant error is plotted in Figure 4c in the main text; we see that τ env needs to be long enough for the mismatch-mediated feedback to raise the gain γ and thereby lower the error.
To gain a more analytic understanding, we compute Eq. 19 from an approximate solution of Eqs. 16 and 17. The coupled dynamics of x̂ and γ are too difficult to solve directly; we instead work in an 'adiabatic' approximation -valid for large τ_γ -in which the statistics of x̂ are always at steady state. That is, at time t and gain γ(t), the instantaneous variance of x̂ is, from the steady state of Eq. 16:

σ²(t) = σ²(γ(t)) ≡ var x̂ = γ² σ²_ext + [(γ − 1)² / (1 − (γ − 1)²)] σ²_int    (20)

To solve for γ(t), we average the mismatch term in Eq. 18 to obtain:

τ_γ ∂γ/∂t = −γ + κ √(2/π) √[ (γ² + 1) σ²_ext + ((γ − 1)² / (1 − (γ − 1)²)) σ²_int ]    (21)

For any particular values of σ_ext and σ_int, the steady state value of γ follows by setting the LHS to 0. In our numerical protocol above, we change the value of σ_int over time. Let γ_lo be the steady-state value of γ when σ_int is low, and γ_hi be the steady-state value of γ when σ_int is high. At t = 0 the noise statistics change from a low to a high σ_int. The relaxation of γ from γ_lo to γ_hi may be approximately computed by linearising Eq. 21 about γ_hi:
∂γ/∂t ≈ −(1/τ)(γ − γ_hi),   with

τ = τ_γ [ 1 − κ √(2/π) ((2 − γ_hi)² γ_hi³ σ²_ext + (γ_hi − 1) σ²_int) / ((2 − γ_hi)² γ_hi² √((1 + γ_hi²) σ²_ext − (1 − γ_hi)² σ²_int / ((γ_hi − 2) γ_hi))) ]⁻¹ ≈ τ_γ [ 1 + κ σ_int / (2 √π γ_hi^{3/2}) ]⁻¹    (22)
where, in the last line, we have expanded around small γ hi . Only when τ env > τ , the relaxation time, does γ(t) reach γ hi and the error fall, as reported in the main text.
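As a quick numerical illustration of the final estimate in Eq. (22) and the resulting condition of main-text Eq. (4), one can evaluate the relaxation time τ for a representative parameter set (the values below are illustrative assumptions, chosen only to show the size of the effect).

import numpy as np

# Worked example of the small-gamma_hi estimate in Eq. (22) and the condition of main-text Eq. (4).
kappa, sigma_int, gamma_hi, tau_gamma = 0.2, 1.0, 0.3, 50.0    # illustrative values

speedup = 1.0 + kappa * sigma_int / (2.0 * np.sqrt(np.pi) * gamma_hi ** 1.5)
tau_relax = tau_gamma / speedup                                # time for gamma to reach gamma_hi

print("relaxation time of the gain: tau ~ %.1f (vs tau_gamma = %.0f)" % (tau_relax, tau_gamma))
print("adaptation pays off when tau_env > %.1f, i.e. tau_gamma / tau_env < %.2f" % (tau_relax, speedup))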
FIG. 2. Self-tuned sensitivity allows fast and yet accurate response in heterogeneous environments. a, b Clock error (average error from objective time, top) and sensitivity γ(t) (bottom) in response to a sudden phase shift (red triangle, a). By raising sensitivity γ only when necessary, the self-tuned clock (orange) entrains as fast as a fixed high γ clock (green) but with the resting error of a low (purple) γ clock, thus c beating speed-accuracy trade-offs inherent to fixed-γ clocks (black dots). Fitting the data in Fig. 1, we simulated an epoch of random repeated shifts of clock state by a typical amount σ_int hours, causing repeated mismatch with environmental light. d Intracellular glycogen, averaged over N = 200 cells, falls during this epoch; thus, e clock sensitivity (quantified by PRC height, after a period τ_env of disturbances, relative to sensitivity in undisturbed conditions) rises.
FIG. 3. Dynamic switching between fast and slow osmotic pressure response pathways in S. cerevisiae. a External osmotic pressure Pext affects internal glycerol production through a fast (high-γ) phosphorelay pathway and a slow (low-γ) kinase cascade. We combine the two pathways using a time-dependent relative weight α(t) set by osmotic mismatch Pext − Pint. b The dynamic switching α(t) (orange) model filters fluctuations in Pext as effectively as the slow (purple) pathway operating in isolation, but also c matches the fast pathway's speed in recovering from an osmotic shock. d Hence the dynamic α(t) model beats the speed-accuracy trade-off inherent to any fixed-α circuit (black dots). e Experiments in [12] reveal a similar violation of the trade-off by wild type cells as compared to single pathway knockouts by measuring cell death in response to fluctuations and to step changes of Pext.
FIG. 4. Adaptive gain out-performs fixed-gain in sufficiently heterogeneous environments. a A generic Kalman filter iteratively estimates the position x(t) of a moving particle by correcting predictions (purple) with measurements (green) that are weighted by a finite gain γ(t). b (bottom) Adaptive γ(t) (orange, Eq. 3) rises in response to a noisy environment (gray box) lasting time τ_env. (top) Resulting error in tracking (purple, green, orange are fixed low, fixed high, adaptive γ resp.). c Adaptive γ lowers error relative to fixed low γ (purple) only when τ_env > τ_γ. We varied τ_env (symbols) and τ_γ and collapsed by scaling τ_γ/τ_env (inset: uncollapsed). Solid black curve is an approximate analytical calculation (see SI).
B. Simulated Noisy Epochs in the S. elongatus clock

Pertaining to Fig. 2 of the main text.

Eqs. 5 and 9, along with the algebraic relations Eqs. 7 and 8, constitute our model for the S. elongatus circadian clock. To fix parameters and functional forms, we turn to experimental data. The data we have at hand are:

• ∆ATP as a function of glycogen, Fig. 1b.
• Phase response curve (PRC) height as a function of ∆ATP, Fig. 1c.
• Time-series of glycogen over one day, Fig. 1d.
The phase portrait of Eq. 5, showing the L (day, yellow) and D (night, grey) limit cycles. The sensitivity γ, in this model, is defined as the distance between the two limit cycles. b,c,d Experimental data from[1]; in b, dashed line shows Eq. 10, while in d the solid lines are predicted glycogen accumulation from Eq. 9. e A simulated PRC from Eqs. 5, with a fixed γ (here taken to be 0.8). The height of the simulated PRC is measured for varying γ and compared against the data in c to obtain f. The dashed line in f is Eq. 10.
FIG. 1. The clock-metabolism coupling in S. elongatus can self-regulate sensitivity to external light. a The light sensitivity γ quantifies how quickly a free-running circadian clock is phase entrained by the external day-night light cycle. b Sensitivity γ in S. elongatus is self-regulated by the clock-metabolism feedback with c experimentally quantified couplings[11]. (1) Phase response curve height (a proxy for γ) grows with ∆ATP = ATP day -ATP night . (2) ∆ATP falls with increasing intracellular glycogen levels. (3) Glycogen production is gated by the clock; hence glycogen levels fall during subjective night (gray) in constant light. d Consequently, sensitivity γ is dynamic, and can be tuned by clock accuracy, i.e. when clock output is mismatched with day-night light signals, glycogen falls and hence γ increases.a
Acknowledgements: KH thanks the James S McDonnell Foundation for support via a Postdoctoral Fellowship. AM thanks the Simons Foundation for support. We are grateful to Amir Bitran, Ofer Kimchi, Mirna Kramar, Amanda Parker, and Ching-Hao Wang for early work on the project at the Cargese Summer School on Theoretical Biophysics (2017), to Catherine Triandafillou and Aaron Dinner for a careful reading of the manuscript, and to the Murugan and Rust groups for many critical discussions. We acknowledge the University of Chicago Research Computing Center for computing resources.

Solving Eq. 22, we obtain:

γ(t) = γ_hi + (γ_lo − γ_hi) e^{−t/τ}    (23)

Inserting this expression into Eq. 20 and integrating from t = 0 to t = τ_env, we obtain our (somewhat cumbersome) prediction for the average tracking error, Eq. 19; the resulting expression, Eq. 24, is plotted in Fig. 4c of the main text. Note that Eq. 24 is a function only of the ratio τ/τ_env.
A simple coding procedure enhances a neuron's information capacity. S Laughlin, Z. Naturforsch. C. 369S Laughlin. A simple coding procedure enhances a neu- ron's information capacity. Z. Naturforsch. C, 36(9- 10):910-912, September 1981.
Oscillatory stress stimulation uncovers an achilles' heel of the yeast MAPK signaling network. Amir Mitchell, Ping Wei, Wendell A Lim, Science. 3506266Amir Mitchell, Ping Wei, and Wendell A Lim. Oscillatory stress stimulation uncovers an achilles' heel of the yeast MAPK signaling network. Science, 350(6266):1379-1383, December 2015.
Modeling the chemotactic response of escherichia coli to time-varying stimuli. Yuhai Tu, S Thomas, Howard C Shimizu, Berg, Proc. Natl. Acad. Sci. U. S. A. 10539Yuhai Tu, Thomas S Shimizu, and Howard C Berg. Modeling the chemotactic response of escherichia coli to time-varying stimuli. Proc. Natl. Acad. Sci. U. S. A., 105(39):14855-14860, September 2008.
The Geometry of Biological Time. Arthur T Winfree, Springer Science & Business MediaArthur T Winfree. The Geometry of Biological Time. Springer Science & Business Media, June 2001.
How a well-adapted immune system is organized. Andreas Mayer, Vijay Balasubramanian, Thierry Mora, Aleksandra M Walczak, Proceedings of the National Academy of Sciences. 11219Andreas Mayer, Vijay Balasubramanian, Thierry Mora, and Aleksandra M Walczak. How a well-adapted immune system is organized. Proceedings of the National Academy of Sciences, 112(19):5950-5955, May 2015.
A new approach to linear filtering and prediction problems. R E Kalman, Journal of Basic Engineering. 82135R. E. Kalman. A new approach to linear filtering and pre- diction problems. Journal of Basic Engineering, 82(1):35, 1960.
Applications of kalman filtering in aerospace 1960 to the present. M S Grewal, A P Andrews, IEEE Control Syst. 303historical perspectivesM S Grewal and A P Andrews. Applications of kalman filtering in aerospace 1960 to the present [historical per- spectives]. IEEE Control Syst., 30(3):69-78, June 2010.
Analysis of video-based microscopic particle trajectories using kalman filtering. Pei-Hsun Wu, Ashutosh Agarwal, Henry Hess, Pramod P Khargonekar, Yiider Tseng, Biophysical Journal. 9812Pei-Hsun Wu, Ashutosh Agarwal, Henry Hess, Pramod P. Khargonekar, and Yiider Tseng. Analysis of video-based microscopic particle trajectories using kalman filtering. Biophysical Journal, 98(12):2822-2830, jun 2010.
Molecular circuits for dynamic noise filtering. Christoph Zechner, Georg Seelig, Marc Rullan, Mustafa Khammash, Proc. Natl. Acad. Sci. U. S. A. 11317Christoph Zechner, Georg Seelig, Marc Rullan, and Mustafa Khammash. Molecular circuits for dynamic noise filtering. Proc. Natl. Acad. Sci. U. S. A., 113(17):4729-4734, April 2016.
Robust perfect adaptation in bacterial chemotaxis through integral feedback control. Y Huang, M I Simon, Doyle, National Acad Sciences. Y Huang, M I Simon, J Doyle Proceedings of the, and 2000. Robust perfect adaptation in bacterial chemotaxis through integral feedback control. National Acad Sci- ences, 2000.
Rhythms in energy storage control the ability of the cyanobacterial circadian clock to reset. Connie Gopal K Pattanayak, Michael J Phong, Rust, Curr. Biol. 2416Gopal K Pattanayak, Connie Phong, and Michael J Rust. Rhythms in energy storage control the ability of the cyanobacterial circadian clock to reset. Curr. Biol., 24(16):1934-1938, August 2014.
Distributing tasks via multiple input pathways increases cellular survival in stress. Alejandro A Granados, M Matthew, Luis F Crane, Reiko J Montano-Gutierrez, Margaritis Tanaka, Peter S Voliotis, Swain, Elife. 6Alejandro A Granados, Matthew M Crane, Luis F Montano-Gutierrez, Reiko J Tanaka, Margaritis Volio- tis, and Peter S Swain. Distributing tasks via multiple input pathways increases cellular survival in stress. Elife, 6, May 2017.
Toward a detailed computational model for the mammalian circadian clock. Jean- , Christophe Leloup, Albert Goldbeter, Proc. Natl. Acad. Sci. U. S. A. 10012Jean-Christophe Leloup and Albert Goldbeter. Toward a detailed computational model for the mammalian circa- dian clock. Proc. Natl. Acad. Sci. U. S. A., 100(12):7051- 7056, June 2003.
|
[] |
[
"Security Analysis of a Password-Based Authentication Protocol Proposed to IEEE 1363",
"Security Analysis of a Password-Based Authentication Protocol Proposed to IEEE 1363"
] |
[
"Zhu Zhao es:[email protected] \nGanSu Province\nHexi University\nZhangYe CityP.R.China\n",
"Zhongqi Dong \nGanSu Province\nLanzhou University\nLanzhou CityP.R.China\n",
"Yongge Wang \nSIS Department\nUNC Charlotte\nCharlotteNCUSA\n"
] |
[
"GanSu Province\nHexi University\nZhangYe CityP.R.China",
"GanSu Province\nLanzhou University\nLanzhou CityP.R.China",
"SIS Department\nUNC Charlotte\nCharlotteNCUSA"
] |
[] |
In recent years, several protocols for password-based authenticated key exchange have been proposed. These protocols aim to be secure even though the sample space of passwords may be small enough to be enumerated by an off-line adversary. In Eurocrypt 2000, Bellare, Pointcheval and Rogaway (BPR) presented a model and security definition for authenticated key exchange. They claimed that in the idealcipher model (random oracles), the two-flow protocol at the core of Encrypted Key Exchange (EKE) is secure. Bellare and Rogaway suggested several instantiations of the ideal cipher in their proposal to the IEEE P1363.2 working group. Since then there has been an increased interest in proving the security of password-based protocols in the ideal-cipher model. For example, Bresson, Chevassut, and Pointcheval have recently showed that the OEKE protocol is secure in the ideal cipher model. In this paper, we present examples of real (NOT ideal) ciphers (including naive implementations of the instantiations proposed to IEEE P1363.2) that would result in broken instantiations of the idealized AuthA protocol and OEKE protocol. Our result shows that the AuthA protocol can be instantiated in an insecure way, and that there are no well defined (let alone rigorous) ways to distinguish between secure and insecure instantiations. Thus, without a rigorous metric for ideal-ciphers, the value of provable security in ideal cipher model is limited.
|
10.1016/j.tcs.2005.11.038
|
[
"https://arxiv.org/pdf/1207.5442v1.pdf"
] | 11,618,269 |
1207.5442
|
29614dbb248f584288cf6afb9c6d91eba8a4d6e4
|
Security Analysis of a Password-Based Authentication Protocol Proposed to IEEE 1363
23 Jul 2012
Zhu Zhao es:[email protected]
GanSu Province
Hexi University
ZhangYe CityP.R.China
Zhongqi Dong
GanSu Province
Lanzhou University
Lanzhou CityP.R.China
Yongge Wang
SIS Department
UNC Charlotte
CharlotteNCUSA
Security Analysis of a Password-Based Authentication Protocol Proposed to IEEE 1363
23 Jul 2012. Preprint submitted to Theoretical Computer Science, 2 May 2014. arXiv:1207.5442v1 [cs.CR]. (Zhongqi Dong), [email protected] (Yongge Wang). Keywords: password-based key agreement, dictionary attacks, AuthA, EKE.
In recent years, several protocols for password-based authenticated key exchange have been proposed. These protocols aim to be secure even though the sample space of passwords may be small enough to be enumerated by an off-line adversary. In Eurocrypt 2000, Bellare, Pointcheval and Rogaway (BPR) presented a model and security definition for authenticated key exchange. They claimed that in the ideal-cipher model (random oracles), the two-flow protocol at the core of Encrypted Key Exchange (EKE) is secure. Bellare and Rogaway suggested several instantiations of the ideal cipher in their proposal to the IEEE P1363.2 working group. Since then there has been an increased interest in proving the security of password-based protocols in the ideal-cipher model. For example, Bresson, Chevassut, and Pointcheval have recently shown that the OEKE protocol is secure in the ideal cipher model. In this paper, we present examples of real (NOT ideal) ciphers (including naive implementations of the instantiations proposed to IEEE P1363.2) that would result in broken instantiations of the idealized AuthA protocol and OEKE protocol. Our result shows that the AuthA protocol can be instantiated in an insecure way, and that there are no well defined (let alone rigorous) ways to distinguish between secure and insecure instantiations. Thus, without a rigorous metric for ideal-ciphers, the value of provable security in the ideal-cipher model is limited.
Introduction
Numerous cryptographic protocols rely on passwords selected by users (people) for strong authentication. Since users find it inconvenient to remember long passwords, they typically select short, easily remembered passwords. In these cases, the sample space of passwords may be small enough to be enumerated by an adversary, thereby making the protocols vulnerable to a dictionary attack. It is therefore desirable to design password-based protocols that resist off-line dictionary attacks.
The password-based protocol problem was first studied by Gong, Lomas, Needham, and Saltzer [10] who used public-key encryption to guard against off-line password-guessing attacks. In another very influential work [4], Bellovin and Merritt introduced Encrypted Key Exchange (EKE), which became the basis for many of the subsequent works in this area. These protocols include SPEKE [13] and SRP [25,26]. Other papers addressing the above protocol problem can be found in [7,9,11,16]. Bellare, Pointcheval, and Rogaway [2] defined a model for the password-based protocol problem and claimed that their model is rich enough to deal with password guessing, forward secrecy, server compromise, and loss of session keys. Then they claimed that in the ideal-cipher model (random oracles), the two-flow protocol at the core of Encrypted Key-Exchange (EKE) is secure. In addition, Bellare and Rogaway [3] suggested several instantiations (AuthA) of the ideal-cipher in their proposal to the IEEE P1363.2 Working Group. Recently, Bresson, Chevassut, and Pointcheval [8] proposed a simplified version of AuthA, which is called OEKE, and showed that OEKE achieves provable security against dictionary attacks in both the random oracle and ideal-cipher models under the computational Diffie-Hellman intractability assumption.
The ideal-cipher model was introduced by Bellare, Pointcheval, and Rogaway [2] as follows. Fix finite sets of strings G and C where |G| = |C|. In the ideal-cipher model, choosing a random function h from Ω amounts to giving the protocol (and the adversary) a perfect way to encipher strings in G: namely, for K ∈ {0, 1} * , we set E K : G → C to be a random bijective function, and we let D K : {0, 1} * → G be defined by letting D K (y) be the value x such that E K (x) = y if y ∈ C, and be undefined otherwise. This paper studies the security issues with practical realizations of the ideal cipher model of Bellare, Pointcheval, and Rogaway [2]. We show that for several instantiations of the ideal-cipher (including naive implementations of instantiations suggested in [12]), the instantiated Bellare-Rogaway protocol (AuthA) is not secure against off-line dictionary attacks. Our results show that realizing the ideal-cipher of Bellare, Pointcheval, and Rogaway can be tricky. In particular, our results point out a weakness in the ideal-cipher methodology of Bellare, Pointcheval, and Rogaway. That is, without a robust measuring method for deciding whether a given cipher is a "good realization" of the ideal-cipher, ideal-cipher model analysis [2,8] of a password-based protocol can be of limited value. Indeed, there is no well defined (let alone rigorous) way in [2] to distinguish between secure and insecure instantiations of an ideal-cipher. Note that Black and Rogaway [5] have done some initial research on potential implementations of ideal-ciphers with arbitrary finite domains. However, their work is still far from a complete solution.
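To make the ideal-cipher oracle above concrete, the following is a minimal Python sketch (our own, not from [2]) of the lazy-sampling simulation usually used for such an oracle: each key lazily accumulates a random bijection between G and C, here taken to be the same finite set.

```python
import secrets

class IdealCipher:
    """Lazily sampled ideal cipher E_K : G -> C with D_K its inverse.

    `domain` plays the role of both G and C (same cardinality).  Values are
    sampled only when first queried, so unqueried points stay uniform.
    """

    def __init__(self, domain):
        self.domain = list(domain)
        self.enc = {}        # key -> {plaintext: ciphertext}
        self.dec = {}        # key -> {ciphertext: plaintext}

    def E(self, key, x):
        table = self.enc.setdefault(key, {})
        if x not in table:
            free = [c for c in self.domain if c not in self.dec.get(key, {})]
            c = secrets.choice(free)
            table[x] = c
            self.dec.setdefault(key, {})[c] = x
        return table[x]

    def D(self, key, y):
        table = self.dec.setdefault(key, {})
        if y not in table:
            free = [x for x in self.domain if x not in self.enc.get(key, {})]
            x = secrets.choice(free)
            table[y] = x
            self.enc.setdefault(key, {})[x] = y
        return table[y]

# toy usage: the "group" is just the integers 0..16 here
oracle = IdealCipher(range(17))
c = oracle.E("password123", 5)
assert oracle.D("password123", c) == 5
```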
One of the main applications of password-based protocols is in the environment of wireless and other more constrained devices (e.g., secure downloading of private credentials: SACRED [20]). Elliptic Curve Cryptography (ECC) has been extensively used in these constrained devices. However, most of the suggested password-based protocols are described in the group (or subgroup of) G = Z * p , and are either non-friendly or non-secure for ECC-based groups. For example, SRP [25,26] is based on a field and used both field operations of addition and multiplication, but ECC groups only have one group operation. Several ECC-based SRP protocols have been introduced in Lee and Lee [14]. We will show that one of these protocols is completely insecure. We will also discuss the security issues related ECC-based SRP protocols. As an example, we will also present a variant SRP5 of the original SRP protocol.
The organization of the paper is as follows: In Section 2 we informally address the security problems of password-based protocols. We mount attacks on several instantiations of Bellare, Pointcheval, and Rogaway's AuthA protocol and on instantiations of Bresson, Chevassut, and Pointcheval's OEKE protocol in Sections 3 and 4 respectively. In Section 5 we briefly discuss instantiations of OEKE and the SRP protocol. We draw our conclusions in Section 6.
Security of password authentication
Halevi and Krawczyk [11, Sections 2.2-2.3] introduced a notion of security for password authentication. They provide a list of basic attacks that a password-based protocol needs to guard against. In the following, we provide the list of attacks. An ideal password protocol should be secure against these attacks and we will follow these criteria when we discuss the security of password protocols.
• Eavesdropping. The attacker may observe the communications channel.
• Replay. The attacker records messages she has observed and re-sends them at a later time. • Man-in-the-middle. The attacker intercepts the messages sent between the parties C and S and replaces these with her own messages. She plays the role of the client in the messages which she sends to the server, and at the same time plays the role of the server in the messages that she sends to the client. A special man-in-the-middle attack is the small subgroup attack [15,18,23]. We illustrate this kind of attack by a small example. Let g be a generator of the group G of order n = qt for some small t > 1. In a standard Diffie-Hellman key exchange protocol, the client C chooses a random x and sends g x to the server S, then S chooses a random y and sends g y to C. The shared key between C and S is g xy . Now assume that the attacker A intercepts C's message g x , replaces it with g xq , and sends it to S. A also intercepts S's message g y , replaces it with g yq , and sends it to C. In the end, both C and S compute the shared key g qxy . Since g qxy lies in the subgroup of order t generated by g q , it takes on one of only t possible values, so A can easily recover g qxy by an exhaustive search (a toy numerical run of this attack is sketched right after this list). • Impersonation. The attacker impersonates the client or the server to get some useful information.
• Password-guessing. The attacker is assumed to have access to a relatively small dictionary of words that likely includes the secret password α. In an off-line attack, the attacker records past communications and searches for a word in the dictionary that is consistent with the recorded communications.
In an on-line attack, the attacker repeatedly picks a password from the dictionary and attempts to impersonate C or S. If the impersonation fails, the attacker removes this password from the dictionary and tries again, using a different password. • Partition attack. The attacker records past communications, then goes over the dictionary and deletes those words that are not consistent with the recorded communications from the dictionary. After several tries, the attacker's dictionary could become very small.
We now informally sketch the definition of security in [11] for a password-based protocol. The attacker A is allowed to watch regular runs of the protocol between the client C and the server S, and can also actively communicate with C and S in replay, impersonation, and man-in-the-middle attacks. A protocol is said to be secure in the presence of such an attacker if (i) whenever the server S accepts an authentication session with C, it is the case that C did indeed participate in the authentication session; and (ii) whenever C accepts an authentication session with S, it is the case that S did indeed participate in the authentication session.
3 Security issues with practical realizations of the ideal cipher model: on Bellare and Rogaway's AuthA
In the remainder of this paper, we will use the following notations: By G = g , we denote a cyclic group generated by g, and by ord(g), we denote the order of g. For a symmetric encryption scheme E and a key π, E π (x) denotes the ciphertext of x. We also assume that the client C holds a password α and the server S holds a key β which is a known function of α. In a protocol for a symmetric model, the client and the server share the same password, that is, β = α. In this paper, we will abuse our notation by letting C and S also denote corresponding parties' identification strings. In a protocol for an asymmetric model, β will typically be chosen so that it is hard to compute α from C, S, and β. The password α might be a poor one. Probably the user selects some short easily-rememberable α and then installed β at the server. In the protocols, H is used to denote a secure hash function. We will also abuse our notation by using C (respectively, S) to denote the identity number of the client (respectively, the server).
The AuthA protocol
Bellare, Pointcheval, and Rogaway [2] defined a model for the password-based protocol problem and showed that their model is rich enough to deal with password guessing, forward secrecy, server compromise, and loss of session keys. Then they proved that in the ideal-cipher model (random oracles), the two-flow protocol at the core of Encrypted Key Exchange (EKE) is secure. In addition, Bellare and Rogaway [3] suggested several instantiations of the ideal-cipher in their proposal to the IEEE P1363.2 working group. In the protocol, the server S stores the value C, β for each client C, where β = g α . The protocol proceeds as follows:
(1) C chooses a random x ∈ [1, ord(g) − 1], computes g x , encrypts it with β, and sends the ciphertext E β (g x ) to the server S.
(2) S chooses a random y ∈ [1, ord(g) − 1], computes g y , encrypts it with β, and sends the ciphertext E β (g y ) to C.
(3) AuthA authentication steps. Let K = H(C||S||g x ||g y ||g xy ). Then there are three authentication methods for AuthA: (a) the server authenticates himself by sending H(K||2) to C; (b) the client authenticates himself by sending H(K||g αy ) to S; (c) both server and client achieve mutual authentication by sending both of the messages in the above two steps.
The authors of [2] claimed that if the encryption function E is given by an ideal-cipher (random oracle), then the first-two-step sub-protocol (of AuthA) at the core of EKE is provably secure in their model. In the following sections, we present examples of real (NOT ideal) ciphers (including two naive implementations of the three instantiations proposed to IEEE P1363.2) that would result in broken instantiations of the idealized AuthA protocol. Indeed, in [2], the authors warn that "incorrect instantiation of the encryption primitive, including instances which are quite acceptable in other contexts, can easily destroy the protocol's security". Our examples confirm this argument.
Instantiation E β (X) = X · g H(β)
Assume that H is a random oracle. Bellare and Rogaway [3] suggested the instantiation E β (X) = X · H(β) of the ideal-cipher. Obviously, this is far from an ideal cipher. However, this misleading instantiation will give one the impression that E β (X) = X · g H(β) could also be a "reasonable" instantiation of the ideal-cipher. Indeed, one may wonder: if E β (X) = X · H(β) is an ideal cipher, why is E β (X) = X · g H(β) not? In the following, we describe our attack on the two-step protocol with the instantiation E β (X) = X · g H(β) .
No matter whether there is an authentication step (as in AuthA) or not, our attack works for the two-step protocol. If there is an authentication step, then the adversary A will launch impersonation attacks and use the authentication messages to verify whether the guessed password is a correct one. Without loss of generality, we assume that the server sends the first authentication message if any authentication message is ever sent between C and S (if the first authentication message is sent from client to server, then the following attack works when the adversary impersonates the server). If there is no authentication step, then the adversary cannot check directly whether a guessed password is a correct one. However, in practice, the established session key will be used either to encrypt the actual data for the application protocol or to encrypt the client's private credential (e.g., the client's private key). In either case, the adversary A can verify whether the guessed password is a correct one by checking the redundancy in these encrypted data. Specifically, consider the following scenario. A impersonates the client, chooses a random z, and sends g z to the server. The server S chooses a random y, sends g y+H(β) to A, and computes the shared key K = H(C||S||g z−H(β) ||g y ||g (z−H(β))y ). A distinguishes the following three cases (a runnable toy sketch of case (1) is given right after this list):
(1) This is an AuthA protocol and S sends H(K||2) to A for authentication. For each guessed β ′ , A computes
K ′ = H(C||S||g z−H(β ′ ) ||g y+H(β)−H(β ′ ) ||g (y+H(β)−H(β ′ ))(z−H(β ′ )) ).
Note that if β ′ = β, then K ′ = K and H(K||2) = H(K ′ ||2). Thus A can decide whether β ′ is the correct password. (2) S sends E K (m) to A, where m is some application data and has sufficient redundancy. For each guessed β ′ , A computes K ′ as in the above item 1 and decrypts E K (m) as m ′ = E −1 K ′ (E K (m)). If β ′ = β, then K ′ = K and m ′ = m. Thus by checking the redundancy in m ′ , A can decide whether she has guessed the password correctly.
(3) S sends E K (π) to A, where π is C's private key encrypted with C's password α. Similarly, for each guessed α ′ , A first computes β ′ , then computes K ′ as in the above item 1 and decrypts E K (π) as π ′ = E −1 K ′ (E K (π)). If β ′ = β, then K ′ = K and π ′ = π. A further decrypts π ′ with α ′ to see whether the decrypted value is the private key of C. Since A knows C's public key, she can easily verify this fact. Thus, A can decide whether she has guessed the password correctly.
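The following Python sketch runs case (1) above end to end on toy parameters. The group, the hash H, the tiny dictionary, and the convention of hashing the password to the exponent are all illustrative simplifications of ours; the attack logic follows the computation of K ′ displayed above.

```python
# Off-line dictionary attack against the instantiation E_beta(X) = X * g^H(beta),
# with the adversary impersonating the client (case (1) above).
import hashlib, secrets

p = 2**127 - 1            # illustrative prime; not a recommended DH modulus
g = 3

def H(*parts):
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

dictionary = ["hunter2", "letmein", "correct horse", "password1"]
alpha = secrets.choice(dictionary)          # client's weak password
beta = pow(g, H(alpha), p)                  # beta = g^alpha, with the password
                                            # hashed to an exponent for simplicity

# --- adversary impersonates the client ---
z = secrets.randbelow(p - 1) + 1
msg_to_server = pow(g, z, p)                # A sends g^z in place of E_beta(g^x)

# --- honest server's side ---
y = secrets.randbelow(p - 1) + 1
server_reply = (pow(g, y, p) * pow(g, H(beta), p)) % p            # E_beta(g^y)
gx_server = (msg_to_server * pow(pow(g, H(beta), p), -1, p)) % p  # server "decrypts" g^z
K = H("C", "S", gx_server, pow(g, y, p), pow(gx_server, y, p))
auth = H(K, 2)                              # server's authenticator H(K||2)

# --- off-line dictionary search by the adversary ---
for guess in dictionary:
    beta_g = pow(g, H(guess), p)
    gx_guess = (msg_to_server * pow(pow(g, H(beta_g), p), -1, p)) % p
    gy_guess = (server_reply * pow(pow(g, H(beta_g), p), -1, p)) % p
    dh_guess = pow(gy_guess, (z - H(beta_g)) % (p - 1), p)
    if H(H("C", "S", gx_guess, gy_guess, dh_guess), 2) == auth:
        print("recovered password:", guess)
        break
```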
The above attack demonstrates the inherent weakness in the "ideal-cipher model methodology" by Bellare, Pointcheval, and Rogaway [2]. That is, without a robust measuring method for deciding whether a given cipher is a "good realization", ideal-cipher model analysis of a password-based protocol can be of limited value. Indeed, there is no well defined (let alone rigorous) way in [2] to distinguish between secure and insecure instantiations of an ideal-cipher.
Instantiation E β (X) = X · H(β)
The first ideal-cipher instantiation for AuthA in [3] is: E β (X) = X · H(β). The authors suggested that the group G = g could be a group on which the Diffie-Hellman problem is hard:
...This group could be G = Z * p , or it could be a prime-order subgroup of this group, or it could be an elliptic curve group...(from [2]) After the introduction of the instantiation function, the authors [3] commented that "you apply the mask generation function H to β, interpret the result as a group element, and multiply by the plaintext". However, for most implementations, one may ignore this comment and just multiply the hash result with the plaintext. Naively, one can also interpret the hash result H(β) as a group element g H(β) . Then our attacks in Section 3.2 show that this instantiation is not secure. Indeed, from the ideal-cipher assumption, it is not clear that one needs to interpret the hash result as a group element other than g H(β) . One may feel that both X · H(β) and X · g H(β) can be regarded as acceptable instantiations of the ideal cipher over Z * p (why not?). In the following, we mount an off-line dictionary attack on this instantiation without interpreting the result as a group element.
Our attack in Section 3.2 does not work for AuthA with this instantiation. However, we can show that this instantiation leaks some information about the password α if the group is a subgroup of Z * p or an elliptic curve group. As an example, we illustrate the information leakage of AuthA with a subgroup of Z * p . Assume that p = tq + 1 with gcd(t, q) = 1. In practice, one generally chooses p = 2q + 1 for some large prime q (see, e.g., [17]), and ord(g) = q.
In the attack, the eavesdropper A intercepts the message g x · H(β) and computes (H(β)) q = (g x · H(β)) q . For each guessed β ′ , A checks whether (H(β ′ )) q = (H(β)) q . If the equation does not hold, then A deletes β ′ from her dictionary. Since H is a random oracle, the value (H(x)) q is uniformly distributed over the set {(g 1 ) q , (g 1 ) 2q , . . . , (g 1 ) tq } when x is chosen at random, where g 1 is a generator of Z * p . That is, Z * p = g 1 . Thus, log t bits of information about the password are leaked for each communication between the client and the server with different Diffie-Hellman parameters, and after ⌈ |α| / log t ⌉ such observations the adversary will recover the password with high probability.
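A toy run of this partition attack, with the masking value H(β) taken in Z * p rather than in the order-q subgroup, can be sketched as follows; the parameters p = 4·499 + 1 and the small dictionary are illustrative choices of ours.

```python
# Partition attack on E_beta(X) = X * H(beta) over a subgroup of Z_p^*:
# raising the intercepted flow to the q-th power kills the g^x factor.
import hashlib, secrets

q_ = 499                                # prime order of the subgroup <g>
t = 4
p = t * q_ + 1                          # 1997 is prime; toy size for illustration only

def Hint(data):                         # hash to an integer
    return int.from_bytes(hashlib.sha256(str(data).encode()).digest(), "big")

# an element g of order q_: take h^t for h with full order p - 1 = 2^2 * 499
h = next(h for h in range(2, p) if pow(h, (p - 1) // 2, p) != 1 and pow(h, t, p) != 1)
g = pow(h, t, p)
assert pow(g, q_, p) == 1 and g != 1

mask = lambda beta: Hint(beta) % (p - 1) + 1     # H(beta) in Z_p^*, NOT in <g>

dictionary = ["hunter2", "letmein", "qwerty", "dragon", "trustno1"]
alpha = secrets.choice(dictionary)
beta = pow(g, Hint(alpha) % q_, p)               # beta = g^alpha (password hashed
                                                 # to an exponent; our simplification)

# one intercepted first flow: E_beta(g^x) = g^x * H(beta)
x = secrets.randbelow(q_ - 1) + 1
observed = (pow(g, x, p) * mask(beta)) % p

# (g^x * H(beta))^q = H(beta)^q since g has order q, so the attacker keeps only
# the dictionary words whose mask is consistent with this value
leak = pow(observed, q_, p)
survivors = [w for w in dictionary
             if pow(mask(pow(g, Hint(w) % q_, p)), q_, p) == leak]
print(f"{len(survivors)} of {len(dictionary)} candidates remain:", survivors)
```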
Despite the above attacks, we feel that AuthA could be securely instantiated by the cipher E β (X) = X · ı(H(β)), where H is a secure hash function and where ı maps a random string to a group element of order ord(g) by "increasing" the random string one by one until reaching a group element with the required order. This instantiation should work both for ECC-based groups and for subgroups of Z * p . We would like to warn, however, that we have not proved under reasonable assumptions that this is a secure instantiation of the ideal cipher. Of course, it has been proven [2] that if E β (X) = X · ı(H(β)) is an ideal cipher then the above instantiation is provably secure against off-line dictionary attacks. But we have no mechanism to measure whether the above cipher is an ideal cipher.
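A minimal sketch of such a map ı (our own reading of the "increase until you land in the subgroup" idea, for a prime-order subgroup of Z * p ; not code from the paper or from IEEE P1363.2) is the following.

```python
# Map a hash output into the order-q subgroup of Z_p^* by incrementing.
import hashlib

def iota(digest_int, p, q):
    """Return the first x >= digest_int mod p with x in the order-q subgroup."""
    x = digest_int % p
    while x < 2 or pow(x, q, p) != 1:    # x^q = 1 and x != 1 <=> x has order q (q prime)
        x = (x + 1) % p
    return x

def E(beta, X, p, q):
    h = int.from_bytes(hashlib.sha256(str(beta).encode()).digest(), "big")
    return (X * iota(h, p, q)) % p

if __name__ == "__main__":
    # toy subgroup parameters: 1997 = 4*499 + 1, subgroup of order 499
    print(iota(2**200 + 12345, 1997, 499))
```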
Instantiation E β (X) = (r, X · H(r||β))
The second ideal-cipher instantiation for AuthA in [3] is: E β (X) = (r, X · H(r||β)), where r is independently chosen at random for each session. After the introduction of this instantiation, the authors [3] did not mention that the hash result H(r||β) should be interpreted as a group element before applying the multiplication. However we assume that the authors have this in mind when they introduce this instantiation. But this again shows that a naive implementation may multiply the hashing result with X directly without interpreting it as a group element since E β (X) = (r, X · H(r||β)) could be regarded as an acceptable ideal cipher. Indeed, the ideal cipher model does not address this tiny difference between the two implementations: interpreting the hashing result as a group element and not interpreting the hashing result as a group element.
Indeed, this instantiation without interpreting the hashing result as a group element is completely insecure against partition attacks if the underlying group is a subgroup of Z * p or an elliptic curve group. The attack in Section 3.3 can be used to show that for each randomly chosen r, log t bits of information about the password α are leaked. Thus, after recording several communications with different r, the adversary can recover α.
Instantiation E β (X) by a cipher
The third ideal-cipher instantiation for AuthA in [3] is simply a cipher, e.g., E β (X) = AES β (X). AuthA with this instantiation is not secure against partition attacks if the underlying group is a subgroup of Z * p or an elliptic curve group. The insecurity of this instantiation has been observed by several authors, see, e.g., [19,6].
Firstly, we assume that the underlying group G is a subgroup of Z * p . The eavesdropper A tries to decrypt E β (g x ) and E β (g y ) with different guessed β ′ (= g α ′ ). If either of the decrypted values
E −1 β ′ (E β (g x )) or E −1 β ′ (E β (g y ))
is not an element of G, then A knows that α ′ is not the correct password. Since E β (X) is an ideal cipher, only with probability (||G||/2 |p| ) 2 are both E −1 β ′ (E β (g x )) and E −1 β ′ (E β (g y )) elements of G, where |p| and ||G|| denote the length of p in binary representation and the cardinality of G, respectively. Thus, for each execution of the protocol, 2 log(2 |p| /||G||) bits of information about the password α are leaked. After recording several executions of the protocol, A recovers the password.
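A quick Monte Carlo check of the filtering probability used above: under the ideal-cipher model, decrypting a flow with a wrong password yields a uniformly random |p|-bit block, and the sketch below (with toy parameters of ours) estimates how often such a block lands in the order-q subgroup G; squaring this fraction gives the probability that both flows survive the membership test.

```python
import secrets

p, q_ = 2039, 1019            # p = 2*q + 1, both prime; G is the order-q subgroup
bits = p.bit_length()         # |p| = 11

trials, hits = 200_000, 0
for _ in range(trials):
    m = secrets.randbits(bits)            # a uniformly random |p|-bit "decryption"
    if 0 < m < p and pow(m, q_, p) == 1:  # membership test for G
        hits += 1
print(hits / trials, "~", q_ / 2**bits)   # each flow passes with probability |G|/2^|p|
```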
Secondly, assume that the underlying group G is an elliptic curve group. For an elliptic curve group E a,b (F * p ) = g over the field F * p , an element (x, y) ∈ g is given by its x and y coordinates. For a randomly chosen x ∈ F * p , the probability that there exists a y ∈ F * p such that (x, y) is a point on the curve is about 1/2. Thus AuthA over elliptic curve groups with this instantiation is not secure against partition attacks.
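The 1/2 heuristic above is easy to check numerically. The toy curve below is an arbitrary choice of ours; the script counts how many x in F * p lift to a curve point, using Euler's criterion on x 3 + ax + b.

```python
# Fraction of x-coordinates that lie on a toy curve y^2 = x^3 + a*x + b over F_p.
p, a, b = 10007, 3, 7

def is_x_coordinate(x):
    rhs = (x * x * x + a * x + b) % p
    # rhs is a square mod p iff rhs^((p-1)/2) is 0 or 1 (Euler's criterion)
    return pow(rhs, (p - 1) // 2, p) in (0, 1)

count = sum(is_x_coordinate(x) for x in range(1, p))
print(count / (p - 1))        # prints a value close to 0.5
```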
Security issues with ideal ciphers in One-Encryption Key Exchange (OEKE)
Recently, Bresson, Chevassut, and Pointcheval [8] formally modeled the AuthA protocol by the One-Encryption Key Exchange (OEKE): only one flow is encrypted (using either a symmetric-encryption primitive or a multiplicative function, namely the product of a Diffie-Hellman value with a hash of the password). The authors pointed out that the advantage of OEKE over the classical EKE, wherein the two Diffie-Hellman values are encrypted, is its ease of integration, for example in the Transport Layer Security (TLS) protocol with password-based key-exchange cipher suites [21,22].
OEKE is similar to AuthA except that the first message is not encrypted. In particular, the protocol proceeds as follows:
(1) C chooses a random x ∈ [1, ord(g) − 1], computes g x and sends g x to the Server S. (2) S chooses a random y ∈ [1, ord(g) − 1], computes g y , encrypts it with β, and sends the ciphertext E β (g y ) to C.
(3) C computes Auth = H 1 (C||S||g x ||g y ||g xy ), and sends Auth to S. C also computes session key K = H 0 (C||S||g x ||g y ||g xy ). (4) S verifies that the value Auth is correct and computes the session key similarly.
Here H 0 and H 1 are two independent random oracles. The authors [8] show that the protocol OEKE achieves provable security against dictionary attacks in both the random oracle and ideal-cipher models under the computational Diffie-Hellman intractability assumption. The authors [8] also observed that a simple block-cipher cannot be used for the instantiation of the ideal-cipher, because of partition attacks.
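For reference, here is a runnable toy transcript of the OEKE flows above, using a masked Diffie-Hellman encryption with the hash output mapped into the subgroup (in the spirit of the instantiation suggested in Section 3.3). The group parameters, the hash domain separation, and the password-to-exponent step are our own simplifications.

```python
import hashlib, secrets

p, q_ = 2039, 1019                 # p = 2q+1, both prime; toy sizes only
g = 4                              # 4 = 2^2 generates the order-q subgroup of squares

def Hx(tag, *parts):
    data = tag.encode() + b"|" + b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def iota(n):                       # hash output -> element of <g>, by incrementing
    x = n % p
    while x < 2 or pow(x, q_, p) != 1:
        x = (x + 1) % p
    return x

password = "correct horse"
beta = pow(g, Hx("pw", password) % q_, p)       # verifier beta = g^alpha (alpha hashed)
mask = iota(Hx("mask", beta))

# flow 1: client sends g^x in the clear
x = secrets.randbelow(q_ - 1) + 1
gx = pow(g, x, p)
# flow 2: server sends E_beta(g^y) = g^y * iota(H(beta))
y = secrets.randbelow(q_ - 1) + 1
gy = pow(g, y, p)
flow2 = (gy * mask) % p
# flow 3: client unmasks and authenticates with H1; session keys come from H0
gy_client = (flow2 * pow(mask, -1, p)) % p
auth = Hx("H1", "C", "S", gx, gy_client, pow(gy_client, x, p))
assert auth == Hx("H1", "C", "S", gx, gy, pow(gx, y, p))       # server-side check
K_client = Hx("H0", "C", "S", gx, gy_client, pow(gy_client, x, p))
K_server = Hx("H0", "C", "S", gx, gy, pow(gx, y, p))
assert K_client == K_server
```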
The authors recommended two instantiations of the ideal cipher. In the first method, which is essentially from [1], one encrypts the element and re-encrypts the result until the output finally falls in the group G. The second instantiation is the cipher E β (X) = X · H(β) that we have discussed in Section 3.3. That is (see [8]), "to instantiate the encryption primitive as the product of a Diffie-Hellman value with a hash of the password, as suggested in [3]". Obviously, if one does not interpret the hash of the password as a group element before applying the multiplication, then our attacks in Section 3.3 work for OEKE as well. Thus we have the same concern for OEKE: the ideal cipher model does not directly address the issue of interpreting the hashing output as a group element. From the ideal-cipher-model viewpoint, the two instantiations (one with interpretation as group elements and one without) have no essential difference. However, one instantiation results in a broken protocol. This observation strengthens our viewpoint: without a rigorous way to distinguish between secure and insecure instantiations of an ideal-cipher, the value of provable security in the ideal-cipher model is limited.
Secure Remote Password protocol (SRP)
If the underlying group G in OEKE is indeed a finite field, then one can instantiate the ideal-cipher with E β (X) = X + β and obtain the Secure Remote Password protocol (SRP6) [25,26]. One needs to be aware, however, that the SRP protocol uses different values for the keying-material computation, which achieves stronger security. In the SRP6 protocol, the server S stores the value C, β, s for each client C, where β = g v , v = H(s||C||α), s is a random seed for C, and H is a predetermined hash function. Assume that the underlying group for the protocol is G = Z * p = g . Then the protocol proceeds as follows:
(1) C sends his name C to the server S.
(2) S sends s to C.
(3) C chooses a random x ∈ [1, ord(g) − 1] and sends g x to S.
(4) S chooses a random y ∈ [1, ord(g) − 1] and sends 3β + g y to C.
(5) Let u = H(g x ||3β + g y ). C sends M = H(g x ||3β + g y ||S) to S where S = g y(x+uv) .
(6) S verifies that M is correct and sends H(g x ||M||S) to C.
The role of u in SRP6 is to defeat an adversary A who may know β. If A knows β and u is fixed, she can impersonate C by sending g x · g −vu = g x−uv instead of g x in the third step. Then g y(x−uv+uv) = g xy , and K = H(g xy ). Note that this additional value u in the SRP protocol achieves stronger security against a stolen β, while OEKE does not have this level of security.
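The following Python sketch runs the six SRP6 flows just listed end to end on toy parameters; the modulus, the hash-to-integer conventions, and the salt size are illustrative choices of ours.

```python
import hashlib, secrets

p = 2**127 - 1            # illustrative prime modulus; the protocol works in Z_p^*
g = 3

def H(*parts):
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

C_id, S_id, password = "alice", "server", "correct horse"
s = secrets.randbelow(2**64)              # salt, stored with the verifier
v = H(s, C_id, password)                  # v = H(s || C || alpha)
beta = pow(g, v, p)                       # server stores (C, beta, s)

x = secrets.randbelow(p - 2) + 1          # client ephemeral
y = secrets.randbelow(p - 2) + 1          # server ephemeral
A_msg = pow(g, x, p)                      # step 3: C -> S
B_msg = (3 * beta + pow(g, y, p)) % p     # step 4: S -> C
u = H(A_msg, B_msg)

# step 5: both sides derive S = g^(y*(x + u*v)); the client proves knowledge of it
S_client = pow((B_msg - 3 * pow(g, H(s, C_id, password), p)) % p, x + u * v, p)
S_server = pow((A_msg * pow(beta, u, p)) % p, y, p)
assert S_client == S_server
M = H(A_msg, B_msg, S_client)             # client's proof
proof_S = H(A_msg, M, S_server)           # step 6: server's reply
K = H(S_client)                           # keying material K = H(g^(y(x+uv)))
```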
If we instantiate the ideal cipher in OEKE with E β (X) = X · ı(H(β)) and use the SRP6 shared secret computation method, then we get a natural generalization of the SRP protocol, where ı "appropriately" maps a random string to a group element of order ord(g). For example, if we define ı(H(β)) by the following procedure, then we get the SRP5 protocol [24], which is currently under standardization in the IEEE 1363.2 standard working group.
(1) Let x = H(β).
(2) If x is a group element of order ord(g), then let ı(H(β)) = x. Otherwise, increase x by one and go to step (2).
Note that the sentence "increase x by one" can be any natural interpretation of "add one to a group element" in a group.
Since the original SRP protocol is based on a field and uses both field operations of addition and multiplication, there is no direct translation of SRP from the group Z * p to ECC-based groups. The above generalization SRP5 of SRP6 can be implemented over ECC groups.
Lee and Lee [14] have tried to design ECC-based SRP protocols and introduced four ECC-based SRP protocols EC-SRP1, EC-SRP2, EC-SRP3, and EC-SRP4. They used completely different key authentication steps (that is, the steps (5) to (7) are different). The key steps in their protocols are the different instantiations of the ideal cipher. That is, they recommended replacing the message 3β + g y in the fourth step of the SRP protocol with the following messages:
(1) g y for EC-SRP1.
(2) g α · g xy for EC-SRP2.
(3) (g x−α ) y for EC-SRP3.
(4) (g x−α+1 ) y for EC-SRP4.
The keying material K is the same as that in the original SRP protocol, i.e., K = SHA(g y(x+uv) ). It is straightforward to check that the protocol EC-SRP1 is insecure against off-line dictionary attacks.
Conclusions
In this paper, we presented several examples of real ciphers that would result in broken instantiations of the idealized AuthA and OEKE protocols.
Our results show that one should be extremely careful when designing or implementing password-based protocols with provable security in ideal-cipher models: a security proof in the ideal-cipher model does not necessarily mean that an instantiation of the protocol is secure.
Acknowledgements
The third author would like to thank Prof. Alfred Menezes for many helpful discussions and comments on the research related to this paper. The authors would also like to thank the anonymous referees of this journal for detailed comments on this paper.
References
[1] M. Bellare, A. Boldyreva, A. Desai, and D. Pointcheval. Key-privacy in public-key encryption. In: Asiacrypt '01, LNCS 2248, pages 566-582. Springer-Verlag, Berlin, 2001.
[2] M. Bellare, D. Pointcheval, and P. Rogaway. Authenticated key exchange secure against dictionary attacks. Advances in Cryptology - Eurocrypt 2000, LNCS 1807, pages 139-155, Springer-Verlag, 2000.
[3] M. Bellare and P. Rogaway. The AuthA protocol for password-based authenticated key exchange. Submission to IEEE P1363.2, March 2000. Available from [12].
[4] S. Bellovin and M. Merritt. Encrypted key exchange: password-based protocols secure against dictionary attacks. Proceedings of the 1992 IEEE Computer Society Conference on Research in Security and Privacy, pages 72-84, 1992.
[5] J. Black and P. Rogaway. Ciphers with arbitrary finite domains. Proceedings of CT-RSA 2002, pages 114-130.
[6] C. Boyd, P. Montague, and K. Nguyen. Elliptic curve based password authenticated key exchange protocols. In: ACISP '01, LNCS 2119, pages 487-501, Springer-Verlag, Berlin, 2001.
[7] V. Boyko, P. MacKenzie, and S. Patel. Provably secure password authenticated key exchange using Diffie-Hellman. Advances in Cryptology - Eurocrypt 2000, LNCS 1807, pages 156-171, Springer-Verlag, 2000.
[8] E. Bresson, O. Chevassut, and D. Pointcheval. Security proofs for an efficient password-based key exchange. Proc. of the 10th ACM Conference on Computer and Communications Security, pages 241-250, 2003.
[9] P. Buhler, T. Eirich, M. Steiner, and M. Waidner. Secure password-based cipher suite for TLS. Proc. of the Network and Distributed Systems Security Symposium, February 2000.
[10] L. Gong, M. Lomas, R. Needham, and J. Saltzer. Protecting poorly chosen secrets from guessing attacks. IEEE Journal on Selected Areas in Communications, 11(5):648-656, 1993.
[11] S. Halevi and H. Krawczyk. Public-key cryptography and password protocols. ACM Transactions on Information and System Security, 2(3):230-268, 1999.
[12] IEEE P1363.2. http://grouper.ieee.org/groups/1363/passwdPK/submissions.html
[13] D. Jablon. Strong password-only authenticated key exchange. ACM Computer Communications Review, 26(5):5-26, October 1996. This is also a submission to IEEE P1363.2; available from [12].
[14] Y. Lee and J. Lee. EC-SRP protocol: elliptic curve secure remote password protocol. Korea Institute of Information Security and Cryptology, 9(1):85-102, 1999.
[15] C. Lim and P. Lee. A key recovery attack on discrete log-based schemes using a prime order subgroup. Advances in Cryptology - Crypto '97, LNCS 1294, pages 249-263, Springer-Verlag, 1997.
[16] S. Lucks. Open key exchange: how to defeat dictionary attacks without encrypting public keys. Proc. of the Security Protocols Workshop, LNCS 1361, Springer-Verlag, 1997.
[17] A. Menezes, P. van Oorschot, and S. Vanstone. Handbook of Applied Cryptography. CRC Press, 1996.
[18] A. Menezes, M. Qu, and S. Vanstone. Key agreement and the need for authentication. Workshop records of PKS '95, Toronto, Canada.
[19] S. Patel. Number theoretic attacks on secure password schemes. In: Proceedings of the IEEE Symposium on Security and Privacy, pages 236-247, IEEE Press, 1997.
[20] SACRED: Securely Available Credentials, IETF Working Group. More information is available from http://www.ietf.org/html.charters/sacred-charter.html
[21] M. Steiner, P. Buhler, T. Eirich, and M. Waidner. Secure password-based cipher suite for TLS. ACM Transactions on Information and System Security, 4(2):134-157, 2001.
[22] D. Taylor. Using SRP for TLS authentication. Work-in-progress Internet Draft, November 2002.
[23] P. van Oorschot and M. Wiener. On Diffie-Hellman key agreement with short exponents. Advances in Cryptology - Eurocrypt '96, LNCS 1070, pages 332-343, Springer-Verlag, 1996.
[24] Y. Wang. Elliptic curve based SRP protocol. Submission to IEEE P1363.2, May 2002. Available from [12].
[25] T. Wu. The secure remote password protocol. In: Proceedings of the 1998 Internet Society Symposium on Network and Distributed Systems Security, pages 97-111, San Diego, CA, 1998.
[26] T. Wu. SRP6: Improvements and refinements to the secure remote password protocol. http://srp.stanford.edu/
|
[] |
[
"TORSION IN BOUNDARY COINVARIANTS AND K-THEORY FOR AFFINE BUILDINGS",
"TORSION IN BOUNDARY COINVARIANTS AND K-THEORY FOR AFFINE BUILDINGS"
] |
[
"Guyan Robertson "
] |
[] |
[] |
Let (G, I, N, S)be an affine topological Tits system, and let Γ be a torsion free cocompact lattice in G. This article studies the coinvariants H 0 (Γ; C(Ω, Z)), where Ω is the Furstenberg boundary of G. It is shown that the class [1] of the identity function in H 0 (Γ; C(Ω, Z)) has finite order, with explicit bounds for the order.A similar statement applies to the K 0 group of the boundary crossed product C * -algebra C(Ω) ⋊ Γ. If the Tits system has type A 2 , exact computations are given, both for the crossed product algebra and for the reduced group C * -algebra.
|
10.1007/s10977-005-1448-8
|
[
"https://arxiv.org/pdf/math/0501330v1.pdf"
] | 59,443,506 |
math/0501330
|
4a0cbeae15098490d2dc657aa5f4a10ec72e7e04
|
TORSION IN BOUNDARY COINVARIANTS AND K-THEORY FOR AFFINE BUILDINGS
20 Jan 2005
Guyan Robertson
TORSION IN BOUNDARY COINVARIANTS AND K-THEORY FOR AFFINE BUILDINGS
20 Jan 2005
Let (G, I, N, S)be an affine topological Tits system, and let Γ be a torsion free cocompact lattice in G. This article studies the coinvariants H 0 (Γ; C(Ω, Z)), where Ω is the Furstenberg boundary of G. It is shown that the class [1] of the identity function in H 0 (Γ; C(Ω, Z)) has finite order, with explicit bounds for the order.A similar statement applies to the K 0 group of the boundary crossed product C * -algebra C(Ω) ⋊ Γ. If the Tits system has type A 2 , exact computations are given, both for the crossed product algebra and for the reduced group C * -algebra.
Introduction
This article is concerned with coinvariants for group actions on the boundary of an affine building. The results are most easily stated for subgroups of linear algebraic groups. Let k be a non-archimedean local field with finite residue field k of order q. Let G be the group of k-rational points of an absolutely almost simple, simply connected linear algebraic k-group. Then G acts on its Bruhat-Tits building ∆, and on its Furstenberg boundary Ω.
Let Γ be a torsion free lattice in G. The abelian group C(Ω, Z) of continuous integer-valued functions on Ω has the structure of a Γ-module. The module of Γ-coinvariants Ω Γ = H 0 (Γ; C(Ω, Z)) is a finitely generated group. We prove that the class [1] in Ω Γ of the constant function 1 ∈ C(Ω, Z) has finite order. If G is not one of the exceptional types E 8 , F 4 or G 2 , then the order of [1] is less than covol(Γ), where the Haar measure µ on G is normalized so that an Iwahori subgroup of G has measure 1. There is a weaker estimate for groups of exceptional type. If G has rank 2 then the estimates are significantly improved.
The topological action of Γ on the Furstenberg boundary is encoded in the crossed product C * -algebra A Γ = C(Ω) ⋊ Γ. Embedded in A Γ is the reduced group C * -algebra C * r (Γ), which is the completion of the complex group algebra of Γ in the regular representation as operators on ℓ 2 (Γ). The action of Γ on Ω is amenable, so the K-theory of A Γ is computable by known results, in contrast to that of C * r (Γ), which rests on the validity of the Baum-Connes conjecture. The natural embedding C(Ω) → A Γ induces a homomorphism ϕ : Ω Γ → K 0 (A Γ ) and ϕ([1]) = [1] K 0 , the class of 1 in the K 0 -group of A Γ . Therefore [1] K 0 has finite order in K 0 (A Γ ).
If Γ is a torsion free lattice in G = SL 3 (k) then exact computations can be performed. The Baum-Connes Theorem of V. Lafforgue [La] is used to compute K * (C * r (Γ)) and the results of [RS] are used to compute K * (A Γ ). In particular K 0 (C * r (Γ)) = Z χ(Γ) , a free abelian group, one of whose generators is the class [1]. The embedding of C * r (Γ) into A Γ induces a homomorphism ψ : K * (C * r (Γ)) → K * (A Γ ). This homomorphism is not injective, since [1] has finite order in K 0 (A Γ ). The computations at the end of the article suggest that the only reason for failure of injectivity of the homomorphism ψ is the fact that [1] has finite order in K 0 (A Γ ).
Much of this article considers the more general case where Γ is a subgroup of a topological group G with a BN-pair, and Γ acts on the boundary Ω of the affine building of G.
The results are organized as follows. Sections 2 and 3 state and prove the main result concerning the class [1] in Ω Γ . Section 4 gives improved estimates in the rank 2 case. Section 5 studies the connection with the K-Theory of the boundary algebra A Γ . Comparison with K-theory of the reduced C * -algebra C * r (Γ) is made in Section 6, which contains some exact computational results for buildings of type A 2 .
Torsion in Boundary Coinvariants
Let (G, I, N, S) be an affine topological Tits system [Gd,Definition 2.3]. Then G is a group with a BN-pair in the usual algebraic sense [Ti1, Section 2] and the Weyl group W = N/(I ∩ N) is an infinite Coxeter group with generating set S. The subgroup I of G is called an Iwahori subgroup. A subgroup of G is parahoric if it contains a conjugate of I. The topological requirements are that G is a second countable locally compact group and that all proper parahoric subgroups of G are open and compact [Gd,Definition 2.3].
Let n + 1 = |S| be the rank of the Tits system. The group G acts on the Tits complex ∆, which is an affine building of dimension n. It will be assumed throughout that ∆ is irreducible; in other words, the Coxeter group W is not a direct product of nontrivial Coxeter groups. Denote by ∆ i the set of i-simplices of ∆, (0 ≤ i ≤ n). The vertices of ∆ are the maximal proper parahoric subgroups of G, and a finite set of such subgroups spans a simplex in ∆ if and only if its intersection is parahoric. The action of G on ∆ is by conjugation of subgroups. The building ∆ is a union of n-dimensional subcomplexes, called apartments. Each apartment is a Coxeter complex, with Coxeter group W .
Associated with the Coxeter system (W, S) there is a Coxeter diagram of type X n (X = A, B, . . . , G), whose vertex set I is a set of n + 1 types, which are in natural bijective correspondence with the elements of S. Each vertex v ∈ ∆ 0 has a type τ (v) ∈ I. The type of a simplex in ∆ is the set of types of its vertices. By construction, the action of G on ∆ preserves types. A type t ∈ I is special if deleting t and all the edges containing t from the diagram of type X n results in the diagram of type X n (the diagram of the corresponding finite Coxeter group). A vertex v ∈ ∆ is said to be special if its type τ (v) is special [BT,1.3.7 ].
A simplex of maximal dimension n in ∆ is called a chamber. Every chamber has exactly one vertex of each type. If σ is any chamber containing the vertex v then the codimension-1 face of σ which does not contain v has type I − {τ (v)}.
The action of G on ∆ is strongly transitive, in the sense that G acts transitively on the set of pairs (σ, A) where σ is a chamber contained in an apartment A of ∆. The building ∆ is locally finite, in the sense that the number of chambers containing any simplex is finite, and it is thick, in the sense that each simplex of dimension n − 1 is contained in at least three chambers. If τ is a simplex in ∆ of dimension n − 1 and type I − {t}, then the number of chambers of ∆ which contain τ is q t + 1 where q t ≥ 2. The integer q t depends only on t; not on τ .
Associated with the group G there is also a spherical building, the building at infinity ∆ ∞ . The boundary Ω of ∆ is the set of chambers of ∆ ∞ , endowed with a natural compact totally disconnected topology, which we shall describe later on. Since G acts transitively on the chambers of ∆ ∞ , Ω may be identified with the topological homogeneous space G/B, where the Borel subgroup B is the stabilizer of a chamber of ∆ ∞ .
Example 2.1. A standard example is G = SL n+1 (Q p ), where Q p is the field of p-adic numbers. In this case B is the subgroup of upper triangular matrices in G, and Ω is the Furstenberg boundary of G.
If Γ is a subgroup of G, then Γ acts on Ω, and the abelian group C(Ω, Z) of continuous integer-valued functions on Ω has the structure of a Γ-module. The module of Γ-coinvariants, C(Ω, Z) Γ , is the quotient of C(Ω, Z) by the submodule generated by {g·f −f : g ∈ Γ, f ∈ C(Ω, Z)}. Recall that C(Ω, Z) Γ is the homology group H 0 (Γ; C(Ω, Z)). For the rest of this article, C(Ω, Z) Γ will be denoted simply by Ω Γ . Define c(Γ) ∈ Z + ∪ {∞} to be the number of Γ-orbits of chambers in ∆.
If Γ is a torsion free cocompact lattice in G, then c(Γ) is the number of n-cells of the finite cell complex ∆\Γ. Suppose that the Haar measure µ on G has the Tits normalization µ(I) = 1 [Ti2,§3.7]. Then c(Γ) = covol(Γ).
We shall see below that if Γ is a torsion free cocompact lattice in G then Ω Γ is a finitely generated abelian group. Note that such a torsion free lattice Γ acts freely and properly on ∆ [Gd, Lemma 2.6, Lemma 3.3]. If f ∈ C(Ω, Z) then [f ] will denote its class in Ω Γ . Also, 1 will denote the constant function defined by 1(ω) = 1 for all ω ∈ Ω.
Theorem 2.2. Let (G, I, N, S) be an affine topological Tits system and let Γ be a torsion free lattice in G. Then Ω Γ is a finitely generated abelian group and the following statements hold.
(1) The element [1] has finite order in Ω Γ .
(2) If s ∈ I is a special type, then the order of [1] in Ω Γ satisfies ord([1]) < q s · covol(Γ) .
(3) If, in addition, G is not one of the exceptional types G 2 , F 4 , E 8 , then ord([1]) < covol(Γ) .
Remark 2.3. A torsion free lattice in G is automatically cocompact [Se2,II.1.5].
Remark 2.4. Suppose that Γ is isomorphic to a subgroup of a group Γ ′ and that the action of Γ on Ω extends to an action of Γ ′ on Ω. Then there is a natural surjection Ω Γ → Ω Γ ′ . It follows that Theorem 2.2 remains true if Γ is replaced by any such group Γ ′ .
Remark 2.5. The group Ω Γ depends only on Γ and not on the ambient group G. This follows from the rigidity results of [KL], if n ≥ 2, and from [Gr] if n = 1.
We now describe briefly how Theorem 2.2 applies to algebraic groups. Let k be a non-archimedean local field and let G be the group of k-rational points of an absolutely almost simple, simply connected linear algebraic k-group: e.g. k = Q p , G = SL n+1 (Q p ). Associated with G there is a topological Tits system of rank n + 1, where G has k-rank n [IM]. Now G acts properly on the corresponding Bruhat-Tits building ∆ [Ti2, §2.1], and on the boundary Ω = G/B, where B is a Borel subgroup [BM, Section 5].
Let q be the order of the residue field k. For each type t ∈ I there is an integer d(t) such that q t = q d(t) . That is, any simplex τ of codimension one and type I − {t} is contained in q d(t) + 1 chambers [Ti2, §2.4]. If G is k-split (i.e. there is a maximal torus T ⊂ G which is k-split) then d(t) = 1 for all t ∈ I [Ti2, §3.5.4].
If k has characteristic zero, then the condition that Γ is torsion free can be omitted from Theorem 2.2. Recall that a non-archimedean local field of characteristic zero is a finite extension of Q p , for some prime p.
Corollary 2.6. Let k be a non-archimedean local field of characteristic zero. Let G be the group of k-rational points of an absolutely almost simple, simply connected linear algebraic k-group. If Γ is a lattice in G, then the class [1] has torsion in Ω Γ .
Proof. A lattice Γ in G is automatically cocompact [M,Proposition IX,3.7]. By Selberg's Lemma [Gd,Theorem 2.7], Γ has a torsion free subgroup Γ 0 of finite index. Now Theorem 2.2 implies that [1] has finite order in Ω Γ 0 . The result follows from the observation that there is a natural surjection Ω Γ 0 → Ω Γ .
Proof of Theorem 2.2
Throughout this section, the assumptions of Theorem 2.2 are in force. Before proving Theorem 2.2, we require some preliminaries. Recall that a gallery of type i = (i 1 , . . . , i k ) is a sequence of chambers (σ 0 , σ 1 , . . . σ k ) such that each pair of successive chambers σ j−1 , σ j meet in a common face of type I − {i j }. Choose a special type s ∈ I, which will remain fixed throughout this section. Fix once and for all the following data.
• (A1) An apartment A in ∆.
• (A2) A sector S in A with base vertex v of type s and base chamber C. • (A3) The unique vertex v ′ ∈ S of type s, obtained by reflecting v in a codimension-1 face of C. • (A4) The unique chamber C ′ containing v ′ which is the base chamber of a subsector of S.
• (A5) A minimal gallery of type i = (i 1 , . . . , i k ) from C to C ′ , where i 1 = s. This minimal gallery necessarily lies inside S.
These data are illustrated by Figure 1, which shows part of an apartment in a building of type G 2 and a minimal gallery from C to C ′ . Special vertices are indicated by large points. Now let D = ∆ n /Γ, the set of Γ-orbits of chambers of ∆. Since Γ acts freely and cocompactly on ∆, D is finite and elements of D are in 1 − 1 correspondence with the set of n-cells of the finite cell complex ∆/Γ. If x, y ∈ D, let M i (x, y) denote the number of Γ-orbits of galleries of type i which have initial chamber in x and final chamber in y.
[Figure 1: part of an apartment in a building of type G 2 ; a minimal gallery runs from C to C ′ , and the special vertices (including v and v ′ ) are marked by large points.]
If σ 0 ∈ x is fixed then M i (x, y) is equal to the number of galleries (σ 0 , σ 1 , . . . , σ k ) of type i with final chamber σ k ∈ y.
To see this, note that any gallery of type i with initial chamber in x and final chamber in y lies in the Γ-orbit of such a gallery (σ 0 , σ 1 , . . . σ k ). Moreover, two distinct galleries of this form lie in different Γ-orbits. For suppose that (σ 0 , σ 1 , . . . , σ j , τ j+1 , . . . τ k ) is another such gallery, with τ j+1 ≠ σ j+1 , the first chamber at which they differ. Then τ j+1 and σ j+1 have a common face of codimension one, and so lie in different Γ-orbits, since the action of Γ is free. (If gτ j+1 = σ j+1 , then g must fix every point in the common codimension one face and so g = 1.) A similar argument shows that if σ k ∈ y is fixed then M i (x, y) is equal to the number of galleries
(σ 0 , σ 1 , . . . σ k ) of type i with initial chamber σ 0 ∈ x. Cocompactness of the Γ-action implies that M i (x, y) is finite.
If σ is a chamber in ∆, then the number N i of galleries (σ 0 , σ 1 , . . . , σ k ) of type i, with final chamber σ k = σ, is independent of σ. This follows, since G acts transitively on the set ∆ n of chambers of ∆. Note that N i > 1, by thickness of the building ∆.
Two different galleries of type i which have final chamber σ are necessarily in different Γ-orbits, by freeness of the action of Γ. It follows that if y ∈ D, then the number of Γ-orbits of galleries (σ 1 , . . . , σ k ) of type i, with σ k ∈ y, is equal to N i . In other words, for each y ∈ D,
(1) ∑ x∈D M i (x, y) = N i .
Recall that if τ is a simplex of ∆ of codimension one and type I −{t}, then the number of chambers of ∆ which contain τ is q t + 1 where q t ≥ 2. Thus the number of galleries (σ 0 , σ 1 , . . . , σ k ) of type i, with final chamber σ k = σ (fixed, but arbitrary), is equal to q i k q i k−1 . . . q i 1 , where i 1 = s. On the other hand, this number is also equal to the number q i 1 q i 2 . . . q i k of galleries (σ 0 , σ 1 , . . . , σ k ) of type i, with initial chamber σ 0 = σ (fixed, but arbitrary). It follows that, for each x ∈ D,
(2) ∑ y∈D M i (x, y) = N i .
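As a small numerical illustration of the counts used in (1) and (2), the product formula for N i can be evaluated directly; the thickness parameters below are arbitrary toy values, not those of a specific building.

```python
# For a gallery type i = (i_1, ..., i_k), the number of galleries with a fixed
# final (equivalently, fixed initial) chamber is the product of the thickness
# parameters q_{i_j}.
from math import prod

q = {"s": 2, "t": 3, "u": 2}          # thickness parameters q_t, one per type
gallery_type = ("s", "t", "u", "t")   # a type sequence i = (i_1, ..., i_k)

N_i = prod(q[j] for j in gallery_type)
assert N_i == prod(q[j] for j in reversed(gallery_type))   # same count from either end
print("N_i =", N_i)
```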
Definition 3.1. Fix a type s ∈ I. Let α s denote the number of chambers of ∆ which contain a fixed vertex u of type s. Since G acts transitively on the set of vertices of type s, α s does not depend on the choice of the vertex u.
Remark 3.2. The Iwahori subgroup I is a chamber of ∆. Let the parahoric subgroup P s < G be the vertex of the type s of I. Then P s is a maximal compact subgroup of G containing I and α s = |P s : I|.
(In [Gd, Section 3], α s is denoted τ {s} .)
Lemma 3.3. Let s ∈ I be a special type. Then
(3) N i < q s · α s .
Proof. Fix a chamber σ 0 . We must estimate the number of galleries (σ 0 , σ 1 , . . . , σ k ) of type i (with initial chamber σ 0 ). There are q s possible choices for σ 1 . Suppose that σ 1 has been chosen and let u be the vertex of σ 1 not belonging to σ 0 . By construction, σ k also contains u (Figure 2) and so there are less than α s possible choices for σ k . (Note that σ k ≠ σ 1 .) Once σ k has been chosen, there is a unique (minimal) gallery of type (i 2 , . . . , i k ) with initial chamber σ 1 and final chamber σ k . In other words, the gallery (σ 0 , σ 1 , . . . , σ k ) is uniquely determined, once σ 1 and σ k are chosen. There are therefore at most q s (α s − 1) choices for this gallery.
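To illustrate the size of this bound (an aside which is not needed later): in a building of type A 2 of order q, the link of a vertex is the flag complex of a projective plane of order q, so that α s = (q + 1)(q^2 + q + 1), and Lemma 3.3 gives N_i < q(q + 1)(q^2 + q + 1).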
Remark 3.4. An easy calculation in A 2 buildings shows that the estimate (3) cannot be improved to N i ≤ α s .
Definition 3.5. Let Γ be a torsion free cocompact lattice in G. If s ∈ I, let n s (Γ) (or simply n s , if Γ is understood) denote the number of Γ-orbits of vertices of type s in ∆.
Recall that covol(Γ) is equal to the number of Γ-orbits of chambers in ∆.
Lemma 3.6. Fix a type s ∈ I. Then covol(Γ) = n s (Γ) · α s .
Proof. Choose a set S of representative vertices from the Γ-orbits of vertices of type s in ∆. Thus |S| = n s (Γ). For v ∈ S, let R v denote the set of chambers containing v. Each R v contains α s chambers. We claim that the number of chambers in R = ⋃_{v∈S} R_v equals covol(Γ).
Each chamber in ∆ is clearly in the Γ-orbit of some chamber in R. Moreover, any two distinct chambers in R lie in different Γ-orbits. For suppose that σ v ∈ R v , σ w ∈ R w and gσ v = σ w , where g ∈ Γ. Then gv = w, since the action of Γ is type preserving and every chamber contains exactly one vertex of type s. Therefore v = w, since distinct vertices in S lie in different Γ-orbits. Moreover g = 1, since the action of Γ is free. Thus σ v = σ w . This shows that there are covol(Γ) chambers in R.
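For example, if n s (Γ) = 1, that is, if Γ acts simply transitively on the vertices of type s (as do the type preserving subgroups of the A 2 groups considered in Section 6), then Lemma 3.6 says precisely that covol(Γ) = α s .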
Before proving Theorem 2.2, we provide more details of the structure of the boundary Ω. Let σ be a chamber in ∆ n and let s be a special vertex of σ. The codimension one faces of σ having s as a vertex determine roots containing σ, and the intersection of these roots is a sector in ∆ with base vertex s and base chamber σ. Two sectors are parallel if the Hausdorff distance between them is finite. This happens if and only if they contain a common subsector. The boundary Ω of ∆ is the set of parallel equivalence classes of sectors in ∆ [Ron,Chap. 9.3]. If ω ∈ Ω and if s is a special vertex of ∆ then there exists a unique sector [s, ω) in ω with base vertex s, [Ron,Lemma 9.7].
If σ ∈ ∆ n , let o(σ) denote the vertex of σ of type s. Recall that vertices of type s are special. Let Ω(σ) denote the set of boundary points ω whose representative sectors have base vertex o(σ) and base chamber σ. That is,
Ω(σ) = {ω ∈ Ω : σ ⊂ [o(σ), ω)}.
The sets Ω(σ), σ ∈ ∆ n , form a basis for the topology of Ω. Moreover, each Ω(σ) is a clopen subset of Ω. Let γ i denote the set of ordered pairs (σ, σ ′ ) ∈ ∆ n × ∆ n such that there exists a gallery of type i from σ to σ ′ . Then for each σ ∈ ∆ n , Ω(σ) can be expressed as a disjoint union
(4) Ω(σ) = ⋃_{(σ,σ′)∈γ_i} Ω(σ′).
For if ω ∈ Ω(σ), then the sector [o(σ), ω) is strongly isometric, in the sense of [Gt,15.5] to the sector S in the apartment A, as described at the beginning of this section. Let σ ′ be the image under this strong isometry of the chamber C ′ in A. Then (σ, σ ′ ) ∈ γ i and ω ∈ Ω(σ ′ ).
Thus Ω(σ) is indeed a subset of the right hand side of (4). Conversely, each set Ω(σ ′ ) on the right hand side of (4) is contained in Ω(σ). For if (σ, σ ′ ) ∈ γ i and ω ∈ Ω(σ ′ ) then the strong isometry from [o(σ ′ ), ω) onto S ′ extends to a strong isometry from [o(σ), ω) onto S [Gt, §15.5 Lemma]. Thus ω ∈ Ω(σ).
To check that the union on the right of (4) is disjoint, suppose that
ω ∈ Ω(σ ′ 1 ) ∩ Ω(σ ′ 2 ), where (σ, σ ′ 1 ), (σ, σ ′ 2 ) ∈ γ i . Then the strong isometry from [o(σ ′ 1 ), ω) onto [o(σ ′ 2 ), ω) extends to a strong isometry from [o(σ), ω) onto itself, which is necessarily the identity map. In particular, σ ′ 1 = σ ′ 2 . If σ ∈ ∆ n , let χ σ ∈ C(Ω, Z) denote the characteristic function of Ω(σ). That is
χ σ (ω) = 1 if ω ∈ Ω(σ), and χ σ (ω) = 0 otherwise.
Since χ σ − χ gσ = χ σ − g.χ σ for each g ∈ Γ, the class [χ σ ] of χ σ in Ω Γ depends only on the Γ-orbit of σ in ∆ n . If x = Γσ ∈ D, it therefore makes sense to define
(5) [x] = [χ σ ] ∈ Ω Γ .
Now it follows from (4) that, for each σ ∈ ∆ n ,
(6) χ_σ = ∑_{(σ,σ′)∈γ_i} χ_{σ′} = ∑_{y∈D} ∑_{σ′∈y, (σ,σ′)∈γ_i} χ_{σ′}.
Passing to equivalence classes in Ω Γ gives, for each x ∈ D,
(7) [x] = ∑_{y∈D} M_i(x, y)[y].
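Equation (7) may be restated in matrix form: regarding ([x])_{x∈D} as a column vector with entries in Ω Γ and M_i as the |D| × |D| matrix (M_i(x, y)), it says that this vector is fixed by M_i. This reformulation is included only as an aid to the reader and is not used in what follows.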
We can now proceed with the proof of Theorem 2.2. If s is a vertex of type s of ∆, then each element ω ∈ Ω lies in Ω(σ) where σ is the base chamber of the sector [s, ω). Moreover ω lies in precisely one such set Ω(σ), with σ ∈ ∆ n , o(σ) = s. Therefore
(8) 1 = ∑_{σ∈∆ n , o(σ)=s} χ_σ.
Since the action of Γ on ∆ is free and type preserving, no two chambers σ ∈ ∆ n with o(σ) = s lie in the same Γ-orbit. To simplify notation, let n s = n s (Γ), the number of Γ-orbits of vertices of type s in ∆.
If we choose a representative set S of vertices of type s in ∆ then the chambers containing these vertices form a representative set of chambers, by the proof of Lemma 3.6. It follows that in Ω Γ ,
n_s · [1] = ∑_{s∈S} ∑_{σ∈∆ n , o(σ)=s} [χ_σ] (by (8)) = ∑_{x∈D} [x].
Therefore
n_s · [1] = ∑_{x∈D} ∑_{y∈D} M_i(x, y)[y] (by (7))
= ∑_{y∈D} ( ∑_{x∈D} M_i(x, y) ) [y] = ∑_{y∈D} N_i · [y] (by (1))
= N_i n_s · [1].
It follows that
(9) n s (N i − 1) · [1] = 0,
which proves the first assertion of Theorem 2.2. Using Lemmas 3.3, 3.6, we can estimate the order of the element [1].
(10) n s (N i − 1) < n s · (q s α s − 1) = q s · covol(Γ) − n s .
Proof. An examination of the possible Coxeter diagrams [Bou, Chap VI, No 4.4, Théorème 4] shows that if the diagram is not one of the types E 8 , F 4 , G 2 , then it contains at least two special types. Therefore every chamber of ∆ contains at least two special vertices. Choose two such vertices and suppose that they have types s and t, say. In that case the condition (A3) on the apartment A in ∆ can be changed to read:
• (A3 ′ ) The unique special vertex v ′ ∈ S of type t which lies in C.
Assume that the remaining conditions (A1), (A2), (A4), (A5) are unchanged. Figure 3 illustrates the setup in the B 2 case.
The proof proceeds exactly as before, except that all the chambers in a gallery (σ 0 , σ 1 , . . . , σ k ) of type i now contain a common vertex u of type t. Therefore equation (3) becomes
N i < α t .
Observe that one must be careful with the notation. For example in equation (6), the function χ σ on the left is now defined in terms of sectors based at the vertex of type s of σ, whereas the functions χ σ ′ on the right will now be defined in terms of sectors based at the vertex of type t of σ ′ . Equation (9) becomes
(11) (n t N i − n s ) · [1] = 0 .
The order of the element [1] is bounded by
(12) n t · α t − n s = covol(Γ) − n s .
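Here the equality in (12) is Lemma 3.6 applied to the type t; the bound on the order then follows from (11), since the inequality N i < α t above gives n t N i − n s < n t α t − n s .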
Finally, we verify that Ω Γ is a finitely generated group. Sets of the form Ω(σ), σ ∈ ∆ n , form a basis of clopen sets for the topology of Ω. It follows that the abelian group C(Ω, Z) is generated by the set of characteristic functions {χ σ : σ ∈ ∆ n }. We show that Ω Γ is generated by {[x] : x ∈ D}.
Lemma 3.8. Every clopen set V in Ω may be expressed as a finite disjoint union of sets of the form Ω(σ), σ ∈ ∆ n .
Proof. Fix a special vertex s of type s in ∆. For each ω ∈ Ω, sets of the form Ω(σ) with σ ∈ ∆ n and σ ⊂ [s, ω) form a basic family of open neighbourhoods of ω. Therefore, for each ω ∈ V , there exists a chamber σ ω ∈ ∆ n with σ ω ⊂ [s, ω) and ω ∈ Ω(σ ω ) ⊆ V . The clopen set V , being compact, is a finite union of such sets:
V = Ω(σ ω 1 ) ∪ · · · ∪ Ω(σ ω k ) .
Fix a sector Q in ∆, with base vertex s. For each j, 1 ≤ j ≤ k, let C j be the chamber in Q which is the image of σ ω j , under the unique strong isometry from [s, ω j ) onto Q. Let Q j be the subsector of Q with base chamber C j (1 ≤ j ≤ k), and choose a chamber C in ⋂_{j=1}^{k} Q_j . Informally, C is chosen to be sufficiently far away from the base vertex s.
For 1 ≤ j ≤ k, let τ j be the chamber in [s, ω j ) which is the image of C under the strong isometry from Q onto [s, ω j ). For each ω ∈ Ω(σ ω j ) there is a retraction from [s, ω) onto [s, ω j ) [Gt, 4.2]. Let τ j (ω) be the inverse image of the chamber τ j under this retraction. By local finiteness of ∆, there are only finitely many such chambers τ j (ω), ω ∈ Ω(σ ω j ). Call them τ j,l , 1 ≤ l ≤ n j . Thus Ω(σ ω j ) may be expressed as a finite disjoint union: Ω(σ ω j ) = ⋃_l Ω(τ_{j,l}).
Moreover, if ω ∈ Ω(τ j,l ) then the strong isometry from [s, ω) onto Q maps τ j,l to the chamber C. Finally, V may be expressed as a disjoint union: V = ⋃_{j,l} Ω(τ_{j,l}).
To check that this union is indeed disjoint, suppose that ω ∈ Ω(τ j,l ) ∩ Ω(τ r,s ). Then, under the strong isometry from Q onto [s, ω), the image of the chamber C is equal to both τ j,l and τ r,s . In particular, τ j,l = τ r,s .
Proposition 3.9. Let (G, I, N, S) be an affine topological Tits system, and let Γ be a subgroup of G. Then (a) The abelian group C(Ω, Z) is generated by the set of characteristic functions {χ σ : σ ∈ ∆ n }.
(b) Ω Γ is generated by {[x] : x ∈ D}.
Proof. (a) Any function f ∈ C(Ω, Z) is bounded, by compactness of Ω, and so takes finitely many values n i ∈ Z. Now V i = {ω ∈ Ω : f (ω) = n i } is a clopen set in Ω. It follows from the preceding Lemma that f may be expressed as a finite sum f = ∑_j m_j χ_{σ_j} , with σ j ∈ ∆ n . (b) This is an immediate consequence of (a).
4. Further calculations in the rank 2 case
This section is devoted to showing that the estimate for the order of [1] given by Theorem 2.2 can be improved if the building ∆ is 2-dimensional. The group G has type A 2 , B 2 or G 2 . Denote the type set by I = {s, a, b}, where s is a special type of the corresponding Coxeter diagram, as indicated below. Note that in the B 2 case, the vertex b is also special. In the A 2 case, all vertices are special and q t = q for all t ∈ I.
[Coxeter diagrams of the types A 2 , B 2 and G 2 , with vertices labelled s, a, b; both edges of the B 2 diagram carry the label 4, and one edge of the G 2 diagram carries the label 6.]
Proposition 4.1. Under the preceding assumptions, let Γ be a torsion free lattice in G. Then
(13) (q_a^2 − 1) n_s · [1] = 0 in Ω Γ .
Proof. We prove the G 2 case. For the minimal gallery of type i between C and C ′ described in Figure 1, we obtain N_i = q_s q_a^3 q_b^2, so that (14) q_s q_a^3 q_b^2 n_s · [1] = n_s · [1]. On the other hand, for a minimal gallery of type j between C and C ′′ described in Figure 4 below, we obtain N_j = q_s^2 q_a^4 q_b^4, so that (15) q_s^2 q_a^4 q_b^4 n_s · [1] = n_s · [1]. Equations (14), (15) imply that q_a^2 n_s · [1] = n_s · [1], thereby proving (13).
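To spell out this last step (an elementary manipulation included only for convenience): applying (14) twice gives q_s^2 q_a^6 q_b^4 n_s · [1] = n_s · [1]; hence, using (15) for the first equality, q_a^2 n_s · [1] = q_a^2 q_s^2 q_a^4 q_b^4 n_s · [1] = q_s^2 q_a^6 q_b^4 n_s · [1] = n_s · [1].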
The B 2 and A 2 cases follow by similar calculations, using the configurations in Figure 5 below.
Figure 4. The G 2 case.
Let k be a non-archimedean local field with residue field of order q. Let L be a simple, simply connected linear algebraic k-group and assume that L is k-split and has k-rank 2. Let G be the group of k-rational points of L and let Γ be a torsion free lattice in G. Then q t = q for all t ∈ I [Ti2, §3.5.4], and equation (13) becomes
(16) (q^2 − 1) n_s · [1] = 0.
A parahoric subgroup P s corresponding to a hyperspecial vertex of type s has maximal volume among compact subgroups of G [Ti2, 3.8.2]. This volume is [P s : I], by Remark 3.2. In particular, all such subgroups have the same volume. It follows that n s = covol(Γ)/[P s : I] has the same value for all hyperspecial types s. Suppose, for example, that G is the symplectic group Sp 2 (k), which has type B 2 (or, equivalently, C 2 ). Examination of the tables at the end of [Ti2] shows that the diagram of G has two hyperspecial types s, t. Thus n s = n t , and it follows from (11) and Figure 3 that (q^3 − 1) n_s · [1] = 0.
Combining this with (16) gives the following improvement to (16), for the case G = Sp 2 (k) :
(17) (q − 1)n s · [1] = 0.
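To spell out the elementary step: since gcd(q^2 − 1, q^3 − 1) = q − 1, there are integers a, b with a(q^2 − 1) + b(q^3 − 1) = q − 1, so (17) follows from (16) together with the relation (q^3 − 1) n_s · [1] = 0 above.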
Remark 4.2. An interesting problem is to find the exact value of the order of [1]. This is known in the case where the group G has k-rank 1, and ∆ is a tree. In that case a torsion free lattice Γ in G is a free group of finite rank r, and it follows from [R1, R2] that [1] has order r − 1 = −χ(Γ), where χ(Γ) denotes the Euler-Poincaré characteristic of Γ. If G = SL 2 (k), then −χ(Γ) = (q − 1)n s (Γ).
In the rank 2 case, the order of [1] is in general smaller than χ(Γ). For by [Se1, p. 150, Théorème 7], χ(Γ) = (q − 1)(q^m − 1) n_s(Γ), where m = 2, 3, 5 according as G has type A 2 , B 2 , G 2 . Note however that by (16), (17), we do have χ(Γ) · [1] = 0 if G = SL 3 (k) or G = Sp 2 (k).
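To make the last assertion explicit: for G = SL 3 (k) one has m = 2, so χ(Γ) = (q − 1)(q^2 − 1) n_s(Γ) is an integer multiple of (q^2 − 1) n_s(Γ) and χ(Γ) · [1] = 0 by (16); for G = Sp 2 (k) one has m = 3, so χ(Γ) = (q − 1)(q^3 − 1) n_s(Γ) is a multiple of (q − 1) n_s(Γ) and χ(Γ) · [1] = 0 by (17).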
5. K-Theory of the Boundary Algebra A Γ
We retain the general assumptions of Theorem 2.2. Thus G is a locally compact group acting strongly transitively by type preserving automorphisms on the affine building ∆, and Γ is a torsion free discrete subgroup of G.
As in [RS], [R1], the group Γ acts on the commutative C * -algebra C(Ω), and one can form the full crossed product C * -algebra A Γ = C(Ω) ⋊ Γ, [R1, Section 1]. The inclusion map C(Ω) → A Γ induces a natural homomorphism from C(Ω, Z) = K 0 (C(Ω)) to K 0 (A Γ ), which maps χ σ to the class of the corresponding idempotent in A Γ . The covariance relations in A Γ imply that for each g ∈ Γ and σ ∈ ∆ n , the functions χ σ and g · χ σ = χ gσ map to the same element of K 0 (A Γ ). Thus there is an induced homomorphism ϕ : Ω Γ → K 0 (A Γ ). Moreover ϕ([1]) = [1] K 0 , the class of 1 in the K 0 -group of A Γ . We have the following immediate consequence of Theorem 2.2.
Corollary 5.1. If Γ is a torsion free lattice in G then [1] K 0 has finite order in K 0 (A Γ ).
Remark 5.2. Clearly the bounds for the order of [1] obtained in the preceding sections apply also to [1] K 0 . If G has type A n , Corollary 5.1 was proved in [RS], [R1]. In that case q t = q for all t ∈ I. For n = 1, it follows from [R1, R2] that the order of [1] K 0 is actually
(18) ord([1] K 0 ) = (q − 1) · n s .
The computational evidence at the end of Section 6 below indicates that (18) also holds for n = 2.
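For instance, in the q = 3 example at the end of Section 6, n s = 1 and the class [1] K 0 is found to have order 2 = (q − 1) · n s , as predicted by (18).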
Return now to the general assumptions of Theorem 2.2. It is important that Γ is amenable at infinity in the sense of [AR,Section 5.2]. Since the action of G on ∆ is strongly transitive, its action on the boundary Ω is transitive. Therefore Ω may be identified, as a topological Γ-space, with G/B, where the Borel subgroup B is the stabilizer of some point ω ∈ Ω. The next result shows that the group B is amenable and so the action of Γ on Ω is amenable [AR,Section 2.2]. Moreover the crossed product algebra A Γ is unique : the full and reduced crossed products coincide.
Proposition 5.3. Let ω ∈ Ω and let B = {g ∈ G : gω = ω}. Then B is amenable and so (Γ, Ω) is amenable as a topological Γ-space, if Γ is a closed subgroup of G.
Proof. Let s ∈ ∆ 0 be a special vertex and let A be an apartment in ∆ containing the sector [s, ω). Let N trans denote the subgroup of G consisting of elements which stabilize A and act by translation on A.
If g ∈ B, then the sectors [gs, ω) and g[s, ω) both have base vertex gs and both represent the same boundary point ω. Therefore g[s, ω) = [gs, ω). Now the sectors [gs, ω), [s, ω) and [g −1 s, ω) are all equivalent, and so contain a common subsector S. The sectors S and gS, being subsectors of [s, ω), are parallel sectors in the apartment A. Let σ be the base chamber of S. Since G acts strongly transitively on ∆, there exists an element g ′ ∈ G such that g ′ A = A and g ′ σ = gσ. In particular g ′ ω = ω.
Since the action of G is type preserving, it follows from [Gt,Theorem 17.3] that g ′ ∈ N trans . Moreover gv = g ′ v, for all v ∈ S. Let λ ω (g) = g ′ | A , the restriction of g ′ to A. Then λ ω (g) is the unique translation of A such that gv = λ ω (g)v, for all v ∈ S. As the notation suggests, λ ω (g) depends on g and ω, but not on S.
It is easy to check that the mapping λ ω : g → λ ω (g) is a homomorphism from B onto the group T 0 of type preserving translations on A.
Since T_0 ≅ Z^n is an amenable group, it will follow that B is amenable if ker λ ω can be shown to be amenable.
For each vertex v of [s, ω), let B v = {g ∈ B : gv = v}. Then ker λ ω = ⋃_{v∈[s,ω)} B_v .
Each of the groups B v is compact, being a closed subgroup of a parahoric subgroup. The group ker λ ω may thus be expressed as the inductive limit of the family of compact groups {B v : v ∈ [s, ω)}, directed by inclusion. Therefore ker λ ω is amenable.
Remark 5.4. If G is the group of k-rational points of an absolutely almost simple, simply connected linear algebraic k-group, this result is well known. For then the Borel subgroup B is solvable, hence amenable.
The amenability of the Γ-space Ω has the consequence that the Baum-Connes conjecture, with coefficients in C(Ω), has been verified [Tu, Théorème 0.1]. Consequently K * (A Γ ) can be calculated by means of the Kasparov-Skandalis spectral sequence [KaS, 5.6, 5.7]. This has initial terms E^2_{p,q} = H_p(Γ, K_q(C(Ω))), which equals H_p(Γ, C(Ω, Z)) if 0 ≤ p ≤ n and q is even, and equals 0 otherwise.
Note that H p = 0 for p > n, since Γ has homological dimension ≤ n . Moreover K 1 (C(Ω)) = 0, since Ω is totally disconnected.
Since H 2 (Γ, Z) is free abelian, (22) splits and we have K_0(C*_r(Γ)) = H_0(Γ, Z) ⊕ H_2(Γ, Z) and K_1(C*_r(Γ)) = H_1(Γ, Z).
Now H 1 (Γ, Z) is a finite group, because Γ has Kazhdan's property (T) [BS, Corollary 1]. It follows that K_0(C*_r(Γ)) = Z^{χ(Γ)}, where χ(Γ) is the Euler-Poincaré characteristic of Γ. This proves the following theorem.
Theorem 6.1. Let Γ be a torsion free cocompact lattice in G, where (G, I, N, S) is an affine topological Tits system of type A 2 . Then
(24) K_0(C*_r(Γ)) = Z^{χ(Γ)} and K_1(C*_r(Γ)) = Γ^{ab} .
The value of χ(Γ) is easily calculated [Se1, p. 150, Théorème 7], [R1, Section 4]. It is
(25) χ(Γ) = (q − 1)(q^2 − 1) · n_s(Γ),
where q is the order of the building ∆ and n s (Γ) is the number of Γ-orbits of vertices of type s, where s ∈ I is fixed.
In [CMSZ] a detailed study was undertaken of groups of type-rotating automorphisms of A 2 buildings, subject to the condition that the group action is free and transitive on the vertex set of the building. For A 2 buildings of orders q = 2, 3, the authors of that article give a complete enumeration of the possible groups with this property. These groups are called A 2 groups. Some, but not all, of the A 2 groups are cocompact lattices in PGL 3 (k) for some local field k with residue field of order q. It is an empirical fact that either k = Q_p or k = F_q((X)) in all the examples constructed so far.
For each A 2 group in PGL 3 (k), consider the unique type preserving subgroup Γ of index 3. Each such Γ is torsion free and acts freely and transitively on the set of vertices of a fixed type s. That is, n s = 1. Therefore χ(Γ) = (q − 1)(q^2 − 1) = 1 + rank H 2 (Γ, Z).
Remark 6.2. There are eight such groups Γ if q = 2, and twenty-four if q = 3. Using the results of [RS] and the MAGMA computer algebra package, one can compute K 0 (A Γ ). One checks that in all these examples, rank K_0(A_Γ) = 2 · rank H_2(Γ, Z), which equals 4 if q = 2 and 30 if q = 3.
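As a consistency check on these numbers (a routine verification): with n s = 1, equation (25) gives χ(Γ) = 1 · 3 = 3 for q = 2 and χ(Γ) = 2 · 8 = 16 for q = 3, so rank H_2(Γ, Z) = χ(Γ) − 1 equals 2 and 15 respectively, in agreement with the ranks 4 = 2 · 2 and 30 = 2 · 15 just recorded.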
its type preserving subgroup Γ is torsion free. One obtains K_0(A_Γ) = Z^{30} ⊕ (Z/2Z) ⊕ (Z/3Z)^6 ⊕ (Z/13Z)^4, and the class of [1] in K_0(A_Γ) is the nonzero element of the Z/2Z summand, which has order q − 1 = 2. It also follows from Theorem 6.1 that K_0(C*_r(Γ)) = Z^{16} and K_1(C*_r(Γ)) = (Z/3Z)^3 ⊕ (Z/13Z).
Figure 1. Part of an apartment A in a building of type G 2 .
Figure 2. A minimal gallery (σ 0 , σ 1 , . . . , σ k ) in a G 2 building.
Figure 3. Part of an apartment A in a building of type B 2 , and a minimal gallery from C to C ′ .
Figure 5.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
This proves the second assertion of Theorem 2.2. The next Lemma proves the final assertion of Theorem 2.2 by showing that the estimate of the order of [1] can be improved if certain exceptional cases are excluded.

Lemma 3.7. Suppose that the Weyl group is not one of the exceptional types E 8 , F 4 , G 2 . Then ord([1]) < covol(Γ).
[Figure: The A 2 case.]
[Figure: The B 2 case.]
E 2 00   E 2 10   E 2 20   0   0   · · ·

Recall that for r ≥ 2 there are differentials d r p,q : E r p,q → E r p−r,q+r−1 , and E r+1 p,q is the homology of E r * at the position of E r p,q . Since the differentials d 2 go up one row, it is clear that d 2 = 0 and E 3 p,q = E 2 p,q . Since the differentials d 3 go three units to the left, d 3 = 0 and E 4 p,q = E 3 p,q . Continuing in this way we see that E ∞ p,q = E 2 p,q . Therefore the spectral sequence degenerates with E ∞ p,q = E 2 p,q .

Convergence of the spectral sequence to K * (A Γ ) means that, and that there is a short exact sequence. In particular, Ω Γ = H 0 (Γ, C(Ω, Z)) is isomorphic to a subgroup of K 0 (A Γ ).

6. A 2 buildings and reduced group C * -algebras

The reduced group C * -algebra of a group Γ is the completion C * r (Γ) of the complex group algebra of Γ in the regular representation as operators on ℓ 2 (Γ). Let Γ be a discrete torsion free group acting properly on the affine building ∆, satisfying the hypotheses of Theorem 2.2. By Proposition 5.3, Γ acts amenably on the compact space Ω. It follows that the Baum-Connes assembly map is injective [Hig] and so the Novikov conjecture is true. (This also follows from [KaS].) Therefore the class [1] in K 0 (C * r (Γ)) does not have finite order. Since C * r (Γ) embeds in A Γ , there is a natural homomorphism K * (C * r (Γ)) → K * (A Γ ). This homomorphism is not injective, by Theorem 2.2, since [1] does have finite order in K 0 (A Γ ). It is therefore worth comparing the K-theories of these two algebras. If the building is type A 2 , everything can be calculated explicitly.

The computation required is a corollary of [La], which states that the Baum-Connes conjecture holds for any discrete group Γ satisfying the following properties.
(1) Γ acts continuously, isometrically and properly with compact quotient on a uniformly locally finite affine building or on a complete riemannian manifold of nonpositive curvature;
(2) Γ has property (RD) of Jolissaint.
For a group Γ satisfying these conditions, (The notation is consistent with [BCH], because ∆ is Γ-compact.) This provides a way of calculating the groups K * (C * r (Γ)). Assume therefore that all the conditions of Theorem 2.2 hold, together with the condition that ∆ has type A 2 . This is the case, for example, if Γ is a torsion free lattice in G = SL 3 (k). Condition (1) is clearly satisfied and condition (2) is also satisfied by the main result of [RRS].

The finite cell complex BΓ = ∆/Γ is a K(Γ, 1) space [Br, I.4], so the group homology H * (Γ, Z) is isomorphic to the usual simplicial homology H * (BΓ) [Br, Proposition II.4.1]. Thus H 0 (Γ, Z) = Z and H 1 (Γ, Z) = Γ ab , the abelianization of Γ. Moreover, since BΓ is 2-dimensional, the group Γ has homological dimension at most 2 [Br, VIII.2 Proposition (2.2) and VIII.6 Exercise 6]. It follows that H 2 (Γ, Z) is free abelian and H p (Γ, Z) = 0 for p > 2. Since Γ satisfies the Baum-Connes conjecture, K * (C * r (Γ)) coincides with its "γ-part" [Ka, Definition-corollary 3.12]. Therefore K * (C * r (Γ)) may be computed as the limit of a spectral sequence E r p,q [KaS, Theorem 5.6 and Remark 5.7(a)]. Since Γ is torsion free, Γ acts freely on ∆. According to [KaS, Remarks 5.7(b)] the initial terms of the spectral sequence are given by (20): E 2 p,q = H p (Γ, Z) if p ∈ {0, 1, 2} and q is even, and 0 otherwise. The nonzero terms in the first quadrant are shown in (19). Exactly as for (19), the spectral sequence degenerates with E ∞ p,q = E 2 p,q .
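For orientation (an illustration added here, inferred from the description in the text rather than copied from display (19)), the nonzero part of the first quadrant of such an E 2 page has the shape

$$\begin{array}{ccccc}
\vdots & \vdots & \vdots & \vdots & \\
E^2_{0,2} & E^2_{1,2} & E^2_{2,2} & 0 & \cdots \\
0 & 0 & 0 & 0 & \cdots \\
E^2_{0,0} & E^2_{1,0} & E^2_{2,0} & 0 & \cdots
\end{array}$$

with nonzero entries only in columns p = 0, 1, 2 and even rows q, so every differential d r with r ≥ 2 starts or ends at a zero group, which is why the sequence degenerates at E 2 .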
Convergence of the spectral sequence to K * (C * r (Γ)) means that, and that there is a short exact sequence. Furthermore, the class of [1] in K 0 (A Γ ) has order q − 1. Note that for q = 2 this means that [1] = 0. These values also appear to be true for higher values of q. In particular, they have been verified for a number of groups with q = 4, 5, 7. Here is an example with q = 4.

Example 6.3. Consider the Regular A 2 group Γ r , with q = 4. This is a torsion free cocompact subgroup of PGL 3 (K), where K is the Laurent series field F 4 ((X)) with coefficients in the field F 4 with four elements. It is described in [CMSZ, Part I, Section 4], and its embedding in PGL 3 (F 4 ((X))) is essentially unique, by the Strong Rigidity Theorem of Margulis. The group Γ r is torsion free and has 21 generators x i , 0 ≤ i ≤ 20, and relations (written modulo 21): Let Γ < PSL 3 (K) be the type preserving index three subgroup of Γ r . The group Γ has generators x j x −1 0 , 1 ≤ j ≤ 20. Using the results of [RS] one obtains K 0 (A Γ ) = Z 88 ⊕ (Z/2Z) 12 ⊕ (Z/3Z) 4 ⊕ (Z/7Z) 4 ⊕ (Z/9Z), and the class of [1] in K 0 (A Γ ) is 3 + Z/9Z, which has order q − 1 = 3. It also follows from [RS, Theorem 2.1] that K 0 (A Γ ) = K 1 (A Γ ). According to Theorem 6.1, K 0 (C * r (Γ)) = Z 45 = Z 44 ⊕ [1] and K 1 (C * r (Γ)) = (Z/2Z) 6 ⊕ (Z/3Z). The second equality was obtained using the MAGMA computer algebra package. This, and similar, examples suggest that the only reason for failure of injectivity of the natural homomorphism K 0 (C * r (Γ)) → K 0 (A Γ ) is the fact that [1] has finite order in K 0 (A Γ ).

Example 6.4. For completeness, here are the results of the computations for one of the groups with q = 3. The Regular group 1.1 of [CMSZ], with q = 3, has 13 generators x i , 0 ≤ i ≤ 12, and relations (written modulo 13):

$$x_j^3 = 1, \quad 0 \le j \le 13, \qquad x_j x_{j+8} x_{j+6} = 1, \quad 0 \le j \le 13.$$

Let Γ be the type preserving index three subgroup. The group Γ has generators x j x −1 0 , 1 ≤ j ≤ 12. Note that the group 1.1 has torsion, but
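As an aside on the computations referred to in Examples 6.3 and 6.4, which the text attributes to the MAGMA computer algebra package, the following rough sketch (in Python with sympy, and not taken from the paper) illustrates the Smith normal form approach to such homology computations. It computes the abelianization of the ambient group 1.1 above from its presentation, not the K-theory groups themselves, and it assumes that sympy's smith_normal_form accepts rectangular integer matrices (recent versions do).

from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

n = 13
rows = []
for j in range(n):
    r = [0] * n
    r[j] = 3                          # relation x_j^3 = 1, abelianised
    rows.append(r)
for j in range(n):
    r = [0] * n
    for k in (j, (j + 8) % n, (j + 6) % n):
        r[k] += 1                     # relation x_j x_{j+8} x_{j+6} = 1, abelianised
    rows.append(r)

R = Matrix(rows)                      # 26 x 13 relation matrix over Z
D = smith_normal_form(R, domain=ZZ)
# The nonzero diagonal entries d_i give the abelianization as the direct sum
# of the cyclic groups Z/d_i (entries equal to 1 contribute nothing); a rank
# deficit would contribute free Z summands.
invariants = [D[i, i] for i in range(min(D.shape)) if D[i, i] != 0]
print(invariants)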
C. Anantharaman-Delaroche and J. Renault, Amenable Groupoids, Monographs of L'Enseignement Mathématique, Geneva, 2000.
P. Baum, A. Connes and N. Higson, Classifying space for proper actions and K-theory of group C * -algebras, in C * -algebras: 1943-1993 A Fifty Year Celebration, 241-291, Contemp. Math. 167, Amer. Math. Soc., 1994.
M. Burger and S. Mozes, CAT(-1)-spaces, divergence groups and their commensurators, J. Amer. Math. Soc. 9 (1996), 57-93.
N. Bourbaki, Groupes et algèbres de Lie, Chap. IV-VI, Éléments de mathématique, Fasc. XXXIV, Actualités Scientifiques et Industrielles, No. 1337, Hermann, Paris, 1968.
K. Brown, Cohomology of Groups, Springer-Verlag, New York, 1982.
W. Ballmann and J. Swiatkowski, On L 2 -cohomology and Property (T) for automorphism groups of polyhedral cell complexes, Geometric and Funct. Anal. 7 (1997), 615-645.
F. Bruhat and J. Tits, Groupes réductifs sur un corps local : I. Données radicielles valuées, Inst. Hautes Études Sci. Publ. Math. 41 (1972), 5-251.
D. I. Cartwright, A. M. Mantero, T. Steger and A. Zappa, Groups acting simply transitively on the vertices of a building of type A 2 , I, II, Geom. Ded. 47 (1993), 143-166 and 167-223.
H. Garland, p-adic curvature and the cohomology of discrete subgroups of p-adic groups, Ann. of Math. 97 (1973), 375-423.
P. Garrett, Buildings and Classical Groups, Chapman and Hall, London, 1997.
M. Gromov, Hyperbolic groups, in Essays in group theory, 75-263, Math. Sci. Res. Inst. Publ. 8, Springer, New York, 1987.
N. Higson, Bivariant K-theory and the Novikov conjecture, Geometric and Funct. Anal. 10 (2000), 563-581.
N. Iwahori and H. Matsumoto, On some Bruhat decomposition and the structure of the Hecke ring of p-adic Chevalley groups, Inst. Hautes Études Sci. Publ. Math. 25 (1965), 5-48.
G. G. Kasparov and G. Skandalis, Groups acting on buildings, operator K-theory, and Novikov's conjecture, K-theory 4 (1991), 303-337.
G. G. Kasparov, Equivariant KK-theory and the Novikov conjecture, Invent. Math. 91 (1988), 147-201.
B. Kleiner and B. Leeb, Rigidity of quasi-isometries for symmetric spaces and Euclidean buildings, Inst. Hautes Études Sci. Publ. Math. 86 (1997), 115-197.
V. Lafforgue, Une démonstration de la conjecture de Baum-Connes pour les groupes réductifs sur un corps p-adique et pour certains groupes discrets possédant la propriété (T), C. R. Acad. Sci. Paris Sér. I Math. 327 (1998), 439-444.
G. A. Margulis, Discrete Subgroups of Semisimple Lie Groups, Springer-Verlag, Berlin, 1991.
G. Robertson, Torsion in K-theory for boundary actions on affine buildings of type A n , K-theory 22 (2001), 251-269.
G. Robertson, Boundary operator algebras for free uniform tree lattices, Houston J. Math., to appear.
J. Ramagge, G. Robertson and T. Steger, A Haagerup inequality for A 1 × A 1 and A 2 buildings, Geometric and Funct. Anal. 8 (1998), 702-731.
G. Robertson and T. Steger, Asymptotic K-theory for groups acting on A 2 buildings, Canad. J. Math. 53 (2001), 809-833.
M. A. Ronan, Lectures on Buildings, Perspectives in Math. 7, Academic Press, London, 1989.
J.-P. Serre, Cohomologie des groupes discrets, Ann. of Math. Studies 70 (1971), 77-169.
J.-P. Serre, Arbres, amalgames, SL 2 , Astérisque 46, Soc. Math. France, 1977.
J. Tits, Algebraic and abstract simple groups, Ann. of Math. 80 (1964), 313-329.
J. Tits, Reductive groups over local fields, Proceedings of Symposia in Pure Mathematics 33 (1979), 29-69.
J. L. Tu, La conjecture de Baum-Connes pour les feuilletages moyennables, K-theory 17 (1999), 215-264.
|
[] |
[
"Large deviations of mean-field interacting particle systems in a fast varying environment",
"Large deviations of mean-field interacting particle systems in a fast varying environment"
] |
[
"Sarath Yasodharan \nIndian Institute of Science\n\n",
"Rajesh Sundaresan \nIndian Institute of Science\n\n"
] |
[
"Indian Institute of Science\n",
"Indian Institute of Science\n"
] |
[] |
This paper studies large deviations of a "fully coupled" finite state mean-field interacting particle system in a fast varying environment. The empirical measure of the particles evolves in the slow time scale and the random environment evolves in the fast time scale. Our main result is the path-space large deviation principle for the joint law of the empirical measure process of the particles and the occupation measure process of the fast environment. This extends previous results known for two time scale diffusions to two time scale mean-field models with jumps. Our proof is based on the method of stochastic exponentials. We characterise the rate function by studying a certain variational problem associated with an exponential martingale.MSC 2010 subject classifications: Primary 60F10; Secondary 60K37, 60K35, 60J75 Keywords: Mean-field interaction, large deviations, time scale separation, averaging principle, metastability such fully coupled two time scale mean-field models (see Section 2.2 for the precise mathematical model and Theorem 2.2 for the statement of the main result).Our study of the LDP for such a two time scale mean-field model is motivated by the metastability phenomenon in networked systems. Many networked systems that arise in practice can be modelled using a two time scale mean-field model; see Appendix A for details of a retrial queueing system with N orbit queues, and a wireless local area network with local interactions. In such networks, there could be multiple seemingly "stable points of operation", or metastable points. Some of these may be desirable but some others undesirable in terms of some performance metrics. One is often interested in understanding the following metastable phenomena: (i) the mean time spent by the network near an operating point, (ii) the mean time required for transiting from one stable operating point to another, (iii) the mean time for the system to be sufficiently close to stationarity, etc. The process level large deviations result established in this paper helps to answer such questions on the large time behaviour of these systems.The above two time scale mean-field model is an example of a stochastic process with time scale separation where a certain component of the process evolves in the slow time scale (i.e. O(1)change in a given O(1) time duration) and another component evolves in the fast time scale (i.e. O(N )-change in a given O(1) time duration). Such processes that evolve on multiple time scales have been well studied in the past, and it is known that, under mild conditions, they exhibit the "averaging principle": when the time scale separation N becomes large, the slow component tracks the solution to a certain dynamical system whose driving function is "averaged" over the stationary behaviour of the fast component. In his seminal work, Khasminskii [21] first proved the averaging principle for two time scale diffusions. Freidlin and Wentzell [15, Chapter 7, Section 9] studied the averaging phenomenon in a fully coupled system of diffusions where both the drift and the diffusion coefficients of the slow component depend on the fast component and vice-versa. Their proof is based on discretisation arguments. The averaging phenomenon has also been studied in the context of jump processes with applications to performance analysis of various computer communication systems and queueing networks -Castiel et al. [6] studied a carrier sense multiple access algorithm in the context of wireless networks, Bordenave et al. 
[3] studied performance analysis of wireless local area networks, Hunt and Kurtz [16] studied scaling limits of loss networks, Hunt and Laws [17] studied analysis of trunk reservation policy in the context of loss networks; also see Kelly [20] and the references therein for other works on loss networks in the two time scale framework. While the above works on jump processes study the averaging principle in the large-N limit, this paper focuses on process-level large deviations from the large-N limit. Various authors have studied process level large deviations of diffusion processes evolving on multiple time scales under various assumptions -see Freidlin [15], Veretennikov [31, 32], Liptser [25], Puhalskii [28] and the references therein. Liptser [25] established the large deviation principle for the joint law of the slow process and the occupation measure of the fast process for one-dimensional diffusions when the fast process does not depend on the slow variable. More recently, Puhalskii [28] extended this for multidimensional diffusions when the slow and fast processes are fully coupled.His approach is based on the method of stochastic exponentials for large deviations[26], where one identifies a suitable exponential martingale associated with the process and characterises the rate function in terms of this exponential martingale. In identifying the rate function, the main ingredient in the proof is to study a certain variational problem and show certain continuity property of its solution.In this paper, our proof of the process-level large deviation result is based on the method of stochastic exponentials, see Puhalskii[26,28], but the main difficulty lies in extending the approach of Puhalskii [28] to our two time scale mean-field model with jumps. In particular, our setting requires us to study certain variational problems in an Orlicz space, instead of the usual L 2 space in the context of diffusions, to characterise the rate function; see Theorem 5.3 and Theorem 6.2.
|
10.1214/21-aap1718
|
[
"https://arxiv.org/pdf/2008.06855v2.pdf"
] | 221,139,537 |
2008.06855
|
3169cf1a582eb6767d9bb376f4e3338a625abad8
|
Large deviations of mean-field interacting particle systems in a fast varying environment
23 Jun 2021 June 24, 2021
Sarath Yasodharan
Indian Institute of Science
Rajesh Sundaresan
Indian Institute of Science
Large deviations of mean-field interacting particle systems in a fast varying environment
23 Jun 2021 June 24, 2021 arXiv:2008.06855v2 [math.PR] Mean-field interaction, large deviations, time scale separation, averaging principle, metastability
This paper studies large deviations of a "fully coupled" finite state mean-field interacting particle system in a fast varying environment. The empirical measure of the particles evolves in the slow time scale and the random environment evolves in the fast time scale. Our main result is the path-space large deviation principle for the joint law of the empirical measure process of the particles and the occupation measure process of the fast environment. This extends previous results known for two time scale diffusions to two time scale mean-field models with jumps. Our proof is based on the method of stochastic exponentials. We characterise the rate function by studying a certain variational problem associated with an exponential martingale.MSC 2010 subject classifications: Primary 60F10; Secondary 60K37, 60K35, 60J75 Keywords: Mean-field interaction, large deviations, time scale separation, averaging principle, metastability such fully coupled two time scale mean-field models (see Section 2.2 for the precise mathematical model and Theorem 2.2 for the statement of the main result).Our study of the LDP for such a two time scale mean-field model is motivated by the metastability phenomenon in networked systems. Many networked systems that arise in practice can be modelled using a two time scale mean-field model; see Appendix A for details of a retrial queueing system with N orbit queues, and a wireless local area network with local interactions. In such networks, there could be multiple seemingly "stable points of operation", or metastable points. Some of these may be desirable but some others undesirable in terms of some performance metrics. One is often interested in understanding the following metastable phenomena: (i) the mean time spent by the network near an operating point, (ii) the mean time required for transiting from one stable operating point to another, (iii) the mean time for the system to be sufficiently close to stationarity, etc. The process level large deviations result established in this paper helps to answer such questions on the large time behaviour of these systems.The above two time scale mean-field model is an example of a stochastic process with time scale separation where a certain component of the process evolves in the slow time scale (i.e. O(1)change in a given O(1) time duration) and another component evolves in the fast time scale (i.e. O(N )-change in a given O(1) time duration). Such processes that evolve on multiple time scales have been well studied in the past, and it is known that, under mild conditions, they exhibit the "averaging principle": when the time scale separation N becomes large, the slow component tracks the solution to a certain dynamical system whose driving function is "averaged" over the stationary behaviour of the fast component. In his seminal work, Khasminskii [21] first proved the averaging principle for two time scale diffusions. Freidlin and Wentzell [15, Chapter 7, Section 9] studied the averaging phenomenon in a fully coupled system of diffusions where both the drift and the diffusion coefficients of the slow component depend on the fast component and vice-versa. Their proof is based on discretisation arguments. The averaging phenomenon has also been studied in the context of jump processes with applications to performance analysis of various computer communication systems and queueing networks -Castiel et al. [6] studied a carrier sense multiple access algorithm in the context of wireless networks, Bordenave et al. 
[3] studied performance analysis of wireless local area networks, Hunt and Kurtz [16] studied scaling limits of loss networks, Hunt and Laws [17] studied analysis of trunk reservation policy in the context of loss networks; also see Kelly [20] and the references therein for other works on loss networks in the two time scale framework. While the above works on jump processes study the averaging principle in the large-N limit, this paper focuses on process-level large deviations from the large-N limit. Various authors have studied process level large deviations of diffusion processes evolving on multiple time scales under various assumptions -see Freidlin [15], Veretennikov [31, 32], Liptser [25], Puhalskii [28] and the references therein. Liptser [25] established the large deviation principle for the joint law of the slow process and the occupation measure of the fast process for one-dimensional diffusions when the fast process does not depend on the slow variable. More recently, Puhalskii [28] extended this for multidimensional diffusions when the slow and fast processes are fully coupled.His approach is based on the method of stochastic exponentials for large deviations[26], where one identifies a suitable exponential martingale associated with the process and characterises the rate function in terms of this exponential martingale. In identifying the rate function, the main ingredient in the proof is to study a certain variational problem and show certain continuity property of its solution.In this paper, our proof of the process-level large deviation result is based on the method of stochastic exponentials, see Puhalskii[26,28], but the main difficulty lies in extending the approach of Puhalskii [28] to our two time scale mean-field model with jumps. In particular, our setting requires us to study certain variational problems in an Orlicz space, instead of the usual L 2 space in the context of diffusions, to characterise the rate function; see Theorem 5.3 and Theorem 6.2.
Introduction
Let X , Y be finite sets and (X , E X ) and (Y, E Y ) be directed graphs on X and Y respectively. Let M 1 (X ) denote the space of probability measures on X . For each N ≥ 1, we consider Markov processes with infinitesimal generators acting on functions f on M N 1 (X ) × Y of the form
$$\sum_{(x,x') \in E_X} N \xi(x)\, \lambda_{x,x'}(\xi, y) \left[ f\Big( \xi + \frac{\delta_{x'}}{N} - \frac{\delta_x}{N},\, y \Big) - f(\xi, y) \right] \;+\; N \sum_{y' : (y,y') \in E_Y} \big( f(\xi, y') - f(\xi, y) \big)\, \gamma_{y,y'}(\xi),$$

ξ ∈ M N 1 (X ) and y ∈ Y; here M N 1 (X ) ⊂ M 1 (X ) denotes the set of probability measures on X that can arise as empirical measures of N -particle configurations on X N , λ x,x ′ (·, y) : M 1 (X ) → R + , (x, x ′ ) ∈ E X and y ∈ Y, and γ y,y ′ : M 1 (X ) → R + , (y, y ′ ) ∈ E Y , are given functions. Such processes arise in the context of weakly interacting Markovian mean-field particle systems in a fast varying environment where the empirical measure of the particle system evolves in the slow time scale and the environment process evolves in the fast time scale. An important feature of such processes is that they are "fully coupled", i.e., the evolution of the empirical measure depends on the state of the environment, and the environment itself changes its state depending on the empirical measure of the particle system. This paper establishes a process-level large deviation principle (LDP) for the joint law of the empirical measure process and the occupation measure of the fast environment for such fully coupled two time scale mean-field models (see Section 2.2 for the precise mathematical model and Theorem 2.2 for the statement of the main result).
† Supported by a fellowship grant from the Centre for Networked Intelligence (a Cisco CSR initiative) of the Indian Institute of Science, Bangalore.
While Puhalskii [28] uses tools from the theory of elliptic partial differential equations for the characterisation of the rate function, we use tools from convex analysis and parametric continuity of optimisation problems. Also, our mean-field setting makes the solutions to these variational problems blow up near the boundary of the state space, and one of the main novelties of our work is the methodology to obtain a characterisation of the rate function in such cases via suitable approximations -see Section 7. Other works in the two time scale regime include Budhiraja et al. [5] who studied the case where the slow process is a diffusion and the fast process is a Markov chain on a finite set; their proof is based on the weak convergence approach to large deviations where one establishes the LDP by studying certain controlled versions of the processes. Kumar and Popovic [22] established the LDP for two time scale jump-diffusions under some general conditions via convergence of nonlinear semigroups, but their approach requires verification of the comparison principle for a certain nonlinear operator. While this is a possible alternative approach for the mean-field problem under consideration, we have used the more probabilistic stochastic exponentials approach.
Let us also mention some works on large deviations of mean-field models that do not involve the fast environment. Dawson and Gärtner [8] established process-level large deviations of interacting diffusions of mean-field type where each particle evolves as a diffusion process with coefficients that depend on the other particles via the empirical measure of the states of all the particles. Léonard [24,23] extended this to the case of jump processes. Our work can be viewed as an extension of Léonard [23] to the case of finite state mean-field interacting particle systems with a fully coupled fast varying environment. In the stationary regime, Borkar and Sundaresan [4] studied large deviations of the stationary measure of finite state mean-field interacting particle systems using tools from Freidlin and Wentzell [15,Chapter 6], and the authors [33] studied large time behaviour, metastability and convergence to stationarity in such systems using tools from Hwang and Sheu [18]. Our results in this paper, along with the results in [33], can be used to study the large time behaviour and metastability of two time scale mean-field models; see Section 2.3.2.
The rest of the paper is organised as follows. We start with a formal description of our fully coupled two time scale mean-field model and state our main result and its implications in Section 2. The proof of the main result is carried out in Sections 3-8. Section 3 establishes exponential tightness of the joint law of the empirical measure process and the occupation measure process of the fast environment. In Section 4, we define a certain exponential martingale and show a necessary condition that holds for every subsequential rate function. In Section 5, we define our candidate rate function using the above exponential martingale and study its relevant properties. In Section 6, we obtain a characterisation of subsequential rate functions for sufficiently regular elements in the space and Section 7 extends this to the whole space using certain approximation arguments. Finally we complete the proof of the main result in Section 8.
System model and main result
2.1 Notation
We summarise the frequently used notation in the paper. Let ·, · denote inner product and · denote the norm on Euclidean spaces. Given a complete separable metric space S, let B(S) denote the space of bounded Borel-measurable functions on S equipped with the uniform topology. Let M (S) denote the space of finite measures on S equipped with the topology of weak convergence. Let M 1 (S) denote the space of probability measures on S equipped with the Lévy-Prohorov metric (which generates the topology of weak convergence). (If S is a finite set, then M 1 (S) can be viewed as an (|S| − 1)-dimensional subset of the Euclidean space R |S| ; in this case, for ν ∈ M 1 (S), we shall denote the density of ν with respect to the counting measure on S by ν). Given N ∈ N, M N 1 (S) ⊂ M 1 (S) denotes the set of probability measures that can arise as empirical measures of N independent S-valued random variables. Given T > 0, let D([0, T ], S) (resp. D(R + , S)) denote the space of càdlàg functions on [0, T ] (resp. R + ) equipped with the Skorohod-J 1 topology (see, for example, Ethier and Kurtz [13,Chapter 3]). Similarly, given a finite set Y, D ↑ ([0, T ], M (Y)) ⊂ D([0, T ], M (Y)) denotes the space of càdlàg functions θ on [0, T ] such that for each 0 ≤ s ≤ t ≤ T , θ t −θ s is an element of M (Y) and θ t (Y) = t. This equipped with its subspace topology is a complete and separable metric space, and is closed in D([0, T ], M (Y)). If X is an element of D([0, T ], S), D([0, ∞), S) or D ↑ ([0, T ], M (Y)), let X t and X(t) denote the coordinate projection of X at time t.
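For the reader's convenience (standard material not spelled out in the text), the Lévy-Prohorov metric on M 1 (S) referred to above can be taken to be

$$d_{LP}(\mu, \nu) := \inf\big\{ \varepsilon > 0 : \mu(A) \le \nu(A^{\varepsilon}) + \varepsilon \ \text{and} \ \nu(A) \le \mu(A^{\varepsilon}) + \varepsilon \ \text{for all Borel sets } A \subset S \big\},$$

where A ε is the open ε-neighbourhood of A; on a complete separable metric space, convergence in this metric is equivalent to weak convergence of probability measures.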
Denote the logarithmic moment generating function of the centred unit rate Poisson law by τ (u) := e u − u − 1, u ∈ R, and its convex dual by

$$\tau^*(u) := \begin{cases} +\infty & \text{if } u < -1, \\ 1 & \text{if } u = -1, \\ (u+1)\log(u+1) - u & \text{if } u > -1. \end{cases}$$
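As a quick sanity check (not part of the original text), the expression for τ * can be recovered from the Legendre transform τ * (u) = sup v∈R (uv − τ (v)): for u > −1 the supremum is attained at v = log(1 + u), which gives

$$\tau^*(u) = u \log(1+u) - \big( (1+u) - \log(1+u) - 1 \big) = (u+1)\log(u+1) - u,$$

while for u = −1 one has sup v (1 − e v ) = 1, and for u < −1 the quantity (u + 1)v − e v + 1 tends to +∞ as v → −∞.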
Given a complete separable metric space S and a finite measure ϑ on S, let L τ (S, ϑ) and L τ * (S, ϑ) denote the Orlicz spaces corresponding to the functions τ and τ * , respectively (see, for example, Rao and Ren [29, Chapter 3] for an introduction to Orlicz spaces). The Orlicz norms on these spaces are denoted by ∥ · ∥ L τ (S,ϑ) and ∥ · ∥ L τ * (S,ϑ) , respectively. Given a directed and connected graph (V, E) and ∆ = (u, v) ∈ E, let u + ∆ denote v. Given a function f on [0, T ] × S × V , let Df denote the function on [0, T ] × S × V × E defined by Df (t, s, u, ∆) = f (t, s, v) − f (t, s, u) where ∆ = (u, v) ∈ E. Given a subset W of a Euclidean space and T > 0, let
C 1,1 ([0, T ] × W × S) (resp. C ∞ ([0, T ] × W × S)) denote the space of functions f (t, u, s), (t, u, s) ∈ [0, T ] × W × S, that are continuously differentiable (resp. infinitely differentiable) in both t and u. For any function X on [0, T ] × S, let X t (s) and X(t, s) denote the evaluation of X at (t, s) ∈ [0, T ] × S.
We finally recall the definition of a large deviation principle. Let S be a metric space. We say that a sequence {X N } N ≥1 of S-valued random variables defined on a probability space (Ω, F, P ) satisfies the large deviation principle (LDP) with rate function I : S → [0, +∞] if
• the lower level sets of I are compact, i.e., for each M > 0, {x ∈ S : I(x) ≤ M } is a compact subset of S;
• for each open set G ⊂ S,
lim inf N →∞ (1/N ) log P (X N ∈ G) ≥ − inf x∈G I(x);
• for each closed set F ⊂ S,
lim sup N →∞ (1/N ) log P (X N ∈ F ) ≤ − inf x∈F I(x).
We say that I : S → [0, +∞] is a subsequential rate function for the family {X N } N ≥1 if there exists a subsequence {N k } k≥1 of N such that the sequence {X N k } k≥1 satisfies the large deviation principle with rate function I.
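As a standard illustration of this definition (an example supplied here, not taken from the paper), Cramér's theorem gives that the empirical means S N = (X 1 + · · · + X N )/N of i.i.d. Bernoulli(1/2) random variables satisfy the LDP on S = [0, 1] with rate function

$$I(x) = x \log x + (1-x)\log(1-x) + \log 2, \qquad x \in [0,1],$$

with the convention 0 log 0 := 0; for instance, P (S N ≥ x) behaves like e −N I(x) for x > 1/2.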
System model
We describe our model of the mean-field interacting particle system in a fast environment. Let there be N particles and an environment. There is a state associated with each particle as well as the environment at all times; the particle states come from a finite set X and the environment state comes from a finite set Y. The state of the nth particle at time t is denoted by X N n (t) ∈ X , and the state of the environment at time t is denoted by Y N (t) ∈ Y. To describe the evolution of the states of the particles, we consider a directed graph (X , E X ) on the vertex set X with the interpretation that whenever (x, x ′ ) ∈ E X , a particle at state x can transit to state x ′ . Similarly, to describe the evolution of the environment, we consider a directed graph (Y, E Y ); (y, y ′ ) ∈ E Y implies that the environment can transit from state y to state y ′ .
To describe the particle transitions, we define, for each y ∈ Y and (x, x ′ ) ∈ E X , a function λ x,x ′ (·, y) : M 1 (X ) → R + , and for each y ∈ Y, we consider the generator Q N,y acting on functions on X N by
Q_{N,y} f(x^N) = Σ_{n=1}^{N} Σ_{x′_n : (x_n, x′_n) ∈ E_X} λ_{x_n, x′_n}(x̄^N, y) ( f(x^N_{n, x_n, x′_n}) − f(x^N) ),
where x̄^N := (1/N) Σ_{n=1}^{N} δ_{x_n} denotes the empirical measure associated with the configuration x^N, and x^N_{n, x_n, x′_n} denotes the resultant configuration of particles when the nth particle changes its state from x_n to x′_n in x^N. To describe the transitions of the environment, for each (y, y′) ∈ E_Y, we define a function γ_{y,y′}(·) : M_1(X) → R_+, and for each ξ ∈ M_1(X), we consider the generator L_ξ acting on functions on Y by
L ξ g(y) = y ′ :(y,y ′ )∈E Y (g(y ′ ) − g(y))γ y,y ′ (ξ).
Finally, we consider the generator Ψ N acting on functions f on X N × Y by
Ψ_N f(x^N, y) = Q_{N,y} f(·, y)(x^N) + N L_{x̄^N} f(x^N, ·)(y),
where Q_{N,y} f(·, y)(x^N) (resp. L_{x̄^N} f(x^N, ·)(y)) indicates that the operator Q_{N,y} (resp. L_{x̄^N}) acts on the first variable (resp. second variable) of f and the resultant function is evaluated at x^N (resp. y).
We make the following assumptions on the particle system:
(A1) The graph (X , E_X) is irreducible;
(A2) For each y ∈ Y and (x, x′) ∈ E_X, the function λ_{x,x′}(·, y) is Lipschitz continuous on M_1(X) and inf_{ξ∈M_1(X)} λ_{x,x′}(ξ, y) > 0;
and the following assumptions on the environment:
(B1) The graph (Y, E_Y) is irreducible;
(B2) For each (y, y′) ∈ E_Y, the function γ_{y,y′}(·) is continuous on M_1(X) and inf_{ξ∈M_1(X)} γ_{y,y′}(ξ) > 0.
As a consequence of the assumptions (A2) and (B2), we see that the transition rates of the particles as well as those of the environment are bounded, i.e.,
sup ξ∈M 1 (X ) λ x,x ′ (ξ, y) < +∞ ∀ (x, x ′ ) ∈ E X and ∀ y ∈ Y and sup ξ∈M 1 (X ) γ y,y ′ (ξ) < +∞ ∀ (y, y ′ ) ∈ E Y ,
and hence the D([0, ∞), X N ×Y)-valued martingale problem for Ψ N is well-posed (see, for example, Ethier and Kurtz [13, Section 4.1, Exercise 15]). Therefore, given an initial configuration of particles (X N n (0), 1 ≤ n ≤ N ) ∈ X N and an initial state of the environment Y N (0) ∈ Y, we have a Markov process {((X N n (t), 1 ≤ n ≤ N ), Y N (t)), t ≥ 0} whose sample paths are elements of D([0, ∞), X N × Y).
To describe the process {((X N n (t), 1 ≤ n ≤ N ), Y N (t)), t ≥ 0} in words, consider the mapping
{((X N n (t), 1 ≤ n ≤ N ),Y N (t)), t ≥ 0} → 1 N N n=1 δ X N n (t) , t ≥ 0 =: {µ N (t), t ≥ 0} ∈ D([0, ∞), M N 1 (X ))
that takes the process {((X N n (t), 1 ≤ n ≤ N ), Y N (t)), t ≥ 0} and maps it to the empirical measure process {µ N (t), t ≥ 0}. Note that, if the environment were frozen to be y, then µ N is Markov with infinitesimal generator
Φ_{N,y} f(ξ) = Σ_{(x,x′)∈E_X} N ξ(x) λ_{x,x′}(ξ, y) [ f( ξ + (δ_{x′} − δ_x)/N ) − f(ξ) ].
We see that a particle in state x at time t makes a transition to state
x ′ at rate λ x,x ′ (µ N (t), Y N (t))
independent of everything else. Similarly, the environment makes a transition from state y to y ′ at time t at rate N γ y,y ′ (µ N (t)) independent of everything else. Thus, the evolution of each particle depends on the empirical measure of the states of all the particles and the environment, and the evolution of the environment depends on the empirical measure of the states of all the particles. Note that the factor N in the second term of the generator Ψ N indicates that the process Y N makes O(N ) many transitions while each particle makes O(1) transitions in a given O(1) duration of time. Therefore, we have a "fully coupled" system where the particles evolve in a fast varying environment. Also, the empirical measure process µ N makes O(N ) transitions over a given duration of time, but each of those transitions are of size O(1/N ) on the probability simplex M 1 (X ). We shall refer to µ N as the slow process and Y N as the fast process.
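As an illustration of the dynamics just described, the coupled process ((X_N_n, 1 ≤ n ≤ N), Y_N) can be simulated exactly by a standard Gillespie scheme: at any instant the possible events are the particle jumps (x, x′) ∈ E_X, each at rate (number of particles in state x) × λ_{x,x′}(empirical measure, current environment state), and the environment jumps (y, y′) ∈ E_Y, each at the accelerated rate N γ_{y,y′}(empirical measure). The following Python sketch is meant only to make this event structure concrete; the function and variable names, as well as the toy rate functions in the usage example, are our own illustrative choices and are not part of the model.

    import numpy as np

    def simulate(N, T, X_states, lam, gam, x_init, y_init, seed=0):
        # lam[(x, xp)]: function (xi, y) -> per-particle rate of an x -> xp jump
        # gam[(y, yp)]: function xi -> environment rate (accelerated by N below)
        # xi is the empirical measure, represented as a dict {state: fraction}
        rng = np.random.default_rng(seed)
        counts = {x: 0 for x in X_states}
        for x in x_init:
            counts[x] += 1
        y, t = y_init, 0.0
        path = [(t, dict(counts), y)]
        while True:
            xi = {x: counts[x] / N for x in X_states}
            events = []
            for (x, xp), rate_fn in lam.items():          # particle jumps
                if counts[x] > 0:
                    events.append((('particle', x, xp), counts[x] * rate_fn(xi, y)))
            for (yy, yp), rate_fn in gam.items():          # fast environment jumps
                if yy == y:
                    events.append((('environment', yy, yp), N * rate_fn(xi)))
            total = sum(r for _, r in events)
            if total == 0.0:
                break
            t += rng.exponential(1.0 / total)
            if t > T:
                break
            u, acc = rng.uniform(0.0, total), 0.0
            for (kind, a, b), r in events:
                acc += r
                if u <= acc:
                    if kind == 'particle':
                        counts[a] -= 1
                        counts[b] += 1
                    else:
                        y = b
                    break
            path.append((t, dict(counts), y))
        return path

    # toy instance: X = {0, 1}, Y = {'a', 'b'}, hypothetical rate functions
    lam = {(0, 1): lambda xi, y: 1.0 + (0.5 if y == 'b' else 0.0),
           (1, 0): lambda xi, y: 0.5 + xi[1]}
    gam = {('a', 'b'): lambda xi: 1.0 + xi[0], ('b', 'a'): lambda xi: 2.0}
    path = simulate(N=200, T=5.0, X_states=[0, 1], lam=lam, gam=gam,
                    x_init=[0] * 200, y_init='a')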
Remark 2.1. Throughout the paper, we assume that all stochastic processes are defined on a complete filtered probability space (Ω, F, (F t ) t≥0 , P ). We denote integration with respect to P by E.
Fix T > 0. We now describe the typical behaviour of our two time scale mean-field system for large N over the time duration [0, T ]. Towards this, we define the occupation measure of the fast process Y N by
θ N (t) := t 0 1 {Y N (s)∈·} ds, 0 ≤ t ≤ T.
Note that θ N ∈ D ↑ ([0, T ], M (Y)), θ N,t (Y) = t and we can view θ N as a measure on [0, T ] × Y. For a fixed empirical measure of the particles ξ ∈ M 1 (X ), assumptions (B1) and (B2) imply that there exists a unique invariant probability measure for the Markov process on Y with infinitesimal generator L ξ (we denote this by π ξ ). Therefore, when the empirical measure at time t is at a fixed state µ t , since the fast process Y N makes O(N ) transitions, we expect that the occupation measure of Y N for large N becomes "close" to π µt , the unique invariant probability measure associated with L µt . Due to this ergodic behaviour of the fast process, we anticipate that a particle in state x at time t moves to state x ′ , where (x, x ′ ) ∈ E X , at rate Y λ x,x ′ (µ t , y)π µt (dy), i.e., the average of λ x,x ′ (µ t , ·) over π µt (for any ξ ∈ M 1 (X ), (x, x ′ ) ∈ E X and m ∈ M 1 (Y), we definē λ x,x ′ (ξ, m) := Y λ x,x ′ (ξ, y)m(dy)).
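Given a simulated trajectory (for instance from the simulate sketch above), the occupation measure θ_N(t) can be computed directly by accumulating the time spent in each environment state, as in the following illustrative fragment; the helper below assumes the path format returned by that sketch.

    def occupation_measure(path, Y_states, T):
        # path: list of (jump time, counts dict, environment state);
        # returns theta_N(T) as a dict {y: total time spent in y up to T}
        theta = {y: 0.0 for y in Y_states}
        for (t0, _, y), (t1, _, _) in zip(path, path[1:]):
            theta[y] += min(t1, T) - t0
        # account for the stretch between the last recorded jump and T
        t_last, _, y_last = path[-1]
        theta[y_last] += max(T - t_last, 0.0)
        return theta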
More precisely, for large enough N , we anticipate the following averaging principle for the empirical measure process µ N . If we assume that the initial conditions µ N (0) → ν weakly for some deterministic element ν ∈ M 1 (X ), then we anticipate that µ N converges in probability, in D([0, T ], M 1 (X )), to the solution to the McKean-Vlasov ODE

µ̇_t = Λ*_{µ_t, π_{µ_t}} µ_t,    t ≥ 0,    µ_0 = ν,    (2.1)
where Λ_{µ_t, π_{µ_t}} denotes the |X | × |X | rate matrix of the slow process when the empirical measure is µ_t and the occupation measure of the fast process is π_{µ_t}, i.e., Λ_{µ_t, π_{µ_t}}(x, x′) = λ_{x,x′}(µ_t, π_{µ_t}) when (x, x′) ∈ E_X, Λ_{µ_t, π_{µ_t}}(x, x′) = 0 when (x, x′) ∉ E_X, Λ_{µ_t, π_{µ_t}}(x, x) = −Σ_{x′≠x} λ_{x,x′}(µ_t, π_{µ_t}), and Λ*_{µ_t, π_{µ_t}} denotes its transpose. Note that the above ODE is well-posed, thanks to the Lipschitz assumption on the transition rates (A2). See Bordenave et al. [3] for the study of averaging phenomena of a slightly general two time scale model in which each particle has a fast varying environment associated with it.
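Numerically, the averaging principle can be examined by computing, for a given ξ, the invariant probability measure π_ξ of the generator L_ξ (the solution of π_ξ L_ξ = 0 with total mass one) and then integrating (2.1) with the averaged rates λ̄_{x,x′}(ξ, π_ξ). The sketch below does this with a crude forward Euler step; it is an illustration only, and the helper names and the linear least-squares solve for π_ξ are our own choices, not the paper's.

    import numpy as np

    def invariant_measure(L):
        # stationary distribution of a rate matrix L (rows sum to zero):
        # solve pi L = 0 together with sum(pi) = 1
        n = L.shape[0]
        A = np.vstack([L.T, np.ones(n)])
        b = np.zeros(n + 1)
        b[-1] = 1.0
        pi, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pi

    def mckean_vlasov(mu0, T, dt, X_states, Y_states, lam, gam):
        # integrate (2.1): mu' = Lambda*(mu, pi_mu) mu, by forward Euler
        mu = np.array(mu0, dtype=float)
        traj = [mu.copy()]
        for _ in range(int(T / dt)):
            xi = {x: mu[i] for i, x in enumerate(X_states)}
            # environment generator L_xi and its invariant law pi_xi
            L = np.zeros((len(Y_states), len(Y_states)))
            for (y, yp), rate_fn in gam.items():
                i, j = Y_states.index(y), Y_states.index(yp)
                r = rate_fn(xi)
                L[i, j] += r
                L[i, i] -= r
            pi = invariant_measure(L)
            # averaged rate matrix of the slow process at (xi, pi_xi)
            Lam = np.zeros((len(X_states), len(X_states)))
            for (x, xp), rate_fn in lam.items():
                i, j = X_states.index(x), X_states.index(xp)
                r = sum(pi[k] * rate_fn(xi, Y_states[k]) for k in range(len(Y_states)))
                Lam[i, j] += r
                Lam[i, i] -= r
            mu = mu + dt * (Lam.T @ mu)
            traj.append(mu.copy())
        return np.array(traj)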
Main result
Our main result is on the large deviations of {(µ N , θ N )} N ≥1 , the joint empirical measure process associated with the particle system and the occupation measure process associated with the environment Y N , on D([0, T ], M 1 (X )) × D ↑ ([0, T ], M (Y)). It is stated in the following theorem.

Theorem 2.2. Assume (A1), (A2), (B1), (B2), and fix T > 0. Suppose that {µ N (0)} N ≥1 satisfies the LDP in M 1 (X ) with rate function I 0 . Then {(µ N , θ N )} N ≥1 satisfies the LDP in D([0, T ], M 1 (X )) × D ↑ ([0, T ], M (Y)) with rate function

I(µ, θ) := I 0 (µ(0)) + J(µ, θ),
where J is defined by
J(µ, θ) := ∫_{[0,T]} [ sup_{α∈R^{|X|}} { ⟨α, µ̇_t − Λ*_{µ_t,m_t} µ_t⟩ − ∫_{X×E_X} τ(Dα(x, ∆)) λ_{x,x+d∆}(µ_t, m_t) µ_t(dx) }
    + sup_{g∈B(Y)} ∫_Y { −L_{µ_t} g(y) − ∫_{E_Y} τ(Dg(y, ∆)) γ_{y,y+d∆}(µ_t) } m_t(dy) ] dt    (2.2)
whenever the mapping [0, T ] ∋ t → µ t ∈ M 1 (X ) is absolutely continuous and θ, when viewed as a measure on [0, T ] × Y, admits the representation θ(dtdy) = m t (dy)dt for some m t ∈ M 1 (Y) for almost all t ∈ [0, T ], and J(µ, θ) = +∞ otherwise.
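For a fixed t, the integrand of (2.2) consists of two finite-dimensional concave maximisations: one over α ∈ R^{|X|} and one over g which, since Y is finite, can be identified with a vector in R^{|Y|}. It can therefore be evaluated numerically with an off-the-shelf optimiser, as in the following sketch. This is for illustration only: it assumes µ̇_t sums to zero and that µ_t and m_t have full support (otherwise the suprema may be +∞), and all function names and the choice of optimiser are ours.

    import numpy as np
    from scipy.optimize import minimize

    def tau(u):
        return np.exp(u) - u - 1.0

    def integrand(mu, mu_dot, m, X_states, Y_states, lam, gam):
        # mu, mu_dot: arrays indexed like X_states; m: array indexed like Y_states
        xi = {x: mu[i] for i, x in enumerate(X_states)}
        lam_bar = {e: sum(m[k] * fn(xi, Y_states[k]) for k in range(len(Y_states)))
                   for e, fn in lam.items()}        # averaged rates at (mu_t, m_t)

        def neg_slow(alpha):                        # negative of the alpha-objective
            drift = np.zeros(len(X_states))
            cost = 0.0
            for (x, xp), r in lam_bar.items():
                i, j = X_states.index(x), X_states.index(xp)
                drift[j] += r * mu[i]
                drift[i] -= r * mu[i]
                cost += tau(alpha[j] - alpha[i]) * r * mu[i]
            return -(alpha @ (mu_dot - drift) - cost)

        def neg_fast(g):                            # negative of the g-objective
            val = 0.0
            for (y, yp), fn in gam.items():
                i, j = Y_states.index(y), Y_states.index(yp)
                r = fn(xi)
                val += m[i] * ((g[j] - g[i]) * r + tau(g[j] - g[i]) * r)
            return val

        slow = -minimize(neg_slow, np.zeros(len(X_states)), method="BFGS").fun
        fast = -minimize(neg_fast, np.zeros(len(Y_states)), method="BFGS").fun
        return slow + fast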
Note that our rate function consists of two parts -one corresponding to the empirical measure process µ N and the other corresponding to the occupation measure of the fast process Y N . The form of the first part of the rate function in (2.2) corresponding to the empirical measure process µ N appears in the literature on large deviations of mean-field models (see Léonard [23,Theorem 3.3], [10, Theorem 1]). The form of the second part is related to the rate function that appears in the study of occupation measure of Markov processes (see Donsker and Varadhan [11,Theorem 1]). Here, the canonical form of the rate function is [0,T ] sup h>0 Y − Lµ t h(y) h(y) m t (dy)dt and this form of the second part of our rate function in (2.2) can be obtained by taking supremum over functions of the form e g , g ∈ B(Y). We see that the first part of the rate function corresponding to the empirical measure process µ N has parameters of the mean-field model "averaged" by the fast variable. Further the second part corresponding to the occupation measure of the fast process has parameters "frozen" at the current value of the slow variable. The form of our rate function is similar in spirit to that obtained by Puhalskii [28] in the case of coupled diffusions.
Note that, when µ is the solution to the McKean-Vlasov equation (2.1) starting at µ(0) and θ, when viewed as a measure on [0, T ] × Y, is given by θ(dydt) = π µt (dy)dt where π µt is the unique invariant probability measure associated with the infinitesimal generator L µt , it is easy to see that the suprema in (2.2) are attained at the identically 0 functions α ≡ 0 and g ≡ 0 and hence J(µ, θ) = 0. Therefore, we recover the typical behaviour of our fully coupled system -at each time t > 0, the empirical measure process µ N tracks the solution to the McKean-Vlasov equation µ t starting at µ(0) and the occupation measure of the fast process θ N tracks the invariant probability measure of the fast process Y N when the empirical measure is frozen at µ t . Our result on the large deviations of the joint empirical measure process and the occupation measure of the fast process {(µ N , θ N )} enables us to estimate the probabilities of two kinds of deviations from the typical behaviour -one where, for a given µ, the occupation measure of the fast process deviates from its typical behaviour (which at time t is π µt (dy)dt) and the other where µ deviates from its typical behaviour (which is the solution to (2.1) starting at µ(0)).
We now provide an outline of the proof of Theorem 2.2. Our proof is broadly built upon the methodology of stochastic exponentials for large deviations by Puhalskii [26,27,28], where one shows the large deviation principle by first obtaining an equation for a subsequential rate function in terms of a suitable exponential martingale and then obtaining a characterisation of this subsequential rate function. Towards this, we first show that the sequence {(µ N , θ N )} N ≥1 is exponentially tight in D([0, T ], M 1 (X )) × D ↑ ([0, T ], M (Y)) (see Theorem 3.3); this is shown using standard martingale arguments and Doob's inequality. Exponential tightness of the sequence {(µ N , θ N )} N ≥1 implies that there exists a subsequence {N k } k≥1 of N such that the family {(µ N k , θ N k )} k≥1 satisfies the LDP (see, for example, Dembo and Zeitouni [9, Lemma 4.1.23]); letĨ denote the rate function that governs the LDP for the family {(µ N k , θ N k )} k≥1 . In Sections 4-7, we obtain a characterisation ofĨ whenĨ is such that, for some ν ∈ M 1 (X ),Ĩ(µ, θ) = +∞ unless µ 0 = ν; specifically we show thatĨ(µ, θ) is given by the right hand side of (2.2). In some more detail, in Section 4, we define an exponential martingale associated with the Markov process (µ N , Y N ) for a class of functions α : [0, T ] × M 1 (X ) → R |X | and g : [0, T ] × M 1 (X ) × Y → R with certain properties, and we obtain an equation that the rate functionĨ must satisfy in terms of this exponential martingale (see Theorem 4.1). In Section 5, we define our candidate rate function I * in terms of this exponential martingale as a variational problem over functions α and g, and we then show that I * coincides with the RHS of (2.2), and provide a nonvariational expression for I * using elements from suitable Orlicz spaces (see Theorem 5.3). In Section 6, using the properties of the solution to the variational problem established in Section 5 and an extension of the equation ofĨ to a larger class of functions α and g, we are able to obtain a characterisation of the rate functionĨ for sufficiently regular elements in D([0, T ], M 1 (X ))× D ↑ ([0, T ], M (Y)) (see Theorem 6.2). In Section 7, we extend the above characterisation ofĨ to the whole space D([0, T ], M 1 (X )) × D ↑ ([0, T ], M (Y)). We finally complete the proof of Theorem 2.2 in Section 8, by removing the restriction that, for some ν ∈ M 1 (X ),Ĩ(µ, θ) = +∞ unless µ 0 = ν.
Our setting of mean-field interaction with jumps introduces some difficulties in characterising a subsequential rate function. One of them is in obtaining regularity properties of the solution to the variational problem appearing in the definition of J(µ, θ) in (2.2) when (µ, θ) possesses some good properties. In the recent work of Puhalskii [28] on large deviations of fully coupled diffusions, the author uses tools from the theory of elliptic partial differential equations for this purpose whereas we resort to tools from convex analysis (Léonard [24,) and parametric continuity of optimisation problems (Sundaram [30, Chapter 9]) -see Theorem 5.3 and Theorem 6.2. Also, unlike in the case of Gaussian noise in Puhalskii [28], our Poissonian noise prevents us from obtaining an explicit form of the solution to the variational problem appearing in the rate function (2.2). Yet another difficulty is in obtaining a characterisation ofĨ(µ, θ) when the path µ hits the boundary of M 1 (X ). In such cases, the solution to the variational problem that appears in (2.2) blows up near the boundary and hence the condition onĨ established in Theorem 6.1 cannot be directly used. We demonstrate how to approximate (µ, θ) via a sequence of regular elements {(µ i , θ i )} i≥1 so that the solution to the variational problem in J(µ i , θ i ) is well-behaved. We can then use the conclusion of Theorem 6.1 on the above sequence and show thatĨ(µ i , θ i ) →Ĩ(µ, θ) as i → ∞; see Theorem 7.5.
Marginal µ N
The above result on large deviations of the joint law of the empirical measure process of the particles and the occupation measure of the fast process enables us to easily obtain large deviations of the empirical measure process µ N by using the contraction principle (see, for example, Dembo and Zeitouni [9, Theorem 4.2.1]).
Corollary 2.3. Assume (A1), (A2), (B1), (B2), and fix T > 0. Suppose that {µ N (0)} N ≥1 satisfies the LDP in M 1 (X ) with rate function I 0 . Then {µ N } N ≥1 satisfies the LDP in D([0, T ], M 1 (X )) with rate function J T defined as follows. If [0, T ] ∋ t → µ t is absolutely continuous, then J T (µ) = I 0 (µ 0 ) + [0,T ] sup α∈R |X | α,μ t − sup m∈M 1 (Y) α,Λ * µt,m µ t + X ×E X τ (Dα(x, ∆))λ x,x+d∆ (µ t , m)µ t (dx) − sup g∈B(Y) Y −L µt g(y) − E Y τ (Dg(y, ∆))γ y,y+d∆ (µ t ) m(dy) dt, where θ, when viewed as a measure on [0, T ] × Y, admits the representation θ(dydt) = m t (dy)dt for some m t ∈ M 1 (Y) for almost all t ∈ [0, T ], and J T (µ) = +∞ otherwise.
Large time behaviour
Using the result on the finite duration LDP for the process {µ N } N ≥1 in Corollary 2.3, we can employ the tools of Freidlin and Wentzell [15, Chapter 6] and Hwang and Sheu [18] to study the large time behaviour of the process µ N . The programme to understand the large time behaviour is carried out in [33, Section 3]. The two crucial properties needed to establish large time behaviour of µ N are: (i) the continuity of the Freidlin-Wentzell quasipotential (see [33,Section 3] for its definition) and (ii) uniform large deviations of µ N , uniformly with respect to the initial condition µ N (0) lying in a given closed set. One can show that the Freidlin-Wentzell quasipotential is continuous on M 1 (X ) × M 1 (X ) by constructing constant velocity trajectories between any two given points in M 1 (X ) and estimating the corresponding J T for that path; see Borkar and Sundaresan [4,Lemma 3.4]. Since the space M 1 (X ) is compact, one can also establish uniform large deviation estimates, see [33, Corollary 2.1]. Using the above two properties and the fact that (µ N , Y N ) is strong Markov, one can establish results on the large time behaviour of µ N such as (i) the mean exit time from a neighbourhood of an ω-limit set of (2.1), (ii) the probability of reaching a given ω-limit set starting from another, etc. -we refer the reader to [33, Section 3] for such results.
Exponential tightness
In this section, we prove the exponential tightness of the sequence {(µ N (t), θ N (t)), 0 ≤ t ≤ T } N ≥1 in D([0, T ], M 1 (X )) × D ↑ ([0, T ], M (Y)). Towards this, we shall use the following results (Theorems 3.1-3.2). The proofs of these results are standard and will be omitted here (see Feng and Kurtz [14]).
Theorem 3.1. A sequence {X N } = {X N,t , 0 ≤ t ≤ T } taking values in D([0, T ], S) is exponentially tight if and only if (i) for each M > 0, there exists a compact set K M ⊂ S such that lim sup N →∞ 1 N log P (∃t ∈ [0, T ] such that X N,t / ∈ K M ) ≤ −M,
(ii) there exists a family of functions F ⊂ C(S) that is closed under addition and separates points on S such that for each
f ∈ F , {f (X N )} is exponentially tight in D([0, T ], R).
See Feng and Kurtz [14,Theorem 4.4] for a proof. We also need the following sufficient condition for exponential tightness in D([0, T ], R).
Theorem 3.2. Let {X N } be a sequence taking values in D([0, T ], R). Suppose that (i) we have lim M →∞ lim sup N →∞ 1 N log P (∃t ∈ [0, T ] such that |X N,t | > M ) = −∞, (ii) for each ε > 0, lim δ↓0 lim sup N →∞ 1 N log sup t 1 ∈[0,T ] P ( sup t 2 ∈[t 1 ,t 1 +δ] |X N,t 2 − X N,t 1 | > ε) = −∞. Then {X N } is exponentially tight in D([0, T ], R).
See Puhalskii [26, Theorem B] for a proof.
We now show the main result of this section, namely exponential tightness of the sequence {(µ N , θ N )} N ≥1 .

Theorem 3.3. The sequence of random variables {(µ N (t), θ N (t)), t ∈ [0, T ]} N ≥1 is exponentially tight in D([0, T ], M 1 (X )) × D ↑ ([0, T ], M (Y)), i.e., given any M > 0, there exists a compact set K M ⊂ D([0, T ], M 1 (X )) × D ↑ ([0, T ], M (Y)) such that

lim sup_{N→∞} (1/N) log P ({(µ N (t), θ N (t)), 0 ≤ t ≤ T } ∉ K M ) ≤ −M.

Proof. It suffices to show that each of the sequences {µ N } N ≥1 and {θ N } N ≥1 is exponentially tight in its respective space. Consider θ N first. Note that, for 0 ≤ t ≤ T , we have |θ N,t (Y )| ≤ t for any subset Y ⊂ Y. Therefore, using the compact set K M = {y ∈ R |Y| : 0 ≤ y i ≤ T ∀i} ⊂ M (Y), condition (i) of Theorem 3.1 holds. To verify condition (ii), define the collection of functions F := {f : M (Y) → R : f (θ) = ⟨α, θ⟩, α ∈ R |Y| }. Clearly, F is closed under addition and separates points on M (Y). For any f of the form f (θ) = ⟨α, θ⟩ for some α ∈ R |Y| , note that, with X N,t = f (θ N,t ), condition (i) of Theorem 3.2 holds since |X N,t | ≤ t max i∈Y |α i |. To verify condition (ii) of Theorem 3.2, note that, for any 0 ≤ s ≤ t ≤ T , we have |θ N,t (Y ) − θ N,s (Y )| ≤ t − s for any Y ⊂ Y and hence |X N,t − X N,s | ≤ (t − s) max i |α i |.
Thus, by choosing a sufficiently small δ > 0, it is easy to see that condition (ii) of Theorem 3.2 holds. This establishes the exponential tightness of θ N in
D ↑ ([0, T ], M (Y)).
We now show that µ N is exponentially tight in D([0, T ], M 1 (X )). Since for each t > 0, µ N,t takes values in a compact space, condition (i) of Theorem 3.1 holds trivially. Again, to show condition (ii) in Theorem 3.1, we shall make use of Theorem 3.2. For this, we fix the class of functions F := {f : M 1 (X ) → R + , f (ξ) = α, ξ , α ∈ R |X | }, which is clearly closed under addition and separates points on M 1 (X ). Fix f ∈ F such that f (ξ) = α, ξ for some α ∈ R |X | and let X N,t = f (µ N,t ) = α, µ N,t . Note that, we have |X N,t | ≤ max x |α x | for all t ≥ 0 and N ≥ 1, hence condition (i) of Theorem 3.2 holds. To check condition (ii), note that, for each t 1 ≥ 0 and β > 1,
M t := exp N βX N,t − βX N,t 1 − β t t 1 Φ Y N,s f (µ N,s )ds − t t 1 X ×E X τ (βDα(x, ∆))λ x,x+d∆ (µ N,s , Y N,s )µ N,s (dx)ds , t ≥ t 1 ,
is an F t -martingale (see Léonard [24,Lemma 3.3]; alternatively, this can be easily checked using the Doléans-Dade exponential formula, see, for example, Jacod and Shiryaev [19, Chapter I, Theorem 4.61]). Therefore, given ε > 0, δ > 0 and t 1 > 0, we have
P sup t 2 ∈[t 1 ,t 1 +δ] (X N,t 2 − X N,t 1 ) > ε = P sup t 2 ∈[t 1 ,t 1 +δ] exp{N β(X N,t 2 − X N,t 1 )} > exp{N βε} = P sup t 2 ∈[t 1 ,t 1 +δ] M t × exp N β t t 1 Φ Y N,s f (µ N,s )ds +N t t 1 X ×E X τ (βDα(x, d∆))λ x,x ′ (µ N,s , Y N,s )µ N,s (dx)ds > exp{N βε} ≤ P sup t 2 ∈[t 1 ,t 1 +δ] M t exp{N δc α,β } > exp{N βε} ≤ exp{−N (βε − δc α,β )}
where c α,β is a constant depending on α and β; here the first inequality follows from the boundedness of the transition rates which is a consequence of the Lipschitz assumption (A2), and the second inequality follows from Doob's martingale inequality and the fact that EM t = EM t 1 = 1. Thus, we obtain
lim δ↓0 lim sup N →∞ 1 N log sup t 1 ∈[0,T ] P sup t 2 ∈[t 1 ,t 1 +δ] (X N,t 2 − X N,t 1 ) > ε ≤ −βε,
and hence, letting β → ∞, we have
lim δ↓0 lim sup N →∞ 1 N log sup t 1 ∈[0,T ] P sup t 2 ∈[t 1 ,t 1 +δ] (X N,t 2 − X N,t 1 ) > ε = −∞.
We can now replace α with −α and repeat the above arguments to conclude that
lim δ↓0 lim sup N →∞ 1 N log sup t 1 ∈[0,T ] P sup t 2 ∈[t 1 ,t 1 +δ] |X N,t 2 − X N,t 1 | > ε = −∞.
We have thus verified condition (ii) of Theorem 3.2 and hence it follows that {µ N } N ≥1 is exponentially tight in D([0, T ], M 1 (X )). This completes the proof of the theorem.
4 An equation for the subsequential rate function
LetĨ : D([0, T ], M 1 (X )) × D ↑ ([0, T ], M (Y)) → [0, +∞] denote a subsequential rate function for the family {(µ N , θ N )} N ≥1 , i.e., for some sequence {N k } k≥1 of N, the family {(µ N k , θ N k )} k≥1
satisfies the large deviation principle with rate functionĨ. In this section, we obtain a condition that every such subsequential rate function must satisfy. We start with some definitions.
Given g ∈ C 1,1 ([0, T ] × M 1 (X ) × Y), define V g t (µ N , Y N ) := g t (µ N (t), Y N (t)) − g 0 (µ N (0), Y N (0)) − t 0 ∂g s ∂s (µ N (s), Y N (s))ds − t 0 (x,x ′ )∈E X g s µ N (s) + δ x ′ − δ x N , Y N (s) − g s (µ N (s), Y N (s)) × N µ N,s (x)λ x,x ′ (µ N (s), Y N (s))ds − t 0 (x,x ′ )∈E X τ g s µ N (s) + δ x ′ − δ x N , Y N (s) − g s (µ N (s), Y N (s)) × N µ N,s (x)λ x,x ′ (µ N (s), Y N (s))ds (4.1) Let n ∈ N. Given the time points 0 = t 0 < t 1 < · · · < t n < T , α = (α t i ) n i=0 where α t i : M 1 (X ) → R |X | is continuous for each 0 ≤ i ≤ n, and µ ∈ D([0, T ], M 1 (X )), define t 0 α s (µ s )dµ s := n i=1 α t∧t i−1 (µ t i−1 ), (µ t∧t i − µ t∧t i−1 ) , t ∈ [0, T ]; (4.2)
note that this object is an element of D([0, T ], R). Given x ∈ X and ∆ = (x, x ′ ) ∈ E X , define
Dα s (µ s )(x, ∆) := α s (µ s )(x ′ ) − α s (µ s )(x).
Similarly, given y ∈ Y and ∆ = (y, y ′ ) ∈ E Y , define Dg s (µ s , y, ∆) := g s (µ s , y ′ ) − g s (µ s , y).
Finally, given (µ, θ) ∈ D([0, T ], M 1 (X )) × D ↑ ([0, T ], M (Y)), time points 0 = t 0 < t 1 < · · · < t n < T , α = (α t i ) n i=0
and g that satisfy the above requirements, define
U α,g t (µ, θ) := t 0 α s (µ s )dµ s − t 0 α s (µ s ), Y Λ * µs,y µ s m s (dy) ds − t 0 X ×E X ×Y τ (Dα s (µ s )(x, ∆))λ x,x+d∆ (µ s , y)µ s (dx)m s (dy)ds − t 0 Y L µs g s (µ s , ·)(y) + E Y τ (Dg s (µ s , y, ∆))γ y,y+d∆ (µ s ) m s (dy)ds; (4.3)
here θ, when viewed as a measure on [0, T ] × Y, admits the representation θ(dydt) = m t (dy)dt for some m t ∈ M 1 (Y) for almost all t ∈ [0, T ], which follows from the existence of the regular conditional distribution (see, for example, Ethier and Kurtz [13, Theorem 8.1, page 502]). We prove the following result, a condition thatĨ must satisfy in terms of the functions U α,g .
Theorem 4.1. LetĨ : D([0, T ], M 1 (X )) × D ↑ ([0, T ], M (Y)) → [0, +∞] denote a rate function and suppose that there is a subsequence {(µ N k , θ N k )} k≥1 of {(µ N , θ N )} N ≥1 that satisfies the large deviation principle with rate functionĨ. Then, for every n ∈ N, time points 0 = t 0 < t 1 < · · · < t n < T , α = (α t i ) n i=0 and g ∈ C 1,1 ([0, T ] × M 1 (X ) × Y) as above,

sup_{(µ,θ) ∈ D([0,T], M_1(X)) × D↑([0,T], M(Y))} (U^{α,g}_T(µ, θ) − Ĩ(µ, θ)) = 0.    (4.4)
Proof. Note that, since the transition rates are bounded (which is a consequence of the assumptions (A2) and (B2)),
N t 0 α s (µ s )dµ N,s − t 0 α s (µ s ), Y Λ * µ N,s ,y µ N,s θ N (dyds) , t ≥ 0,
is an F t -martingale. Also, by Itô's formula,
g t (µ N (t),Y N (t)) − g 0 (µ N (0), Y N (0)) − t 0 ∂g s ∂s (µ N (s), Y N (s))ds − t 0 (x,x ′ )∈E X g s µ N (s) + δ x ′ − δ x N , Y N (s) − g s (µ N (s), Y N (s)) × N µ N,s (x)λ x,x ′ (µ N (s), Y N (s))ds − N t 0 L µ N (s) g s (µ N (s), ·)(Y N (s))ds, t ≥ 0,
is an F t -martingale. Therefore, using the Doléans-Dade exponential formula, it follows that
exp{N U α,g t (µ N , θ N ) + V g t (µ N , Y N )}, t ≥ 0,
is an F t -martingale, and hence
E exp{N U α,g T (µ N , θ N ) + V g T (µ N , Y N )} = 1. Clearly, U α,g T (·, ·) is continuous on D([0, T ], M 1 (X )) × D ↑ ([0, T ], M (Y))
, and since g is continuously differentiable in the second argument, V g T (µ N , Y N ) is bounded, and hence V g T (µ N , Y N )/N goes to 0 P -a.s. Therefore, the result follows from an application of Varadhan's lemma along the subsequence {N k } k≥1 (see, for example, [9, Theorem 4.3.1]).
The variational problem in J
Motivated by the duality relation (4.4), we define our candidate rate function
I * (µ, θ) := sup α,g U α,g T (µ, θ), (5.1)
where the supremum is taken over all functions α and g that satisfy the conditions in Theorem 4.1.
In this section, we study the above variational problem and show that, whenever I * (µ, θ) < +∞, I * (µ, θ) coincides with the RHS of (2.2) and that I * (µ, θ) can be expressed in a non-variational form using elements from suitable Orlicz spaces. We begin with a necessary condition on the elements in D([0, T ], M 1 (X )) × D ↑ ([0, T ], M (Y)) whose I * is finite.
Lemma 5.1. If I * (µ, θ) < +∞, then the mapping [0, T ] ∋ t → µ t ∈ M 1 (X ) is absolutely continuous.
Proof. Take g ≡ 0 and α to be a function of only time (and denote this by α t ) in the definition of U α,g t in (4.3). Then (5.1) becomes
I * (µ, θ) = sup α,g U α,g T (µ, θ) ≥ T 0 α t dµ t − T 0 α t ,Λ * µt,mt µ t dt − T 0 X ×E X τ (Dα t (x, ∆))λ x,x+d∆ (µ t , m t )µ t (dx)dt. Therefore, T 0 α t dµ t ≤ I * (µ, θ) + T 0 α t ,Λ * µt,mt µ t dt + T 0 X ×E X τ (Dα t (x, ∆))λ x,x+d∆ (µ t , m t )µ t (dx)dt.
Replacing cα t in place of α t in the above equation, dividing throughout by c and choosing c =
1/ Dα L τ ([0,T ]×X ×E X ,λ x,x+d∆ (µt,mt)µt(dx)dt) (i.e. the inverse of the norm of the function α t (x, x + ∆) in the Orlicz space L τ ([0, T ] × X × E X ,λ x,x+d∆ (µ t , m t )µ t (dx)dt)), we have T 0 α t dµ t ≤ Dα L τ ([0,T ]×X ×E X ,λ x,x+d∆ (µt,mt)µt(dx)dt) (I * (µ, θ) + 1) + T 0 α t ,Λ * µt,mt µ t dt.
Since α t is arbitrary, from the definition of t 0 α t dµ t in (4.2), it is clear that the mapping [0, T ] ∋ t → µ t ∈ M 1 (X ) is absolutely continuous.
We also need the following lemma, whose proof can be found in Puhalskii [
V. Let f (t, v) be a function defined on [0, T ] × V that is measurable in t and continuous in v. Further, if f (t, β(t)) is locally integrable with respect to the Lebesgue measure on [0, T ] for all measurable functions β : [0, T ] → U , then sup β(·) T 0 f (t, β(t))dt = T 0 sup y∈U f (t, y)dt,
where the supremum in the LHS is taken over all U -valued measurable functions β(·).
Let us introduce some notations. Let DC X (resp. DC Y ) denote the space of functions Dα
(resp. Dg) on [0, T ] × X × E X (resp. [0, T ] × Y × E Y ) such that α ∈ C 1 ([0, T ] × X ) (resp. g ∈ C 1 ([0, T ] × Y)).

Theorem 5.3. Suppose that I * (µ, θ) < +∞. Then

I * (µ, θ) = ∫_{[0,T]} [ sup_{α∈R^{|X|}} { ⟨α, µ̇_t − Λ*_{µ_t,m_t} µ_t⟩ − ∫_{X×E_X} τ(Dα(x, ∆)) λ_{x,x+d∆}(µ_t, m_t) µ_t(dx) } + sup_{g∈B(Y)} ∫_Y { −L_{µ_t} g(y) − ∫_{E_Y} τ(Dg(y, ∆)) γ_{y,y+d∆}(µ_t) } m_t(dy) ] dt,    (5.2)

where m_t ∈ M_1(Y) is such that θ(dydt) = m_t(dy)dt for almost all t ∈ [0, T ]. Moreover, there exist h_X ∈ H_X(µ, θ) and h_Y ∈ H_Y(µ, θ) that satisfy

∫_{[0,T]×X×E_X} h_X Dα λ_{x,x+d∆}(µ_t, m_t) µ_t(dx)dt = ∫_{[0,T]} ⟨α_t, µ̇_t − Λ*_{µ_t,m_t} µ_t⟩ dt,    ∀α ∈ B([0, T ] × X ),    (5.3)

and

∫_{[0,T]×Y×E_Y} h_Y Dg γ_{y,y+d∆}(µ_t) m_t(dy)dt = − ∫_{[0,T]×Y×E_Y} Dg γ_{y,y+d∆}(µ_t) m_t(dy)dt,    ∀g ∈ B([0, T ] × Y),    (5.4)
respectively, h X ∈ L τ * ([0, T ]×X ×E X ,λ x,x+d∆ µ t (dx)dt) and h Y ∈ L τ * ([0, T ]×Y×E Y , γ y,y+d∆ (µ t )m t (dy)dt), and I * (µ, θ) admits the representation
I * (µ, θ) = [0,T ]×X ×E X τ * (h X )λ x,x+d∆ (µ t , m t )µ t (dx)dt + [0,T ]×Y×E Y τ * (h Y )γ y,y+d∆ (µ t )m t (dy)dt. (5.5)
Furthermore, if inf t∈[0,T ] min x∈X µ t (x) > 0 and inf t∈[0,T ] min y∈Y m t (y) > 0, the suprema in (5.2) over α and g are attained byα t ∈ R |X | andĝ t ∈ B(Y) that satisfẏ
µ t (x) − (Λ * µt,mt µ t )(x) + µ t (x) x ′ ∈X : (x,x ′ )∈E X (exp{α t (x ′ ) −α t (x)} − 1)λ x,x ′ (µ t , m t ) − x 0 ∈X : (x 0 ,x)∈E X µ t (x 0 )(exp{α t (x) −α t (x 0 )} − 1)λ x 0 ,x (µ t , m t ) = 0, ∀x ∈ X ,(5.6)
and m t (y)
y ′ ∈Y: (y,y ′ )∈E Y exp{ĝ t (y ′ ) −ĝ t (y)}γ y,y ′ (µ t ) − y 0 ∈Y: (y 0 ,y)∈E Y m t (y 0 ) exp{ĝ t (y) −ĝ t (y 0 )}γ y 0 ,y (µ t ) = 0, ∀y ∈ Y,(5.7)
for almost all t ∈ [0, T ], respectively.
Proof. For the first part of the theorem, we shall make use of Lemma 5.2. Note that, by Lemma 5.1, we have that the mapping [0, T ] ∋ t → µ t ∈ M 1 (X ) is absolutely continuous and θ admits the representation θ(dydt) = m t (dy)dt where m t ∈ M 1 (Y) for almost all t ∈ [0, T ]. Therefore, for each t ≥ 0, U α,g t in (4.3) can be written as
U α,g t (µ, θ) = t 0 α s (µ s ),μ s ds − t 0 α s (µ s ),Λ * µs,ms µ s ds − t 0 X ×E X τ (Dα s (µ s )(x, ∆))λ x,x+d∆ (µ s , m s )µ s (dx)ds − t 0 Y L µs g s (µ s , ·)(y) + E Y τ (Dg s (µ s , y, ∆))γ y,y+d∆ (µ s ) m s (dy)ds,
where α and g be satisfy the requirements in the definition of U α,g t in (4.3). Thus,
I * (µ, θ) = sup α [0,T ] α t (µ t ),μ t − α t (µ t ),Λ * µt,mt µ t − X ×E X τ (Dα t (µ t )(x, ∆))λ x,x+d∆ (µ t , m t )µ t (dx) dt + sup g [0,T ] Y − L µt g t (µ t , ·)(y) − E Y τ (Dg t (µ t , y, ∆))γ y,y+d∆ (µ t ) m t (dy)dt
where the supremum is taken over all functions α and g that satisfy the conditions in the definition of U α,g t in (4.3). Note that, since µ is kept fixed, an approximation argument using mollifiers implies that the above supremum over α can be replaced by supremum over α s , where α s is any R |X | -valued bounded measurable function on [0, T ]. Once again, since µ is fixed, we can replace the supremum over g ∈ C 1,1 ([0, T ], M 1 (X ) × Y) with the supremum over g where g is any bounded measurable function on [0, T ] × Y. Therefore,
I * (µ, θ) = sup α [0,T ] α t (µ t ),μ t − α t (µ t ),Λ * µt,mt µ t − X ×E X τ (Dα t (µ t )(x, ∆))λ x,x+d∆ (µ t , m t )µ t (dx) dt + sup g [0,T ] Y − L µt g t (µ t , ·)(y) − E Y τ (Dg t (µ t , y, ∆))γ y,y+d∆ (µ t ) m t (dy)dt
where the supremum is taken over bounded measurable functions α : [0, T ] → R |X | and g : [0, T ] × Y → R. We can now apply Lemma 5.2 to conclude that I * (µ, θ) is given by (5.2). We obtain the existence of functions h X ∈ H X (µ, θ) and h Y ∈ H Y (µ, θ) that satisfy the conditions X ×E X ,λ x,x+d∆ (µ t , m t )µ t (dx)dt) and L τ ([0, T ]×Y×E Y , γ y,y+d∆ (µ t )m t (dy)dt) respectively; the proof follows verbatim from Léonard [24] to our case, and we omit the details here. Finally, to show the existence of supremisersα t andĝ in (5.2) and the conditions (5.6) and (5.7) in the case when inf t∈[0,T ] min x∈X µ t (x) > 0 and inf t∈[0,T ] min y∈Y m t (y) > 0, note that, for each t ∈ [0, T ] for whichμ t exists, the mappings
α t → α t , (μ t −Λ * µt,mt µ t ) − X ×E X τ (Dα t (x, ∆))λ x,x+d∆ (µ t , m t )µ t (dx) (5.8)
and, viewing g t as an element of R |Y| ,
g t → − Y L µt g t (y) + E Y τ (Dg t (y, ∆))γ y,y+d∆ (µ t ) m t (y) (5.9)
are concave on R |X | and R |Y| respectively. Therefore, there is anα t and aĝ t that attain the suprema in (5.2); the conditions in (5.6) and (5.7) onα t andĝ t easily follow by writing down the first order conditions for optimality of the mappings in (5.8) and (5.9) respectively.
6 Characterisation of the subsequential rate function for sufficiently regular elements

Let Γ denote the set of all (µ, θ) ∈ D([0, T ], M 1 (X )) × D ↑ ([0, T ], M (Y)) such that the mapping [0, T ] ∋ t → µ_t ∈ M 1 (X ) is absolutely continuous and θ, when viewed as a measure on [0, T ] × Y, admits the representation θ(dydt) = m_t(dy)dt for some m_t ∈ M_1(Y) for almost all t ∈ [0, T ]; for δ > 0, let K_δ := {(µ, θ) ∈ Γ : Ĩ(µ, θ) ≤ δ}. Given bounded measurable functions α : [0, T ] × M 1 (X ) → R |X | and g : [0, T ] × M 1 (X ) × Y → R such that for each t ∈ [0, T ] and y ∈ Y both α(t, ·) and g(t, ·, y) are continuous on M 1 (X ), we define, with a slight abuse of notation, for (µ, θ) ∈ Γ and t ∈ [0, T ],
U α,g t (µ, θ) := [0,t] α s (µ s ),μ s −Λ * µs,ms µ s − X ×E X τ (Dα s (µ s )(x, ∆))λ x,x+d∆ (µ s , m s )µ s (dx) − Y L µs g s (µ s , ·)(y) + E Y τ (Dg s (µ s , y, ∆))γ y,y+d∆ (µ s ) m s (dy) ds. (6.1)
Note that the boundedness of α and g in the above definition implies that Dα ∈ L^τ([0, T ] × X × E_X, λ_{x,x+d∆}(µ_t, m_t) µ_t(dx)dt) and Dg ∈ L^τ([0, T ] × Y × E_Y, γ_{y,y+d∆}(µ_t) m_t(dy)dt).

Theorem 6.1. Let α and g be bounded measurable functions as above. Then

sup_{(µ,θ)∈Γ} (U^{α,g}_T(µ, θ) − Ĩ(µ, θ)) = 0.

Moreover, there exists some δ > 0 (depending on α and g) such that

sup_{(µ,θ)∈K_δ} (U^{α,g}_T(µ, θ) − Ĩ(µ, θ)) = 0,    (6.2)

and the above supremum is attained.
Proof. We first define certain approximations of functions α and g that meet the requirements of Theorem 4.1 and prove certain convergence properties of these approximations. We then use the conclusion of Theorem 4.1 for these approximations and pass to the limit to obtain (6.2). Our proof is inspired by ideas from Puhalskii [27, Lemma 7.2 and Theorem 7.1], with necessary modifications to our mean-field with jumps setting. Since α is a Carathéodory function, using the Scorza-Dragoni theorem, for each i ≥ 1, there exists a compact set F i ⊂ [0, T ] and a measurable functionᾱ i : ), where n(i) → ∞ as i → ∞. By continuity of τ , boundedness of α and α i , boundedness of transition rates of the particles (which is a consequence of assumption (A2)), we have that, for each δ > 0,
[0, T ] × M 1 (X ) → R |X | such that α i = α on F i × M 1 (X ),ᾱ i is continuous on F i × M 1 (X ),sup (µ,θ)∈K δ [0,T ]×X ×E X τ (Dα i (t, µ t )(x, ∆))λ x,x+d∆ (µ t , m t )µ t (dx)dt− [0,T ]×X ×E X τ (Dα(t, µ t )(x, ∆))λ x,x+d∆ (µ t , m t )µ t (dx)dt = sup (µ,θ)∈K δ K c i ×X ×E X τ (Dα i (t, µ t )(x, ∆))λ x,x+d∆ (µ t , m t )µ t (dx)dt− K c i ×X ×E X τ (Dα(t, µ t )(x, ∆))λ x,x+d∆ (µ t , m t )µ t (dx)dt ≤ Leb(K c i ) × c α → 0 (6.3)
as i → ∞, where c α > 0 is a constant depending on α. Furthermore, given δ > 0 and (µ, θ) ∈ K δ , by Lemma 5.1, the mapping [0, T ] ∋ t → µ t ∈ M 1 (X ) is absolutely continuous. Hence, noting that µ is kept fixed, by (5.3) in Theorem 5.3, there exists h X ∈ H(µ, θ) such that
[0,T ] α(t, µ t ), (μ t −Λ * µt,mt µ t ) dt = [0,T ]×X ×E X h X Dαλ x,x+d∆ (µ t , m t )µ t (dx)dt, and [0,T ] α i (t, µ t ), (μ t −Λ * µt,mt µ t ) dt = [0,T ]×X ×E X h X Dα iλx,x+d∆ (µ t , m t )µ t (dx)dt.
Therefore,
[0,T ] α i (t, µ t ) − α(t, µ t ),μ t −Λ * µt,mt µ t dt = [0,T ]×X ×E X h X (Dα i − Dα)λ x,x+d∆ (µ t , m t )µ t (dx)dt ≤ [0,T ]×X ×E X |h X (Dα i − Dα)|λ x,x+d∆ (µ t , m t )µ t (dx)dt ≤ 2 h X L τ * ([0,T ]×X ×E X ,λ x,x+d∆ (µt,mt)µt(dx)dt) × Dα i − Dα L τ ([0,T ]×X ×E X ,λ x,x+d∆ (µt,mt)µt(dx)dt) ≤ 2 max{1, δ + T } × Dα i − Dα L τ ([0,T ]×X ×E X ,λ x,x+d∆ (µt,mt)µt(dx)dt) ,
where the second inequality follows from Hölder's inequality in Orlicz spaces and the third inequality follows from the non-variational representation of the candidate rate function in I * in (5.5), which gives that h X L τ * ([0,T ]×X ×E X ,λ x,x+d∆ (µt,mt)µt(dx)dt) ≤ max{1, I * (µ, θ) + T }, along with the fact that (µ, θ) ∈ K δ and I * (µ, θ) ≤Ĩ(µ, θ). Hence,
sup (µ,θ)∈K δ [0,T ] α i (t, µ t ) − α(t, µ t ),μ t −Λ * µt,mt µ t dt → 0 (6.4)
as i → ∞. Similarly, by standard arguments using mollifiers and the Scorza-Dragoni theorem, we can show that there exist functions g i on [0, T ]×M 1 (X )×Y such that g i (·, ·, y) ∈ C ∞ ([0, T ]×M 1 (X )) for all y ∈ Y and Leb{t ∈ [0, T ] : g i (t, ·, ·) = g(t, ·, ·)} ≤ 1/i for each i ≥ 1. Therefore, using boundedness of the functions g, g i , i ≥ 1, and boundedness of the transition rates of the fast process (which is a consequence of assumption (B2)), we see that
sup_{(µ,θ)∈K_δ} | ∫_{[0,T]×Y} { L_{µ_t} g_i(t, µ_t, ·)(y) + ∫_{E_Y} τ(Dg_i(t, µ_t, y, ∆)) γ_{y,y+d∆}(µ_t) } m_t(dy)dt − ∫_{[0,T]×Y} { L_{µ_t} g(t, µ_t, ·)(y) + ∫_{E_Y} τ(Dg(t, µ_t, y, ∆)) γ_{y,y+d∆}(µ_t) } m_t(dy)dt | → 0    (6.5)

as i → ∞. Since the functions α_i and g_i, i ≥ 1, satisfy the conditions of Theorem 4.1, we have

sup_{(µ,θ)∈D([0,T],M_1(X))×D↑([0,T],M(Y))} (U^{α_i,g_i}_T(µ, θ) − Ĩ(µ, θ)) = 0.
By Lemma 5.1 and the fact thatĨ(µ, θ) ≥ I * (µ, θ), we see thatĨ(µ, θ) = +∞ whenever (µ, θ) / ∈ Γ, and hence we immediately get
sup (µ,θ)∈Γ (U α i ,g i T (µ, θ) −Ĩ(µ, θ)) = 0. (6.6)
Let us now show that
sup (µ,θ)∈K δ (U α i ,g i T (µ, θ) −Ĩ(µ, θ)) = 0 (6.7)
holds for a suitable δ > 0 and all i ≥ 1. Note that, using the boundedness of the functions α, g, α i and g i , i ≥ 1, and boundedness of the transition rates (as a consequence of assumptions (A2) and (B2)), we have
U 2α i ,2g i T (µ, θ) = [0,T ] 2 α i (t, µ t ),μ t −Λ * µt,mt µ t − X ×E X τ (2Dα i (t, µ t )(x, ∆))λ x,x+d∆ (µ t , m t )µ t (dx) − Y 2L µt g t (µ t , ·)(y) + E Y τ (2Dg i (t, µ t , y, ∆))γ y,y+d∆ (µ t ) m t (dy) dt ≥ 2U α i ,g i (µ, θ) − 2T c α,g
for all i ≥ 1, where c α,g > 0 is a constant depending on α and g. Therefore, for a fixed M > 0, we
have sup (µ,θ):U α i ,g i T (µ,θ)≥M (U α i ,g i (µ, θ) −Ĩ(µ, θ)) ≤ sup (µ,θ):U α i ,g i T (µ,θ)≥M (2U α i ,g i (µ, θ) −Ĩ(µ, θ)) − M ≤ sup (µ,θ):U α i ,g i T (µ,θ)≥M (U 2α i ,2g i (µ, θ) −Ĩ(µ, θ)) + 2T c α,g − M ≤ 2T c α,g − M.
Therefore the above implies that,
sup (µ,θ)∈Γ (U α i ,g i (µ, θ) −Ĩ(µ, θ)) ≤ sup (µ,θ)∈K δ (U α i ,g i (µ, θ) −Ĩ(µ, θ)) ∨ sup (µ,θ):U α i ,g i (µ,θ)≥M (U α i ,g i (µ, θ) −Ĩ(µ, θ)) ∨ (M − δ) ≤ sup (µ,θ)∈K δ (U α i ,g i (µ, θ) −Ĩ(µ, θ)) ∨ (2T c α,g − M ) ∨ (M − δ).
Hence, choosing M = 1 + 2T c α,g and δ = M + 1, the above and (6.6) imply (6.7). Letting i → ∞, using convergences (6.3)-(6.4) for the slow process, and (6.5) for the fast process, (6.7) becomes
sup (µ,θ)∈K δ (U α,g T (µ, θ) −Ĩ(µ, θ)) = 0. (6.8)
Since the functions U α i ,g i T (defined in (4.3)), i ≥ 1, are continuous on Γ and since for all δ ′ > 0
lim i→∞ sup (µ,θ)∈K δ ′ |U α i ,g i T (µ, θ) − U α,g T (µ, θ)| → 0
as i → ∞, it follows that, for all δ ′ > 0, U α,g T (defined in (6.1)) is continuous on K δ ′ . Hence, using the compactness of the level sets ofĨ, we see that the supremum in (6.8) is attained. This completes the proof of the theorem.

Characterisation of Ĩ for regular elements

Theorem 6.2. Suppose that (μ, θ) ∈ Γ, with θ(dydt) = m_t(dy)dt, is such that μ_0 = ν, the mapping [0, T ] ∋ t → μ_t is Lipschitz continuous, inf_{t∈[0,T]} min_{x∈X} μ_t(x) > 0 and inf_{t∈[0,T]} min_{y∈Y} m_t(y) > 0. Then Ĩ(μ, θ) = I * (μ, θ).

Proof. Let δ = inf_{t>0} min_{x∈X} μ_t(x). For each t ∈ [0, T ], consider the parametrised optimisation problems

sup_{α_t∈R^{|X|}} { ⟨α_t, μ̇_t − Λ*_{u,m_t} u⟩ − ∫_{X×E_X} τ(Dα_t(x, ∆)) λ_{x,x+d∆}(u, m_t) u(dx) },    (6.9)

where u ∈ M 1 (X ) is such that u(x) ≥ δ/2 for all x ∈ X , and

sup_{g_t∈B(Y)} − ∫_Y { L_u g_t(·)(y) + ∫_{E_Y} τ(Dg_t(y, ∆)) γ_{y,y+d∆}(u) } m_t(dy),    (6.10)

where u ∈ M 1 (X ). Note that the mappings
α t → α t ,μ t −Λ * u,mt u − X ×E X τ (Dα t (x, ∆))λ x,x+d∆ (u,m t )u(dx), (6.11)
where u is such that u(x) ≥ δ/2 for all x ∈ X , and since inf t∈[0,T ] min y∈Ymt (y) > 0, viewing g t as an element of R |Y| ,
g t → − Y L u g t (·)(y) + E Y τ (Dg t (y, ∆))γ y,y+d∆ (u) m t (dy) (6.12)
are concave on R |X | and R |Y| respectively. Therefore, we see that there exist anα t (u) ∈ R |X | and aĝ t (u) ∈ R |Y| that solve (6.9) and (6.10) respectively. Guided by (5.6) and (5.7),α t (u) andĝ t (u) satisfy the first order optimality conditionṡ
µ t (x) − (Λ * u,mt u)(x) + u(x) x ′ ∈X : (x,x ′ )∈E X (exp{α t (u)(x ′ ) −α t (u)(x)} − 1)λ x,x ′ (u,m t ) − x 0 ∈X : (x 0 ,x)∈E X u(x 0 )(exp{α t (u)(x) −α t (u)(x 0 )} − 1)λ x 0 ,x (u,m t ) = 0, ∀x ∈ X , (6.13)
where t ∈ [0, T ] and u ∈ M 1 (X ) is such that u(x) ≥ δ/2 for all x ∈ X , and m t (y)
y ′ ∈Y: (y,y ′ )∈E Y exp{ĝ t (u, y ′ ) −ĝ t (u, y)}γ y,y ′ (u) − y 0 ∈Y: (y 0 ,y)∈E Ym t (y 0 ) exp{ĝ t (u, y) −ĝ t (u, y 0 )}γ y 0 ,y (u) = 0, ∀y ∈ Y, (6.14)
where t ∈ [0, T ] and u ∈ M 1 (X ), respectively. We now define bounded measurable functionsα : [0, T ] × M 1 (X ) → R |X | andĝ : [0, T ] × M 1 (X ) × Y → R that are continuous on M 1 (X ) such thatα(u) (resp.ĝ(u)) solves the optimisation problem in (6.9) (resp. (6.10)). Note that the objective function in (6.10) is uniquely determined by {g(t, y ′ ) − g(t, y), (y, y ′ ) ∈ E Y }, and by assumption (A1), the objective function in (6.9) is uniquely determined by {α t (x ′ ) − α t (x), (x, x ′ ) ∈ E X }. Since inf t∈[0,T ] min x∈Xμt (x) > 0, the mapping t → µ t is Lipschitz continuous and the transition rates of the slow process are bounded (which is a consequence of assumption (A2)), we see that we can restrict the supremum over α t in (6.9) to a single compact and convex subset of R |X | , regardless of t ∈ [0, T ] and u ∈ M 1 (X ) with u(x) ≥ δ/2 for all x ∈ X . Similarly, since inf t∈[0,T ] min y∈Ymt (y) > 0 and the transition rates of the fast process are bounded (which follows from assumption (B2)), we see that we can restrict the supremum in (6.10) to a single compact and convex subset of R |Y| , regardless of t ∈ [0, T ] and u ∈ M 1 (X ). Also, note that the mappings (6.9) and (6.10), when viewed as
{α t (x ′ ) − α t (x), (x, x ′ ) ∈ E X } → α t ,μ t −Λ * u,mt u − X ×E X τ (Dα t (x, ∆))λ x,x+d∆ (u,m t )u(dx)
and,
{g t (y ′ ) − g t (y), (y, y ′ ) ∈ E Y } → − Y L u g t (·)(y) + E Y τ (Dg t (y, ∆))γ y,y+d∆ (u) m t (dy)
are strictly concave on R |E X | and R |E Y | respectively; hence there exists a unique {α t (u)(x ′ ) − α t (u)(x), (x, x ′ ) ∈ E X } and a unique {ĝ t (u, y ′ ) −ĝ t (u, y), (y, y ′ ) ∈ E Y } that solve (6.9) and (6.10) respectively. Fixingα t (u)(x 0 ) = 0 for some x 0 ∈ X , where t ∈ [0, T ] and u ∈ M 1 (X ) with u(x) ≥ δ/2 for all x ∈ X , fixing g t (u, y) = 0 for some y 0 ∈ Y, where t ∈ [0, T ] and u ∈ M 1 (X ), defininĝ α t (u)(x) = 0 ∀x ∈ X whenever u ∈ M 1 (X ) is such that u(x) < δ/4 for some x ∈ X , and defininĝ α(u) whenever u is such that u(x) ∈ [δ/4, δ/2] for some x ∈ X using a linear interpolation, we obtain bounded functionsα : Sinceα andĝ satisfy the assumptions of Theorem 6.1, there exists (μ,θ) ∈ Γ that attains the supremum in (6.2) withα andĝ in place of α and g, respectively. That is, Uα ,ĝ T (μ,θ) =Ĩ(μ,θ). On the other hand, by (5.2) and the above,
[0, T ]×M 1 (X ) → R |X | andĝ : [0, T ]×M 1 (X )×Y → R., T ] × M 1 (X ) ∋ (t, u) →α t (u) ∈ R |X | and [0, T ] × M 1 (X ) × Y ∋ (t, u, y) →ĝ t (u, y) ∈ RI * (μ,θ) ≥ Uα ,ĝ T (μ,θ) =Ĩ(μ,θ),
and sinceĨ(μ,θ) ≥ I * (μ,θ), we have that Uα ,ĝ T (μ,θ) = I * (μ,θ) =Ĩ(μ,θ). (6.15) Note thatμ 0 = ν sinceĨ(μ,θ) < +∞. We now proceed to show thatm t =m t for almost all t ∈ [0, T ] andμ =μ. This would establishĨ(μ,θ) = I * (μ,θ). By (6.15), we havẽ m t (y)
y ′ ∈Y: (y,y ′ )∈E Y exp{ĝ t (μ t , y ′ ) −ĝ t (μ t , y)}γ y,y ′ (μ t ) − y 0 ∈Y: (y 0 ,y)∈E Ym t (y 0 ) exp{ĝ t (μ t , y) −ĝ t (μ t , y 0 )}γ y 0 ,y (μ t ) = 0, ∀y ∈ Y, (6.16)
for almost all t ∈ [0, T ]. By assumption (B2), the Markov process on Y with transition rates exp{ĝ t (μ t , y ′ ) −ĝ t (μ t , y)}γ y,y ′ (μ t ), (y, y ′ ) ∈ E Y , possesses a unique invariant probability measure; comparing (6.14) with u =μ t and (6.16), we get
m t =m t (6.17)
for almost all t ∈ [0, T ]. On one hand, by using the first order optimality condition in (6.13) with u =μ t , and the just established fact thatm t =m t for almost all t ∈ [0, T ], we geṫ
µ t (x) − (Λ * µt,mtμ t )(x) +μ t (x) x ′ ∈X : (x,x ′ )∈E X (exp{α t (μ t )(x ′ ) −α t (μ t )(x)} − 1)λ x,x ′ (μ t ,m t ) − x 0 ∈X : (x 0 ,x)∈E Xμ t (x 0 )(exp{α t (μ t )(x) −α t (μ t )(x 0 )} − 1)λ x 0 ,x (μ t ,m t ) = 0, ∀x ∈ X , (6.18)
for almost all t ∈ [0, T ]. On the other hand, by (6.15), we geṫ
µ t (x) − (Λ * µt,mtμt )(x) +μ t (x) x ′ ∈X : (x,x ′ )∈E X (exp{α t (μ t )(x ′ ) −α t (μ t )(x)} − 1)λ x,x ′ (μ t ,m t ) − x 0 ∈X : (x 0 ,x)∈E Xμ t (x 0 )(exp{α t (μ t )(x) −α t (μ t )(x 0 )} − 1)λ x 0 ,x (μ t ,m t ) = 0, ∀x ∈ X , (6.19)
for almost all t ∈ [0, T ]. Note that, by the optimality condition (6.13) and by (6.17), the mapping
u → (Λ * u,mt u)(x) + u(x) x ′ ∈X : (x,x ′ )∈E X (exp{α t (u)(x ′ ) −α t (u)(x)} − 1)λ x,x ′ (u,m t ) − x 0 ∈X : (x 0 ,x)∈E X u(x 0 )(exp{α t (u)(x) −α t (u)(x 0 )} − 1)λ x 0 ,x (u,m t ),
x ∈ X ∈ R |X | on {u ∈ M 1 (X ) : u(x) ≥ δ/2 ∀x ∈ X } is identically equal toμ t for almost all t ∈ [0, T ]. Hence, by (6.18) and (6.19), and noting thatμ 0 =μ 0 = ν, Gronwall inequality implies thatμ t =μ t for all t ∈ [0, T ].
We have thus shown that (μ,θ) = (μ,θ), and the second equality in (6.15) implies thatĨ(μ,θ) = I * (μ,θ). This completes the proof of the theorem. We shall proceed through a sequence of lemmas. In each lemma, we shall extend the conclusioñ I(µ, θ) = I * (µ, θ) to a larger class of elements (µ, θ) by producing a sequence (µ i , θ i ) such that
I(µ i , θ i ) = I * (µ i , θ i ) for all i ≥ 1, (µ i , θ i ) → (µ, θ) in D([0, T ], M 1 (X )) × D ↑ ([0, T ], M (Y)
) as i → ∞, and I * (µ i , θ i ) → I * (µ, θ) as i → ∞. Using these approximations, we finally show that I(µ, θ) = I * (µ, θ) for all (µ, θ) ∈ D([0, T ], M 1 (X )) × D ↑ ([0, T ], M (Y)) (see Theorem 7.5).
We start with an extension of the conclusion of Theorem 6.2 to all initial conditions ν.

Lemma 7.1. Suppose that (μ, θ) ∈ Γ, with θ(dydt) = m_t(dy)dt, is such that μ_0 = ν, the mapping [0, T ] ∋ t → μ_t is Lipschitz continuous, min_{x∈X} μ_t(x) > 0 for every t ∈ (0, T ], and inf_{t∈[0,T]} min_{y∈Y} m_t(y) > 0. Then Ĩ(μ, θ) = I * (μ, θ).
Proof. We begin with some notations.
Let X 0 = {x ∈ X :μ 0 (x) = 0}. For each x ∈ X 0 , let {x x k , 1 ≤ k ≤ l(x)} be such thatμ 0 (x x 1 ) ≥ 1/|X 0 | (in particular, x x 1 / ∈ X 0 ), (x x k , x x k+1 ) ∈ E X for all 1 ≤ k ≤ l(x) − 1 and (x x l(x) , x) ∈ E X , i.e., the collection of edges {(x x k , x x k+1 ), 1 ≤ k ≤ l(x)− 1}∪ (x x l(x) , x) form a directed path of length l(x) from x x 1 to x.
Also, for the given ν ∈ M 1 (X ), let µ(ν,θ) ∈ D([0, ∞), M 1 (X )) denote the unique solution to the ODEμ t =Λ * µt,mt µ t with initial condition µ 0 = ν.
For each i ≥ 1, we define a pathμ i ∈ D([0, T ], M 1 (X )) as follows. Defineμ i t = µ t (μ 0 ,θ) for t ∈ [0, τ i ] where τ i = inf{t > 0 : µ t (μ 0 ,θ)(x) =μ 1/i (x)/2 for some x ∈ X 0 }. Note that τ i < +∞ for i sufficiently large. Also note thatμ i τ i (x) > 0 for all x ∈ X , and that the supremum over α t in the definition of I * (μ i ,θ) (see (5.2)) is attained at α t = 0 for all t ∈ [0, τ i ]. Let ε i (x) =μ 1/i (x) −μ i τ i (x) for x ∈ X and i ≥ 1. Since the mapping t →μ t is Lipschitz continuous, we see that τ i → 0 as i → ∞, and ε i (x) → 0 as i → ∞ for all x ∈ X . For each x ∈X 0 := X 0 ∩ {x ∈ X 0 : ε i (x) > 0}, we shall now move the mass ε i (x) from the vertex x x 1 to x via the edges defined in the previous paragraph using a piecewise constant velocity path. Denote the elements ofX 0 by x 1 , x 2 . . . , x |X 0 | , let l(x 0 ) = 0 and ε i (x 0 ) = 0. Given r ∈ {0, 1, . . . ,
|X 0 | − 1}, s ∈ {0, 1, . . . , l(x r+1 ) − 1}, and t ∈ [τ i + r m=0 l(x m )ε i (x m ) + sε i (x r+1 ), τ i + r m=0 l(x m )ε i (x m ) + (s + 1)ε i (x r+1 )), definė µ i t (x) := −1 if x = x x r+1 s+1 1 if x = x x r+1 s+2 0 otherwise,
i.e., we transport a mass of ε i (x r+1 ) at unit rate from the node x x r+1 s+1 to x x r+1 s+2 during the above time interval. Note that we haveμ i t (x) =μ t (x) for all x ∈X 0 at time t = τ i + |X 0 | m=1 l(x m )ε i (x m ). Similarly, for x ∈ X \X 0 with ε i (x) > 0, one defines a sequence of edges from a suitable x ′ ∈ X \X 0 (possibly from multiple x ′ ∈ X \X 0 ) with ε i (x ′ ) < 0 and moves the mass ε i (x) to x through similar piecewise constant velocity trajectories defined above. For each x ∈ X \X 0 with ε i (x) < 0, we similarly move the mass ε i (x) from x to suitable vertices in X \X 0 via piecewise constant velocity trajectories. At the end of this procedure, we haveμ î τ i =μ 1/i for someτ i ≥ τ i . We now definê µ i t =μ t+1/i−τ i for all t ∈ [τ i , T ] (see Figure 1 for a pictorial representation ofμ i ). Since ε i (x) → 0 as i → ∞ for all x ∈ X , we have thatτ i → 0 as i → ∞.
Also, for each i ≥ 1 and t ∈ [0, T ], define the probability measurem i t on Y bŷ
m i t (y) := m t (y) if t ∈ [0, τ i ], m τ i (y) if t ∈ [τ i ,τ i ], m t+1/i−τ i (y) if t ∈ (τ i , T ],
for all y ∈ Y, and define the measureθ
i on [0, T ] × Y byθ i (dydt) =m i t (dy)dt. Clearly,θ i ∈ D ↑ ([0, T ], M (Y)).
Thanks to the fact thatμ i τ i (x) > 0 for all x ∈ X and the fact that α t = 0 attains the supremum in the definition of I * (μ i ,θ i ) for all t ∈ [0, τ i ], using arguments similar to those used in the proof of Theorem 6.2, one can now construct a bounded measurable functionα i : [0, T ] × M 1 (X ) → R |X | such thatα i t (μ i t ) attains the supremum over α t in the definition of I * (μ i ,θ i ) (in (5.2)) andα i t (·) is continuous on M 1 (X ) for all t ∈ [0, T ]. Similarly, sinceθ i satisfies the conditions of Theorem 6.2, one can construct a bounded measurable functionĝ i : [0, T ] × M 1 (X ) × Y → R such thatĝ i t (μ i t , ·) attains the supremum over g t in the definition of I * (μ i ,θ i ) andĝ i t (·) is continuous on M 1 (X ) for each t ∈ [0, T ]. Hence, using arguments similar to those used in the proof of Theorem 6.2, one concludes thatĨ(μ i ,θ i ) = I * (μ i ,θ i ) for all i ≥ 1.
Let us now show that I * (μ i ,θ i ) → I * (μ,θ) as i → ∞. For the fast component, sinceτ i → 0, we see thatθ i →θ in D ↑ ([0, T ], M (Y)) as i → ∞. By assumption (B2), we see that
0 ≤ sup i≥1,t∈[0,T ] sup gt∈R |Y| − Y Lμi t g t (·)(y) + E Y τ (Dg t (y, ∆))γ y,y+d∆ (μ i t ) m i t (dy) < +∞,
and hence the bounded convergence theorem immediately yields
[0,τ i ] sup gt∈B(Y) − Y Lμi t g t (·)(y) + E Y τ (Dg t (y, ∆))γ y,y+d∆ (μ i t ) m i t (dy) dt → 0 and [T +1/i−τ i ,T ] sup gt∈B(Y) − Y Lμ t g t (·)(y) + E Y τ (Dg t (y, ∆))γ y,y+d∆ (μ t ) m t (dy) dt → 0 as i → ∞. Noting thatm i t =m t+1/i−τ i andμ i t =μ t+1/i−τ i for all t ∈ [τ i , T ]
, the above convergences imply that
[0,T ] sup gt∈B(Y) − Y Lμi t g t (·)(y) + E Y τ (Dg t (y, ∆))γ y,y+d∆ (μ i t ) m i t (dy) dt → [0,T ] sup gt∈B(Y) − Y Lμ t g t (·)(y) + E Y τ (Dg t (y, ∆))γ y,y+d∆ (μ t ) m t (dy) dt as i → ∞.
For the slow component, sinceτ i → 0 as i → ∞, using the absolute continuity of the mapping t →μ t and the definition of the pathsμ i , it follows from the dominated convergence theorem that μ i t →μ t as i → ∞ uniformly in t ∈ [0, T ] and hence we have thatμ i →μ in D([0, T ], M 1 (X )) as i → ∞. Let us first show that
[0,τ i ] sup αt∈R |X | α t ,μ i t −Λ * µ i t ,m i tμ i t − X ×E X τ (Dα t (x, ∆))λ x,x+d∆ (μ i t ,m i t )μ i t (dx) dt converges to 0 as i → ∞. Towards this, let t ∈ [τ i + r m=0 l(x m )ε i (x m ) + sε i (x r+1 ), τ i + r m=0 l(x m )ε i (x m ) + (s + 1)ε i (x r+1 )) where r ∈ {0, 1, . . . , |X 0 | − 1}, and s ∈ {0, 1, . . . , l(x r+1 ) − 1}. Note that, we have sup αt∈R |X | α t ,μ i t −Λ * µ i t ,m i tμ i t − X ×E X τ (Dα t (x, ∆))λ x,x+d∆ (μ i t ,m i t )μ i t (dx) ≤ sup αt∈R |X | (α t (x x r+1 s+2 ) − α t (x x r+1 s+1 )) − (exp{α t (x x r+1 s+2 ) − α t (x x r+1 s+1 )} − 1) ×λ x x r+1 s+1 ,x x r+1 s+2 (μ i t ,m i t )μ i t (x x r+1 s+1 ) − inf αt∈R |X | (x,x ′ )∈E X : (x,x ′ ) =(x ≤ − 1 c (u log u − u)| cμ i t 2 (x x r+1 s+1 ) cμ i t 1 (x x r+1 s+1 ) + c 1 ε i (x r+1 ) = o(1) as i → ∞, where t 1 = τ i + r m=0 l(x m )ε i (x m ) + sε i (x r+1 ), t 2 = t 1 + ε i (x r+1
) and the above integral is evaluated over the time interval [τ i + r m=0 l(x m )ε i (x m ) + sε i (x r+1 ), τ i + r m=0 l(x m )ε i (x m ) + (s + 1)ε i (x r+1 )). Hence, repeating the above calculation for each constant velocity section of the pathμ i during the time interval [τ i ,τ i ], we see that
andμ i t = µ t (μ 0 ,θ) on t ∈ [0, τ i ], we have [0,T ] sup αt∈R |X | α t ,μ t −Λ * µt,mtμ t − X ×E X τ (Dα t (x, ∆))λ x,x+d∆ (μ t ,m t )μ t (dx) dt − [0,T ] sup αt∈R |X | α t ,μ i t −Λ * µ i t ,m i tμ i t − X ×E X τ (Dα t (x, ∆))λ x,x+d∆ (μ i t ,m i t )μ i t (dx) dt ≤ [0,1/i] sup αt∈R |X | α t ,μ t −Λ * µt,mtμ t − X ×E X τ (Dα t (x, ∆))λ x,x+d∆ (μ t ,m t )μ t (dx) dt + [T +1/i−τ i ,T ] sup αt∈R |X | α t ,μ t −Λ * µt,mtμ t − X ×E X τ (Dα t (x, ∆))λ x,x+d∆ (μ t ,m t )μ t (dx) dt + [0,τ i ] sup αt∈R |X | α t ,μ i t −Λ * µ i t ,m i tμ i t − X ×E X τ (Dα t (x, ∆))λ x,x+d∆ (μ i t ,m i t )μ i t (dx) dt → 0 as i → ∞. We have thus shown that I * (μ i ,θ i ) → I * (μ,θ) as i → ∞. Since (μ i ,θ i ) → (μ,θ) in D([0, T ], M 1 (X )) × D ↑ ([0, T ], M (Y)
) as i → ∞, the lower semicontinuity ofĨ implies that lim inf i→∞Ĩ (μ i ,θ i ) ≥Ĩ(μ,θ). Therefore, using the above convergence and the fact thatĨ(μ i ,θ i ) = I * (μ i ,θ i ) for all i ≥ 1, we see thatĨ(μ,θ) ≤ I * (μ,θ). On the other hand, sincẽ I(μ,θ) ≥ I * (μ,θ), it follows thatĨ(μ,θ) = I * (μ,θ). This completes the proof of the lemma.
Remark 7.2. We shall repeatedly use the immediately preceding argument; starting with an element
(μ,θ) ∈ D([0, T ], M 1 (X ))×D ↑ ([0, T ], M (Y)), we shall produce a sequence (μ i ,θ i ) ∈ D([0, T ], M 1 (X ))× D ↑ ([0, T ], M (Y)), i ≥ 1, such thatĨ(μ i ,θ i ) = I * (μ i ,θ i ) for all i ≥ 1, (μ i ,θ i ) → (μ,θ) in D([0, T ], M 1 (X ))× D ↑ ([0, T ], M (Y)
) as i → ∞ and I * (μ i ,θ i ) → I * (μ,θ) as i → ∞, and use the above argument to conclude thatĨ(μ,θ) = I * (μ,θ).
We now extend the conclusion of the previous lemma to all elementsθ ∈ D ↑ ([0, T ], M (Y)). Since, for each t ∈ [0, T ], the mapping
(g t , m t ) → max − Y Lμ t g t (·)(y) + E Y
τ (Dg t (y, ∆))γ y,y+d∆ (μ t ) m t (dy), 0 on (R ∪ {+∞, −∞}) |Y| × M 1 (Y) is bounded and continuous (thanks to assumption (B2)), by an application of the Berge's maximum theorem, it follows that the mapping
m t → sup gt∈R |Y| − Y Lμ t g t (·)(y) + E Y τ (Dg t (y, ∆))γ y,y+d∆ (μ t ) m t (dy) (7.2)
is continuous on M 1 (Y). Similarly, for each t ≥ 0, by assumption (A2), it follows that the mapping
(α t , m t ) → α t ,μ t −Λ * µt,mt − X ×E X τ (Dα t (x, ∆))λ x.x+d∆ (μ t , m t )μ t (dx)
is bounded and continuous on R |X | × M 1 (Y). Again, by the Berge's maximum theorem,
m t → sup αt∈R |X | α t ,μ t −Λ * µt,mt − X ×E X τ (Dα t (x, ∆))λ x,x+d∆ (μ t , m t )μ t (dx)
is continuous on M 1 (Y). Therefore, for each t ∈ [0, T ], we see that
sup αt∈R |X | α t ,μ t −Λ * µt,m i t − X ×E X τ (Dα t (x, ∆))λ x,x+d∆ (μ t ,m i t )μ t (dx) → sup αt∈R |X | α t ,μ t −Λ * µt,mt − X ×E X τ (Dα t (x, ∆))λ x,x+d∆ (μ t ,m t )μ t (dx) , and sup gt∈B(Y) − Y Lμ t g t (·)(y) + E Y τ (Dg t (y, ∆))γ y,y+d∆ (μ t ) m i t (dy) → sup gt∈B(Y) − Y Lμ t g t (·)(y) + E Y τ (Dg t (y, ∆))γ y,y+d∆ (μ t ) m t (dy) as i → ∞. Noting that 0 ≤ sup i≥1,t∈[0,T ] sup αt∈R |X | α t ,μ t −Λ * µt,m i t − X ×E X τ (Dα t (x, ∆))λ x,x+d∆ (μ t ,m i t )μ t (dx) < +∞ and 0 ≤ sup i≥1,t∈[0,T ] sup gt∈R |Y| − Y Lμ t g t (·)(y) + E Y τ (Dg t (y, ∆))γ y,y+d∆ (μ t ) m i t (dy) < +∞,
using the bounded convergence theorem, we obtain that I * (μ,θ i ) → I * (μ,θ) as i → ∞. Thanks to Remark 7.2, this completes the proof of the lemma.
We now extend the conclusion of the previous lemma to the case when the mapping [0, T ] ∋ t → µ t ∈ M 1 (X ) is not necessarily Lipschitz continuous. Proof. Let us first suppose that the mapping t →μ t is locally Lipschitz continuous at t = 0 so that sup_{t∈[0,η]} ‖μ̇_t‖ < +∞ for some η > 0. Define a sequence of pathsμ i , i ≥ 1, byμ i 0 =μ 0 , and

μ̇^i_t = μ̇_t 1_{{‖μ̇_t‖ ≤ i}} + Λ*_{μ^i_t, m_t} μ^i_t 1_{{‖μ̇_t‖ > i}},    t ∈ [0, T ].
Since I * (μ,θ) < +∞, by Lemma 5.1, it follows that the mapping t →μ t is absolutely continuous and by the dominated convergence theorem one easily concludes thatμ i t →μ t as i → ∞ uniformly in t ∈ [0, T ]. Thus, by the assumption inf t∈[δ,T ] min x∈Xμt (x) > 0 for all δ > 0, it follows thatμ i ∈ D([0, T ], M 1 (X )) for all i sufficiently large. Note that (μ i ,θ) satisfies the conditions of Lemma 7.3 and henceĨ(μ i ,θ) = I * (μ i ,θ) for all i ≥ 1, thatμ i →μ in D([0, T ], M 1 (X )) as i → ∞, and that µ i t =μ t for all t ∈ [0, η] for all sufficiently large i.
Let us now show that I * (μ i ,θ) → I * (μ i ,θ) as i → ∞. By the arguments similar to those used in the proof of Lemma 7.3, using Berge's maximum theorem, for each t ∈ [0, T ], the mapping
u → sup gt∈B(Y) − Y L u g t (·)(y) + E Y τ (Dg t (y, ∆))γ y,y+d∆ (u) m t (dy) is continuous on M 1 (X ), and hence sup gt∈B(Y) − Y Lμi t g t (·)(y) + E Y τ (Dg t (y, ∆))γ y,y+d∆ (μ i t ) m t (dy) → sup gt∈B(Y) − Y Lμ t g t (·)(y) + E Y τ (Dg t (y, ∆))γ y,y+d∆ (μ t ) m t (dy)
as i → ∞. Therefore, by the bounded convergence theorem, we have
[0,T ] sup gt∈B(Y) − Y Lμi t g t (·)(y) + E Y τ (Dg t (y, ∆))γ y,y+d∆ (μ i t ) m t (dy) dt → [0,T ] sup gt∈B(Y) − Y Lμ t g t (·)(y) + E Y τ (Dg t (y, ∆))γ y,y+d∆ (μ t ) m t (dy) dt as i → ∞.
For the slow component, define
Z i t := sup αt∈R |X | α t ,μ i t −Λ * µ i t ,mtμ i t − X ×E X τ (Dα t (x, ∆))λ x,x+d∆ (μ i t ,m t )μ i t (dx) , t ∈ [0, T ],
and
Z t := sup αt∈R |X | α t ,μ t −Λ * µt,mtμ t − X ×E X τ (Dα t (x, ∆))λ x,x+d∆ (μ t ,m t )μ t (dx) , t ∈ [0, T ].
Since I * (μ,θ) < +∞ it follows that Z t < +∞ for almost all t ∈ [0, T ]. Thanks to the assumption inf t∈[δ,T ] min x∈Xμt (x) > 0 for all δ > 0, using the Berge's maximum theorem, for almost all t ∈ [η, T ], we see that the mapping
u → sup αt∈R |X | α t ,μ t −Λ * u,mt u − X ×E X τ (Dα t (x, ∆))λ x,x+d∆ (u,m t )u(dx)
thatĨ(μ i ,θ i ) = I * (μ i ,θ i ) for all i ≥ 1. Again, using arguments similar to those used in the proof of Lemma 7.1, we conclude that I * (μ i ,θ i ) → I * (μ,θ) as i → ∞. Once again, by Remark 7.2, we haveĨ(μ,θ) = I * (μ,θ). This completes the proof of the lemma.
We finally show thatĨ(µ, θ) = I * (µ, θ) for all (µ, θ) ∈ D([0, T ], M 1 (X )) × D ↑ ([0, T ], M (Y)), by allowing the path µ to hit the boundary of M 1 (X ). We shall construct a sequence of pathsμ i ∈ D([0, T ], M 1 (X )), i ≥ 1, such thatμ i →μ in D([0, T ], M 1 (X )) as i → ∞,Ĩ(μ i ,θ) = I * (μ,θ) for all i ≥ 1, and I * (μ i ,θ) → I * (μ,θ) as i → ∞.
Let ε_i(x) = (μ_{1/i}(x) + 1/i) / (1 + |X |/i), x ∈ X and i ≥ 1.
Using arguments similar to those used in the proof of Lemma 7.1, we first construct a sequence of timesτ i , i ≥ 1, and a sequence of piecewise constant velocity trajectoriesμ i t , t ∈ [0,τ i ], with the property thatμ i 0 =μ 0 for all i ≥ 1,μ î τ i (x) = ε i (x) for all x ∈ X and i ≥ 1,τ i → 0 as i → ∞, and
[0,τ i ] sup αt∈R |X | α t ,μ i t −Λ * µ i t ,mtμ i t − X ×E X τ (Dα t (x, ∆))λ x,x+d∆ (μ i t ,m t )μ i t (dx) dt → 0 (7.4)
as i → ∞. We then define the pathμ i t on t ∈ (τ i , T ] bŷ
µ i t (x) =μ t+1/i−τ i (x) + 1/i 1 + |X |/i , x ∈ X .
Clearly,μ i t →μ t as i → ∞ uniformly in t ∈ [0, T ] and henceμ i →μ in D([0, T ], M 1 (X )) as i → ∞. Note that (μ i ,θ) satisfies the conditions of Lemma 7.4 and hence we haveĨ(μ i ,θ) = I * (μ i ,θ) for all i ≥ 1.
We now show that I * (μ i ,θ) → I * (μ,θ) as i → ∞. Using arguments similar to those used in the proof of Lemma 7.4, it is easy to show that
[0,T ] sup gt∈B(Y) − Y Lμi t g t (·)(y) + E Y τ (Dg t (y, ∆))γ y,y+d∆ (μ i t ) m t (dy) dt → [0,T ] sup gt∈B(Y) − Y Lμ t g t (·)(y) + E Y τ (Dg t (y, ∆))γ y,y+d∆ (μ t ) m t (dy) dt (7.5) as i → ∞.
To show convergence of the integral corresponding to the slow process, define
Z i t := sup αt∈R |X | α t ,μ i t−1/i+τ i −Λ * µ i t−1/i+τ i ,mtμ i t−1/i+τ i − X ×E X τ (Dα t (x, ∆))λ x,x+d∆ (μ i t−1/i+τ i ,m t )μ i t−1/i+τ i (dx) , t ∈ [1/i, T + 1/i −τ i ],
and
Z t := sup αt∈R |X | α t ,μ t −Λ * µt,mtμ t − X ×E X τ (Dα t (x, ∆))λ x,x+d∆ (μ t ,m t )μ t (dx) , t ∈ [0, T ].
Note the shift in the time index in the definition of Z i t to enable direct comparison between Z t and Z i t . For t ∈ [1/i, T ], we then have
Z i t = 1 1 + |X |/i sup αt∈R |X | α t ,μ t − (x,x ′ )∈E X (exp{α t (x ′ ) − α t (x)} − 1)λ x,x ′ (μ i t−1/i+τ i ,m t )(μ t (x) + 1/i) .
The objective function above can be simplified as
α t ,μ t − (x,x ′ )∈E X exp{α t (x ′ ) − α t (x)}λ x,x ′ (μ i t−1/i+τ i ,m t )(μ t (x) + 1/i) = α t ,μ t − (x,x ′ )∈E X exp{α t (x ′ ) − α t (x)}λ x,x ′ (μ t ,m t )μ t (x) − (x,x ′ )∈E X exp{α t (x ′ ) − α t (x)} (λ x,x ′ (μ i t−1/i+τ i ,m t ) −λ x,x ′ (μ t ,m t ))μ t (x) +λ x,x ′ (μ i t−1/i+τ i ,m t ) i ≤ α t ,μ t − (x,x ′ )∈E X exp{α t (x ′ ) − α t (x)}λ x,x ′ (μ t ,m t )μ t (x) − (x,x ′ )∈E X exp{α t (x ′ ) − α t (x)} − c Lμt (x) i + c i
where the last inequality follows from assumption (A2); here c = min (x,x ′ )∈E X min y∈Y min ξ∈M 1 (X ) λ x,x ′ (ξ, y)
and
c L = max (x,x ′ )∈E X max y∈Y c x,x ′ ,y L
where c x,x ′ ,y L is the Lipschitz constant of λ x,x ′ (·, y), (x, x ′ ) ∈ E X , y ∈ Y. Fix t ∈ [1/i, T + 1/i −τ i ] with Z t < +∞ and let (α i t (x), x ∈ X ) ∈ R |X | denote the optimiser in the definition of Z i t . Then, from the above computation, for all t ∈ [1/i, T + 1/i −τ i ] with Z t < +∞ we obtain that
Z i t ≤ (1/(1 + |X |/i)) {Z t + c 2 |E X |(1 + ‖μ̇ t ‖)}.
Hence by the dominated convergence theorem, we see that
[0,T ] sup αt∈R |X | α t ,μ i t −Λ * µ i t ,mtμ i t − X ×E X τ (Dα t (x, ∆))λ x,x+d∆ (μ i t ,m t )μ i t (dx) × 1 {t≥τ i } dt converges to [0,T ] sup αt∈R |X | α t ,μ t −Λ * µt,mtμ t − X ×E X τ (Dα t (x, ∆))λ x,x+d∆ (μ t ,m t )μ t (dx) dt
as i → ∞. This along with the convergences (7.4) and (7.5) implies that I * (μ i ,θ) → I * (μ,θ) as i → ∞. The procedure of Remark 7.2 then completes the proof of the theorem.
Completing the Proof of Theorem 2.2
We finally complete the proof of Theorem 2.2 by extending the conclusion of Theorem 7.5 to all subsequential rate functionsĨ, i.e. we remove the restriction that, for some ν ∈ M 1 (X ), I(µ, θ) = +∞ unless µ 0 = ν.
Proof of Theorem 2.2. Fix ν ∈ M 1 (X ) and suppose that {µ N } is such that lim sup N →∞ (1/N ) log P (|µ N (0) − ν| ≥ ε) = −∞ for each ε > 0. By Theorem 3.3, the family {(µ N , θ N )} N ≥1 is exponentially tight in D([0, T ], M 1 (X )) × D ↑ ([0, T ], M (Y)). Therefore, there exists a subsequence {N k } k≥1 of N such that {(µ N k , θ N k )} k≥1 satisfies the LDP with rate function Ĩ (see, for example, Dembo and Zeitouni [9, Lemma 4.1.23]); by the above condition on the family {µ N } and by the contraction principle, we see that Ĩ(µ, θ) = +∞ unless µ 0 = ν. Therefore, by Theorem 7.5, Ĩ = I * on D([0, T ], M 1 (X )) × D ↑ ([0, T ], M (Y)). Hence Ĩ is uniquely determined for all such subsequences, and it follows that the family {(µ N , θ N )} N ≥1 satisfies the LDP with rate function I * (see, for example, Dembo and Zeitouni [9, Exercise 4.4.15 (b)]) defined as follows: I * (µ, θ) is defined by (5.1) whenever µ is such that µ(0) = ν, and I * (µ, θ) = +∞ otherwise. In the general case when {µ N (0)} satisfies the LDP on M 1 (X ) with rate function I 0 , let p (N ) ν N denote the regular conditional distribution of (µ N , θ N ) on D([0, T ], M 1 (X )) × D ↑ ([0, T ], M (Y)) given µ N (0) = ν N ∈ M N 1 (X ). By the above argument, whenever ν N → ν in M 1 (X ), p (N ) ν N satisfies the LDP on D([0, T ], M 1 (X )) × D ↑ ([0, T ], M (Y)) with rate function I * (µ, θ) + ∞ · 1{µ(0) ≠ ν}. Therefore, the family {(µ N , θ N )} N ≥1 satisfies the LDP on D([0, T ], M 1 (X )) × D ↑ ([0, T ], M (Y)) with rate function I 0 (µ(0)) + I * (µ, θ) (see, for example, Chaganty [7]). This completes the proof of Theorem 2.2.
A Examples of two time scale mean-field models
We describe two applications that can be studied using our two time scale mean-field model -a retrial system with orbit queues and a wireless local area network (WLAN) with local interaction.
Example 1. We first describe a retrial system with orbit queues (see Figure 2). Such systems have been used to model multiple competing jobs in a carrier sense multiple access network (see Avrachenkov et al. [1] and the references therein). In this model, there is a single exponential server with service rate N , N statistically identical Poisson arrival streams (of rate λ) and N orbit queues of identical (finite) size K, one corresponding to each arrival stream. Whenever an arriving customer finds an empty server, it occupies the server and spends a random amount of time, exponentially distributed with mean 1/N , and then leaves the system. If the arriving customer sees a busy server, it waits in the orbit queue corresponding to that arrival stream, if the queue is not full. Whenever an orbit queue is nonempty and the server is free, the head of the line customer in that orbit queue attempts for service at a fixed positive rate α. In this setting, the state of the server (i.e. idle/busy) represents the environment, and the number of waiting customers in an orbit queue represents the state of that node. Note that the state of each orbit queue evolves slowly (i.e. O(1) many transitions in a given O(1) duration of time). But since there are N orbit queues and each nonempty queue attempts for service with a fixed positive rate, the environment makes O(N ) many transitions in a given O(1) duration of time. Also, the transition rates of the number of customers in a queue depend on the state of the server and the transition rates of the environment depend on the fraction of non-empty orbit queues. Figure 3 depicts the transition rates of each orbit queue when the server state is y (y = 0 indicates idle state and y = 1 indicates busy state), and Figure 4 depicts the transition rates of the server when the empirical measure of the states of all the orbit queues is ξ. Clearly, this system falls within the framework of our two time scale mean-field model.

Figure 4: Transition rates of the server when the empirical measure of nodes is ξ; ξ(0) denotes the fraction of empty orbit queues (idle → busy at rate N (λ + α(1 − ξ(0))), busy → idle at rate N ).

Figure 5: A wireless local area network with 3 classes and 7 users; interference among classes are indicated by arrows.

Example 2. We now describe the setting of WLAN. Let there be N nodes. Time is divided into slots. Each node has a state associated with it, which represents the probability of attempting a packet transmission in a slot. Since the network could be spread over a large geographical area, the nodes are grouped into C classes; every node that belongs to a class can hear the transmissions of every other node in that class. Figure 5 depicts an example network with 7 nodes and 3 classes. The interaction among the nodes comes from the distributed channel access algorithm executed by the nodes. This interaction results in the evolution of the state of each node in the following fashion: a node that incurs a collision upon a packet transmission moves to a different state with a reduced probability of attempt, and upon a successful transmission moves to another state with an increased probability of attempt. Since multiple nodes could transmit at the same slot, the channel corresponding to a class of nodes could be in three different states in a given time slot: (i) an idle slot (denoted by state 0), (ii) a collision (state 2) or (iii) a successful packet transmission (state 1).
We denote the channel state corresponding to each class of nodes as the environment, i.e., at each time slot, the environment is an element of {0, 1, 2} C with the cth coordinate representing the channel state of the cth class of nodes. Since there are O(N ) many nodes in each class, we see that the environment makes O(N ) many transitions over a given O(1) time duration. Also, we see that the transition rates of the environment depend on the attempt probabilities of the nodes in that class, but only through the empirical measure of the states of the nodes in that class. On the other hand, the transition rates of the states of a node depend on the attempt probabilities of the nodes in that class (again, only through the empirical measure) as well as the environment. Hence, we have a two time scale mean-field model that describes the network, but one that operates in discrete-time. We now see how to translate this to an approximate continuous-time model. Figure 6 depicts the set of allowed transitions of a node; in typical WLAN implementations, the most aggressive state is 0 and the least aggressive state is K. A node moves from state i to state i + 1 when it incurs a collision, and moves from state i to 0 when a packet is successfully transmitted. To describe the transition rates of the continuous time model, we shall consider a scaled version of the above discrete time model where each time slot is of duration 1/N . Let p i /N denote the attempt probability of a node in state i, and let A denote the interference matrix among Figure 6: Set of allowed transitions for a particle in a WLAN classes, specifically, A c,d = 1 implies that a class c node's transmission is interfered by a class d node's transmission. Let V c = {d : A c,d = 1} denote the classes that interfere with class c nodes' transmissions. Also, for each i ∈ {0, 1, . . . , K} and c ∈ {1, 2, . . . , C}, let ξ c i denote the fraction of nodes (among the nodes in class c) in state i and let y ∈ {0, 1, 2} C denote the state of the background process. The transition probability of tagged node in class c from state i to state 0 is
p i N d∈Vc,d =c 1 {y d =0} j∈c,j =i 1 − p j N × d∈Vc,d =c d ′ ∈V d 1 {y d ′ =0} j∈d ′ (1 − p j N ) + 1 − d ′ ∈V d 1 {y d ′ =0} ;
scaling the above by N and noting that j∈d (1 − p j /N ) ∼ exp{− K i=0 p i ξ d i }, the corresponding transition rate of the continuous time model can be approximated as
p i d∈Vc 1 {y d =0} × d∈Vc d ′ ∈V d 1 {y d ′ =0} exp − K i=0 p i ξ d i − 1 + 1 .
Similarly, the transition rate of a class c node from state i to state i + 1 is
p i d∈Vc 1 {y d =0} × 1 − d∈Vc d ′ ∈V d 1 {y d ′ =0} exp − K i=0 p i ξ d i − 1 + 1 .
We can also write down the transition rates of the background process; for example, a transition from the all-0 state to the state y with y c = 1 and y d = 0 for all d = c (which happens when a node in class c starts a transmission) occurs with rate
N K i=0 p i ξ c i × exp − K i=0 p i ξ c i .
A study of the above model in the large-N regime has been done by Bordenave et al. [3] towards understanding the average throughput obtained by a node in a given class, whereas our result in this paper provides a finer asymptotic analysis, in the realm of large deviations, which enables us to study metastability in such systems. For a continuous-time model of WLAN without a fast environment, see Boorstyn et al. [2].
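Both examples are straightforward to explore numerically. The first sketch below is a toy Gillespie-type simulation of the retrial system of Example 1; it is our own illustration (the paper ships no code), and the per-queue rates follow the verbal description above (a blocked arrival joins its orbit queue at rate λ while the server is busy; a retrial from a nonempty queue, or a fresh arrival, seizes the idle server), so that the aggregate idle → busy rate is N (λ + α(1 − ξ(0))), consistent with Figure 4.

```python
import random

# Toy Gillespie simulation of the retrial system of Example 1 (illustrative only).
# Orbit queues are the slow components; the server (idle/busy) is the fast one.
def simulate_retrial(N=100, K=5, lam=0.5, alpha=1.0, T=10.0, seed=0):
    rng = random.Random(seed)
    q = [0] * N              # orbit queue lengths
    busy = False             # server state
    t = 0.0
    while t < T:
        if busy:
            # blocked arrivals join their orbit queues; the service ends at rate N
            events = [("blocked_arrival", i, lam) for i in range(N) if q[i] < K]
            events.append(("service_end", None, float(N)))
        else:
            # retrials (rate alpha per nonempty queue) or a fresh arrival (total rate N*lam)
            events = [("retrial", i, alpha) for i in range(N) if q[i] > 0]
            events.append(("fresh_arrival", None, N * lam))
        total = sum(rate for _, _, rate in events)
        t += rng.expovariate(total)
        u, acc = rng.random() * total, 0.0
        for kind, i, rate in events:
            acc += rate
            if u <= acc:
                if kind == "blocked_arrival":
                    q[i] += 1
                elif kind == "retrial":
                    q[i] -= 1
                    busy = True
                elif kind == "fresh_arrival":
                    busy = True
                else:                      # service completion
                    busy = False
                break
    return q, busy
```

For Example 2, the approximate continuous-time rates displayed above can be transcribed directly. The helper names below are ours; the interference sets V_c, the attempt parameters p_i, the per-class empirical measures ξ^c and the channel states y are inputs, and the bracketed factor is implemented in the pre-limit form 1{·} (exp(−Σ_i p_i ξ^d_i) − 1) + 1 from which the displayed expressions were obtained.

```python
import math

def _load(p, xi_c):
    """sum_i p_i * xi^c_i, the aggregate attempt rate of class c."""
    return sum(p_i * x_i for p_i, x_i in zip(p, xi_c))

def _clear_prod(c, y, V, p, xi):
    """prod over d in V_c of [ 1{all d' in V_d idle} * (exp(-sum_i p_i xi^d_i) - 1) + 1 ]."""
    prod = 1.0
    for d in V[c]:
        inner = 1.0 if all(y[dp] == 0 for dp in V[d]) else 0.0
        prod *= inner * (math.exp(-_load(p, xi[d])) - 1.0) + 1.0
    return prod

def rate_success(i, c, y, V, p, xi):
    """Rate at which a class-c node in state i moves to state 0 (successful transmission)."""
    idle = 1.0 if all(y[d] == 0 for d in V[c]) else 0.0
    return p[i] * idle * _clear_prod(c, y, V, p, xi)

def rate_collision(i, c, y, V, p, xi):
    """Rate at which a class-c node in state i moves to state i+1 (collision)."""
    idle = 1.0 if all(y[d] == 0 for d in V[c]) else 0.0
    return p[i] * idle * (1.0 - _clear_prod(c, y, V, p, xi))

def rate_start_transmission(c, N, p, xi):
    """Rate of the background transition from the all-idle state to 'class c transmits'."""
    load = _load(p, xi[c])
    return N * load * math.exp(-load)
```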
Theorem 2.2. Assume (A1), (A2), (B1), (B2), and fix T > 0. Suppose that {µ N (0)} N ≥1 satisfies the LDP on M 1 (X ) with rate function I 0 . Then the sequence {(µ N (t), θ N (t)), 0 ≤ t ≤ T } N ≥1 satisfies the LDP on D([0, T ], M 1 (X )) × D ↑ ([0, T ], M (Y)) with rate function I 0 (µ(0)) + I * (µ, θ).
suffices to show that µ N and θ N are individually exponentially tight in D([0, T ], M 1 (X )) and D ↑ ([0, T ], M (Y)) respectively (see, for example, Feng and Kurtz [14, Lemma 3.6])
(For economy of notation in the sequel, we shall also view R-valued functions on[0, T ]×X as R |X | -valued functions on [0, T ].) Given (µ, θ) ∈ D([0, T ], M 1 (X ))×D ↑ ([0, T ], M (Y)), let H X (µ, θ) denote the L τ * ([0, T ] × X × E X ,λ x,x+d∆ (µ t , m t )µ t (dx)dt)-closure of functions of the form {exp{Dα} − 1, Dα ∈ DC X } and let H Y (µ, θ) denote the L τ * ([0, T ] × Y × E Y , γ y,y+d∆ (µ t )m t (dy)dt)closure of functions of the form {exp{Dg} − 1, Dg ∈ DC Y }, where θ admits the representation θ(dydt) = m t (dy)dt for some m t ∈ M 1 (Y) for almost all t ∈ [0, T ]. We now prove the main result of this section.
when viewed as a measure on [0, T ]×Y, admits the representation θ(dyds) = m t (dy)ds for almost all t ∈ [0, T ]. Moreover, there exist functions
(5.3) and (5.4) and the non-variational representation of I * in (5.5) by carrying out the convex analytic programme of Léonard [24, Sections 5-6] to the bounded linear functionals α → [0,T ] α, (μ t −Λ * µt,mt µ t ) + ∆) − g(y) γ y,y+d∆ (µ t )m t (dy)dt on the closure of {Dα, α ∈ B([0, T ]×X )} and {Dg, g ∈ B([0, T ]×Y)} in the Orlicz spaces L τ ([0, T ]×
LetĨ : D([0, T ], M 1 (X ))×D ↑ ([0, T ], M (Y)) → [0, +∞] be a subsequential rate function for the family {(µ N , θ N )} N ≥1 , i.e., for some sequence {N k } k≥1 of N, {(µ N k , θ N k )} k≥1 satisfies the large deviation principle with rate functionĨ. In addition suppose that, for some ν ∈ M 1 (X ),Ĩ(µ, θ) = +∞ unless µ 0 = ν. In this section, we characteriseĨ for sufficiently regular elements in D([0, T ], M 1 (X )) × D ↑ ([0, T ], M (Y)), i.e., we show thatĨ(μ,θ) = I * (μ,θ) for all elements (μ,θ) ∈ D([0, T ], M 1 (X )) × D ↑ ([0, T ], M (Y)) that satisfy certain regularity properties, where I * is given by (5.2) (see Theorem 6.2). 6.1 An extension of Theorem 4.1 We first extend the conclusion of Theorem 4.1 to a larger class of functions α and g. Let Γ ⊂ D([0, T ], M 1 (X )) × D ↑ ([0, T ], M (Y)) denote the set of points (µ, θ) such that the mapping [0, T ] ∋ t → µ t ∈ M 1 (X ) is absolutely continuous, and θ, when viewed as a measure on [0, T ] × Y admits the representation θ(dydt) = m t (dy)dt where m t ∈ M 1 (Y) for almost all t ∈ [0, T ]. In particular, (µ, θ) ∈ Γ implies that the mapping t → µ t is differentiable for almost all t ∈ [0, T ]. Given bounded measurable functions α : [0, T ] × M 1 (X ) → R |X | and g : [0, T ] × M 1 (X ) × Y → R such that for all t ∈ [0,
y+d∆ (µ t )m t (dy)dt). LetĨ : D([0, T ], M 1 (X )) × D ↑ ([0, T ], M (Y)) → [0, +∞] be a subsequential rate function for the family {(µ N , θ N )} N ≥1 . Note that, by Theorem 4.1 and the definition of I * in (5.1), we have thatI(µ, θ) ≥ I * (µ, θ) for all (µ, θ) ∈ D([0, T ], M 1 (X )) × D ↑ ([0, T ], M (Y)). Given δ > 0, define K δ = {(µ, θ) :Ĩ(µ, θ) ≤ δ};sinceĨ has compact level sets, K δ is compact in D([0, T ], M 1 (X ))×D ↑ ([0, T ], M (Y)). By Lemma 5.1 and the fact thatĨ ≥ I * , we have that K δ ⊂ Γ. We now prove the following extension to Theorem 4.1.Theorem 6.1. LetĨ : D([0, T ], M 1 (X )) × D ↑ ([0, T ], M (Y)) → [0, +∞] be a subsequential rate function. Let α : [0, T ] × M 1 (X ) → R |X | , g : [0, T ] × M 1 (X ) × Y → R be bounded and measurable functions such that both α and g are continuous on M 1 (X ). Then, sup (µ,θ)∈Γ (U α,g T (µ, θ) −Ĩ(µ, θ)) = 0.
and Leb([0, T ] \ F i ) ≤ 1/i (see, for example, Ekeland and Temam [12, page 235]). Since [0, T ] \ F i is open in [0, T ], we can write it as a countable union of disjoint open intervals, and hence we can extendᾱ i to a continuous function on [0, T ] × M 1 (X ) by a linear interpolation between the two endpoints of the above open intervals; we again denote this function byᾱ i . Put α i (t, µ t ) =ᾱ i ( ⌊tn(i)⌋ n(i) , µ ⌊tn(i)⌋ n(i)
→ ∞. Since α i and g i , i ≥ 1, satisfy the conditions on α and g respectively in the definitions of U in (4.3) and V in (4.1), Theorem 4.1 implies that sup (µ,θ)∈D([0,T ],M 1 (X ))×D ↑ ([0,T ],M (Y))
We now prove the main result of this section, namelyĨ(µ, θ) = I * (µ, θ) for all (µ, θ) ∈ D([0, T ], M 1 (X ))× D ↑ ([0, T ], M (Y)) that satisfy certain regularity properties.Theorem 6.2. Let ν ∈ M 1 (X ) and letĨ : D([0, T ], M 1 (X )) × D ↑ ([0, T ], M (Y)) → [0, +∞] be a subsequential rate function such thatĨ(µ, θ) = +∞ unless µ 0 = ν. Suppose that (μ,θ) ∈ D([0, T ], M 1 (X )) × D ↑ ([0, T ], M (Y)) is such that • inf t∈[0,T ] min x∈Xμt (x) > 0, • the mapping [0, T ] ∋ t →μ t ∈ M 1 (X ) isLipschitz continuous, •θ, when viewed as a measure on [0, T ] × Y, admits the representationθ(dydt) =m t (dy)dt for somem t ∈ M 1 (Y) for almost all t ∈ [0, T ], and inf t∈[0,T ] min y∈Ymt (y) > 0.
7 Approximating the subsequential rate function

Let Ĩ : D([0, T ], M 1 (X )) × D ↑ ([0, T ], M (Y)) → [0, +∞] be a subsequential rate function for the family {(µ N , θ N )} N ≥1 , and suppose that, for some ν ∈ M 1 (X ), Ĩ(µ, θ) = +∞ unless µ 0 = ν. In this section, we show that Ĩ(µ, θ) = I * (µ, θ) for all (µ, θ) ∈ D([0, T ], M 1 (X )) × D ↑ ([0, T ], M (Y)).
Lemma 7.1. Let ν ∈ M 1 (X ) and let Ĩ : D([0, T ], M 1 (X )) × D ↑ ([0, T ], M (Y)) → [0, +∞] be a subsequential rate function such that Ĩ(µ, θ) = +∞ unless µ 0 = ν. Suppose that (μ,θ) ∈ D([0, T ], M 1 (X )) × D ↑ ([0, T ], M (Y)) is such that
• I * (μ,θ) < +∞,
• inf t∈[δ,T ] min x∈X μ t (x) > 0 for all δ > 0,
• the mapping [0, T ] ∋ t → μ t ∈ M 1 (X ) is Lipschitz continuous,
• θ, when viewed as a measure on [0, T ] × Y, admits the representation θ(dydt) = m t (dy)dt for some m t ∈ M 1 (Y) for almost all t ∈ [0, T ], and inf t∈[0,T ] min y∈Y m t (y) > 0.

Figure 1: Figure depicting the idea of construction of μ i in the proof of Lemma 7.1.
Lemma 7.3. Let ν ∈ M 1 (X ) and let Ĩ : D([0, T ], M 1 (X )) × D ↑ ([0, T ], M (Y)) → [0, +∞] be a subsequential rate function such that Ĩ(µ, θ) = +∞ unless µ 0 = ν. Suppose that (μ,θ) ∈ D([0, T ], M 1 (X )) × D ↑ ([0, T ], M (Y)) is such that
• I * (μ,θ) < +∞,
• inf t∈[δ,T ] min x∈X μ t (x) > 0 for all δ > 0,
• the mapping [0, T ] ∋ t → μ t ∈ M 1 (X ) is Lipschitz continuous.
Then Ĩ(μ,θ) = I * (μ,θ).

Proof. Let θ, when viewed as a measure on [0, T ] × Y, admit the representation θ(dydt) = m t (dy)dt, where m t ∈ M 1 (Y) for almost all t ∈ [0, T ]. For each i ≥ 1 and for each t ∈ [0, T ], define a probability measure m i t on Y, and define the measure θ i (dydt) on [0, T ] × M (Y) by θ i (dydt) := m i t (dy)dt. Clearly, θ i ∈ D ↑ ([0, T ], M (Y)) for all i ≥ 1, and θ i → θ in D ↑ ([0, T ], M (Y)) as i → ∞. Since (μ,θ i ) satisfies the assumptions of Lemma 7.1, we have Ĩ(μ,θ i ) = I * (μ,θ i ).
Lemma 7.4. Let ν ∈ M 1 (X ) and let Ĩ : D([0, T ], M 1 (X )) × D ↑ ([0, T ], M (Y)) → [0, +∞] be a subsequential rate function such that Ĩ(µ, θ) = +∞ unless µ 0 = ν. Suppose that (μ,θ) ∈ D([0, T ], M 1 (X )) × D ↑ ([0, T ], M (Y)) is such that I * (μ,θ) < +∞, and inf t∈[δ,T ] min x∈X μ t (x) > 0 for all δ > 0. Then Ĩ(μ,θ) = I * (μ,θ).
Theorem 7.5. Let ν ∈ M 1 (X ) and let Ĩ : D([0, T ], M 1 (X )) × D ↑ ([0, T ], M (Y)) → [0, +∞] be a subsequential rate function such that Ĩ(µ, θ) = +∞ unless µ 0 = ν. Then, for all (μ,θ) ∈ D([0, T ], M 1 (X )) × D ↑ ([0, T ], M (Y)), we have Ĩ(μ,θ) = I * (μ,θ).

Proof. Since Ĩ(µ, θ) ≥ I * (µ, θ) for all (µ, θ) ∈ D([0, T ], M 1 (X )) × D ↑ ([0, T ], M (Y)), it suffices to focus on a (μ,θ) ∈ D([0, T ], M 1 (X )) × D ↑ ([0, T ], M (Y)) such that I * (μ,θ) < +∞ and μ 0 = ν. By Lemma 5.1, the mapping [0, T ] ∋ t → μ t ∈ M 1 (X ) is absolutely continuous. In particular, μ̇ t exists for almost all t ∈ [0, T ] and μ t = ν + ∫ [0,t] μ̇ s ds for all t ∈ [0, T ].
Figure 2: A retrial system with N orbit queues.
Figure 3: Transition rates of an orbit queue when the server state is y.
satisfies the LDP with rate functionĨ. Then, for each α and g that satisfy the requirements of the definition of U and Vin (4.3) and (4.1) respectively, we have
27, Lemma A.2, page 460]. Lemma 5.2. Let V be a complete separable metric space, and let U be a dense subspace of
are measurable. By the Berge's maximum theorem (see, for example, Sundaram [30, Theorem 9.17, page 237]) it follows that the functionsα andĝ are continuous on M 1 (X ).
x r+1 s+1 ,x x r+1 s+2 ) (exp{α t (x ′ ) − α t (x)} − 1)λ x,x ′ (μ i t ,m i t )μ i t (x) ≤ log 1 cμ i t (x x r+1 s+1 ) + c 1 where c = min (x,x ′ )∈E X min y∈Y min ξ∈M 1 (X ) λ x,x ′ (ξ,y)and c 1 > 0 is a suitable constant to bound the extra additive terms. Hence, using a variable change u = cμ i t (xx r+1 s+1 ), we see that sup αt∈R |X | α t ,μ i t −Λ * µ i t ,m i tμ i t − X ×E X τ (Dα t (x, ∆))λ x,x+d∆ (μ i t ,m i t )μ i t (dx) dt
[0,τ i ] sup αt∈R |X | α t ,μ i t −Λ * µ i t ,m i tμ i t − X ×E X τ (Dα t (x, ∆))λ x,x+d∆ (μ i t ,m i t )μ i t (dx) dtconverges to 0 as i → ∞. Therefore, notingthatμ i t =μ t+1/i−τ i andm i t =m t+1/i−τ i for t ∈ [τ i , T ],
References

[1] K. Avrachenkov, P. Nain, and U. Yechiali. A retrial system with two input streams and two orbit queues. Queueing Systems, 77(1):1-31, 2014.
[2] R. Boorstyn, A. Kershenbaum, B. S. Maglaris, and V. Sahin. Throughput analysis in multihop CSMA packet radio networks. IEEE Transactions on Communications, 35:267-274, 1987.
[3] C. Bordenave, D. McDonald, and A. Proutiere. A particle system in interaction with a rapidly varying environment: Mean field limits and applications. Networks and Heterogeneous Media, 5(1):31-62, 2010.
[4] V. Borkar and R. Sundaresan. Asymptotics of the invariant measure in mean field models with jumps. Stochastic Systems, 2(2):322-380, 2012.
[5] A. Budhiraja, P. Dupuis, and A. Ganguly. Large deviations for small noise diffusions in a fast Markovian environment. Electronic Journal of Probability, 23:1-33, 2018.
[6] E. Castiel, S. Borst, L. Miclo, F. Simatos, and P. Whiting. Induced idleness leads to deterministic heavy traffic limits for queue-based random-access algorithms. Annals of Applied Probability, 31(2):941-971, 2021.
[7] N. R. Chaganty. Large deviations for joint distributions and statistical applications. Sankhyā: The Indian Journal of Statistics, Series A, 59(2):147-166, 1997.
[8] D. A. Dawson and J. Gärtner. Large deviations from the McKean-Vlasov limit for weakly interacting diffusions. Stochastics, 20(4):247-308, 1987.
[9] A. Dembo and O. Zeitouni. Large Deviations Techniques and Applications. Springer-Verlag Berlin Heidelberg, 2nd edition, 2010.
[10] B. Djehiche and I. Kaj. The rate function for some measure-valued jump processes. Annals of Probability, 23(3):1414-1438, 1995.
[11] M. D. Donsker and S. R. S. Varadhan. Asymptotic evaluation of certain Markov process expectations for large time, I. Communications on Pure and Applied Mathematics, 28(1):1-47, 1975.
[12] I. Ekeland and R. Temam. Convex Analysis and Variational Problems, volume 28. SIAM, 1999.
[13] S. N. Ethier and T. G. Kurtz. Markov Processes: Characterization and Convergence. John Wiley & Sons, 2nd edition, 2005.
[14] J. Feng and T. G. Kurtz. Large Deviations for Stochastic Processes. Mathematical Surveys and Monographs. American Mathematical Society, 2nd edition, 2006.
[15] M. I. Freidlin and A. D. Wentzell. Random Perturbations of Dynamical Systems. Grundlehren der mathematischen Wissenschaften. American Mathematical Society, 3rd edition, 2012.
[16] P. J. Hunt and T. G. Kurtz. Large loss networks. Stochastic Processes and their Applications, 53(2):363-378, 1994.
[17] P. J. Hunt and C. N. Laws. Optimization via trunk reservation in single resource loss systems under heavy traffic. Annals of Applied Probability, 7(4):1058-1079, 1997.
[18] C.-R. Hwang and S.-J. Sheu. Large-time behavior of perturbed diffusion Markov processes with applications to the second eigenvalue problem for Fokker-Planck operators and simulated annealing. Acta Applicandae Mathematicae, 19(3):253-295, 1990.
[19] J. Jacod and A. N. Shiryaev. Limit Theorems for Stochastic Processes, volume 288. Springer Science & Business Media, 2013.
[20] F. P. Kelly. Loss networks. Annals of Applied Probability, 1(3):319-378, 1991.
[21] R. Khasminskii. On the averaging principle for stochastic differential Itô equation. Kibernetika, 4(3):260-279, 1968.
[22] R. Kumar and L. Popovic. Large deviations for multi-scale jump-diffusion processes. Stochastic Processes and their Applications, 127(4):1297-1320, 2017.
[23] C. Léonard. Large deviations for long range interacting particle systems with jumps. Annales de l'Institut Henri Poincaré Probabilités et Statistiques, 31(2):289-323, 1995.
[24] C. Léonard. On large deviations for particle systems associated with spatially homogeneous Boltzmann type equations. Probability Theory and Related Fields, 101(1):1-44, 1995.
[25] R. Liptser. Large deviations for two scaled diffusions. Probability Theory and Related Fields, 106(1):71-104, 1996.
[26] A. Puhalskii. The method of stochastic exponentials for large deviations. Stochastic Processes and their Applications, 54(1):45-70, 1994.
[27] A. Puhalskii. Large Deviations and Idempotent Probability. Chapman and Hall/CRC, 2001.
[28] A. A. Puhalskii. On large deviations of coupled diffusions with time scale separation. Annals of Probability, 44(4):3111-3186, 2016.
[29] M. M. Rao and Z. D. Ren. Theory of Orlicz Spaces, volume 146 of Pure and Applied Mathematics. Marcel Dekker, Inc., 1991.
[30] R. K. Sundaram. A First Course in Optimization Theory. Cambridge University Press, 1996.
[31] A. Y. Veretennikov. On large deviations in the averaging principle for SDEs with a "full dependence". Annals of Probability, 27(1):284-296, 1999.
[32] A. Y. Veretennikov. On large deviations for SDEs with small diffusion and averaging. Stochastic Processes and their Applications, 89(1):69-79, 2000.
[33] S. Yasodharan and R. Sundaresan. Large time behaviour and the second eigenvalue problem for finite state mean-field interacting particle systems. arXiv preprint arXiv:1909.03805, 2019.

S. Yasodharan and R. Sundaresan, Department of Electrical Communication Engineering, Indian Institute of Science, Bangalore 560 012, India. Email: [email protected], [email protected]
|
[] |
[
"The Hellan-Herrmann-Johnson Method for Nonlinear Shells",
"The Hellan-Herrmann-Johnson Method for Nonlinear Shells"
] |
[
"Michael Neunteufel \nInstitute for Analysis and Scientific Computing\nWiedner Hauptstraße 8-101040Wien, Wien, Aus-triaTU\n",
"Joachim Schöberl \nInstitute for Analysis and Scientific Computing\nWiedner Hauptstraße 8-101040Wien, Wien, Aus-triaTU\n"
] |
[
"Institute for Analysis and Scientific Computing\nWiedner Hauptstraße 8-101040Wien, Wien, Aus-triaTU",
"Institute for Analysis and Scientific Computing\nWiedner Hauptstraße 8-101040Wien, Wien, Aus-triaTU"
] |
[] |
In this paper we derive a new finite element method for nonlinear shells. The Hellan-Herrmann-Johnson (HHJ) method is a mixed finite element method for fourth order Kirchhoff plates. It uses convenient Lagrangian finite elements for the vertical deflection, and introduces sophisticated finite elements for the moment tensor. In this work we present a generalization of this method to nonlinear shells, where we allow finite strains and large rotations. The geometric interpretation of degrees of freedom allows a straight forward discretization of structures with kinks. The performance of the proposed elements is demonstrated by means of several established benchmark examples.
|
10.1016/j.compstruc.2019.106109
|
[
"https://arxiv.org/pdf/1904.04714v1.pdf"
] | 104,292,373 |
1904.04714
|
42eaa436d4f8dd6c30746053bfc987d0fa4b457b
|
The Hellan-Herrmann-Johnson Method for Nonlinear Shells
Michael Neunteufel
Institute for Analysis and Scientific Computing
Wiedner Hauptstraße 8-101040Wien, Wien, Aus-triaTU
Joachim Schöberl
Institute for Analysis and Scientific Computing
Wiedner Hauptstraße 8-101040Wien, Wien, Aus-triaTU
The Hellan-Herrmann-Johnson Method for Nonlinear Shells
nonlinear shellsstructural mechan- icsdiscrete differential geometrymixed finite ele- mentsKirchhoff hypothesis
In this paper we derive a new finite element method for nonlinear shells. The Hellan-Herrmann-Johnson (HHJ) method is a mixed finite element method for fourth order Kirchhoff plates. It uses convenient Lagrangian finite elements for the vertical deflection, and introduces sophisticated finite elements for the moment tensor. In this work we present a generalization of this method to nonlinear shells, where we allow finite strains and large rotations. The geometric interpretation of degrees of freedom allows a straight forward discretization of structures with kinks. The performance of the proposed elements is demonstrated by means of several established benchmark examples.
Introduction
The difficulty of constructing simple C 1 -conforming Kirchhoff-Love shell elements led to the development of the well-known discrete Kirchhoff (DKT) elements [30,46,3], where the Kirchhoff constraint was enforced in a discrete way along the edges. The class of rotation-free (RF) elements eliminates the rotational degrees of freedom by using out-of-plane translation degrees of freedom (dofs) [33,9,19]. Alternative approaches are discontinuous Galerkin (DG) methods [17,21,47] and Isogeometric Analysis (IGA) [26,40,28,16].
1 Corresponding author. E-mail address: [email protected]
The HHJ method for fourth order Kirchhoff plates has been developed and analyzed in [22,23,27]. Later work has been done in the 80s [13,1], 90s [44] and recently after 20 years [11,25,7]. It overcomes the issue of C 1 -conformity by introducing the moment tensor as an additional tensor field leading to a mixed method. The tangential displacement and normal-normal stress method (TDNNS) developed for linear elasticity and Reissner-Mindlin plates in [42,34,35,36] follows the idea of mixed methods, where the stress tensor gets interpolated in the reinvented H(divdiv) space from the HHJ method.
In this paper modern coordinate-free differential geometry, see e.g. [14,43], is used to define the shell energy. The aim of this work is to find a (highorder) finite element shell element, consisting of H 1conforming finite elements for the displacement and H(divdiv) elements for the moments. It turns out, that this model can be seen as a generalization of the HHJ method to nonlinear shells. Furthermore, the method can handle surfaces with kinks in a natural way without additional treatment. Numerical results are shown to confirm the model.
Methodology
Notation and finite element spaces
Let S be a 2-dimensional surface in R 3 , and let S h = T ∈T h T be its approximation by a triangulation T h consisting of possibly curved triangles or quadrilaterals. The set of all edges in T h is denoted by E h . Further, let L 2 (S h ) and C 0 (S h ) be the set of all square-integrable and continuous functions on S h , respectively. For each element in T h we denote the surface normal vector by ν and the normalized edge tangent vector between elements by τ e . The outgoing elementnormal vector µ is defined as µ = ±ν × τ e depending on the orientation of τ e , cf. The set of all piece-wise polynomials of degree k on T h is denoted by Π k (T h ). With this, we define the following function and finite element spaces
H 1 (T h ) := { u ∈ L 2 (S h ) | ∇ τ u ∈ [L 2 (S h )] 3 },    (2.1)
Σ(T h ) := { σ ∈ [C ∞ (T h )] 3×3 sym | [[σ µµ ]] = 0 },    (2.2)
V k h (T h ) := Π k (T h ) ∩ C 0 (S h ),    (2.3)
Σ k h (T h ) := { σ ∈ [Π k (T h )] 3×3 sym | [[σ µµ ]] = 0 },    (2.4)
Γ k h (T h ) := { u ∈ [Π k (T h )] 3 | [[u µ ]] = 0 },    (2.5)
where we used the notations σ µµ := µ T σµ and u µ := u · µ with · denoting the jump over elements. Note, that ∇ τ u denotes the surface gradient of u, which can be introduced in weak sense [15], or directly as Fréchet-derivative. For the construction of finite element spaces and explicit basis functions of (2.3), (2.4) and (2.5) we refer to [6,42,48,32,8,49]. With a hierarchical basis of (2.5) we can define the finite element space
Γ k h (T h ) as the space Γ k h (T h ),
where the inner degrees of freedom are neglected. Σ k h (T h ) will be called the Hellan-Herrmann-Johnson finite element space.
In the following we denote the Frobenius scalar product of two matrices A, B by A :
B := Σ i,j A ij B ij , the Frobenius norm by ‖A‖ F := √(A : A), and ∢(a, b) := arccos( a · b / (‖a‖ 2 ‖b‖ 2 ) )
measures the angle between two vectors a, b, · 2 denoting the Euclidean norm.
Shell model
LetΩ ⊂ R 3 be an undeformed configuration of a shell with thickness t, described by the mid-surfaceŜ and the according orientated normal vectorνŜ
Ω := {x + zνŜ (x) :x ∈Ŝ, z ∈ [−t/2, t/2]}.
(2.6) Furthermore, let Φ :Ω → Ω be the deformation from the initial to the deformed configuration of the shell and φ :Ŝ h → S h the deformation of the approximated mid-surface. I.e., let φ ∈ [V k+1 h (T h )] 3 withT h and T h = φ(T h ) the according triangulations ofŜ h and S h . Then, we define F := ∇τ φ and J := cof(F ) F = cof(F )ν 2 as the deformation gradient and the deformation determinant, respectively. Here, cof(F ) denotes the cofactor matrix of F . We can split the deformation into the identity function and the displacement, φ = id + u, and thus, F = Pτ + ∇τ u with the projection onto the tangent plane Pτ := I −ν ⊗ν, ⊗ denoting the dyadic outer product.
We consider the Kirchhoff-Love assumption, where the deformed normal vector has to be orthogonal to the deformed mid-surface S h . With Steiner's formula, asymptotic analysis in the thickness parameter t and using the plane strain assumption for the material norm, we obtain for the according shell energy functional
W = (t/8) ‖I − Î‖² M + (t³/24) ‖II − ÎI‖² M .    (2.7)
W is given in terms of differential forms, see [41], and (2.7) is comparable to the classical formulations [12,5,10]. The material norm is given by
‖·‖² M := E/(1 − ν²) ∫ Ŝ h ( ν tr(·)² + (1 − ν) tr(·²) ) dx,    (2.8)
with E the Young's modulus and ν the Poisson's ratio.Î, I andÎI, II denote the (pull-backed) first and second fundamental form of the reference and deformed configuration, respectively. With the Green strain tensor E := 1/2(C −Pτ ) restricted on the tangent space, C = F T F denoting the Cauchy-Green tensor, we obtain
E mem := (t/8) ‖I − Î‖² M = (t/2) ‖E‖² M .    (2.9)
This corresponds to the membrane energy of the shell. The difference between the curvature of the deformed and initial second fundamental form describes the bending energy, for which holds
E bend := (t³/24) ‖II − ÎI‖² M = (t³/24) ‖F^T ∇ τ̂ (ν • φ) − ∇ τ̂ ν̂‖² M .    (2.10)
Motivated by discrete differential geometry, see [20] and references therein, and DG methods [2] we add also distributional contributions to the bending energy
E bend := (t³/24) ( Σ T̂ ∈T̂ h ‖F^T ∇ τ̂ (ν • φ) − ∇ τ̂ ν̂‖² M,T̂ + Σ Ê ∈Ê h ‖∢(ν L , ν R ) • φ − ∢(ν̂ L , ν̂ R )‖² M,Ê ).
(2.11) Thus, with the notation of (2.9) and (2.11), we have to minimize W̃(u) := E mem + E bend .
(2.12)
To reduce this fourth order problem to a second order one, we introduce a new variable σ which leads to a mixed saddle point problem. Hence, we have to find the critical points of the following Lagrange functional, which is equivalent to minimize (2.12), see Appendix A,
L(u, σ) := (t/2) ‖E‖² M − (6/t³) ‖σ‖² M⁻¹ + B̃(σ, u),    (2.13)
where
B̃(σ, u) := Σ T̂ ∈T̂ h ⟨σ, F^T ∇ τ̂ (ν • φ) − ∇ τ̂ ν̂⟩ T̂ − Σ Ê ∈Ê h ⟨∢(ν L , ν R ) • φ − ∢(ν̂ L , ν̂ R ), σ μ̂μ̂ ⟩ Ê ,    (2.14)
with ·, · denoting the L 2 -scalar product on an ele-mentT or on an edgeÊ. With some computations, see Appendix A, we finally obtain the following Lagrange functional
L(u, σ) = (t/2) ‖E‖² M − (6/t³) ‖σ‖² M⁻¹ − B(σ, u),    (2.15)
with
B(σ, u) = Σ T̂ ∈T̂ h ∫ T̂ σ : ( H ν•φ + (1 − ν̂ · ν • φ) ∇ τ̂ ν̂ ) dx − Σ Ê ∈Ê h ∫ Ê ( ∢(ν̂ L , ν̂ R ) − ∢(ν L , ν R ) • φ ) σ μ̂μ̂ ds.
(2.16)
In (2.16), H ν•φ := Σ i (∇² τ̂ u i ) (ν • φ) i ,
where ∇ 2 τ denotes the surface Hessian [15]. For the deformed normal and tangent vectors the following identities hold
ν • φ = cof(F) ν̂ / ‖cof(F) ν̂‖ 2 = (1/J) cof(F) ν̂,    (2.17)
τ e • φ = F τ̂ e / ‖F τ̂ e ‖ 2 = (1/J b ) F τ̂ e ,    (2.18)
µ • φ = ±(ν • φ) × (τ e • φ) = (F†)^T μ̂ / ‖(F†)^T μ̂‖ 2 ,    (2.19)
where F † denotes the Moore-Penrose pseudo-inverse of F . The Lagrange multiplier σ has the physical meaning of the moment. Note, that the thickness parameter t appears now also in the denominator and the inverse material tensor
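The quantities entering (2.9) and (2.17)-(2.19) are easy to evaluate for a given deformation gradient. The following plain NumPy sketch (our own helper names, independent of the finite element implementation) computes the cofactor matrix, the deformed frame and the Green strain; note that cof(F) must be formed from the 2 × 2 minors since the surface deformation gradient is rank deficient.

```python
import numpy as np

def cof(F):
    """Cofactor matrix of a 3x3 matrix from its 2x2 minors (valid for rank-deficient F)."""
    C = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            minor = np.delete(np.delete(F, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C

def deformed_frame(F, nu_hat, tau_hat):
    """Deformed unit normal, edge tangent and element-normal, cf. (2.17)-(2.19)."""
    nu = cof(F) @ nu_hat
    nu /= np.linalg.norm(nu)           # nu o phi = cof(F) nu_hat / ||cof(F) nu_hat||
    tau = F @ tau_hat
    tau /= np.linalg.norm(tau)         # tau_e o phi = F tau_hat / ||F tau_hat||
    mu = np.cross(nu, tau)             # mu o phi, up to the orientation sign
    return nu, tau, mu

def green_strain(F, nu_hat):
    """E = 1/2 (F^T F - P_tau) with P_tau = I - nu_hat nu_hat^T, cf. Section 2.2."""
    P = np.eye(3) - np.outer(nu_hat, nu_hat)
    return 0.5 * (F.T @ F - P)

# identity deformation of a flat element: the frame is reproduced and E vanishes
nu_hat, tau_hat = np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0])
F = np.eye(3) - np.outer(nu_hat, nu_hat)
assert np.allclose(deformed_frame(F, nu_hat, tau_hat)[0], nu_hat)
assert np.allclose(green_strain(F, nu_hat), 0.0)
```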
· 2 M −1 := 1 + ν E Ŝ h ( tr(· 2 ) − ν 2ν + 1 tr(·) 2 ) dx,(2.20)
is used. In case of a flat plane (2.16) becomes
B(σ, u) = Σ T̂ ∈T̂ h ∫ T̂ σ : H ν•φ dx − Σ Ê ∈Ê h ∫ Ê ∢(ν L , ν R ) • φ σ μ̂μ̂ ds.    (2.21)
A possible simplification of (2.16) can be achieved by the approximation
(1/2) ∢(ν L , ν R ) = {ν} · µ L + O(|{ν} · µ L |³),    (2.22)
where {ν} := (ν L + ν R )/‖ν L + ν R ‖ 2
denotes the averaged normal vector.
The resulting system is a saddle point problem, which would lead to an indefinite matrix after assembling. To overcome this problem, we can use complete discontinuous elements for the moment σ and introduce a hybridization variableα ∈ Γ k h (T h ) to reinforce the normal-normal continuity of σ:
L(u, σ,α) = t 2 E 2 M − 6 t 3 σ 2 M −1 − B(σ, u,α), (2.23)
where (2.16) is now given by
B(σ, u,α) = T ∈T h T σ : (H ν•φ + (1 −ν · ν • φ)∇τν) dx − Ê ∈Ê h Ê ( (ν L ,ν R ) − (ν L , ν R ) • φ) σμμ ds + Êαμ σμμ ds,(2.24)
with σμμ := 1/2(σμ LμL + σμ RμR ). Due to the hybridization variableα, we can use static condensation to eliminate the moment σ locally, which leads to a positive definite problem again. The new unknownα has the physical meaning of the changed angle, the rotation, between two elements.
For the computation of the jump term we use that
Ê ∈Ê h Ê (ν L ,ν R ) − (ν L , ν R ) • φ ds = T ∈T h ∂T ({ν},μ) − ({ν}, µ) • φ ds. (2.25)
To compute the deformed averaged normal vector {ν} on an edge, information of the two neighbored elements is needed at once, which would need e.g. Discontinuous Galerkin techniques. Instead, one can use the information of the last (load-step) solution {ν} n , see Figure 2.2. To measure the correct angle, we have to project {ν} n to the plane orthogonal to the tangent vector τ e by using the projection P ⊥ τe = I − τ e ⊗ τ e , and then re-normalize it Note that τ e itself depends on the unknown deformation. By using (2.26) we have to ensure that {ν} n lies between the two element-normal vectors, see Figure 2.2. For smooth manifolds the angle between the element-normal vectors tends to 180 degree as h → 0. Hence, this assumption is fulfilled, if the elements do not rotate more than half of their included angle during one load-step, which is an acceptable and realistic assumption.
{ν} ≈ 1 P ⊥ τe ({ν} n ) 2 P ⊥ τe ({ν} n ) =: {ν} n . (2.26) {ν} {ν} n
Relation to the HHJ-method
If we assume to have a plate which lies in the x-yplane and a force f is acting orthogonal on it, we can compute the linearized bending energy by solving the following fourth order scalar equation onŜ h div( div(∇ 2 w)) = f, (2.27) where the thickness t and all material parameters are hidden in the right-hand side f . Therefore, the HHJ-method [22,23,27] introduces the linearized moment tensor σ and solves the following saddle point problem instead, given by the Lagrange functional
L(w, σ) := − σ 2 L 2 (Ŝ h ) + T ∈T h T ∇w · div(σ) dx − ∂T (∇w) τe σ µτe ds + Ŝ h f w dx.
(2.28)
If we now consider our shell model (2.15), neglect the membrane energy term and the material parameters and linearize the bending energy, see Appendix B, we obtain (2.28). Thus, (2.15) can be seen as a generalization of the HHJ-method (2.28) from linear plates to nonlinear shells.
Boundary conditions and kink structures
For H 1 the Dirichlet boundary condition u = u D can be used to prescribe the displacement on the boundary, whereas the do-nothing condition is used for free boundaries. For σ ∈ H(divdiv) we can prescribe the normal-normal component, σ µµ , on the boundary. Homogeneous Dirichlet data, σ µµ = 0, are used for free boundaries. By setting non-homogeneous data one can prescribe a moment. The do-nothing Neumann boundary condition σ µτe = 0 is used for clamped boundaries.
In the case of a complete discontinuous moment tensor and the hybridization variableα, the boundary conditions for σ have to be incorporated in terms ofα. Note that the essential and natural boundary conditions swap, i.e. the clamped boundary condition is now set directly as homogeneous Dirichlet data and the prescribed moment is handled natural as a righthand side.
If we compute the variations of (2.15) with respect to σ, we obtain in strong form that the angle from the initial configuration gets preserved, see (C.2). The hidden interface condition for the displacement u in strong form are not needed for the method itself. However, if one uses e.g. Residual error estimators, the boundary conditions are crucial, see Appendix C for the calculations.
The method can also handle non-smooth surfaces with kinks and branching shells, where one edge is shared by more than two elements, in a natural way, without any extra treatments. Due to the normalnormal continuity of σ the moment gets preserved over the kinks and as the angle is the same on the initial and deformed configuration, the kink itself gets also preserved. Note, that in this case simplification (2.22) cannot be used any more, as |{ν} · µ| 0 as h → 0 at the kinks. 3 , the moment tensor σ ∈ Σ k−1 h (T h ) and, eventual, the hybridization space Γ k−1 h (T h ) leads to our shell element. For polynomial order k, the method will be denoted by pk, i.e. p1 is the lowest order method consisting of piece-wise linear displacements and piece-wise constant moments. In Figure 2.3 the hybridized p1 and p2 element for quadrilaterals can be seen. Note, that the hybridized lowest order triangle shell element is equivalent to the Morley element [30]. If we use the lowest order elements on triangles for (2.15) then the Hessian term vanishes, as only linear polynomials are used. For quadrilaterals the Hessian is constant on each element in this case.
Shell element
Combining the displacement u ∈ [V k h (T h )]
To solve (2.15) we have to assemble the according matrix. As it is formulated in terms of a Lagrange functional, the first variations must be computed, which is a bit challenging due to the nonlinearity but doable, see Appendix C. If, however, the finite element software supports energy based integrators where the variations are calculated automatically, one can use directly the Lagrange functional (2.15).
Membrane locking
We observed that the lowest order elements do not suffer from locking, but for the higher order methods membrane locking, cf. [37], may occur, e.g. in the benchmark cantilever subjected to end moment, section 3.2. To overcome this problem one can interpolate the membrane stress tensor by a L 2 -projection into a space of reduced dimension, I h L 2 E 2 M . The projection can be incorporated to (2.15) by introducing an auxiliary variable R and adding for the dis-
placement u ∈ [V k h (T h )] 3 and R ∈ [Π k−1 (T h )] 3×3 sym − 1 2t R 2 M −1 + R, E (2.29)
to the Lagrange functional. As R is discontinuous, we can use static condensation to eliminate it locally. This works well for structured quadrilateral meshes and is similar to reduced integration order methods. For triangles, however, the locking is reduced, but still has an impact to the solution. Here, other interpolation operators and spaces have to be used, which is topic of further research.
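The reduced interpolation behind (2.29) is, at its core, an element-local L2-projection onto a lower-order polynomial space. The following generic sketch (our own construction, not the authors' implementation) shows such a projection on the reference quadrilateral; applied component-wise to the membrane strain it reproduces the kind of reduced interpolation used above to relieve membrane locking.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def l2_project(f, degree, nquad=6):
    """L2-project a scalar function on [-1,1]^2 onto tensor-product polynomials of 'degree'."""
    xs, ws = leggauss(nquad)
    # tensor-product quadrature points and weights on the reference element
    qp = [(x, y, wx * wy) for x, wx in zip(xs, ws) for y, wy in zip(xs, ws)]
    basis = [(i, j) for i in range(degree + 1) for j in range(degree + 1)]
    n = len(basis)
    M, b = np.zeros((n, n)), np.zeros(n)
    for x, y, w in qp:
        phi = np.array([x**i * y**j for i, j in basis])
        M += w * np.outer(phi, phi)            # element mass matrix
        b += w * f(x, y) * phi                 # moments of f
    coeff = np.linalg.solve(M, b)
    return lambda x, y: sum(c * x**i * y**j for c, (i, j) in zip(coeff, basis))

# e.g. project a cubic function onto the bilinear space
p = l2_project(lambda x, y: x**3 + x * y, degree=1)
```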
Numerical results
The method is implemented in the NGS-Py interface, which is based on the finite element library Netgen/NGSolve 2 [38,39]. We will use the lowest order elements p1and also the p3 method as an high-order example. An end shear force P on the right boundary is applied to a cantilever, which is fixed on the left. The material and geometrical properties are E = 1.2 × 10 6 , ν = 0, L = 10, W = 1, t = 0.1 and P max = 4, see
Cantilever subjected to end moment
Slit annular plate
The material and geometrical properties are E = 2.1 × 10 8 , ν = 0, R i = 6, R o = 10, t = 0.03 and P max = 4.034, see Figure 3.7. We used structured quadrilateral meshes. The quantity of interest is the transverse displacement at point B. The reference value of 13.7432 is taken from [24]. The initial and deformed mesh can be seen in Table 3.6 are convenient with [24]. Table 3.5: Radial load-deflection at point B for the hemispherical shell subjected to alternating radial forces at maximal load for P max = 1. Different forces, P max ∈ {10 −6 , 10 −3 , 1, 10 3 }, are applied in x-and z-direction. Some combinations of thickness and force parameters led to a solution in a linear regime, see Table 3.7 and 3.9, where the reference solutions are taken from [4] and [29], respectively. Others are already in the nonlinear regime, see Table 3.8 and 3.10. Therefore, the full three-dimensional model is used with a 150 × 14 × 2 structured cubic grid and standard Lagrangian elements of polynomial order 3, i.e. 162 dofs/cube, to generate a reference solution. Table 3.7: Deflection U A ×10 3 for P x = 10 −6 , P z = 0, and t = 0.0032 and W A × 10 3 for P x = 0, P z = 10 −6 , and t = 0.0032 of twisted beam. Reference values are 5.256 and1.294. Table 3.8: Deflection U A for P x = 10 −3 , P z = 0, and t = 0.0032 and W A for P x = 0, P z = 10 −3 , and t = 0.0032 of twisted beam. Reference values are 4.496 and 1.227.
Twisted beam
L b Z, W X, U Y , V P z P x A
Z-section cantilever
A moment M = 1.2×10 6 is applied at the right end of a Z-section, which is fixed on the left side. Therefore, two shear forces P = 6 × 10 5 are involved, see Figure p1 p3 p1 p3 2x12 5.654 5.598 1.933 1.798 4x24 5.605 5.591 1.822 1.795 6x36 5.597 5.590 1.806 1.795 8x48 5.593 5.589 1.801 1.795 Table 3.9: Deflection U A × 10 3 for P x = 1, P z = 0, and t = 0.32 and W A × 10 3 for P x = 0, P z = 1, and t = 0. 32 Table 3.10: Deflection U A for P x = 10 3 , P z = 0, and t = 0.32 and W A for P x = 10 3 , P z = 1, and t = 0.32 of twisted beam. Reference values are 4.610 and 1.778.
3.14. The material and geometrical properties are E = 2.1 × 10 11 , ν = 0.3, t = 0.1, L = 10, W = 2 and H = 1. The quantity of interest is the membrane stress Σ xx at point A. The reference value −1.08 × 10 8 is taken from NAFEMS [31]. The results are compared with rotation-free elements [18] and can be found in Table 3.11. Table 3.11: Membrane stress Σ xx × 10 8 of Z-section cantilever at maximal load.
T-section cantilever
We propose an example where more than two elements share an edge. The material and geometrical properties are E = 6 × 10 6 , ν = 0, t = 0.1, L = 1, W = 1 and H = 1. The structure is clamped on the bottom and a shear force P max = 1000 is applied on the left boundary, see Figure 3.15.
The moment induced by the shear force P on the left top branch goes over the kink to the bottom branch where the structure is fixed without inducing moments on the right top one. Thus, it only rotates and the curvature is zero also after the deformation. The deflections of the point A are given in
Acknowledgments
Michael Neunteufel has been funded by the Austrian Science Fund (FWF) project W1245.
A Lagrange functional
We compute the variations of the Lagrange functional in (2.13), neglecting the sums overT andÊ,
δ σL = − 12 t 3 M −1 σ, δσ + F T ∇τ (ν • φ) − ∇τν, δσ T − (ν L , ν R ) • φ − (ν L ,ν R ), δσμμ Ê ! = 0, (A.1) δ uL = δ u ( t 2 E M ) + σ, δ u (F T ∇τ (ν • φ) − ∇τν) T − δ u ( (ν L , ν R ) • φ), σμμ Ê ! = 0. (A.2)
Expressing σ from (A.1) and inserting it into (A.2) yields to the same expression as the variation of (2.12) with respect to the displacement u. We conclude that (2.13) and (2.12) are equivalent.
Equivalence of (2.15) and (2.13) follows by differentiating the identity F T ν • φ = 0 and some compu-
tations σ, F T ∇τ (ν • φ) T = − H 1 : σ H 2 : σ H 3 : σ , ν • φ T , (A.3)
where H i := ∇ 2 τ u i + ∇τ ((Pτ ) i ), (Pτ ) i denoting the i-th column of Pτ and ∂x i F the i-th partial derivative of F . With Pτ = I −ν ⊗ν, neglecting φ, and sum convention for i we obtain
ν · H 1 : σ H 2 : σ H 3 : σ = ν i ∇τ ((Pτ ) i + ∇ 2 τ u i ) : σ = −ν i (∇τ (ν ⊗ν) i − ∇ 2 τ u i ) : σ = −ν i (∇τν i ⊗ν +ν i ∇τν − ∇ 2 τ u i ) : σ = −(ν ·ν∇τν − ν i ∇ 2 τ u i ) : σ = −(ν ·ν∇τν − H ν ) : σ, (A.4)
where we used that ∇τν i ⊗ν : σ ≡ 0.
B Linearization
To show that (2.15) simplifies to (2.28) in the linear regime we use that the gradient of the displacement of the full three-dimensional model
∇U = ∇(u + zν • φ) is small, ∇U = O(ε)
1. Thus, we immediately obtain that ∇τ u = O(ε), F = I +O(ε), J b = 1+O(ε) and σ = O(ε). Furthermore, there holds ν • φ −ν = −ν T ∇τ u + O(ε 2 ) for ε → 0. We neglect all terms of order O(ε 2 ) or higher. For simplicity we will also neglect the φ dependency, e.g. we write ν instead of ν • φ.
Starting from (2.21), we obtain on eachT ∈T h
T σ : H ν dx = T 3 i=1 σ : (∇ 2 τ u i ν i ) dx ≈ − T 3 i=1 σ : ∇ 2 τ u i (ν +ν T ∇τ u) i dx ≈ − T 3 i=1 σ : ∇ 2 τ u iνi dx = T divτ (σ) · (ν T ∇τ u) dx − ∂Tν T ∇τ u · σμ ds. (B.1)
For the jump term we use (2.22) and (2.26), such that
∂T 1 2 (ν L , ν R )σμμ ds ≈ ∂T {ν} n · µ σμμ ds. (B.2)
For ease of presentation we neglect σμμ in (B.2), employ that
− σ 2 L 2 (Ŝ h ) + T ∈T h T ∇τ w · divτ (σ) dx − ∂T (∇τ w)τ e σμτ e ds , (B.4)
which is indeed (2.28).
C Variations
We compute the variations of (2.15) to deduce the bilinear form of the according variational equations. Then we will (partly) integrate by parts to find the hidden boundary conditions in strong form. For simplicity, we will neglect the material tensor M and write only ν instead of ν • φ. The same holds for µ and τ e . We will consider only the formulation (2.15), the case with the hybridization variableα in (2.23) can be done analogously.
Computing the first variation of problem (2.15) with respect to σ gives
− 12 t 3 σ, δσ − T ∈T h δσ, H ν + (1 −ν · ν)∇τν − Ê ∈Ê h Ê ( (ν L ,ν R ) − (ν L , ν R ))δσμμ ds = 0 (C.1)
for all permissible directions δσ. Testing (C.1) with functions which have only support on one edgeÊ of the triangulationT h yields in strong
(μ L ,μ R ) − (µ L , µ R ) = 0. (C.2)
For the first variation of the membrane energy term of (2.15) in direction v := δu we immediately obtain for everyT ∈T h
δ u E 2 T = T (2F E) : ∇τ v dx. (C.3)
The other variations are more involved. We define the operator(·) ij : R 3×3 → R 2×2 , which maps 3 × 3 matrices to its 2×2 sub-matrix where the i-th row and j-th column are canceled out. Further, let A ij (·) : R 2×2 → R 3×3 denotes the operator which embeds 2× 2 matrices into 3 × 3 matrices, such that A ij (A) ij = A and the i-th row and the j-th column of A ij (A) are zero. Thus, A ij (·) is the right-inverse of(·) ij . With this, we define for i, j ∈ {1, 2, 3}
A # ij := (−1) i+j A ij cof(Ā ij ) . (C.4)
Then, the following identity holds for all smooth matrix valued functions for i, j, k ∈ {1, 2, 3}
∂ ∂x k cof(A) ij = A # ij : ∂ ∂x k A.
(C.5)
With the notation A # ν,ν := ν i A # ijν j and v := δu there further holds Now, the volume term of (2.16) is split into two terms depending on u δ u (σ : H ν ) = σ :
δ u J = F # ν,ν : ∇τ v, (C.6) δ u ν = 1 J F # ν,1 : ∇τ v F # ν,2 : ∇τ v F # ν∇ 2 τ v i ν i − 1 J (F # ν,ν : ∇τ v)σ : H ν + 1 J (F # ν,i : ∇τ v) σ : ∇ 2 τ u i (C.10)
and δ u (−ν · ν) = 1 J (ν · ν)F # ν,ν − F # ν,ν : ∇τ v. For (C.10) we have to integrate twice by parts obtaining
T divτ ( divτ (ν i σ))v i + divτ ( 1 J (σ : H ν )F # ν,ν ) · v − divτ ( 1 J (σ : ∇ 2 τ u i )F # ν,i ) · v dx + ∂T divτ (ν i σμ)v i − divτ (ν i σ)μv i − 1 J (σ : H ν (F # ν,ν )μ − σ : ∇ 2 τ u i (F # ν,i )μ) · v ds − ∂∂T ν
· v σμτ e dss = 0, (C.14)
where ∂∂T are the vertices of the element T and dss denotes point evaluation. For (C.11) we get
\[
-\int_T \operatorname{div}_\tau\Big(\tfrac{1}{J}\big((\bar\nu \cdot \nu)\, F^{\#}_{\nu,\bar\nu} - F^{\#}_{\nu,\bar\nu}\big)\Big) \cdot v \, dx
+ \int_{\partial T} \tfrac{1}{J}\big((\bar\nu \cdot \nu)\, F^{\#}_{\nu,\bar\nu} - F^{\#}_{\nu,\bar\nu}\big)\bar\mu \cdot v \, ds = 0. \tag{C.15}
\]
Finally, one has to use integration by parts for (C.12) to obtain the last boundary terms. Adding up all boundary terms, taking care of the constants and material parameters, one obtains the natural boundary conditions in strong form with respect to the displacement u.
Figure 2.1: Normal, element-normal and normalized edge tangent vectors on two triangles T_L and T_R.
Figure 2.2: Angle computation with the current averaged normal vector {ν} and the averaged normal vector {ν}_n from the previous step.
Figure 2.3: Lowest order H(divdiv), H1 and H(div) elements for the moment, displacement and hybridization variable (top) and lowest order and high order hybridized quadrilateral shell element (bottom).
Figure 3.1: Geometry of cantilever subjected to end shear force benchmark.
A structured 16 × 1 rectangular grid is used. The reference values are taken from [45]. In Figure 3.2 one can see the initial and deformed mesh and in Figure 3.3 and Table 3.2 the results.
Figure 3.2: Initial and final configuration of cantilever subjected to end shear force.
Figure 3.3: Horizontal and vertical load-deflection for cantilever subjected to end shear force with 16 × 1 grid.
A cantilever is clamped on the left side and a moment M is applied on the right. On the other boundaries we use the symmetry condition. The material and geometrical properties are E = 1.2 × 10^6, ν = 0, L = 12, W = 1, t = 0.1 and M_max = 50π/3, see Figure 3.4. The results can be found in Figure 3.6 and Table 3.3, and the initial and final mesh in Figure 3.5.
Figure 3.4: Geometry of cantilever subjected to end moment benchmark.
Figure 3.5: Initial and final configuration of cantilever subjected to end moment.
Figure 3.7: Geometry, force and points of interest of slit annular plate.
Deflection at point B at maximal load for slit annular plate with 10 × 80 grid.
Figure 3.9: Vertical load-deflection for slit annular plate at points A and B with 10 × 80 grid.
3.4 Hemispherical shell subjected to alternating radial forces
The material and geometrical properties are E = 6.825 × 10^7, ν = 0.3, R = 10, t = 0.04, see Figure 3.10. A non-structured triangulation is used with different mesh-sizes. For P_max = 1, [41] gives the reference value of the vertical deflection at point B with 0.093 at maximal load. In Table 3.5 the results for p1 and three different meshes can be found. For the large displacement case we used P_max = 400, see Figures 3.11 and 3.12. The results shown in ...
Figure 3.10: Geometry of hemispherical shell subjected to alternating radial forces with h = 1.
Figure 3.11: Initial and final configuration of hemispherical shell subjected to alternating radial forces with h = 1.
Figure 3.12: Radial load-deflections for the hemispherical shell subjected to alternating radial forces with mesh-size h = 0.25.
Load-deflection at point B for the hemispherical shell subjected to alternating radial forces at maximal load for P_max = 400.
A beam is twisted by 90 degrees and clamped on the left side, whereas a point load is applied on the middle of the right boundary. The material and geometrical properties are E = 2.9 × 10^7, ν = 0.22, L = 12, b = 1.1, t = 0.0032, 0.32, see Figure 3.13.
Figure 3.13: Geometry of twisted beam.
Figure 3.14: Geometry of Z-section cantilever.
Figure 3.16: Horizontal and vertical deflection at point A for T-section cantilever.
For the remaining term we use that \((P^{\perp}_{\tau_e}(\{\nu\}))^2 = P^{\perp}_{\tau_e}(\{\nu\}) + O(\varepsilon^2)\) and that \(\{\nu\} = \bar\nu\) on a flat plane to obtain
\[
\int_{\partial T} P^{\perp}_{\tau_e}(\nu) \cdot \mu \, ds
= \pm \int_{\partial T} \bar\nu \cdot (\nu \times \tau_e)\, ds
\approx \mp \int_{\partial T} \bar\nu \cdot \big(\bar\nu^T \nabla_\tau u \times F \bar\tau_e\big)\, ds
\approx \mp \int_{\partial T} \bar\nu \cdot \big(\bar\nu^T \nabla_\tau u \times \bar\tau_e\big)\, ds
= \mp \int_{\partial T} \det\big(\bar\nu,\, \bar\nu^T \nabla_\tau u,\, \bar\tau_e\big)\, ds
= \mp \int_{\partial T} \det\big(\bar\nu,\, (\bar\nu^T \nabla_\tau u \cdot \bar\mu)\bar\mu,\, \bar\tau_e\big)\, ds
= \mp \int_{\partial T} \bar\nu^T \nabla_\tau u \cdot \bar\mu \,\det(\bar\nu, \bar\mu, \bar\tau_e)\, ds. \tag{B.3}
\]
For \(\int_{\partial T} \{\nu\} \cdot \mu \, ds\) the linearization is done analogously and leads to the same result as (B.3). If we now use (B.1), (B.2), (B.3) and (2.21) and apply it to (2.15), neglect the membrane energy term and the constants, and employ that \(\bar\nu\) ...
\[
\nabla_\tau J_b = (\bar\tau_e \otimes \bar\tau_e) : \nabla_\tau v = \bar\tau_e \cdot (\nabla_\tau v)\,\bar\tau_e. \tag{C.8}
\]
Note that (C.7) has the form of a covariant derivative. By using \((F\bar\tau_e) \times (F\bar\mu) = \operatorname{cof}(F)\,\nu\), \(F^{\#}_{\nu,i} : \nabla_\tau v\) can be simplified to
\[
F^{\#}_{\nu,i} : \nabla_\tau v = \big((\nabla_\tau v\, \bar\tau) \times (F\bar\mu) + (F\bar\tau) \times (\nabla_\tau v\, \bar\mu)\big)_i. \tag{C.9}
\]
... \(\tfrac{1}{2}(\nu_o + \nu)\), with ν_o denoting the element normal vector on the neighbored element. This yields that \(\delta_u\big(-\sphericalangle(\{\nu\}, \bar\mu) + \sphericalangle(\{\nu\}, \dots)\big)\) can be computed exploiting (C.7). Using (2.26) instead of {ν} yields a similar expression. To obtain the boundary conditions of u in strong form, which are hidden naturally in the weak form of the equation, we have to integrate by parts until no derivatives of v appear. E.g., (C. ...
Table 3.1 lists the according number of degrees of freedom for each element.
2 www.ngsolve.org
Table 3.1: Number of degrees of freedom per (hybridized and condensed) element for triangles (T) and quadrilaterals (Q).
References
Arnold, D. N., and Brezzi, F. Mixed and nonconforming finite element methods: implementation, postprocessing and error estimates. ESAIM: M2AN 19, 1 (1985), 7-32.
Arnold, D. N., Brezzi, F., Cockburn, B., and Marini, L. D. Unified analysis of discontinuous Galerkin methods for elliptic problems. SIAM J. Numer. Anal. 39, 5 (2001/02), 1749-1779.
Batoz, J. L., Zheng, C. L., and Hammadi, F. Formulation and evaluation of new triangular, quadrilateral, pentagonal and hexagonal discrete Kirchhoff plate/shell elements. International Journal for Numerical Methods in Engineering 52, 56 (2001), 615-630.
Belytschko, T., Wong, B. L., and Stolarski, H. Assumed strain stabilization procedure for the 9-node Lagrange shell element. International Journal for Numerical Methods in Engineering 28, 2 (1989), 385-414.
Bischoff, M., Ramm, E., and Irslinger, J. Models and Finite Elements for Thin-Walled Structures. American Cancer Society, 2017, pp. 1-86.
Braess, D. Finite Elemente - Theorie, schnelle Löser und Anwendungen in der Elastizitätstheorie, 5 ed. Springer-Verlag, Berlin Heidelberg, 2013.
Braess, D., Pechstein, A., and Schöberl, J. An equilibration based a posteriori error estimate for the biharmonic equation and two finite element methods. IMA Journal of Numerical Analysis, to appear.
Brezzi, F., Douglas, J., and Marini, L. D. Two families of mixed finite elements for second order elliptic problems. Numerische Mathematik 47, 2 (1985), 217-235.
Brunet, M., and Sabourin, F. Analysis of a rotation-free 4-node shell element. International Journal for Numerical Methods in Engineering 66, 9 (2006), 1483-1510.
Chapelle, D., and Bathe, K.-J. The finite element analysis of shells - fundamentals, 2 ed. Springer-Verlag, Berlin Heidelberg, 2011.
Chen, L., Hu, J., and Huang, X. Multigrid methods for Hellan-Herrmann-Johnson mixed method of Kirchhoff plate bending problems. Journal of Scientific Computing 76, 2 (2018), 673-696.
Ciarlet, P. An introduction to differential geometry with applications to elasticity. Journal of Elasticity 78-79, 1-3 (2005), 1-207.
Comodi, M. I. The Hellan-Herrmann-Johnson method: Some new error estimates and postprocessing. Mathematics of Computation 52, 185 (1989), 17-29.
Delfour, M. C., and Zolésio, J.-P. Shapes and geometries: metrics, analysis, differential calculus, and optimization, vol. 22. SIAM, Philadelphia, 2011.
Dziuk, G., and Elliott, C. M. Finite element methods for surface PDEs. Acta Numerica 22 (2013), 289-396.
Echter, R., Oesterle, B., and Bischoff, M. A hierarchic family of isogeometric shell finite elements. Computer Methods in Applied Mechanics and Engineering 254 (2013), 170-180.
Engel, G., Garikipati, K., Hughes, T., Larson, M., Mazzei, L., and Taylor, R. Continuous/discontinuous finite element approximations of fourth-order elliptic problems in structural and continuum mechanics with applications to thin beams and plates, and strain gradient elasticity. Computer Methods in Applied Mechanics and Engineering 191, 34 (2002), 3669-3750.
Flores, F. G., and Oñate, E. A rotation-free shell triangle for the analysis of kinked and branching shells. International Journal for Numerical Methods in Engineering 69, 7 (2007), 1521-1551.
Gärdsback, M., and Tibert, G. A comparison of rotation-free triangular shell elements for unstructured meshes. Computer Methods in Applied Mechanics and Engineering 196, 49-52 (2007), 5001-5015.
Grinspun, E., Gingold, Y., Reisman, J., and Zorin, D. Computing discrete shape operators on general meshes. Computer Graphics Forum 25, 3 (2006), 547-556.
Hansbo, P., and Larson, M. G. Continuous/discontinuous finite element modelling of Kirchhoff plate structures in R3 using tangential differential calculus. Computational Mechanics 60, 4 (2017), 693-702.
Hellan, K. Analysis of elastic plates in flexure by a simplified finite element method. Acta Polytechnica Scandinavica, Civil Engineering Series 46 (1967).
Herrmann, L. Finite element bending analysis for plates. J. Eng. Mech. Div. A.S.C.E. EM5 93 (1967), 13-26.
Hong, W. I., Kim, J. H., Kim, Y. H., and Lee, S. W. An assumed strain triangular curved solid shell element formulation for analysis of plates and shells undergoing finite rotations. International Journal for Numerical Methods in Engineering 52, 7 (2001), 747-761.
Huang, J., Huang, X., and Xu, Y. Convergence of an adaptive mixed finite element method for Kirchhoff plate bending problems. SIAM Journal on Numerical Analysis 49, 2 (2011), 574-607.
Hughes, T., Cottrell, J., and Bazilevs, Y. Isogeometric analysis: CAD, finite elements, NURBS, exact geometry and mesh refinement. Computer Methods in Applied Mechanics and Engineering 194, 39 (2005), 4135-4195.
Johnson, C. On the convergence of a mixed finite element method for plate bending moments. Numer. Math. 21 (1973), 43-62.
Kiendl, J., Bletzinger, K.-U., Linhard, J., and Wüchner, R. Isogeometric shell analysis with Kirchhoff-Love elements. Computer Methods in Applied Mechanics and Engineering 198, 49 (2009), 3902-3914.
Macneal, R. H., and Harder, R. L. A proposed standard set of problems to test finite element accuracy. Finite Elements in Analysis and Design 1, 1 (1985), 3-20.
Morley, L. S. D. The constant-moment plate-bending element. Journal of Strain Analysis 6, 1 (1971), 20-24.
National Agency for Finite Element Methods & Standards (Great Britain). The standard NAFEMS benchmarks. NAFEMS, 1990.
Nédélec, J. C. A new family of mixed finite elements in R3. Numerische Mathematik 50, 1 (1986), 57-81.
Oñate, E., and Zárate, F. Rotation-free triangular plate and shell elements. International Journal for Numerical Methods in Engineering 47, 13 (2000), 557-603.
Pechstein, A., and Schöberl, J. Anisotropic mixed finite elements for elasticity. International Journal for Numerical Methods in Engineering 90, 2 (2012), 196-217.
Pechstein, A., and Schöberl, J. The TDNNS method for Reissner-Mindlin plates. Numerische Mathematik 137, 3 (2017), 713-740.
Pechstein, A., and Schöberl, J. An analysis of the TDNNS method using natural norms. Numerische Mathematik 139, 1 (2018), 93-120.
Pitkäranta, J. The problem of membrane locking in finite element analysis of cylindrical shells. Numerische Mathematik 61, 1 (1992), 523-542.
Schöberl, J. NETGEN an advancing front 2d/3d-mesh generator based on abstract rules. Computing and Visualization in Science 1, 1 (1997), 41-52.
Schöberl, J. C++11 implementation of finite elements in NGSolve. Institute for Analysis and Scientific Computing, Vienna University of Technology (2014).
Schöllhammer, D., and Fries, T.-P. Kirchhoff-Love shell theory based on tangential differential calculus. Computational Mechanics (2018).
Simo, J., and Fox, D. On a stress resultant geometrically exact shell model. Part I: Formulation and optimal parametrization. Computer Methods in Applied Mechanics and Engineering 72, 3 (1989), 267-304.
Sinwel, A., and Schöberl, J. Tangential-displacement and normal-normal-stress continuous mixed finite elements for elasticity. Math. Models Methods Appl. Sci. 21, 8 (2011), 1761-1782.
Spivak, M. A comprehensive introduction to differential geometry, 3rd ed., vol. 1. Publish or Perish, Inc., Houston, Texas, 1999.
Stenberg, R. Postprocessing schemes for some mixed finite elements. ESAIM: Mathematical Modelling and Numerical Analysis - Modélisation Mathématique et Analyse Numérique 25, 1 (1991), 151-167.
Sze, K. Y., Liu, X. H., and Lo, S. H. Popular benchmark problems for geometric nonlinear analysis of shells. Finite Elem. Anal. Des. 40, 11 (2004), 1551-1569.
van Keulen, F., and Booij, J. Refined consistent formulation of a curved triangular finite rotation shell element. International Journal for Numerical Methods in Engineering 39, 16 (1996), 2803-2820.
Viebahn, N., Pimenta, P. M., and Schröder, J. A simple triangular finite element for nonlinear thin shells: statics, dynamics and anisotropy. Computational Mechanics 59, 2 (2017), 281-297.
Zaglmayr, S. High Order Finite Element Methods for Electromagnetic Field Computation. PhD thesis, Johannes Kepler Universität Linz, 2006.
Zienkiewicz, O., and Taylor, R. The Finite Element Method. Vol. 1: The Basis, 5th ed. Butterworth-Heinemann, Oxford, 2000.
|
[] |
[] |
[
"M L Mcclure es:[email protected] \nAstronomy & Astrophysics\nUniversity of Toronto\n50 Saint George StM5S 3H4TorontoONCanada\n",
"C C Dyer [email protected] \nAstronomy & Astrophysics\nUniversity of Toronto at Scarborough\n1265 Military TrailM1C 1A4TorontoONCanada\n"
] |
[
"Astronomy & Astrophysics\nUniversity of Toronto\n50 Saint George StM5S 3H4TorontoONCanada",
"Astronomy & Astrophysics\nUniversity of Toronto at Scarborough\n1265 Military TrailM1C 1A4TorontoONCanada"
] |
[] |
Based on general relativity, it can be argued that deviations from a uniform Hubble flow should be thought of as variations in the Universe's expansion velocity field, rather than being thought of as peculiar velocities with respect to a uniformly expanding space. The aim of this paper is to use the observed motions of galaxies to map out variations in the Universe's expansion, and more importantly, to investigate whether real variations in the Hubble expansion are detectable given the observational uncertainties. All-sky maps of the observed variation in the expansion are produced using measurements obtained along specific lines-of-sight and smearing them across the sky using a Gaussian profile. A map is produced for the final results of the HST Extragalactic Distance Scale Key Project for the Hubble constant, a comparison map is produced from a set of essentially independent data, and Monte Carlo techniques are used to analyse the statistical significance of the variation in the maps. A statistically significant difference in expansion rate of 9 km s −1 Mpc −1 is found to occur across the sky. Comparing maps of the sky at different distances appears to indicate two distinct sets of extrema with even stronger statistically significant variations. Within our supercluster, variations tend to occur near the supergalactic plane, and beyond our supercluster, variations tend to occur away from the supergalactic plane. Comparison with bulk flow studies shows some concordance, yet also suggests the bulk flow studies may suffer confusion, failing to discern the influence of multiple perturbations.
|
10.1016/j.newast.2007.03.005
|
[
"https://arxiv.org/pdf/astro-ph/0703556v1.pdf"
] | 11,020,614 |
astro-ph/0703556
|
c3df11a92999e25774b92f8ffaefa2169fbfd4cb
|
21 Mar 2007
M L Mcclure es:[email protected]
Astronomy & Astrophysics
University of Toronto
50 Saint George StM5S 3H4TorontoONCanada
C C Dyer [email protected]
Astronomy & Astrophysics
University of Toronto at Scarborough
1265 Military TrailM1C 1A4TorontoONCanada
21 Mar 2007
Preprint submitted to New Astronomy 27 October 2018
arXiv:astro-ph/0703556v1
Anisotropy in the Hubble constant as observed in the HST Extragalactic Distance Scale Key Project results
* Corresponding author (C. C. Dyer).
1 Present address: Maths & Applied Maths, University of Cape Town, Rondebosch 7701, South Africa.
Keywords: cosmology: cosmological parameters, cosmology: large-scale structure of universe
Introduction
Conventionally, the Hubble flow is thought of as being completely uniform and isotropic. Deviations from a uniform Hubble flow are eliminated by imparting objects' observed residual recessional velocities into peculiar velocities, such that objects are thought to move with respect to a uniformly expanding space. However, empirically it is only valid to consider the velocity field of the matter and how everything is moving relative to everything else in the Universe. It is not possible to infer the existence of an absolute space that expands uniformly and that objects have peculiar velocities with respect to. Thus, deviations from a uniform Hubble flow should properly be considered deviations in the Universe's expansion itself.
Interestingly, Raychaudhuri (1955) showed that (ignoring vorticity) if a velocity field has locally isotropic expansion, then the space is locally isotropic. Yet we know from examples such as gravitational lensing that inhomogeneities alter the curvature of space such that it is not locally isotropic. Thus, since space is not locally isotropic, then the Universe's expansion can not be locally isotropic either. Whether to conceive of the Universe expanding non-uniformly or whether to conceive of it expanding uniformly with superimposed peculiar velocities is more than just a conceptual issue, however.
According to Raychaudhuri's equation (1955), the existence of shear in a velocity field will lead to a decrease in the volume expansion. Since inhomogeneities should introduce tidal forces and shear the velocity field, then the existence of overdensities and underdensities in the Universe should lead to shear throughout the Universe that decreases the Universe's volume expansion compared with that of a homogeneous universe. This effect should only be significant when measured locally in the vicinity of an inhomogeneity: the global influence should be quite small. Raychaudhuri's equation also shows that the existence of vorticity (and also velocity dispersion in the Newtonian version) will lead to an increase in the volume expansion. When structures start to collapse in the Universe and eventually become supported by vorticity or velocity dispersion, those regions of space cease shrinking, which can lead to an increase in the global expansion of the Universe. Thus, it is important to consider the influence inhomogeneities may have on the Universe's expansion.
The Cosmological Principle - that the Universe is homogeneous and isotropic - is generally assumed to hold, since averaged over large enough scales the Universe will appear homogeneous. However, general relativity is needed to understand not only small dense systems, but large diffuse systems such as the Universe, and according to Einstein's field equations, the spacetime corresponding to a homogeneous universe can not be used to represent a spatially-averaged inhomogeneous universe. This is because Einstein's field equations do not equate the spacetime to the mass-energy distribution directly. The energy-momentum tensor T ab depends on the Ricci tensor R ab and scalar R, which stem from taking derivatives of the metric tensor g ab , with Einstein's equations equating R ab − (1/2)R g ab = κT ab .
If the left-hand side of the field equations for a homogeneous universe is equated to the spatially-averaged mass-energy of an inhomogeneous universe, there will generally be a discrepancy between the two sides of the field equations, which will act like a cosmological constant and either accelerate or decelerate the universe's expansion from that expected for a homogeneous universe. Thus, even if the Universe may look homogeneous on large enough scales, assuming the Universe to expand uniformly is ultimately misleading. Several researchers have suggested this effect may even explain the Universe's apparent acceleration (reported by Perlmutter et al. , 1999) as being due to structure formation- Bildhauer and Futamase (1991), Bene et al. (2003), and Kolb et al. (2005)-although Russ et al. (1997) argue that the effect of inhomogeneities should be small.
Also, conceiving of the Universe's expansion as uniform and assigning the galaxies peculiar velocities, bulk flow studies such as that of Hudson et al. (2004) have continued to find that the peculiar velocities with respect to the Cosmic Microwave Background (CMB) frame are correlated such that volumes of space of order 100 Mpc in radius are moving with bulk velocities of approximately 300-700 km s −1 . This suggests inhomogeneities significantly perturb the velocity field of the Universe. The existence of the Universe's large-scale structure of voids and superclusters suggests the voids are underdense regions that have been decelerated less due to gravity so they have ballooned up into roughly spherical regions without undergoing structure formation, while the superclusters are overdense regions where gravity has overcome the Universe's expansion such that they have reached turnaround and collapsed in their densest regions. Moffat and Tatarski (1995) looked at what observational effects we would theoretically observe if we were to inhabit a local void. Via comparison of their theoretical curves with a survey of redshift-distance determinations, they found the data were better fit by a model with a local void than by a homogeneous universe. Zehavi et al. (1998) used 44 type Ia supernova H 0 values to show that we may just inhabit an underdense region of the Universe (where the expansion in the velocity field has been slowed less due to gravity than in more dense regions of the Universe). Referring to fig. 4 of Freedman et al. (2001), it appears that the H 0 values tend to fall off beyond a distance of 100 Mpc, which suggests the Universe may be expanding faster locally. A here-there difference in the Universe's expansion could be an alternative to the notion of a now-then difference, which is the assumption the Universe's supposed acceleration (Perlmutter et al. , 1999) rests on, so it is important to account for the possible influence of inhomogeneities on the Universe's expansion if the cosmological parameters are to be properly determined.
Thus, in this paper we will not assume the existence of a uniform spatial expansion with peculiar velocities superimposed. We will use H 0 values measured along different lines-of-sight to see whether local variation in H 0 exists, and to produce all-sky maps of the observed variation across the sky. If more variation exists in the maps than should be expected due to measurement errors in the data, and if the high and low values of H 0 are correlated in position on the sky, then this will be taken as evidence that the expansion is indeed locally anisotropic across the sky. Since bulk flow studies find bulk flows of a few hundred km s −1 on 100 Mpc scales, which is predicted depending on the cosmological model (e.g. see Zaroubi , 2002), and bulk flows only show the net flow of a sample volume rather than the individual variations in the velocity field, then it would be expected that variations in H 0 observed on this scale should be at least a few km s −1 Mpc −1 .
While it is easy enough to measure how fast objects are expanding away from us via redshifts, it is the determination of accurate distances that is problematic in the determination of H 0 . Historically, the errors in H 0 have been so great that it would be difficult to study real variation in the Universe's expansion rate. The most accurate work to date to study H 0 is the HST Extragalactic Distance Scale Key Project (Freedman et al. , 2001, hereafter the HST Key Project), which yielded distances accurate enough for a meaningful study of real variation in H 0 , especially since most of the errors are systematic and shared by all the H 0 determinations. Thus, we will map the directional variation in H 0 using the HST Key Project data, comparing with a second set of data to examine whether the same general trend is observed. The data selection will be discussed in Sect. 2. The technique used to generate all-sky H 0 maps and study the significance of the variations and the impact of distance will be outlined in Sect. 3. In Sect. 4 the results will be examined from various frames of reference and a comparison with bulk flow studies will be made.
Selection of Data
HST Key Project data
The HST Key Project data for H 0 have been selected as the primary data set. This set offers a reasonably large set of values that is distributed nicely for the purpose of making all-sky maps. Also, despite the fact that the H 0 values depend on the data of other researchers, the values were all analysed by the HST Key Project team to be consistent with each other. Thus, the systematic errors should be similar in most cases to minimize the effect on the study of variations in H 0 so that most of the uncertainty in the relative values of H 0 will just be in the random errors.
There are 74 values that were published in the final HST Key Project paper (Freedman et al. , 2001) Freedman et al. (2001). One of these values was for Pavo 2, which is a component that was separated from the Pavo Cluster due to some of the galaxies yielding different Tully-Fisher distances, but since the galaxies had similar recessional velocities, they yielded quite different H 0 values. For our purposes, it only seems fair to include the values for both Pavo components.
Reported in Table 1 are the objects used, their celestial co-ordinates, and their H 0 values with corresponding 1-σ random errors. The celestial co-ordinates for the objects were obtained via SIMBAD.
Comparison data
In an effort to test whether variations detected in the HST Key Project data exist independently of this data set, another set of data has been compiled. This data set has been constructed by conducting a literature search for determinations of H 0 and using papers that report measurements of H 0 along individual lines-of-sight, include uncertainties, and for which the observed objects are within the distance range of the HST Key Project data. This yields 57 values, which will hereafter be referred to as the comparison values. Reported in Table 2 are the objects used, celestial co-ordinates (obtained via SIMBAD), and H 0 values with their reported errors.
It should be noted that the comparison values stem from a mixed bag of methods and different researcher analysis; thus, the potential for this set to Liu (2001) be corrupted by systematics between values is greater than in the HST Key Project set. Also, various infall corrections have been made, and these are not always clearly stated in the papers, so these values may not consistently be in the CMB frame. One of the comparison values actually stems from a HST Key Project Cepheid distance value, while 4 of the HST Key Project values depend on comparison data surface brightness fluctuation distances, but the data sets are essentially independent.
3 Contour mapping
Technique
A method is required to generate an all-sky map based on a set of values located at specific positions on the sky. One method would be to fit spherical harmonics; however, this would require more higher order terms than could ever be convenient in order to not force structure into the map from the lower order terms. Our chosen method is to smear the H 0 values over the sky using a Gaussian profile for each data point.
The Gaussian smearing method involves laying out a grid on the sky and calculating weighted mean values of H 0 at each grid point, weighting each actual data point in the average according to its angular separation θ from the grid point such that the weightings fall off as a Gaussian. Thus, the weighting W of each data point is given by
\[
W = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\theta^2/(2\sigma^2)}
\]
with the standard deviation σ controlling how broad the smearing is. Contours of constant H 0 are then interpolated within the grid of averaged H 0 values to generate contour maps of H 0 . The values are weighted only by their separations, not their uncertainties (the type Ia supernova values are more distant and have smaller uncertainties, but the effect of distance on the map will be specifically explored in Sect. 3.3).
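As a concrete illustration, the weighted average at each grid point can be computed as in the short Python sketch below. This is our own illustrative code rather than the authors' implementation; it assumes the data and grid directions are supplied as unit vectors, and it drops the 1/(√(2π) σ) prefactor of W, which cancels in the weighted mean.

```python
import numpy as np

def smeared_h0_map(data_dirs, h0_values, grid_dirs, sigma=np.radians(25.0)):
    """Gaussian-smeared weighted mean of H0 at each grid point on the sky.

    data_dirs : (N, 3) array of unit vectors toward the H0 determinations
    h0_values : (N,) array of measured H0 values
    grid_dirs : (M, 3) array of unit vectors of the map grid points
    sigma     : smearing width in radians (25 degrees in the maps above)
    """
    # angular separation theta between every grid point and every data point
    cos_theta = np.clip(grid_dirs @ data_dirs.T, -1.0, 1.0)
    theta = np.arccos(cos_theta)                      # shape (M, N)
    # the 1/(sqrt(2*pi)*sigma) prefactor of W cancels in the weighted mean
    w = np.exp(-theta**2 / (2.0 * sigma**2))
    return (w @ h0_values) / w.sum(axis=1)
```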
Gaussian smearing succeeds in creating an all-sky map from a sample of data points associated with specific positions on the sky, and it also averages out the impact of errors associated with individual data points so that variations correlated with directions on the sky have the opportunity to manifest themselves. This will be sufficient for studying large-scale variations in H 0 , although there is no hope of studying any variations that do not have a large angular extent, as there is insufficient sky coverage. It should be kept in mind that this method will also smear out the extrema for any actual variation though, so the range of any real variation in the maps will be diminished somewhat.
For the contour maps presented in this paper, while the angular separations are calculated in spherical co-ordinates (using the dot product of unit vectors for the angular positions of the grid points and data points), the grid points are positioned for a cylindrical projection of the sky and are then mapped out in the form of a sinusoidal projection. The sinusoidal projection is a pseudocylindrical projection that preserves areas by keeping latitude lines parallel but shortening their length longitudinally according to the sine of the polar angle (or cosine of the declination).
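For reference, the sinusoidal projection used for plotting amounts to the following mapping (an illustrative sketch; the central meridian lon0_deg is an assumption, since it is not stated here):

```python
import numpy as np

def sinusoidal_xy(lon_deg, lat_deg, lon0_deg=0.0):
    """Equal-area sinusoidal projection: latitude lines stay parallel while
    their length is scaled by cos(latitude); lon0_deg is the assumed central
    meridian of the map."""
    lon = np.radians(np.asarray(lon_deg) - lon0_deg)
    lat = np.radians(np.asarray(lat_deg))
    return lon * np.cos(lat), lat
```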
Unless stated otherwise, the grid points are set 1 • apart in right ascension and 1 • apart in declination, as this appears to be a fine enough grid for the purpose of interpolating contours, and the Gaussian weighting profile falls off with a standard deviation of 25 • , as this is approximately the typical separation of the real data points and is a sufficiently broad smearing to fill in the holes in the distribution of data on the sky.
In Fig. 1 contour maps appear (in Galactic co-ordinates) for the HST Key Project data and the comparison data. The extrema are in similar directions for the two data sets and are similar in magnitude, except that one of the maxima in the comparison map is weaker. While the comparison map may not serve as a completely definitive cross-check of the reality of the variations observed in the HST Key Project map, it certainly seems to show some agreement, and it does not differ from the HST Key Project map as much as would be expected if the variation in H 0 were due to uncertainties in the grid H 0 values alone. This suggests the variation in the maps exists independently of the uncertainties in the particular determinations of H 0 . Since there is more uncertainty in the comparison map than the Key Project map, the differences between the maps probably mostly reflect errors in the comparison map rather than the Key Project map.
If the Pavo, Pavo 2, and Ursa Major determinations are removed from the HST Key Project data (due to their discrepant values), the map looks largely unchanged from the original: the primary maximum goes down slightly to 80 km s −1 Mpc −1 and the primary minimum goes up slightly to 71 km s −1 Mpc −1 while the secondary extrema remain the same, so that the secondary extrema now become the primary extrema in this case. If PGC 39724 and NGC 4709 are removed from the comparison data, again the map is largely unchanged: the primary maximum is slightly lower at just under 79 km s −1 Mpc −1 , while the other extrema appear unchanged. Looking at Fig. 1, it is apparent that the extrema tend not to be centred on the lines-of-sight to the individual data determinations, so the extrema are resulting from trends in the data, rather than from specific high or low H 0 values.
Magnitude of variation
As was previously discussed in Sect. 3.1, the Gaussian smearing lessens the range of any real variation. Using Gaussians with successively smaller standard deviations of 20 • , 15 • , and 10 • , as the standard deviation gets smaller, the extrema are picked out with less smearing, but errors in the data also start to have a greater impact. At 10 • , the range of variation is greater than 30 km s −1 Mpc −1 , but the grid values are mostly being determined by individual H 0 values so the errors in the grid values approach those of the H 0 determinations.
The actual extrema are separated by 38 km s −1 Mpc −1 , which may be an underestimate to the magnitude of any real variation if the standard deviation could approach zero, but the errors have already become so large that they are likely making the range of variation appear artificially large as it is. Being conservative, the range of variation appears to be ∼30 km s −1 Mpc −1 . Thus, large and small values for the standard deviation yield complementary aspects of the map: smaller standard deviations smooth out the range of variation less, while larger standard deviations yield the overall trend in the map with less error.
One may question whether the variation is statistically significant or whether it is likely that this much variation would be found in the map due to measurement errors in H 0 alone. Assuming that the 1-σ random errors are accurate, the significance of the result can be tested statistically by using Monte Carlo simulations of the data. This is accomplished using two different methods: one which assumes the systematic differences between the mean H 0 values are real for each of the 4 methods used to derive H 0 , and one which assumes that the systematic differences stem from variations in H 0 or errors. The first method involves calculating a new value of H 0 for each position by using the weighted mean value of H 0 corresponding to the actual distance method that was used to obtain the real H 0 value, and then adding Gaussian deviates (varied over a 3-σ range) to the data using the weighted mean 1-σ error for the corresponding method to simulate the expected scatter. The second method involves using the weighted mean value of H 0 for all the data for the H 0 value at each position and adding Gaussian deviates to the values according to the actual 1-σ errors for each data point.
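A sketch of the second Monte Carlo method is given below (illustrative only, and simplified: the 3-σ truncation of the deviates mentioned above is omitted). It reuses the smeared_h0_map helper from the earlier sketch.

```python
import numpy as np

def mc_range_distribution(data_dirs, err_1sigma, grid_dirs, h0_mean,
                          n_trials=10000, smear_sigma=np.radians(25.0), seed=1):
    """Second Monte Carlo method: every position gets the weighted-mean H0
    plus a Gaussian deviate drawn from its reported 1-sigma error, the map is
    recomputed, and the max-minus-min range of the grid values is recorded.
    (The 3-sigma truncation of the deviates used in the text is omitted here.)"""
    rng = np.random.default_rng(seed)
    ranges = np.empty(n_trials)
    for k in range(n_trials):
        simulated_h0 = h0_mean + rng.normal(0.0, err_1sigma)
        grid = smeared_h0_map(data_dirs, simulated_h0, grid_dirs, smear_sigma)
        ranges[k] = grid.max() - grid.min()
    return ranges

# significance: fraction of simulated ranges at least as large as the observed one,
# e.g. np.mean(mc_range_distribution(...) >= observed_range)
```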
For 10,000 different sets of simulated data for each Monte Carlo method, H 0 maps are calculated and the differences between the maximum and minimum H 0 grid point values are found. Comparing the random sky variations with the observed variations depends strongly on the value of the standard deviation used to smear the values over the sky. This is because smaller standard deviations smooth out the errors less and make it more likely to find greater variation in the maps. For the first method, the results are that the magnitude of variation is only as great as in the real data 5.61% of the time for a standard deviation of 45 • (which yields a range of 6.3 km s −1 Mpc −1 ), 9.90% of the time for a standard deviation of 35 • (which yields a range of 8.8 km s −1 Mpc −1 ), or 37.47% of the time for a standard deviation of 25 • (which yields a range of 12.9 km s −1 Mpc −1 ). The results for the second method are respectively 3.82%, 4.93%, or 13.08%. Thus, assuming the realistic case is somewhere between the two methods, and assuming anything with a less than 5% chance is "statistically significant," it appears that a statistically significant difference of 6 to 9 km s −1 Mpc −1 can be demonstrated with the broader values of the standard deviation.
While the above Monte Carlo methods should give a reasonable measure of the statistical significance of the variation, they depend on the 1-σ uncertainties reported for the data being accurate. Independent of the error bars, one can study whether the values are correlated in position on the sky, with higher values tending to be near higher values and lower values tending to be near lower values. Randomly reassigning the H 0 values to the actual H 0 positions on the sky and computing maps for several randomizations reveals whether the actual map has more variation than just the scatter in the data should produce. If there is real variation in the data, this method will not yield accurate measures of the statistical significance, since there will be extra scatter in the data that will tend to allow more variation in the map than errors alone should produce. However, this method at least yields an upper limit to the likelihood that as much variation could occur in the map if there were no real variation in the data.
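The randomization test can be sketched as follows (again our own illustrative code, reusing the smeared_h0_map helper and assuming the observed map range has already been computed):

```python
import numpy as np

def shuffle_test(data_dirs, h0_values, grid_dirs, observed_range,
                 n_trials=10000, smear_sigma=np.radians(25.0), seed=2):
    """Randomly reassign the measured H0 values to the measured sky positions
    and count how often the shuffled map shows at least as much variation as
    the real one; this gives an upper limit on the chance probability."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_trials):
        shuffled = rng.permutation(h0_values)
        grid = smeared_h0_map(data_dirs, shuffled, grid_dirs, smear_sigma)
        if grid.max() - grid.min() >= observed_range:
            hits += 1
    return hits / n_trials
```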
For 10,000 randomizations of the above method, the results are that the range of variation is only as great as for the real map 14.78% of the time for a standard deviation of 45 • , 21.55% of the time for a standard deviation of 35 • , or 42.10% of the time for a standard deviation of 25 • . Thus, even if one is a complete skeptic that there is no real variation in the data, there appears to be more variation than there should be, although of weaker significance than the above Monte Carlo methods found. On the other hand, if there is real variation in the data, these are upper bounds on the percentages, so this weaker significance is not inconsistent with the results of the above methods. Since this method does not depend on uncertainties, it can also be applied to the comparison data. For the complete set of 133 data points, the results are that the variation is only as great 1.16% of the time for a standard deviation of 45 • (which yields a range of 6.6 km s −1 Mpc −1 ), 2.60% of the time for a standard deviation of 35 • (which yields a range of 8.9 km s −1 Mpc −1 ), or 19.29% of the time for a standard deviation of 25 • (which yields a range of 12.6 km s −1 Mpc −1 ). Thus, the difficulty in producing coincidental correlations in larger sets of data allows the conglomerate data set to substantiate a statistically significant variation of ∼9 km s −1 Mpc −1 .
Directional uncertainty and distance dependence
The directional uncertainty of the extrema in the map can be tested by adding Gaussian deviates (in a 3-σ range) to the 76 data points, computing the H 0 grid map, and seeing how much this affects the positions of the maximum and minimum H 0 grid values in the map. The extrema for each of 500 randomizations are plotted in Fig. 2. Each randomization is calculated with grid separations of 0.5 • so that the extrema can be plotted with less granularity. It can be seen that the probability distribution for the positions of the extrema is not the same in all directions, but primary and secondary extrema exist with directional uncertainties of order 10 • to 20 • .
To test the influence of depth on the map, an additional weighting factor for distance is added in the computation of the grid maps. Maps are computed for various nominal distances by weighting the data according to the fraction of the distance that is shared in common with a given nominal distance. For distances more distant than the nominal distance, the weightings go as the ratio of nominal distance to object distance. For distances closer than the nominal distance, the weightings go as the ratio of object distance to nominal distance.
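A sketch of this distance weighting is given below (illustrative; it assumes the distance factor simply multiplies the angular Gaussian weight, which is how "an additional weighting factor" is read here):

```python
import numpy as np

def distance_weight(obj_dist, nominal_dist):
    """d_obj/d_nom for objects closer than the nominal distance,
    d_nom/d_obj for objects farther away."""
    d = np.asarray(obj_dist, dtype=float)
    return np.minimum(d, nominal_dist) / np.maximum(d, nominal_dist)

def smeared_h0_map_at_distance(data_dirs, h0_values, obj_dists, grid_dirs,
                               nominal_dist, sigma=np.radians(25.0)):
    # angular Gaussian weight, as in the earlier sketch
    cos_theta = np.clip(grid_dirs @ data_dirs.T, -1.0, 1.0)
    w_ang = np.exp(-np.arccos(cos_theta)**2 / (2.0 * sigma**2))
    # multiply in the distance factor for the chosen nominal distance
    w = w_ang * distance_weight(obj_dists, nominal_dist)
    return (w @ h0_values) / w.sum(axis=1)
```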
About half of the HST Key Project data are for objects between 13 and 70 Mpc, with the other half between 70 and 467 Mpc. In Fig. 3 distance-weighted maps appear for 6 nominal distances: 30 Mpc, 50 Mpc, 80 Mpc, 120 Mpc, 180 Mpc, and 300 Mpc. It is apparent that nearby, one pair of extrema from Fig. 1 dominates, and the magnitude of variation of this pair falls off with distance. The minimum is near (α = 9 h 30 m , δ = +70 • ), and the maximum is near (α = 19 h 30 m , δ = −70 • ). Less apparent is that the grid values of H 0 at the secondary extrema from Fig. 1 remain roughly constant with distance and dominate only at the greatest distances where the range of variation from the first pair of extrema has become small enough. The secondary minimum is near (α = 18 h 0 m , δ = +15 • ), and the secondary maximum is near (α = 5 h 30 m , δ = +5 • ). It should be noted that more distant values of H 0 will always sample local expansion as well, and while the converse is not true, the weighting method allows values of H 0 for the full range of distances to affect all the maps to some degree. The extrema that dominate locally appear to imply some sort of local effect, which could be interpreted as a dipole due to a bulk flow of a local sample volume with respect to the CMB frame, since the extrema are roughly opposite on the sky. The secondary extrema remain constant with distance, and are also roughly opposite of each other on the sky, perhaps suggesting an independent bulk flow of a larger-scale volume.
Removing all the data points from the HST Key Project data set that have an uncertainty greater than 8 km s −1 Mpc −1 and producing a map with the remaining 44 points yields a map quite similar to the most distant maps of Fig. 3. The remaining data essentially consist of type Ia supernova measurements and a few other surface brightness fluctuation and fundamental plane measurements that are for distances about 50 Mpc and greater. The map ranges from 68 km s −1 Mpc −1 at the minimum to 81 km s −1 Mpc −1 at the maximum, and the primary extrema of Fig. 1 now appear even weaker than in Fig. 3(f).
Since these data mostly have uncertainties of less than 3 km s −1 Mpc −1 , this provides stronger support for the reality of the extrema observed on large scales in Fig. 3.
It is difficult to make an all-sky map from only 76 points: weighting for distance only makes it more difficult to make meaningful all-sky maps. Thus, to test the significance of the observed variation with distance, the HST Key Project data are combined with the comparison data, and then divided into two sets according to distance. Randomly reassigning the H 0 values to the data positions for the nearby group of 67 data points (to get a limit on the statistical significance) yields the result that as much variation occurs only 0.16% of the time for a standard deviation of 35 • (which yields a range of variation of 18.8 km s −1 Mpc −1 ) or 7.20% of the time for a standard deviation of 25 • (which yields a range of variation of 25.3 km s −1 Mpc −1 ). Likewise, the more distant group of 66 data points gives respectively 24.39% (corresponding to a range of variation of 6.0 km s −1 Mpc −1 ) or 23.05% (corresponding to a range of 11.1 km s −1 Mpc −1 ). Thus, given that these percentages are upper limits, while the more nearby variation demonstrates that a 19 km s −1 Mpc −1 difference is certainly significant, the more distant variation can not be confirmed to be significant.
Unfortunately, there does not appear to be a reasonable way to use the 1-σ errors from the combined data sets to properly determine the statistical significance as in Sect. 3.2, as the errors reported for the comparison data are reported inconsistently and do not share the same systematic errors. However, just using the 36 type Ia supernova values, since they have small random errors and will share systematic errors, and since they tend to be the most distant values, yields interesting results. It is found that as much variation occurs only 0.00% of the time for a standard deviation of 35 • (which yields a range of variation of 6.7 km s −1 Mpc −1 ), 0.02% of the time for a standard deviation of 25 • (which yields a range of variation of 9.7 km s −1 Mpc −1 ), and 0.12% of the time for a standard deviation of 15 • (which yields a range of variation of 12.9 km s −1 Mpc −1 ). If the 1-σ errors are truly accurate for the type Ia supernova measurements, it means it is essentially impossible to achieve this much variation by chance. To the extent that these 1-σ errors can be trusted, it suggests the variation seen on large scales is statistically significant.
Discussion
The variation does not appear to be an artifact of Galactic dust, since there is no consistent difference looking in or out of the plane of the Galaxy. In fact, the overall structure in the map is inconsistent with the distribution of dust in the COBE dust maps (Schlegel et al. , 1998), so it seems unlikely that the observed variation in H 0 could be due to poor corrections for dust in our own Galaxy. In Fig. 4 the HST Key Project map of Fig. 1(a) has been transformed respectively to celestial co-ordinates, ecliptic co-ordinates, supergalactic co-ordinates, and a CMB-dipole-oriented frame of reference for comparison. Figures 4(a) and 4(b) demonstrate that the extrema do not appear to be an artifact of any local frame of reference.
The supergalactic map is interesting because the extrema that predominate locally are near the supergalactic plane, suggesting they may be associated with local conglomerations of matter (which tend to be arranged along the supergalactic plane). Meanwhile, the extrema that dominate farther out are oriented closer to the supergalactic poles, so if these extrema are a real phe- nomenon, we may be getting a clear view looking out of the supergalactic plane and observing effects associated with a much larger scale.
The CMB-frame map is oriented with its north pole in the direction of the Sun's peculiar velocity with respect to the CMB. From the map it is apparent that there is little difference between directions oriented along the CMB dipole. All of the extrema are near the equator of the map, roughly perpendicular to the CMB dipole. Since the H 0 values have been corrected to be in the CMB frame of reference, this suggests there is no error due to the correction for the CMB dipole: it seems too good in fact. Interestingly, this variation seems to be in agreement with the CMB anisotropy observed by Tegmark et al. (2003), who found that the CMB quadrupole and octupole are aligned such that the extrema are in a plane roughly perpendicular to the direction of the dipole. If our dipole motion with respect to the CMB is related to the existence of the H 0 /CMB anisotropy perpendicular to this direction, it suggests this motion and the variations related to it may both be stemming from large-scale inhomogeneities. Inoue and Silk (2006) have suggested the CMB quadrupole and octupole alignment can be explained by a pair of voids a few hundred Mpc distant in the direction (l = 330 • , b = −30 • ), which is interestingly in the direction of the H 0 maxima in our maps.
In Fig. 5 recent bulk flow directions are plotted on top of the map of Fig. 2. The bulk flow directions tend to lie on the outskirts of the uncertainty in the maxima, so they are not totally consistent with the map. They should be, as bulk flows will be oriented toward directions associated with higher recessional velocities in the CMB frame and hence the directions of higher H 0 . However, the bulk flow directions do lie in the higher H 0 regions of the map, so they are not that disconcordant. One thing to note is that the bulk flow directions appear to be sandwiched between the primary and secondary maxima. This suggests that by only looking at the net flow, bulk flow studies may be missing the distinction between two separate effects and missing the actual directions of interest.
Conclusion
It appears that a statistically significant variation in H 0 of at least 9 km s −1 Mpc −1 exists in the HST Key Project data. The approximate directional uncertainty is 10 • to 20 • . Maps weighted for distance appear to indicate two sets of extrema that dominate on different distance scales. Within our supercluster, differences as great as ∼35 km s −1 Mpc −1 are observed, and these tend to occur near the supergalactic plane with a minimum near (α = 9 h 30 m , δ = +70 • ) and a maximum near (α = 19 h 30 m , δ = −70 • ). Beyond our supercluster, differences as great as ∼20 km s −1 Mpc −1 are observed, and these tend to occur away from the supergalactic plane with a minimum near (α = 18 h 0 m , δ = +15 • ) and a maximum near (α = 5 h 30 m , δ = +5 • ). Within 70 Mpc, a combination of the HST Key Project data and the comparison data shows a statistically significant difference of 19 km s −1 Mpc −1 . Beyond 50 Mpc, the HST Key Project type Ia supernova data yield a statistically significant difference of 13 km s −1 Mpc −1 (assuming the reported 1-σ errors are reliable).
Further study of the resilience of this result requires more data specifically selected for achieving optimal sky coverage. It would also be interesting to have data for a greater range of distances to see how far out a statistically significant variation can be detected and over how large a scale the Universe's expansion needs to be sampled before it appears to become uniform.
Real variation in H 0 is not unexpected given the degree of structure and mass inhomogeneity present in the Universe. One implication is that this variation also partially explains why H 0 has historically been plagued by so much uncertainty.
Fig. 1. Hubble constant contour maps (in Galactic co-ordinates for a sinusoidal projection of the sky) for (a) the 76 HST Key Project H 0 values and (b) 57 comparison H 0 values. Positions of the actual data points are indicated by triangles, and the contours range from low (dark) to high (light) values of H 0 (in km s−1 Mpc−1) as indicated.
Fig. 2. Hubble constant contour map of Fig. 1(a) with 500 random extrema for maps calculated with Gaussian deviates for the 76 HST Key Project H 0 values. Positions of the minima and maxima are indicated by dark and light dots respectively, and the contours range from low (dark) to high (light) values of H 0 (in km s−1 Mpc−1) as indicated.
Fig. 3. Hubble constant contour maps (in Galactic co-ordinates for a sinusoidal projection of the sky) for the 76 HST Key Project H 0 values weighted for nominal distances of (a) 30 Mpc, (b) 50 Mpc, (c) 80 Mpc, (d) 120 Mpc, (e) 180 Mpc, and (f) 300 Mpc. Positions of the actual data points are indicated by triangles, and the contours range from low (dark) to high (light) values of H 0 (in km s−1 Mpc−1) as indicated.
Fig. 4. Hubble constant contour map of
Fig. 5. Hubble constant contour map of Fig. 2 with recent determinations of bulk flows (triangles). From high to low latitude, the bulk flows are those of da Costa et al. (2000), Dekel et al. (1999), Willick (1999), Hudson et al. (2004), Parnovsky et al. (2001), and Giovanelli et al. (1998). Positions of the minima and maxima are indicated by dark and light dots respectively, and the contours range from low (dark) to high (light) values of H 0 (in km s−1 Mpc−1) as indicated.
Table 1. HST Key Project H 0 Data.
Object ID or Cluster   α (hours)   δ (degrees)   H 0 (km/s/Mpc)   Reference
SN 1990O    17.15   +16.2   67.3 ± 2.3   Freed. (2001)
SN 1990T    19.59   −56.2   75.6 ± 3.1   Freed. (2001)
SN 1990af   21.35   −62.4   75.8 ± 2.8   Freed. (2001)
SN 1991S    10.29   +22.0   69.8 ± 2.8   Freed. (2001)
SN 1991U    13.23   −26.1   83.7 ± 3.4   Freed. (2001)
SN 1991ag   20.00   −55.2   73.7 ± 2.9   Freed. (2001)
SN 1992J    10.09   −26.4   74.5 ± 3.1   Freed. (2001)
SN 1992P    12.42   +10.2   64.8 ± 2.2   Freed. (2001)
SN 1992ae   21.28   −61.3   81.6 ± 3.4   Freed. (2001)
SN 1992ag   13.24   −23.5   76.1 ± 2.7   Freed. (2001)
SN 1992al   20.46   −51.2   72.8 ± 2.4   Freed. (2001)
SN 1992aq   23.04   −37.2   64.7 ± 2.4   Freed. (2001)
SN 1992au   00.10   −49.6   69.4 ± 2.9   Freed. (2001)
SN 1992bc   03.05   −39.3   67.0 ± 2.1   Freed. (2001)
SN 1992bg   07.42   −62.3   70.6 ± 2.4   Freed. (2001)
SN 1992bh   04.59   −58.5   66.7 ± 2.3   Freed. (2001)
SN 1992bk   03.43   −53.4   73.6 ± 2.6   Freed. (2001)
SN 1992bl   23.15   −44.4   72.7 ± 2.6   Freed. (2001)
SN 1992bo   01.22   −34.1   69.7 ± 2.4   Freed. (2001)
SN 1992bp   03.36   −18.2   76.3 ± 2.6   Freed. (2001)
SN 1992br   01.45   −56.1   67.2 ± 3.1   Freed. (2001)
SN 1992bs   03.29   −37.2   67.8 ± 2.8   Freed. (2001)
SN 1993B    10.35   −34.3   69.8 ± 2.4   Freed. (2001)
SN 1993O    13.31   −33.1   65.9 ± 2.1   Freed. (2001)
SN 1993ag   10.03   −35.3   69.6 ± 2.4   Freed. (2001)
SN 1993ah   23.52   −27.6   71.9 ± 2.9   Freed. (2001)
SN 1993ac   05.46   +63.2   72.9 ± 2.7   Freed. (2001)
SN 1993ae   01.29   −01.6   75.6 ± 3.1   Freed. (2001)
SN 1994M    12.31   +00.4   74.9 ± 2.6   Freed. (2001)
Table 2. Comparison H 0 Data.
Object ID or Cluster   α (hours)   δ (degrees)   H 0 (km/s/Mpc)   Reference
Coma          13.00   +28.0   71 ± 30   Her. (1995)
Coma          13.00   +28.0   78 ± 11   Whit. (1995)
Virgo         12.50   +13.2   80 ± 16   Zas. (1996)
NGC 7331      22.62   +34.4   70 ± 14   Zas. (1996)
Virgo         12.50   +13.2   87 ± 7    Ford (1996)
Fornax        03.64   −35.5   73 ± 5    Ford (1996)
NGC 5846      15.11   +01.6   65 ± 8    Forb. (1996)
NGC 1365      03.56   −36.1   75 ± 5    Mad. (1996)
Coma          13.00   +28.0   75 ± 6    Gregg (1997)
IC 4051       13.00   +28.0   68 ± 6    Baum (1997)
Coma          13.00   +28.0   70 ± 7    Hjor. (1997)
NGC 4889      13.00   +28.0   85 ± 10   Jen. (1997)
NGC 3309      10.61   −27.5   46 ± 5    Jen. (1997)
NGC 4881      13.00   +28.0   71 ± 11   Thom. (1997)
Abell 2256    17.06   +78.7   72 ± 22   Myers (1997)
Coma          13.00   +28.0   67 ± 26   Myers (1997)
Abell 262     01.88   +36.1   82 ± 8    Lauer (1998)
Abell 3560    13.53   −33.2   86 ± 7    Lauer (1998)
Abell 3565    13.61   −34.0   83 ± 6    Lauer (1998)
Abell 3742    21.11   −47.1   78 ± 6    Lauer (1998)
Coma          13.00   +28.0   60 ± 11   Sal. (1998)
PGC 14638     04.20   −32.9   60 ± 15   Pat. (1998)
PGC 39724     12.33   +29.6   44 ± 15   Pat. (1998)
PGC 51233     14.34   +03.9   60 ± 15   Pat. (1998)
PGC 00218     00.05   +16.1   58 ± 15   Pat. (1998)
PGC 10208     02.70   +00.4   60 ± 15   Pat. (1998)
PGC 35164     11.44   +43.6   59 ± 15   Pat. (1998)
PGC 43798     12.89   +02.2   39 ± 15   Pat. (1998)
Fornax        03.64   −35.5   74 ± 5    Tul. (2000)
Ursa Major    11.50   +55.0   59 ± 6    Tul. (2000)
Pisces Fil.   01.12   +32.4   79 ± 2    Tul. (2000)
Coma          13.00   +28.0   83 ± 2    Tul. (2000)
Abell 1367    11.74   +19.8   77 ± 2    Tul. (2000)
Antlia        10.50   −35.3   86 ± 3    Tul. (2000)
Cen 30        12.77   −41.0   83 ± 3    Tul. (2000)
Pegasus       23.34   +08.2   77 ± 3    Tul. (2000)
Hydra I       10.61   −27.5   70 ± 2    Tul. (2000)
Cancer        08.35   +21.0   80 ± 2    Tul. (2000)
Abell 400     02.96   +06.6   76 ± 2    Tul. (2000)
Abell 2634    23.64   +27.0   70 ± 2    Tul. (2000)
Abell 262     01.88   +36.1   77 ± 4    Jen. (2001)
Abell 496     04.56   −13.2   74 ± 2    Jen. (2001)
Abell 779     09.33   +33.8   73 ± 2    Jen. (2001)
Abell 1060    10.61   −27.5   74 ± 4    Jen. (2001)
Abell 1656(a) 12.99   +28.0   79 ± 3    Jen. (2001)
Abell 1656(b) 12.99   +28.0   82 ± 3    Jen. (2001)
Abell 2199    16.48   +39.6   71 ± 2    Jen. (2001)
Abell 2666    23.85   +27.1   68 ± 2    Jen. (2001)
Abell 3389    06.36   −65.0   70 ± 2    Jen. (2001)
Abell 3565    13.61   −34.0   78 ± 4    Jen. (2001)
Abell 3581    14.12   −27.2   74 ± 2    Jen. (2001)
Abell 3656    19.98   −38.3   72 ± 3    Jen. (2001)
Abell 3742    21.11   −47.1   81 ± 4    Jen. (2001)
NGC 4073      12.07   +01.9   64 ± 2    Jen. (2001)
NGC 4709      12.83   −41.4   107 ± 7   Jen. (2001)
NGC 5193      13.53   −33.2   85 ± 6    Jen. (2001)
Coma          13.00   +28.0   71 ± 8
We would like to acknowledge useful suggestions and comments from Roberto Abraham, Byron Desnoyers Winmill, John Dubinski, Shoko Sakai, John Tonry, Howard Yee, and the referee. This work was supported by NSERC through a Post-Graduate Scholarship to MLM and a Discovery grant to CCD.
References
Baum, W. A., Hammergren, M., Thomsen, B., et al., 1997. AJ, 113, 1483.
Bene, G., Czinner, V., Vasúth, M., 2003. Preprint astro-ph/0308161 v4.
Bildhauer, S., Futamase, T., 1991. Gen. Relativ. Grav., 23, 1251.
da Costa, L. N., Bernardi, M., Alonso, M. V., et al., 2000. ApJ, 537, L81.
Dekel, A., Eldar, A., Kolatt, T., et al., 1999. ApJ, 522, 1.
Ferrarese, L., Mould, J. R., Kennicutt, R. C., Jr., et al., 2000. ApJ, 529, 745.
Forbes, D. A., Brodie, J. P., Huchra, J., 1996. AJ, 112, 2448.
Ford, H. C., Hui, X., Ciardullo, R., Jacoby, G. H., Freeman, K. C., 1996. ApJ, 458, 455.
Freedman, W. L., Madore, B. F., Gibson, B. K., et al., 2001. ApJ, 553, 47.
Giovanelli, R., Haynes, M. P., Freudling, W., da Costa, L. N., 1998. ApJ, 505, L91.
Gregg, M. D., 1997. NewA, 1, 363.
Herbig, T., Lawrence, C. R., Readhead, A. C. S., Gulkis, S., 1995. ApJ, 449, L5.
Hjorth, J., Tanvir, N. R., 1997. AJ, 482, 68.
Hudson, M. J., Smith, R. J., Lucey, J. R., Branchini, E., 2004. MNRAS, 352, 61.
Inoue, K. T., Silk, J., 2006. ApJ, 648, 23.
Jensen, J. B., 1997. Ph.D. Thesis, Univ. Hawaii.
Jensen, J. B., Tonry, J. L., Thompson, R. I., et al., 2001. ApJ, 550, 503.
Kolb, E. W., Matarrese, S., Notari, A., Riotto, A., 2005. Phys. Rev. D, 71, 023524.
Lauer, T. R., Tonry, J. L., Postman, M., Ajhar, E. A., Holtzman, J. A., 1998. ApJ, 499, 577.
Liu, M. C., Graham, J. R., 2001. ApJ, 557, L31.
Madore, B. F., Freedman, W. L., Kennicutt, R. C., et al., 1996. Am. Astron. Soc. Meeting, 189, 108.04.
Moffat, J. W., Tatarski, D. C., 1995. ApJ, 453, 17.
Myers, S. T., Baker, J. E., Readhead, A. C. S., Leitch, E. M., Herbig, T., 1997. ApJ, 485, 1.
Parnovsky, S. L., Kudrya, Y. N., Karachentseva, V. E., Karachentsev, I. D., 2001. Astron. Lett., 27, 765.
Paturel, G., Lanoix, P., Teerikorpi, P., et al., 1998. A&A, 339, 671.
Perlmutter, S., Aldering, G., Goldhaber, G., et al., 1999. ApJ, 517, 565.
Raychaudhuri, A., 1955. Phys. Rev., 98, 1123.
Russ, H., Soffel, M. H., Kasai, M., Börner, G., 1997. Phys. Rev. D, 56, 2044.
Sakai, S., Mould, J. R., Hughes, S. M. G., et al., 2000. ApJ, 529, 698.
Sakai, S., 2001. Private communication.
Salaris, M., Cassisi, S., 1998. MNRAS, 298, 166.
Schlegel, D. J., Finkbeiner, D. P., Davis, M., 1998. ApJ, 500, 525.
Tegmark, M., de Oliveira-Costa, A., Hamilton, A. J., 2003. Phys. Rev. D, 68, 123523.
Thomsen, B., Baum, W. A., Hammergren, M., Worthey, G., 1997. ApJ, 483, 37.
Tully, R. B., Pierce, M. J., 2000. ApJ, 533, 744.
Whitmore, B. C., Sparks, W. B., Lucas, R. A., Macchetto, F. D., Biretta, J. A., 1995. ApJ, 454, L73.
Willick, J. A., 1999. ApJ, 522, 647.
Zaroubi, S., 2002. Preprint astro-ph/0206052 v2.
Zasov, A. V., Bizyaev, D. V., 1996. Astron. Lett., 22, 71.
Zehavi, I., Riess, A. G., Kirshner, R. P., Dekel, A., 1998. ApJ, 503, 483.
|
[] |
[
"Maximum principle for the finite element solution of time dependent anisotropic diffusion problems",
"Maximum principle for the finite element solution of time dependent anisotropic diffusion problems"
] |
[
"Xianping Li ",
"Weizhang Huang "
] |
[] |
[] |
Preservation of the maximum principle is studied for the combination of the linear finite element method in space and the θ-method in time for solving time dependent anisotropic diffusion problems. It is shown that the numerical solution satisfies a discrete maximum principle when all element angles of the mesh measured in the metric specified by the inverse of the diffusion matrix are nonobtuse and the time step size is bounded below and above by bounds proportional essentially to the square of the maximal element diameter. The lower bound requirement can be removed when a lumped mass matrix is used. In two dimensions, the mesh and time step conditions can be replaced by weaker Delaunay-type conditions. Numerical results are presented to verify the theoretical findings.
|
10.1002/num.21784
|
[
"https://arxiv.org/pdf/1209.5657v2.pdf"
] | 119,596,134 |
1209.5657
|
a8d0310295ac54a09efafa38ac16899433a20320
|
Maximum principle for the finite element solution of time dependent anisotropic diffusion problems
Xianping Li
Weizhang Huang
Maximum principle for the finite element solution of time dependent anisotropic diffusion problems
AMS 2010 Mathematics Subject Classification: 65M60, 65M50. Key words: finite element, time dependent, anisotropic diffusion, maximum principle.
Preservation of the maximum principle is studied for the combination of the linear finite element method in space and the θ-method in time for solving time dependent anisotropic diffusion problems. It is shown that the numerical solution satisfies a discrete maximum principle when all element angles of the mesh measured in the metric specified by the inverse of the diffusion matrix are nonobtuse and the time step size is bounded below and above by bounds proportional essentially to the square of the maximal element diameter. The lower bound requirement can be removed when a lumped mass matrix is used. In two dimensions, the mesh and time step conditions can be replaced by weaker Delaunay-type conditions. Numerical results are presented to verify the theoretical findings.
Introduction
We are concerned with the linear finite element solution of the initial-boundary value problem (IBVP) of a linear diffusion equation,
u_t − ∇ · (D ∇u) = f(x, t),   in Ω_T = Ω × (0, T],
u(x, t) = g(x, t),   on ∂Ω × [0, T],
u(x, 0) = u_0(x),   in Ω × {t = 0},   (1)
where Ω ⊂ R^d (d ≥ 1) is a connected polygonal or polyhedral domain, T > 0 is a fixed time, f(x, t), g(x, t) and u_0(x) are given functions, and D is the diffusion matrix. We assume that D = D(x) is a general symmetric and strictly positive definite matrix-valued function on Ω_T. It includes both isotropic and anisotropic diffusion as special examples. In the former case, D takes the form α(x)I, where I is the d × d identity matrix and α = α(x) is a scalar function. In the latter case, on the other hand, D has not-all-equal eigenvalues at least on a certain portion of Ω_T. Note that we consider only a time independent D in this work. In principle, the procedure used in this work can also apply to the time dependent situation. For that situation, however, different meshes are needed for different time steps and the numerical solution has to be interpolated between these meshes. Then, a conservative interpolation scheme must be employed in order for the underlying scheme to preserve the maximum principle, non-negativity, or monotonicity. The development of conservative interpolation schemes and their use for unstructured meshes is an interesting research topic in its own right (e.g., see [1]) and beyond the scope of the current study. To avoid this possible complexity, we restrict our attention to the time independent diffusion matrix in this work. Anisotropic diffusion problems arise from various areas of science and engineering including plasma physics [2,3,4,5,6,7], petroleum reservoir simulation [8,9,10,11,12], and image processing [13,14,15,16,17,18]. IBVP (1) is a prototype of those anisotropic diffusion problems. It satisfies the maximum principle
max_{(x,t)∈Ω_T} v(x, t) = max{ 0, max_{(x,t)∈∂Ω_T} v(x, t) },   ∀ v satisfying v_t − ∇ · (D∇v) ≤ 0 in Ω_T   (2)
where ∂Ω_T denotes the parabolic boundary (i.e., ∂Ω × {0 < t ≤ T} ∪ Ω × {t = 0}). When a standard numerical method such as a finite element or a finite difference method is used to solve this problem, the numerical solution may violate the maximum principle and contain spurious oscillations. It is of practical and theoretical importance to study when a numerical solution satisfies a discrete maximum principle (DMP) (cf. (40) in Sect. 3) as well as to develop DMP-preserving numerical schemes.
The research topic has attracted considerable attention from researchers since 1970's and success has been made for elliptic diffusion problems; e.g, see [19,20,21,22,23,24,25,26,27,28,29,30,12,31,32,33,34,35,36]. For example, it is shown in [19,21] that for isotropic diffusion problems, the requirement of all element angles of the mesh to be nonobtuse is sufficient for the linear finite element approximation to satisfy DMP. In two dimensions, this nonobtuse angle condition can be replaced by a weaker, so-called Delaunay condition [34] which requires the sum of any pair of angles facing a common interior edge to be less than or equal to π. For anisotropic diffusion problems, Drǎgǎnescu et al. [22] show that the nonobtuse angle condition fails to guarantee DMP satisfaction for a linear finite element approximation. Various techniques have been proposed to reduce spurious oscillations, including local matrix modification [26,29], mesh optimization [12], and mesh adaptation [28]. An anisotropic nonobtuse angle condition, which uses element angles measured in the metric specified by D −1 instead of angles measured in the Euclidean metric (as in the nonobtuse angle condition), is developed in [27] to guarantee DMP satisfaction for anisotropic diffusion problems. A weaker, Delaunay-type mesh condition is obtained in [23] for two-dimensional problems. The results of [23,27] are extended in [30] to problems containing convection and reaction terms.
On the other hand, less progress has been made for time-dependent problems; e.g., see [37,38,39,40,41,42,43,44,45,46,47,6,48,49,50]. Most of the existing research has focused on isotropic diffusion problems. For example, Fujii [44] considers the heat equation and shows that the time step size should be bounded from below and above for a linear finite element approximation to satisfy DMP when the mesh satisfies the nonobtuse angle condition. He also shows that the lower bound requirement can be removed when a lumped mass matrix is used. The study is extended in [38] to a more general isotropic diffusion problem with a reaction term. Thomée and Wahlbin [48] consider general anisotropic diffusion problems and show that a semi-discrete conventional finite element solution does not satisfy DMP in general. Slope limiters are employed in [6] to improve DMP satisfaction for anisotropic thermal conduction in magnetized plasmas. Nonlinear finite volume methods are developed by Le Potier [51,52] for time dependent problems.
The objective of this paper is to investigate conditions for the finite element approximation of IBVP (1) to satisfy DMP for a general diffusion matrix function. We are particularly interested in lower and upper bounds on the time step size when the θ-method and the conventional linear finite element method are used for temporal and spatial discretization, respectively. Two types of simplicial mesh are considered, meshes satisfying the anisotropic nonobtuse angle condition [27] or a Delaunay-type mesh condition [23]. It is known that those meshes lead to DMP-satisfaction linear finite element approximations to steady-state anisotropic diffusion problems. A lumped mass matrix is also studied. The results obtained in this paper can be viewed as a generalization of Fujii's [44] to anisotropic diffusion problems although such generalization is not trivial.
The outline of this paper is as follows. In Sect. 2, the linear finite element solution of IBVP (1) is described. Sect. 3 is devoted to the development of DMP-satisfaction conditions. Numerical examples are presented in Sect. 4 to verify the theoretical findings. Finally, Sect. 5 contains conclusions.
Linear finite element formulation
Consider the linear finite element solution of IBVP (1). Assume that an affine family of simplicial triangulations {T h } is given for the physical domain Ω. Define
U g = {v ∈ H 1 (Ω) | v| ∂Ω = g}.
Denote the linear finite element space associated with mesh T h by U h g . A linear finite element solution
u_h(t) ∈ U_g^h for t ∈ (0, T] to IBVP (1) is defined by
∫_Ω (∂u_h/∂t) v_h dx + ∫_Ω (∇v_h)^T D ∇u_h dx = ∫_Ω f v_h dx,   ∀ v_h ∈ U_0^h,   (3)
where U_0^h = U_g^h with g = 0.
This equation can be rewritten as
∑_{K∈T_h} ∫_K (∂u_h/∂t) v_h dx + ∑_{K∈T_h} |K| (∇v_h)^T D_K ∇u_h = ∑_{K∈T_h} ∫_K f v_h dx,   ∀ v_h ∈ U_0^h,   (4)
where |K| is the volume of element K and
D_K = (1/|K|) ∫_K D dx.
Equation (4) can be expressed in a matrix form. Denote the numbers of the elements, vertices, and interior vertices of T h by N e , N v , and N vi , respectively. Assume that the vertices are ordered in such a way that the first N vi vertices are the interior vertices. Then U h 0 and u h can be expressed as
U_0^h = span{φ_1, · · · , φ_{N_vi}},   u_h = ∑_{j=1}^{N_vi} u_j φ_j + ∑_{j=N_vi+1}^{N_v} u_j φ_j,   (5)
where φ j is the linear basis function associated with the j th vertex, a j . We approximate the boundary and initial conditions in (1) as
u j (t) = g j ≡ g(a j , t), j = N vi + 1, ..., N v(6)
u j (0) = u 0 (a j ), j = 1, ..., N v .
Substituting (5) into (4), taking v h = φ i (i = 1, ..., N vi ), and combining the resulting equations with (6), we obtain the linear algebraic system
M du/dt + A u = f,   (8)
where u = (u 1 , ..., u N vi , u N vi +1 , ..., u Nv ) T , f = (f 1 , ..., f N vi , g N vi +1 , ..., g Nv ) T ,
M = [ M_11  M_12 ; 0  0 ],   A = [ A_11  A_12 ; 0  I ],   (9)
and I is the identity matrix of size (N v − N vi ). The entries of mass matrix M , stiffness matrix A, and right-hand-side vector f are given by
m_ij = ∑_{K∈T_h} ∫_K φ_j φ_i dx,   i = 1, ..., N_vi,  j = 1, ..., N_v,   (10)
a_ij = ∑_{K∈T_h} |K| (∇φ_i)^T D_K ∇φ_j,   i = 1, ..., N_vi,  j = 1, ..., N_v,   (11)
f_i = ∑_{K∈T_h} ∫_K f φ_i dx,   i = 1, ..., N_vi.   (12)
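For readers who wish to experiment with (10)-(12), the following Python/NumPy sketch assembles the element contributions for linear (P1) elements on a d-simplex using the element-averaged D_K. It is only an illustration under the stated assumptions; the function and variable names are ours and do not come from the paper or from any established library.

import math
import numpy as np

def element_matrices(verts, D_K):
    """Element mass and stiffness matrices for P1 elements, cf. (10)-(11).

    verts: (d+1, d) array of vertex coordinates of element K.
    D_K:   (d, d) element-averaged diffusion matrix.
    Returns (M_K, A_K), both of shape (d+1, d+1).
    """
    d = verts.shape[1]
    # Coefficients of the linear basis functions: phi_i(x) = c0_i + grad_i . x,
    # obtained from phi_i(a_j) = delta_ij.
    T = np.hstack([np.ones((d + 1, 1)), verts])          # (d+1, d+1)
    C = np.linalg.inv(T)                                 # column i: coeffs of phi_i
    grads = C[1:, :].T                                   # row i: grad(phi_i)
    volume = abs(np.linalg.det(T)) / math.factorial(d)   # |K|
    # Stiffness entries a_ij^K = |K| (grad phi_i)^T D_K grad phi_j, cf. (11).
    A_K = volume * grads @ D_K @ grads.T
    # Mass entries int_K phi_i phi_j dx = |K| (1 + delta_ij)/((d+1)(d+2)), cf. (29).
    M_K = volume * (np.ones((d + 1, d + 1)) + np.eye(d + 1)) / ((d + 1) * (d + 2))
    return M_K, A_K

The global matrices M and A in (9) are then obtained by summing these element contributions into the rows associated with interior vertices, exactly as the sums over K ∈ T_h in (10)-(11) indicate.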
We use the θ-method with a constant time step ∆t for time integration. Let u n and u n+1 be the computed solutions at the current and next time steps, respectively. Applying the θ-method to the first N vi equations, we get
[M_11  M_12] (u^{n+1} − u^n)/∆t + [A_11  A_12] ((1 − θ)u^n + θu^{n+1}) = f̃^{n+θ},   (13)
where f̃^{n+θ} = [f_1(t_n + θ∆t), ..., f_{N_vi}(t_n + θ∆t)]^T.
For the last N v − N vi equations (corresponding to the boundary condition), we use
u n+1 j = g(a j , t n+1 ), j = N vi + 1, ..., N v .(14)
Combining (13) and (14), we have
Bu n+1 = Cu n + ∆t f n+θ ,(15)
where
B = [ M_11  M_12 ; 0  I ] + θ∆t [ A_11  A_12 ; 0  0 ],   (16)
C = [ M_11  M_12 ; 0  0 ] − (1 − θ)∆t [ A_11  A_12 ; 0  0 ],   (17)
f^{n+θ} = [ f_1(t_n + θ∆t), · · · , f_{N_vi}(t_n + θ∆t), (1/∆t) g(a_{N_vi+1}, t_{n+1}), · · · , (1/∆t) g(a_{N_v}, t_{n+1}) ]^T,   (18)
u^0 = u_0 = (u_0(a_1), ..., u_0(a_{N_v}))^T.   (19)
It is worth noting that the right-hand side vector, f^{n+θ}, is formed from the values of the right-hand side function f(x, t) and the boundary function g(x, t). We are interested in conditions under which the scheme satisfies DMP.
Figure 1: Sketch of coordinate transformations from K̂ to K and to K̃. Here, K̂ is the reference element and F_K is the affine mapping from K̂ to element K.
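To make the time stepping in (15)-(19) concrete, here is a schematic Python/NumPy sketch of one θ-step. It is our own illustration, not code from the paper; the matrices M11, M12, A11, A12 and the load/boundary data are assumed to have been assembled elsewhere, and all names are ours.

import numpy as np

def theta_step(M11, M12, A11, A12, u_int, u_bdy_old, u_bdy_new, f_tilde, dt, theta):
    """One step of scheme (15): returns the interior unknowns at t_{n+1}.

    u_int:     interior values u^n (length N_vi).
    u_bdy_old: boundary values g(a_j, t_n).
    u_bdy_new: boundary values g(a_j, t_{n+1}) imposed by (14).
    f_tilde:   interior load evaluated at t_n + theta*dt, cf. (13).
    """
    # Interior block of B in (16).
    LHS = M11 + theta * dt * A11
    # Right-hand side from (13), with the known boundary values at t_{n+1}
    # moved to the right.
    rhs = (M11 @ u_int
           + M12 @ u_bdy_old
           - (1.0 - theta) * dt * (A11 @ u_int + A12 @ u_bdy_old)
           + dt * f_tilde
           - (M12 + theta * dt * A12) @ u_bdy_new)
    return np.linalg.solve(LHS, rhs)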
Conditions for DMP satisfaction
In this section we develop the conditions (on the mesh and time step size) under which scheme (15) satisfies DMP. The main tool is a result from [33] which states that the solution of a linear algebraic system satisfies DMP when the corresponding coefficient matrix is an M -matrix and has nonnegative row sums. We first discuss the general dimensional case along with the anisotropic nonobtuse angle condition developed in [27] and then study the two dimensional case with the Delaunay-type mesh condition developed in [23]. We introduce some notation. Consider a generic element K ∈ T h and denote its vertices by a K 1 , a K 2 , ..., a K d+1 . Denote the face opposite to vertex a K i (i.e., the face not having a K i as its vertex) by S K i and its unit inward (pointing to a K i ) normal by n K i . The distance (or height) from vertex
a_i^K to face S_i^K is denoted by h_i^K. Define the q-vectors as
q_i^K = n_i^K / h_i^K,   i = 1, ..., d + 1.   (20)
Obviously, we have h_i^K = 1/‖q_i^K‖. We now consider the mapping D_K^{−1/2}: K → K̃; see Fig. 1. The q-vectors and heights associated with K̃ are denoted by q̃_i^K and h̃_i^K. We have the relations
ã_i^K = D_K^{−1/2} a_i^K,   S̃_i^K = D_K^{−1/2} S_i^K,   |K̃| = det(D_K)^{−1/2} |K|,   q̃_i^K = D_K^{1/2} q_i^K,   h̃_i^K = 1/‖q̃_i^K‖.   (21)
The dihedral angle between surfaces S̃_i^K and S̃_j^K (i ≠ j) is denoted by α̃_ij^K. It can be expressed as
cos(α̃_ij^K) = − (q̃_i^K)^T q̃_j^K / (‖q̃_i^K‖ · ‖q̃_j^K‖) = − (q_i^K)^T D_K q_j^K / (‖q_i^K‖_{D_K} ‖q_j^K‖_{D_K}),   i ≠ j,   (22)
where ‖q_i^K‖_{D_K} = ((q_i^K)^T D_K q_i^K)^{1/2}. Note that α̃_ij^K can be considered as a dihedral angle of K measured in the metric specified by D_K^{−1}.
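The quantities in (20)-(22) are straightforward to compute element by element. The following Python/NumPy sketch (an illustration with our own names and under the assumption that, for linear elements, q_i^K coincides with ∇φ_i) returns the metric heights h̃_i^K and the cosines of the metric dihedral angles, which can then be used to test the anisotropic nonobtuse angle condition (25).

import numpy as np

def metric_angle_quantities(verts, D_K):
    """Metric heights and dihedral-angle cosines of one simplex, cf. (20)-(22).

    verts: (d+1, d) vertex coordinates of simplex K.
    D_K:   (d, d) element-averaged diffusion matrix.
    Returns (h_tilde, cos_alpha) with cos_alpha[i, j] = cos(alpha~_ij^K), i != j.
    """
    d = verts.shape[1]
    T = np.hstack([np.ones((d + 1, 1)), verts])
    grads = np.linalg.inv(T)[1:, :].T            # grad(phi_i); for P1 this equals q_i^K
    nrm = lambda v: np.sqrt(v @ D_K @ v)         # ||.||_{D_K} as in (22)
    h_tilde = np.array([1.0 / nrm(grads[i]) for i in range(d + 1)])
    cos_alpha = np.full((d + 1, d + 1), np.nan)
    for i in range(d + 1):
        for j in range(d + 1):
            if i != j:
                cos_alpha[i, j] = -(grads[i] @ D_K @ grads[j]) / (nrm(grads[i]) * nrm(grads[j]))
    return h_tilde, cos_alpha

# Condition (25) holds on K when all off-diagonal cosines are nonnegative:
# np.all(cos_alpha[~np.isnan(cos_alpha)] >= 0).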
General dimensional case: d ≥ 1
We now are ready for the development of the DMP satisfaction conditions for scheme (15) for the general dimensional case. We first have the following four lemmas.
Lemma 3.1. For any element K ∈ T h and i, j = 1, ..., d + 1,
(∇φ_i)^T D_K ∇φ_j = { − cos(α̃_ij^K) / (h̃_i^K h̃_j^K),  for i ≠ j;   1/(h̃_i^K)^2,  for i = j },   (23)
where φ i and φ j are the linear basis functions associated with the vertices a K i and a K j , respectively. In two dimensions (d = 2),
|K| (∇φ_i)^T D_K ∇φ_j = − (√det(D_K) / 2) cot(α̃_ij^K),   i ≠ j,  i, j = 1, 2, 3.   (24)
Proof. See [23,30].
Lemma 3.2.
The stiffness matrix A defined in (9) and (11) is an M -matrix and has nonnegative row sums if the mesh satisfies the anisotropic nonobtuse angle condition
0 < α̃_ij^K ≤ π/2,   ∀ i, j = 1, ..., d + 1,  i ≠ j,   ∀ K ∈ T_h.   (25)
Proof. See [27, Theorem 2.1 and its proof].
Lemma 3.3. Matrix B defined in (16) (0 < θ ≤ 1)
is an M -matrix if the mesh satisfies (25) and the time step size satisfies
∆t ≥ (1/(θ(d + 1)(d + 2))) max_{K∈T_h} max_{i,j=1,...,d+1, i≠j} [ h_i^K h_j^K / (cos(α̃_ij^K) λ_min(D_K)) ].   (26)
Proof. We first show that M + θ∆tA is a Z-matrix, i.e., it has positive diagonal and nonpositive off-diagonal entries. From (9) we only need to show
m_ii + θ∆t a_ii > 0,   i = 1, ..., N_vi,   (27)
m_ij + θ∆t a_ij ≤ 0,   ∀ i ≠ j,  i = 1, ..., N_vi,  j = 1, ..., N_v.   (28)
Let ω_i be the patch of the elements containing vertex a_i. Notice that ∇φ_i = 0 when K ∉ ω_i. Recall from [53] that
∫_K φ_i φ_j dx = |K| / ((d + 1)(d + 2)),   ∫_K φ_i^2 dx = 2|K| / ((d + 1)(d + 2)).   (29)
Then (27) follows immediately from (10) and Lemma 3.2. For (28), from (10), (11), and (29) we have
m_ij + θ∆t a_ij = ∑_{K∈T_h} ∫_K φ_j φ_i dx + θ∆t ∑_{K∈T_h} |K| (∇φ_i)^T D_K ∇φ_j
= ∑_{K∈ω_i∩ω_j} [ ∫_K φ_j φ_i dx + θ∆t |K| (∇φ_i)^T D_K ∇φ_j ]
= ∑_{K∈ω_i∩ω_j} [ ∫_K φ_j φ_i dx + θ∆t |K| (∇φ_{i_K})^T D_K ∇φ_{j_K} ],   (30)
where i K and j K denote the local indices (on element K) of vertices a i and a j . From (29) and Lemma 3.1, we get
m_ij + θ∆t a_ij = ∑_{K∈ω_i∩ω_j} |K| [ 1/((d + 1)(d + 2)) − θ∆t cos(α̃_{i_K j_K}^K) / (h̃_{i_K}^K h̃_{j_K}^K) ].   (31)
The right-hand side term is nonpositive if
∆t ≥ (1/(θ(d + 1)(d + 2))) max_{K∈T_h} max_{i,j=1,...,d+1, i≠j} [ h̃_i^K h̃_j^K / cos(α̃_ij^K) ].   (32)
Moreover, (21) implies
h̃_i^K = 1/‖q̃_i^K‖ = 1/((q_i^K)^T D_K q_i^K)^{1/2}. Thus, we have
h_i^K / √λ_max(D_K) ≤ h̃_i^K ≤ h_i^K / √λ_min(D_K).   (33)
From this, we can see that (26) implies (32). Hence, we have shown that B is a Z-matrix when (26) holds.
To show B is an M -matrix, we recall from (16) that
B = [ M_11 + θ∆t A_11   M_12 + θ∆t A_12 ; 0   I ].
The fact that B is a Z-matrix means that M 11 + θ∆tA 11 is also a Z-matrix and M 12 + θ∆tA 12 ≤ 0. It is easy to show that M 11 + θ∆tA 11 is positive definite, which in turn implies M 11 + θ∆tA 11 is an M -matrix. Notice
B^{−1} = [ (M_11 + θ∆t A_11)^{−1}   −(M_11 + θ∆t A_11)^{−1}(M_12 + θ∆t A_12) ; 0   I ].
This means B −1 ≥ 0 and hence B is an M -matrix.
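In practice one can verify the M-matrix property of B numerically on a given mesh. The sketch below (our own illustration; the helper name and tolerance are assumptions, and it uses a dense inverse purely for demonstration) checks the Z-matrix sign pattern and the nonnegativity of the inverse.

import numpy as np

def is_m_matrix(B, tol=1e-12):
    """Crude numerical check that B is an M-matrix: Z-matrix sign pattern
    plus a (numerically) nonnegative inverse."""
    off = B - np.diag(np.diag(B))
    z_pattern = np.all(np.diag(B) > 0) and np.all(off <= tol)
    try:
        inv_nonneg = np.all(np.linalg.inv(B) >= -tol)
    except np.linalg.LinAlgError:
        return False
    return bool(z_pattern and inv_nonneg)

# Example use: is_m_matrix(M11 + theta * dt * A11); Lemma 3.3 says this holds
# whenever the mesh satisfies (25) and dt satisfies (26).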
Lemma 3.4. Matrix C defined in (17) (0 ≤ θ ≤ 1) is nonnegative if the mesh satisfies (25) and the time step size satisfies
∆t ≤ (2/((1 − θ)(d + 1)(d + 2))) min_{K∈T_h} min_{i=1,...,d+1} [ (h_i^K)^2 / λ_max(D_K) ].   (34)
Proof. The off-diagonal entries (i ≠ j, i = 1, ..., N_vi, j = 1, ..., N_v), m_ij − (1 − θ)∆t a_ij, are nonnegative since a_ij ≤ 0 under condition (25) (cf. Lemma 3.2) and m_ij ≥ 0 from definition (10). To see if the diagonal entries are also nonnegative, from (10), (11), and (29) we have
m_ii − (1 − θ)∆t a_ii = ∑_{K∈ω_i} |K| [ 2/((d + 1)(d + 2)) − (1 − θ)∆t / (h̃_{i_K}^K)^2 ].   (35)
The right-hand side term is nonnegative if
∆t ≤ (2/((1 − θ)(d + 1)(d + 2))) min_{K∈T_h} min_{i=1,...,d+1} (h̃_i^K)^2.
From (33) we see that this condition holds when (34) is satisfied.
We are now in a position to prove our first main theoretical result.
Theorem 3.1. Scheme (15) satisfies a discrete maximum principle if the mesh satisfies the anisotropic nonobtuse angle condition (25) and the time step size satisfies (26) and (34), i.e.,
(1/(θ(d + 1)(d + 2))) max_{K∈T_h} max_{i,j=1,...,d+1, i≠j} [ h_i^K h_j^K / (cos(α̃_ij^K) λ_min(D_K)) ] ≤ ∆t ≤ (2/((1 − θ)(d + 1)(d + 2))) min_{K∈T_h} min_{i=1,...,d+1} [ (h_i^K)^2 / λ_max(D_K) ].   (36)
Proof. Scheme (15) can be expressed as
AU = F,(37)
where
A = [ I                ]      U = [ u^0 ]      F = [ u_0           ]
    [ −C  B            ]          [ u^1 ]          [ ∆t f^θ        ]
    [     −C  B        ]          [ u^2 ]          [ ∆t f^{1+θ}    ]
    [         ⋱   ⋱    ]          [  ⋮  ]          [      ⋮        ]
    [           −C  B  ],         [ u^N ],         [ ∆t f^{N−1+θ}  ],   (38)
and B and C are defined in (17). Scheme (37) satisfies a DMP if coefficient matrix A is an M -matrix and has nonnegative row sums. From Lemmas 3.3 and 3.4 we know that B is an M -matrix and C ≥ 0. As a result, A is a Z-matrix. Moreover, we can show A −1 ≥ 0. Indeed, from (37) we know that u 0 = u 0 and thus if u 0 ≥ 0, we have u 0 ≥ 0. Next, from the scheme we have u 1 = B −1 ∆tf θ + B −1 Cu 0 . Recall that C ≥ 0 and B is an M -matrix and thus B −1 ≥ 0. Combining these results, we can conclude that f θ ≥ 0 implies u 1 ≥ 0. Similarly, we can show u n ≥ 0 if f n−1+θ ≥ 0, n = 2, ..., N . Thus, we have shown that F ≥ 0 implies U ≥ 0. This implies A −1 ≥ 0 and A is an M -matrix. We notice that the sum of each of the second to the last (block) rows is
B − C = [ ∆t A_11   ∆t A_12 ; 0   I ].
Since the stiffness matrix A has nonnegative row sums (cf. Lemma 3.2), the coefficient matrix A in (37) also has nonnegative row sums. Thus, we have proven that this coefficient matrix is an M-matrix and has nonnegative row sums. From [33, Theorem 1], we conclude that the solution of (37) satisfies
max_{i=1,...,(N+1)N_v} U_i = max{ 0, max_{i∈S(F^+)} U_i },   (39)
where S(F^+) is the set of the indices with F_i > 0. When f(x, t) ≤ 0, from (18) we know that F_i > 0 holds only for those indices corresponding to the boundary points on ∂Ω_T. Moreover, from (16), (17), and (38) we see that at the boundary points, U_i is equal to either the boundary function g or the initial function u_0. Since a piecewise linear function attains its maximum value at vertices, (39) implies that when f(x, t) ≤ 0, the solution of (15) satisfies a DMP
max_{n=0,...,N} max_{x∈Ω} U^n(x) = max{ 0, max_{n=1,...,N} max_{x∈∂Ω} U^n(x), max_{x∈Ω} U^0(x) },   (40)
where
U^n(x) = ∑_{j=1}^{N_vi} u_j^n φ_j(x) + ∑_{j=N_vi+1}^{N_v} u_j^n φ_j(x),   n = 0, ..., N.
Hence, we have proven that scheme (15) satisfies DMP.
Remark 3.1. Consider a special case with D = αI, where α is a positive constant. It is known (e.g., see Emert and Nelson [54]) that the height (or altitude), volume, and cosine of the dihedral angles of a regular d-dimensional simplex K are given by
h_K = e_K √((d + 1)/(2d)),   |K| = (√(d + 1) / (d! (√2)^d)) e_K^d,   cos(α_ij^K) = 1/d,   (41)
where e K is the edge length. Thus, if the elements of T h are all regular simplexes, (36) reduces to
max_{K∈T_h} e_K^2 / (2θα(d + 2)) ≤ ∆t ≤ min_{K∈T_h} e_K^2 / ((1 − θ)αd(d + 2)).   (42)
If further the mesh is uniform (and thus all mesh elements have the same volume and same edge length (e)), the above condition becomes
e^2 / (2θα(d + 2)) ≤ ∆t ≤ e^2 / ((1 − θ)αd(d + 2)),   (43)
which is exactly the result of Theorem 20 of [38] where the maximum principle of linear finite element approximation of isotropic diffusion problems is studied. Interestingly, we can rewrite (43) in terms of the number of the elements, N_e. Indeed, since the mesh is uniform, the elements have a constant volume |Ω|/N_e. From (41), we have
e = √2 N_e^{−1/d} ( |Ω| d! / √(d + 1) )^{1/d}.
Inserting this into (43), we get
( N_e^{−2/d} / (θα(d + 2)) ) ( |Ω| d! / √(d + 1) )^{2/d} ≤ ∆t ≤ ( 2 N_e^{−2/d} / ((1 − θ)αd(d + 2)) ) ( |Ω| d! / √(d + 1) )^{2/d}.   (44)
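The window (43) is easy to evaluate in practice. The short Python sketch below (our own illustration; the function name is an assumption) returns the admissible interval for a uniform mesh of regular d-simplices with isotropic diffusion D = αI.

def dt_window_uniform(e, alpha, d, theta):
    """Admissible time step interval (43): e^2/(2*theta*alpha*(d+2)) <= dt
    <= e^2/((1-theta)*alpha*d*(d+2))."""
    lower = e**2 / (2.0 * theta * alpha * (d + 2)) if theta > 0 else 0.0
    upper = float("inf") if theta >= 1 else e**2 / ((1.0 - theta) * alpha * d * (d + 2))
    return lower, upper

For instance, with d = 2, α = 1, e = 0.1 and the Crank-Nicolson choice θ = 0.5, both bounds equal 0.0025, so the window collapses to a single admissible step size; for the implicit Euler choice θ = 1 only the lower bound remains.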
Remark 3.2. Another special case is that the mesh is uniform in the metric specified by D −1 . It is known [55] that such a mesh satisfies the so-called alignment and equidistribution conditions
(1/d) tr( F_K^T D_K^{−1} F_K ) / det( F_K^T D_K^{−1} F_K )^{1/d} = 1,   ∀ K ∈ T_h,   (45)
|K| √det(D_K^{−1}) = σ_h / N_e,   ∀ K ∈ T_h,   (46)
where tr(·) and det(·) denote the trace and determinant of a matrix, F_K is the Jacobian matrix of the affine mapping F_K from the reference element K̂ to element K, and
σ_h = ∑_{K∈T_h} |K| √det(D_K^{−1}).   (47)
Geometrically, the alignment condition (45) implies that the element K̃ in Fig. 1 is a regular simplex while the equidistribution condition indicates that all elements have a constant volume σ_h/N_e in the metric D^{−1}.
For such a mesh, it is more suitable to replace (36) by
(1/(θ(d + 1)(d + 2))) max_{K∈T_h} max_{i,j=1,...,d+1, i≠j} [ h̃_i^K h̃_j^K / cos(α̃_ij^K) ] ≤ ∆t ≤ (2/((1 − θ)(d + 1)(d + 2))) min_{K∈T_h} min_{i=1,...,d+1} (h̃_i^K)^2.   (48)
Using the same procedure as in Remark 3.1 and noticing that K̃ is regular, we can get
( N_e^{−2/d} / (θ(d + 2)) ) ( σ_h d! / √(d + 1) )^{2/d} ≤ ∆t ≤ ( 2 N_e^{−2/d} / ((1 − θ)d(d + 2)) ) ( σ_h d! / √(d + 1) )^{2/d}.   (49)
Notice that the difference between (44) and (49) lies in that the factor, |Ω|/α, has been replaced by the volume of Ω in the metric D −1 , σ h .
Remark 3.3. It is known [27] that a mesh, generated as a uniform mesh in the metric specified by M K = θ K D −1 K for all K ∈ T h , where θ K is an arbitrary piecewise constant, scalar function defined on Ω, satisfies the anisotropic nonobtuse angle condition (25). The reader is referred to [27] for more information on the generation of such meshes.
The lower bound requirement on ∆t in (36) can be avoided by using a lumped mass matrix. In this case, scheme (15) is modified into The following theorem can be proven in a similar manner as for Theorem 3.1.
( [ M̄_11  0 ; 0  I ] + θ∆t [ A_11  A_12 ; 0  0 ] ) u^{n+1} = ( [ M̄_11  0 ; 0  0 ] − (1 − θ)∆t [ A_11  A_12 ; 0  0 ] ) u^n + ∆t f^{n+θ},   (50)
where M̄_11 is the lumped mass matrix with diagonal entries m̄_ii = ∑_{j=1}^{N_v} m_ij, i = 1, ..., N_vi.
Theorem 3.2. Scheme (50) with a lumped mass matrix satisfies a discrete maximum principle if the mesh satisfies the anisotropic nonobtuse angle condition (25) and the time step size satisfies
∆t ≤ (1/((1 − θ)(d + 1))) min_{K∈T_h} min_{i=1,...,d+1} [ (h_i^K)^2 / λ_max(D_K) ].   (51)
Remark 3.4. If the mesh is uniform in the metric specified by D −1 , the condition (51) reduces to
∆t ≤ ( N_e^{−2/d} / ((1 − θ)d) ) ( σ_h d! / √(d + 1) )^{2/d},   (52)
where σ h is defined in (47).
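The lumping itself is a one-line operation. The sketch below (our own illustration, with assumed names) builds the diagonal matrix M̄_11 used in (50) from the row sums of the consistent mass blocks.

import numpy as np

def lump_mass(M11, M12):
    """Row-sum lumping: the lumped diagonal entry for an interior vertex is
    the full row sum over interior and boundary columns, as in (50)."""
    row_sums = M11.sum(axis=1) + M12.sum(axis=1)
    return np.diag(row_sums)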
Two dimensional case: d = 2
The results in the previous subsection are valid for all dimensions. However, it is known [23] that a Delaunay-type mesh condition, which is weaker than the nonobtuse angle condition (25), is sufficient for a linear finite element approximation to satisfy DMP in two dimensions for steady-state problems.
It is interesting to know if this is also true for time-dependent problems.
Consider an arbitrary interior edge e_ij. Denote the two vertices of the edge by a_i and a_j and the two elements sharing this common edge by K and K'. Let the local indices of the vertices on K be i_K and j_K. The angle of K opposite e_ij is denoted by α_{i_K,j_K}^K (when measured in the Euclidean metric) and by α̃_{i_K,j_K}^K when measured in the metric D_K^{−1}. Similarly, we have α_{i_{K'},j_{K'}}^{K'} and α̃_{i_{K'},j_{K'}}^{K'}.
Lemma 3.5. The stiffness matrix A defined in (9) and (11) is an M-matrix and has nonnegative row sums if the mesh satisfies the Delaunay-type mesh condition
(1/2) [ α̃_{i_K,j_K}^K + arccot( √(det(D_{K'})/det(D_K)) cot(α̃_{i_{K'},j_{K'}}^{K'}) ) + α̃_{i_{K'},j_{K'}}^{K'} + arccot( √(det(D_K)/det(D_{K'})) cot(α̃_{i_K,j_K}^K) ) ] ≤ π,   ∀ interior edges e_ij.   (53)
Lemma 3.6. Matrix B defined in (16) (0 < θ ≤ 1) is an M-matrix if the mesh satisfies (53) and the time step size satisfies
∆t ≥ (1/(6θ)) max_{e_ij} [ (|K| + |K'|) / ( √det(D_K) cot(α̃_{i_K,j_K}^K) + √det(D_{K'}) cot(α̃_{i_{K'},j_{K'}}^{K'}) ) ],   (54)
where the maximum is taken over all interior edges and K and K' are the two elements sharing the common edge e_ij.
Proof. Inequality (54) follows from (24), (29), and (30).
Lemma 3.7. Matrix C defined in (17) (0 < θ ≤ 1) is nonnegative if the mesh satisfies (53) and the time step size satisfies
∆t ≤ (1/(6(1 − θ))) min_i [ |ω_i| / ∑_{K∈ω_i} ( |K| λ_max(D_K) (h_{i_K}^K)^{−2} ) ],   (55)
where the minimum is taken over all interior vertices and ω i is the patch of the elements containing a i as its vertex.
Proof. The proof is similar to that of Lemma 3.4. Indeed, Lemma 3.5 implies that the off-diagonal entries of C are nonnegative under condition (53). For diagonal entries, from (35) we get
m_ii − (1 − θ)∆t a_ii = |ω_i|/6 − (1 − θ)∆t ∑_{K∈ω_i} |K| / (h̃_{i_K}^K)^2.
From (33), we can see that the right-side term of the above equation is nonnegative when (55) holds.
Using the above results we can prove the following theorems in a similar manner as for Theorems 3.1 and 3.2. Theorem 3.3. In two dimensions, scheme (15) satisfies a discrete maximum principle if the mesh satisfies the Delaunay-type mesh condition (53) and the time step size satisfies (54) and (55), i.e.,
(1/(6θ)) max_{e_ij} [ (|K| + |K'|) / ( √det(D_K) cot(α̃_{i_K,j_K}^K) + √det(D_{K'}) cot(α̃_{i_{K'},j_{K'}}^{K'}) ) ] ≤ ∆t ≤ (1/(6(1 − θ))) min_i [ |ω_i| / ∑_{K∈ω_i} ( |K| λ_max(D_K) (h_{i_K}^K)^{−2} ) ],   (56)
where the maximum is taken over all interior edges, K and K' are the two elements sharing the common edge e_ij, and the minimum is taken over all interior vertices and ω_i is the patch of the elements containing a_i as its vertex.
Theorem 3.4. In two dimensions, scheme (50) with a lumped mass matrix satisfies a discrete maximum principle if the mesh satisfies the Delaunay-type mesh condition (53) and the time step size satisfies
∆t ≤ (1/(3(1 − θ))) min_i [ |ω_i| / ∑_{K∈ω_i} ( |K| λ_max(D_K) (h_{i_K}^K)^{−2} ) ].   (57)
Remark 3.5. Conditions (56) and (57) (for d = 2) reduce to (49) and (52), respectively for a uniform mesh in the metric specified by D −1 but are weaker than conditions (36) and (51) for general meshes.
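Checking the Delaunay-type condition (53) over a mesh amounts to one inequality per interior edge. The Python sketch below mirrors (53) as written above; it is our own illustration (names and tolerance are assumptions), taking the metric angles and determinants as inputs.

import numpy as np

def delaunay_type_ok(alpha_K, alpha_Kp, detD_K, detD_Kp, tol=1e-12):
    """Check condition (53) for one interior edge.

    alpha_K, alpha_Kp: metric angles (radians) opposite the edge in the two
                       adjacent elements K and K'.
    detD_K, detD_Kp:   det(D_K) and det(D_K').
    """
    def arccot(x):
        # principal value in (0, pi)
        return np.pi / 2.0 - np.arctan(x)

    r = np.sqrt(detD_Kp / detD_K)
    lhs = 0.5 * (alpha_K + arccot(r / np.tan(alpha_Kp))
                 + alpha_Kp + arccot(1.0 / (r * np.tan(alpha_K))))
    return lhs <= np.pi + tol

When D_K = D_K' the two arccot terms reduce to the opposite angles themselves and the test collapses to the classical Delaunay condition α̃^K + α̃^{K'} ≤ π.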
Numerical results
In this section we present numerical results obtained for three examples in two dimensions to demonstrate the significance of both mesh conditions (25) and (53) and time step conditions (56) and (57) for DMP satisfaction. Three types of mesh are considered. The first is denoted by Mesh45, where the elements are isosceles right triangles with longest sides in the northeast direction. The second one is denoted by Mesh135, where the elements are isosceles right triangles with longest sides in the northwest direction. Examples of Mesh45 and Mesh135 are shown in Figs. 2(a) and (b). The third type of mesh, denoted by M_DMP, is a uniform mesh in the metric M_DMP = D^{−1}, which guarantees satisfaction of mesh condition (25) (cf. Remark 3.3).
The implicit Euler method (corresponding to θ = 1 in (15)) is used in our computation. For this method, conditions (36), (51), (56), and (57) place no constraint on the upper bound of ∆t. For this reason, we consider only the lower bound for the time step size. The lower bound in (36) (related to the anisotropic nonobtuse angle condition) is denoted by ∆t_Ani and that in (56) (related to the Delaunay-type mesh condition) by ∆t_Del. Unless stated otherwise, the presented results are obtained after 10 steps of time integration.
Example 4.1. The first example is in the form of IBVP (1) with f ≡ 0, Ω = [0, 1]^2 \ [0.4, 0.6]^2, g = 0 on Γ_out, and g = 4 on Γ_in, where Γ_out and Γ_in are the outer and inner boundaries of Ω, respectively; see Fig. 3(a). The initial solution u_0(x, y) is zero on Ω \ [0.2, 0.8]^2 and increases linearly from Γ_mid to Γ_in, where Γ_mid is the boundary of the subdomain [0.2, 0.8]^2; see Fig. 3. The diffusion matrix is taken as a constant anisotropic matrix which has eigenvalues 100 and 1. The principal eigenvectors are in the northeast direction. This example satisfies the maximum principle and the exact solution (whose analytical expression is unavailable) stays between 0 and 4. Our goal is to produce a numerical solution which also satisfies DMP and stays between 0 and 4.
We first consider Mesh45 and Mesh135. Mesh45 satisfies the anisotropic nonobtuse angle condition (25) since its maximum angle in the metric M = D −1 is 0.47π. It is known [23] that (25) implies the Delaunay-type mesh condition, (53). By direct calculation we can find that the maximum of the left-hand-side term of (53) is 0.94π. On the other hand, Mesh135 satisfies neither of (25) and (53), with the maximum angle in the metric M = D −1 being 0.94π and the maximum of the left-hand-side term of (53) being 1.87π.
The solution contours (after 10 time steps) using Mesh45 and Mesh135 with h = 2.5 × 10 −2 and ∆t = 1.5 × 10 −4 are shown in Fig. 4, where h denotes the maximal height of triangular elements of the mesh and u min is the minimum of the numerical solution. No undershoot occurs in the numerical solution obtained with Mesh45.
The results for Mesh45 are listed in Table 1. They show that for meshes with h ≤ 2.5 × 10 −2 , ∆t Del is smaller than the step size ∆t = 1.5 × 10 −4 used in the computation. As a consequence, time condition (56) (and mesh condition (53)) is satisfied and Theorem 3.3 implies that the numerical solution satisfies DMP. Table 1 confirms that no undershoot occurs in the numerical solution or u min = 0. On the other hand, for h = 5.0 × 10 −2 , neither of time conditions (36) and (56) is satisfied and undershoot with u min = −1.41 × 10 −7 is observed.
The table also records the numerical results obtained for h = 2.5 × 10 −2 and h = 1.25 × 10 −2 with decreasing ∆t. One can see that no undershoot occurs when ∆t ≥ ∆t Del . However, undershoot occurs when ∆t continues to decrease and pass ∆t Del . This is consistent with Theorem 3.3. It is pointed out that ∆t Del < ∆t Ani for all the cases listed in the table. Moreover, for some cases we have ∆t Del < ∆t < ∆t Ani and no undershoot occurs in the numerical solution. These indicate that time condition (56) (related to the Delaunay-type mesh condition) is weaker than (36) (related to the anisotropic nonobtuse angle condition).
Recall that Mesh135 does not satisfy mesh condition (25) nor (53). Thus, there is no guarantee that the numerical solution obtained with Mesh135 satisfies DMP. Indeed, Table 2 shows that undershoot occurs in all numerical solutions obtained with various sizes of Mesh135 and various ∆t.
Next we consider M DM P meshes which are generated as (quasi-)uniform ones in the metric specified by M = D −1 . Recall from Remark 3.3 that such meshes satisfy the anisotropic nonobtuse angle condition (25). In our computation, M DM P meshes are generated using BAMG (bidimensional anisotropic mesh generator) code developed by Hecht [56]. An example is shown in Fig. 5(b). Notice that the elements are aligned with the principal diffusion direction (northeast). Since the diffusion tensor D is constant, the mesh is generated initially based on M DM P = D −1 and then kept for the subsequent time steps.
The results obtained with M DM P meshes are similar to those obtained with Mesh45. For example, for the M DM P mesh shown in Fig. 5 (b), it is found numerically that ∆t Ani = 4.30 × 10 −2 and ∆t Del = 1.63 × 10 −3 . Theorem 3.3 ensures that no undershoot occurs in the numerical solution when ∆t ≥ ∆t Del . It is emphasized that (53) and (56) are not necessary for DMP satisfaction and the numerical solution may be free of undershoot for some smaller values of ∆t. In fact, no undershoot is observed numerically for ∆t ≥ 10 −4 . An undershoot-free solution obtained with the mesh shown in Fig. 5 (b) and time step size ∆t = 1.5 × 10 −4 is shown in Fig. 5 (a). For the same mesh with ∆t = 1.0 × 10 −5 , undershoot is observed with u min = −1.45 × 10 −6 .
Finally, we consider the lumped mass method. Theorem 3.4 implies that there is no constraint placed on ∆t for the DMP satisfaction of the numerical solution with the lumped mass matrix and implicit Euler discretization. Indeed, for all Mesh45 meshes and ∆t considered in Table 1, no undershoot is observed numerically for the lumped mass method. The same also holds for M_DMP meshes. For example, for the mesh shown in Fig. 5(b), no undershoot is observed in the numerical solution for ∆t = 10^−4, 10^−5, and 10^−6. For Mesh135 meshes, mesh condition (25) or (53) is not satisfied and thus Theorem 3.4 does not hold. For example, for a case with a Mesh135 mesh with h = 1.25 × 10^−2 and ∆t = 1.5 × 10^−4, the numerical solution violates DMP and has a minimum u_min = −1.60 × 10^−2.
Example 4.2. The second example is the same as Example 4.1 except that the diffusion matrix is taken as a function of x and y, i.e.,
D = [ cos θ  −sin θ ; sin θ  cos θ ] [ k_1  0 ; 0  k_2 ] [ cos θ  sin θ ; −sin θ  cos θ ],   (58)
where k_1 = 100, k_2 = 1, and θ = θ(x, y) is the angle of the tangential direction at point (x, y) along circles centered at (0.5, 0.5). This diffusion matrix D also has eigenvalues 1 and 100 but has its principal eigen-direction along the tangential direction of circles centered at (0.5, 0.5). A physical example with such a diffusion matrix is the toroidal magnetic field in a Tokamak device confining fusion plasma [57]. This problem also satisfies the maximum principle and the solution stays between 0 and 4. For this example, neither Mesh45 nor Mesh135 (cf. Fig. 2) satisfies the Delaunay-type mesh condition (53). In the metric specified by M = D^{−1}, the maximum of the left-hand side of the inequality is 1.87π for both Mesh45 and Mesh135. Due to the symmetry of the diffusion matrix, both Mesh45 and Mesh135 lead to almost the same results for this example except that undershoot occurs at different locations. Fig. 6 shows the results obtained with these meshes for ∆t = 5 × 10^−5. Table 3 lists numerical results obtained with Mesh45 and M_DMP meshes. Recall that Theorem 3.3 does not apply to Mesh45 meshes since they do not satisfy (53). As a matter of fact, numerical solutions obtained with this type of meshes, with or without mass lumping, violate DMP and exhibit undershoot. On the other hand, M_DMP meshes generated with M = D^{−1} satisfy the mesh condition. For the lumped mass method, no undershoot occurs in the numerical solution for all values of ∆t. This is consistent with Theorem 3.4. For the standard finite element method, there is no undershoot for relatively large ∆t. It is interesting to point out that for this example with variable D, the lower bounds ∆t_Ani and ∆t_Del are far too pessimistic. A ∆t several magnitudes smaller can still lead to numerical solutions free of undershoot.
Example 4.3. This example is the same as the previous examples except that the diffusion matrix is taken as in the form (58) with
θ = (1/2) arctan(cos(πx/4)),   k_1 = 100 cos((x^2 + y^2) π/6),   k_2 = 10 sin((x^2 + y^2 + 1) π/6).
Notice that D is a function of x and y and both its eigenvalues and eigenvectors vary with location. Numerical results are shown in Table 4 and Fig. 7. Similar observations can be made as in the previous example. More specifically, both Mesh45 and Mesh135 do not satisfy the Delaunay-type mesh condition (53) and thus there is no guarantee that the obtained numerical solution is undershoot-free. On the other hand, M_DMP meshes generated with M = D^{−1} satisfy (53). The numerical solution is guaranteed to be undershoot-free for sufficiently large ∆t for the standard linear finite element method and for all ∆t for the lumped mass method.
Conclusions
In the previous sections we have studied the conditions under which a full discretization for IBVP (1) with a general diffusion matrix function satisfies a discrete maximum principle. The discretization is realized using the θ-method in time and the linear finite element method in space. The main theoretical results are given in Theorems 3.1, 3.2, 3.3, and 3.4.
Specifically, the numerical solution obtained with the fully discrete scheme satisfies a discrete maximum principle when the mesh satisfies the anisotropic nonobtuse angle condition (25) and the time step size satisfies condition (36). As shown in [27], a mesh satisfying (25) can be generated as a uniform mesh in the metric specified by α D^{−1} with α being a scalar function defined on Ω_T. On the other hand, condition (36) essentially requires the time step size to satisfy
C_1 h^2 ≤ ∆t ≤ (C_2/(1 − θ)) h^2,   (59)
where C_1 and C_2 are positive constants, h is the maximal element diameter, and θ ∈ (0, 1] is the parameter used in the θ-method. Obviously, this condition is restrictive. This is especially true when the numerical scheme with θ ∈ [0.5, 1] is known to be unconditionally stable and no constraint is placed on ∆t for the sake of stability. Moreover, the presence of the lower bound for ∆t and the numerical results showing the violation of the maximum principle as ∆t → 0 seem to support the finding of Thomée and Wahlbin [48] that a semi-discrete standard Galerkin finite element solution violates DMP, since the semi-discrete scheme can be considered as the limit of the fully discrete scheme as ∆t → 0. Furthermore, Theorems 3.2 and 3.4 show that the lower bound requirement on ∆t can be removed when a lumped mass matrix is used. Finally, in two dimensions, the mesh and time step conditions can be replaced with the weaker conditions (53) and (56), respectively. Numerical results in Sect. 4 confirm the theoretical findings.
List of Figures
Figure 1: Sketch of coordinate transformations from K̂ to K and to K̃.
Figure 2: Examples of Mesh45 and Mesh135.
Figure 3: The physical domain, boundary condition, and initial solution for Example 4.1.
Figure 4: Solution contours obtained for Mesh45 and Mesh135 with h = 2.5 × 10^−2 and ∆t = 1.5 × 10^−4 for Example 4.1.
Figure 5: An M_DMP mesh (with N_e = 2362 and N_v = 1357) and the corresponding solution obtained with ∆t = 1.5 × 10^−4 for Example 4.1.
Figure 6: Results obtained with ∆t = 5 × 10^−5 for Example 4.2.
Figure 7: Results obtained with ∆t = 1 × 10^−5 for Example 4.3.
Tables
Table 1: Numerical results obtained with Mesh45 for Example 4.1.
h          ∆t_Ani    ∆t_Del    ∆t        u_min
5.0e-2     1.48e-3   3.79e-4   1.5e-4    -1.41e-7
2.5e-2     3.70e-4   9.47e-5   1.5e-4    0
1.25e-2    9.25e-5   2.37e-5   1.5e-4    0
6.25e-3    2.31e-5   5.92e-6   1.5e-4    0
3.125e-3   5.78e-6   1.48e-6   1.5e-4    0
2.5e-2     3.70e-4   9.47e-5   1.5e-4    0
2.5e-2     3.70e-4   9.47e-5   1.0e-4    0
2.5e-2     3.70e-4   9.47e-5   5.0e-5    -7.91e-10
1.25e-2    9.25e-5   2.37e-5   1.5e-4    0
1.25e-2    9.25e-5   2.37e-5   1.0e-5    -1.31e-6

Table 2: Numerical results obtained with Mesh135 for Example 4.1.
h          ∆t_Ani    ∆t_Del    ∆t        u_min
5.0e-2     1.48e-4   2.08e-6   1.5e-4    -8.99e-2
2.5e-2     3.70e-5   5.21e-7   1.5e-4    -6.57e-2
1.25e-2    9.25e-6   1.30e-7   1.5e-4    -1.58e-2
1.25e-2    9.25e-6   1.30e-7   1.0e-7    -2.26e-2
6.25e-3    2.31e-6   3.26e-8   5.0e-4    -1.59e-3
6.25e-3    2.31e-6   3.26e-8   1.5e-5    -1.43e-2
6.25e-3    2.31e-6   3.26e-8   1.5e-6    -2.11e-2

Table 3: Results obtained with Mesh45 and M_DMP meshes for Example 4.2.
Mesh      N_e     ∆t_Ani    ∆t_Del    ∆t        u_min       u_min (lumped mass)
Mesh45    3072    3.47e-4   1.17e-2   1.0e-4    -4.31e-2    -4.11e-2
                                      5.0e-5    -4.91e-2    -4.78e-2
                                      2.0e-5    -5.49e-2    -5.36e-2
                                      1.0e-5    -5.70e-2    -5.26e-2
M_DMP     3381    8.61e-2   3.06e-2   5.0e-2    0           0
                                      1.0e-4    0           0
                                      5.0e-5    0           0
                                      2.0e-5    -1.20e-5    0
                                      1.0e-5    -5.02e-4    0

Table 4: Results obtained with M_DMP meshes for Example 4.3.
N_e     ∆t_Ani    ∆t_Del    ∆t        u_min       u_min (lumped mass)
3180    1.83e-2   6.38e-4   1.0e-4    0           0
                            5.0e-5    0           0
                            1.0e-5    0           0
                            2.5e-6    -7.67e-5    0
                            1.0e-6    -6.21e-3    0
Acknowledgment. This work was supported in part by NSF under grant DMS-1115118.
References
[1] P. E. Farrell, M. D. Piggott, C. C. Pain, G. J. Gorman, and C. R. Wilson. Conservative interpolation between unstructured meshes via supermesh construction. Comput. Methods Appl. Mech. Engrg., 198:2632-2642, 2009.
[2] S. Gűnter and K. Lackner. A mixed implicit-explicit finite difference scheme for heat transport in magnetised plasmas. J. Comput. Phys., 228:282-293, 2009.
[3] S. Gűnter, K. Lackner, and C. Tichmann. Finite element and higher order difference formulations for modelling heat transport in magnetised plasmas. J. Comput. Phys., 226:2306-2316, 2007.
[4] S. Gűnter, Q. Yu, J. Kruger, and K. Lackner. Modelling of heat transport in magnetised plasmas using non-aligned coordinates. J. Comput. Phys., 209:354-370, 2005.
[5] K. Nishikawa and M. Wakatani. Plasma Physics. Springer-Verlag Berlin Heidelberg, New York, 2000.
[6] P. Sharma and G. W. Hammett. Preserving monotonicity in anisotropic diffusion. J. Comput. Phys., 227:123-142, 2007.
[7] T. Stix. Waves in Plasmas. Amer. Inst. Phys., New York, 1992.
[8] I. Aavatsmark, T. Barkve, Ø. Bøe, and T. Mannseth. Discretization on unstructured grids for inhomogeneous, anisotropic media. I. Derivation of the methods. SIAM J. Sci. Comput., 19:1700-1716 (electronic), 1998.
[9] I. Aavatsmark, T. Barkve, Ø. Bøe, and T. Mannseth. Discretization on unstructured grids for inhomogeneous, anisotropic media. II. Discussion and numerical results. SIAM J. Sci. Comput., 19:1717-1736 (electronic), 1998.
[10] P. I. Crumpton, G. J. Shaw, and A. F. Ware. Discretisation and multigrid solution of elliptic equations with mixed derivative terms and strongly discontinuous coefficients. J. Comput. Phys., 116:343-358, 1995.
[11] T. Ertekin, J. H. Abou-Kassem, and G. R. King. Basic Applied Reservoir Simulation. SPE textbook series, Vol. 7, Richardson, Texas, 2001.
[12] M. J. Mlacnik and L. J. Durlofsky. Unstructured grid optimization for improved monotonicity of discrete solutions of elliptic equations with highly anisotropic coefficients. J. Comput. Phys., 216:337-361, 2006.
[13] T. F. Chan and J. Shen. Non-texture inpainting by curvature driven diffusions (CDD). J. Vis. Commun. Image Rep., 12:436-449, 2000.
[14] T. F. Chan, J. Shen, and L. Vese. Variational PDE models in image processing. Not. AMS J., 50:14-26, 2003.
[15] D. A. Karras and G. B. Mertzios. New PDE-based methods for image enhancement using SOM and Bayesian inference in various discretization schemes. Meas. Sci. Technol., 20:104012, 2009.
[16] D. Mumford and J. Shah. Optimal approximations by piecewise smooth functions and associated variational problems. Commun. Pure Appl. Math., 42:577-685, 1989.
[17] P. Perona and J. Malik. Scale-space and edge detection using anisotropic diffusion. IEEE Trans. Pattern Anal. Mach. Intel., 12:629-639, 1990.
[18] J. Weickert. Anisotropic Diffusion in Image Processing. Teubner-Verlag, Stuttgart, Germany, 1998.
[19] J. Brandts, S. Korotov, and M. Křížek. The discrete maximum principle for linear simplicial finite element approximations of a reaction-diffusion problem. Lin. Alg. Appl., 429:2344-2357, 2008.
[20] P. G. Ciarlet. Discrete maximum principle for finite difference operators. Aequationes Math., 4:338-352, 1970.
[21] P. G. Ciarlet and P.-A. Raviart. Maximum principle and uniform convergence for the finite element method. Comput. Meth. Appl. Mech. Engrg., 2:17-31, 1973.
[22] A. Drǎgǎnescu, T. F. Dupont, and L. R. Scott. Failure of the discrete maximum principle for an elliptic finite element problem. Math. Comp., 74:1-23, 2004.
[23] W. Huang. Discrete maximum principle and a Delaunay-type mesh condition for linear finite element approximations of two-dimensional anisotropic diffusion problems. Numer. Math. Theory Meth. Appl., 4:319-334, 2011 (arXiv:1008.0562v1).
[24] J. Karátson and S. Korotov. Discrete maximum principles for finite element solutions of nonlinear elliptic problems with mixed boundary conditions. Numer. Math., 99:669-698, 2005.
[25] J. Karátson, S. Korotov, and M. Křížek. On discrete maximum principles for nonlinear elliptic problems. Math. Comput. Sim., 76:99-108, 2007.
[26] D. Kuzmin, M. J. Shashkov, and D. Svyatskiy. A constrained finite element method satisfying the discrete maximum principle for anisotropic diffusion problems. J. Comput. Phys., 228:3448-3463, 2009.
[27] X. P. Li and W. Huang. An anisotropic mesh adaptation method for the finite element solution of heterogeneous anisotropic diffusion problems. J. Comput. Phys., 229:8072-8094, 2010 (arXiv:1003.4530v2).
[28] X. P. Li, D. Svyatskiy, and M. Shashkov. Mesh adaptation and discrete maximum principle for 2D anisotropic diffusion problems. Technical Report LA-UR 10-01227, Los Alamos National Laboratory, Los Alamos, NM, 2007.
[29] R. Liska and M. Shashkov. Enforcing the discrete maximum principle for linear finite element solutions of second-order elliptic problems. Comm. Comput. Phys., 3:852-877, 2008.
[30] C. Lu, W. Huang, and J. Qiu. Maximum principle in linear finite element approximations of anisotropic diffusion-convection-reaction problems. (submitted), 2012 (arXiv:1201.3564v1).
[31] Z. Sheng and G. Yuan. The finite volume scheme preserving extremum principle for diffusion equations on polygonal meshes. J. Comput. Phys., 230:2588-2604, 2011.
The finite volume scheme preserving extremum principle for diffusion equations on polygonal meshes. Z Sheng, G Yuan, J. Comput. Phys. 230Z. Sheng and G. Yuan. The finite volume scheme preserving extremum principle for diffusion equations on polygonal meshes. J. Comput. Phys., 230:2588-2604, 2011.
On a maximum principle for matrices, and on conservation of monotonicity. With applications to discretization methods. G Stoyan, Z. Angew. Math. Mech. 62G. Stoyan. On a maximum principle for matrices, and on conservation of monotonicity. With applications to discretization methods. Z. Angew. Math. Mech., 62:375-381, 1982.
On maximum principles for monotone matrices. G Stoyan, Lin. Alg. Appl. 78G. Stoyan. On maximum principles for monotone matrices. Lin. Alg. Appl., 78:147-161, 1986.
An Analysis of the Finite Element Method. G Strang, G J Fix, Prentice HallEnglewood Cliffs, NJG. Strang and G. J. Fix. An Analysis of the Finite Element Method. Prentice Hall, Englewood Cliffs, NJ, 1973.
Maximum principle for P1-conforming finite element approximations of quasi-linear second order elliptic equations. J Wang, R Zhang, arXiv:1105.1466v3J. Wang and R. Zhang. Maximum principle for P1-conforming finite element approximations of quasi-linear second order elliptic equations. 2011. (arXiv:1105.1466v3).
Monotone finite volume schemes for diffusion equations on polygonal meshes. C Yuan, Z Sheng, J. Comput. Phys. 227C. Yuan and Z. Sheng. Monotone finite volume schemes for diffusion equations on polygonal meshes. J. Comput. Phys., 227:6288-6312, 2008.
Well-posedness and maximum principle for PDE based models in image processing. L , Tel-Aviv UniversityPhD thesisL. Dascal. Well-posedness and maximum principle for PDE based models in image processing. PhD thesis, Tel-Aviv University, 2006.
Discrete maximum principle for finite element parabolic models in higher dimensions. I Faragó, Math. Comput. Simulation. 80I. Faragó. Discrete maximum principle for finite element parabolic models in higher dimensions. Math. Comput. Simulation, 80:1601-1611, 2010.
Discrete maximum principle and adequate discretizations of linear parabolic problems. I Faragó, R Horváth, SIAM J. Sci. Comput. 28I. Faragó and R. Horváth. Discrete maximum principle and adequate discretizations of linear parabolic problems. SIAM J. Sci. Comput., 28:2313-2336, 2006.
A review of reliable numerical models for three-dimensional linear parabolic problems. I Faragó, R Horváth, Int. J. Numer. Meth. Engng. 70I. Faragó and R. Horváth. A review of reliable numerical models for three-dimensional linear parabolic problems. Int. J. Numer. Meth. Engng., 70:25-45, 2007.
Continuous and discrete parabolic operators and their qualitative properties. I Faragó, R Horváth, IMA J. Numer. Anal. 29I. Faragó and R. Horváth. Continuous and discrete parabolic operators and their qualitative properties. IMA J. Numer. Anal., 29:606-631, 2009.
Discrete maximum principle for linear parabolic problems solved on hybrid meshes. I Faragó, R Horváth, S Korotov, Appl. Numer. Math. 53I. Faragó, R. Horváth, and S. Korotov. Discrete maximum principle for linear parabolic problems solved on hybrid meshes. Appl. Numer. Math., 53:249-264, 2005.
Discrete maximum principles for nonlinear parabolic pde systems. I Faragó, S Karátson, Korotov, 93Tampere University of Technology. Department of MathematicsTechnical ReportI. Faragó, Karátson, and S. Korotov. Discrete maximum principles for nonlinear parabolic pde systems. Technical Report 93, Tampere University of Technology. Department of Mathematics, 2009.
Some remarks on finite element analysis of time-dependent field problems. H Fujii, Theory and Proactice in Finite Element Structural Analysis. TokyoUniversity of TokyoH. Fujii. Some remarks on finite element analysis of time-dependent field problems. In Theory and Proactice in Finite Element Structural Analysis, pages 91-106. University of Tokyo, Tokyo, 1973.
Stability of semidiscrete formulations for parabolic problems at small time steps. I Harari, Comput. Methods Appl. Mech. Engrg. 193I. Harari. Stability of semidiscrete formulations for parabolic problems at small time steps. Comput. Methods Appl. Mech. Engrg., 193:1491-1516, 2004.
The discrete maximum principle in finite-element thermal radiation analysis. M Lobo, A F Emery, Numer. Heat Transfer, Part B. 24M. Lobo and A. F. Emery. The discrete maximum principle in finite-element thermal radiation analysis. Numer. Heat Transfer, Part B, 24:209-227, 1993.
Time step constraints in finite element analysis of the poisson type equations. V Murti, S Valliappan, N Khalili-Naghadeh, Comput. Struct. 31V. Murti, S. Valliappan, and N. Khalili-Naghadeh. Time step constraints in finite element analysis of the poisson type equations. Comput. Struct., 31:269-273, 1989.
On the existence of maximum principles in parabolic finite element equations. V Thomée, L B Wahlbin, Math. Comput. 77V. Thomée and L. B. Wahlbin. On the existence of maximum principles in parabolic finite element equations. Math. Comput., 77:11-19, 2008.
Discrete maximum principle for parabolic problems solved by prismatic finite elements. T Vejchodský, S Korotov, A Hannukainen, 77AS CR. Institute of MathematicsTechnical ReportT. Vejchodský, S. Korotov, and A. Hannukainen. Discrete maximum principle for parabolic problems solved by prismatic finite elements. Technical Report 77, Institute of Mathematics, AS CR, Prague, 2008.
Minimum time-step criteria for the galerkin finite element methods applied to one-dimensional parabolic partial differential equations. C Yang, Y Gu, Numer Meth. P. D. E. 22C. Yang and Y. Gu. Minimum time-step criteria for the galerkin finite element methods applied to one-dimensional parabolic partial differential equations. Numer Meth. P. D. E., 22:259-273, 2006.
Schéma volumes finis monotone pour des opérateurs de diffusion fortement anisotropes sur des maillages de triangles non structurés. C , Le Potier, C. R. Math. Acad. Sci. 341C. Le Potier. Schéma volumes finis monotone pour des opérateurs de diffusion fortement anisotropes sur des maillages de triangles non structurés. C. R. Math. Acad. Sci. Paris, 341:787- 792, 2005.
A nonlinear finite volume scheme satisfying maximum and minimum principles for diffusion operators. C , Le Potier, Int. J. Finite. 6220C. Le Potier. A nonlinear finite volume scheme satisfying maximum and minimum principles for diffusion operators. Int. J. Finite Vol., 6(2):20 pp, 2009.
The Finite Element Method for Elliptic Problems. P G Ciarlet, North-HollandAmsterdamP. G. Ciarlet. The Finite Element Method for Elliptic Problems. North-Holland, Amsterdam, 1978.
Volume and Surface Area for Polyhedra and Polytopes. J Emert, R Nelson, Math. Mag. 70J. Emert and R. Nelson. Volume and Surface Area for Polyhedra and Polytopes. Math. Mag. 70:365-371, 1997.
Adaptive Moving Mesh Methods. W Huang, R D Russell, Springer-VerlagBerlin Heidelberg; New YorkW. Huang and R. D. Russell. Adaptive Moving Mesh Methods. Springer-Verlag Berlin Heidelberg, New York, 2011.
BAMG -Bidimensional Anisotropic Mesh Generator homepage. F Hecht, F. Hecht. BAMG -Bidimensional Anisotropic Mesh Generator homepage.
Analysis of recurrent patterns in toroidal magnetic fields. A R Sanderson, G Chen, X Tricoche, D Pugmire, S Kruger, J Breslau, IEEE Trans. Vis. Comput. Graph. 16A. R. Sanderson, G. Chen, X. Tricoche, D. Pugmire, S. Kruger, and J. Breslau. Analysis of recurrent patterns in toroidal magnetic fields. IEEE Trans. Vis. Comput. Graph., 16:1431-1440, 2010.
|
[] |
[
"Magnetic Impurities in Mott-Hubbard Antiferromagnets",
"Magnetic Impurities in Mott-Hubbard Antiferromagnets"
] |
[
"Avinash Singh \nTheoretische Physik III\nUniversität Augsburg\n86135AugsburgGermany\n\nDepartment of Physics\nIndian Institute of Technology\n208016KanpurIndia\n",
"Prasenjit Sen \nDepartment of Physics\nIndian Institute of Technology\n208016KanpurIndia\n"
] |
[
"Theoretische Physik III\nUniversität Augsburg\n86135AugsburgGermany",
"Department of Physics\nIndian Institute of Technology\n208016KanpurIndia",
"Department of Physics\nIndian Institute of Technology\n208016KanpurIndia"
] |
[] |
A formalism is developed to treat magnetic impurities in a Mott-Hubbard antiferromagnetic insulator within a representation involving multiple orbitals per site. Impurity scattering of magnons is found to be strong when the number of orbitals N ′ on impurity sites is different from the number N on host sites, leading to strong magnon damping and singular correction to low-energy magnon modes in two dimensions. The impurity-scattering-induced softening of magnon modes leads to enhancement in thermal excitation of magnons, and hence to a lowering of the Néel temperature in layered or three dimensional systems. Weak impurity scattering of magnons is obtained in the case N ′ = N , where the impurity is represented in terms of modified hopping strength, and a momentum-independent multiplicative renormalization of magnon energies is obtained, with the relative magnon damping decreasing to q 2 for long-wavelength modes. Split-off magnon modes are obtained when the impurity-host coupling is stronger, and implications are discussed for twomagnon Raman scattering. The mapping between antiferromagnets and superconductors is utilized to contrast formation of impurity-induced states. 71.27.+a, 75.10.Jm, 75.10.Lp, 75.30.Ds
|
10.1103/physrevb.57.10598
|
[
"https://arxiv.org/pdf/cond-mat/9802052v1.pdf"
] | 119,480,915 |
cond-mat/9802052
|
78935e5be9f243eafb3797a815d8dad46db57172
|
Magnetic Impurities in Mott-Hubbard Antiferromagnets
4 Feb 1998
Avinash Singh
Theoretische Physik III
Universität Augsburg
86135 Augsburg, Germany
Department of Physics
Indian Institute of Technology
208016 Kanpur, India
Prasenjit Sen
Department of Physics
Indian Institute of Technology
208016 Kanpur, India
A formalism is developed to treat magnetic impurities in a Mott-Hubbard antiferromagnetic insulator within a representation involving multiple orbitals per site. Impurity scattering of magnons is found to be strong when the number of orbitals N ′ on impurity sites is different from the number N on host sites, leading to strong magnon damping and singular correction to low-energy magnon modes in two dimensions. The impurity-scattering-induced softening of magnon modes leads to enhancement in thermal excitation of magnons, and hence to a lowering of the Néel temperature in layered or three dimensional systems. Weak impurity scattering of magnons is obtained in the case N ′ = N , where the impurity is represented in terms of modified hopping strength, and a momentum-independent multiplicative renormalization of magnon energies is obtained, with the relative magnon damping decreasing to q 2 for long-wavelength modes. Split-off magnon modes are obtained when the impurity-host coupling is stronger, and implications are discussed for twomagnon Raman scattering. The mapping between antiferromagnets and superconductors is utilized to contrast formation of impurity-induced states. 71.27.+a, 75.10.Jm, 75.10.Lp, 75.30.Ds
I. INTRODUCTION
While the problem of static impurities in antiferromagnetic insulators is more than twenty-five years old, [1] it has attracted renewed attention after the discovery of high-T c cuprate superconductors, [2] since their parent compounds are antiferromagnetic insulators. From the very early days of high-T c superconductivity a number of doping studies have been done with various static impurities, both magnetic [3] and nonmagnetic, [4][5][6] replacing copper in the Cu-O planes as in La 2 CuO 4 . Susceptibility measurements in high-T c cuprates doped with magnetic impurities like Fe, Ni, and Co give evidence of local-moment formation, [4] which is expected to be intrinsically associated with the magnetic impurities. This is unlike the case of nonmagnetic impurities such as Zn, Al, Ga, etc., which, despite being intrinsically nonmagnetic, give rise to local moments in the copper-oxide planes when doped into cuprate antiferromagnets. This was inferred earlier from the Curie-Weiss behavior of the magnetic susceptibility, [4,7] and has been recently confirmed in the Y-NMR studies of doped 1-2-3 systems, as seen in the progressively increasing linewidth of the Y-NMR signal with decreasing temperature. [6,8] Xiao et al. have also ascertained the spin states of different magnetic dopants from the observed local moments, [3] and find, for example, that Fe is in a spin-5/2 state, whereas Ni is in a spin-1 state. They also find a correlation between the T c reduction and the size of the local moment, consistent with the magnetic pair-breaking mechanism.
Although theoretically the problem of magnetic impurities in an antiferromagnet has been studied recently within the Heisenberg representation of localized spins, [9] no such comprehensive study exists within the Mott-Hubbard model, which provides a good description of the 3d holes in the Cu-O planes of high-T c superconductors. Recently the problem of nonmagnetic impurities in the Mott-Hubbard antiferromagnet was addressed, and defect states, local-moment formation, impurity scattering of magnons, and finite-temperature magnetic dynamics in layered systems were studied. [10,11] Other recent works on static vacancies in antiferromagnets include exact diagonalization studies with the Heisenberg model, [12] linear spin wave theory, [13] and exact diagonalization of strongly correlated small clusters. [14] While nonmagnetic impurities can be simply represented by a spin-independent impurity potential, the situation is more complex for magnetic impurities. In this paper we introduce several representations to treat magnetic impurities in different situations. A simple extension to a spin-dependent impurity potential is followed by a more sophisticated approach involving a generalized N-orbital Hubbard model with multiple orbitals per site. Broadly there are two distinct classes, depending on whether the number of orbitals N′ at the impurity site is the same as or different from the number of orbitals N at the host sites. In the case N′ = N the magnetic impurity is represented through a modified hopping strength t′ between the impurity orbitals and the neighboring host orbitals. In the strong-correlation limit (U ≫ t), wherein the Mott-Hubbard AF with N orbitals per site maps to the spin S = N/2 quantum Heisenberg AF, the modified hopping strength translates into a modified exchange coupling J′ = 4t′²/U between the impurity spin and the neighboring host spins. This describes the situation where, in spin language, the impurity spin S′ is equal to the host spin S, but is coupled to its neighbors with a different exchange interaction J′. Similarly, the case N′ ≠ N with no modification in hopping strength or Hubbard interaction energy corresponds to the situation where the impurity spin is different from the host spins (S′ ≠ S).
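For later reference (this small expansion is not spelled out in the text, but the resulting relation is used when comparing with Heisenberg-model results in Sec. II), the hopping and exchange perturbations are related through J′ = 4t′²/U: with t′ = t + δt,

$$\frac{\delta J}{J} \equiv \frac{J'-J}{J} = \frac{t'^2 - t^2}{t^2} = 2\,\frac{\delta t}{t} + \left(\frac{\delta t}{t}\right)^2 \approx 2\,\frac{\delta t}{t},$$

so that, to first order, $x\,\delta t/t = \frac{1}{2}\,x\,\delta J/J$.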
II. SINGLE-ORBITAL MAGNETIC IMPURITY
In this section we consider a single-orbital magnetic impurity embedded in an AF host which is described by the Hubbard model with one orbital per site with exactly half filling. For concreteness we consider the square lattice, generalization to other bipartite lattices being straightforward. The host Hamiltonian is
$$H_0 = -t\sum_{\langle ij\rangle\sigma}\left(a^{\dagger}_{i\sigma}a_{j\sigma} + a^{\dagger}_{j\sigma}a_{i\sigma}\right) + U\sum_i n_{i\uparrow}n_{i\downarrow}, \qquad (1)$$
where t is the nearest-neighbor (NN) hopping strength and U the on-site Coulomb repulsion. The AF state and its associated features, such as the sublattice magnetization, magnon energies, and quantum corrections, have been studied earlier in detail. [15] We model the single-orbital impurity in terms of a locally modified hopping strength t′ between the impurity orbital and its NN host orbitals.
The Hamiltonian with such an impurity on site I can be written as below, where the sum is over all nearest neighbors J of the impurity site I, and δt = t′ − t is the hopping perturbation around the impurity site,
$$H = H_0 + \delta t\sum_{\langle IJ\rangle\sigma}\left(a^{\dagger}_{I\sigma}a_{J\sigma} + a^{\dagger}_{J\sigma}a_{I\sigma}\right). \qquad (2)$$
We start with the perturbative method, where the impurity-induced perturbation $[\delta\chi^0] \equiv [\chi^0] - [\chi^0_{\rm host}]$ to the zeroth-order, antiparallel-spin, particle-hole propagator is obtained in powers of δt/t, and the resulting corrections to its eigenvalues then yield the renormalization in magnon energies. [16] Diagrammatic contributions to $[\delta\chi^0]$ to first order in δt, and their evaluation in the strong-correlation limit, have been discussed earlier in the context of the hopping disorder problem. [16] We obtain for the diagonal, off-diagonal, and nearest-neighbor diagonal terms, expressed in units of $-t^2/\Delta^3$ for convenience,
$$[\delta\chi^0]_{II} = \frac{z}{2}\frac{\delta t}{t}; \qquad [\delta\chi^0]_{IJ} = [\delta\chi^0]_{JI} = [\delta\chi^0]_{JJ} = \frac{1}{2}\frac{\delta t}{t}, \qquad (3)$$
where z = 4 is the coordination number for the square lattice, 2∆ ≈ U is the Hubbard gap, and only terms up to order $t^2/\Delta^3$ have been retained, appropriate to the strong-correlation limit. We notice that the sum of the matrix elements diagonal in the sublattice basis, $[\delta\chi^0]_{II} + [\delta\chi^0]_{JJ}$, is precisely equal to the sum of the off-diagonal matrix elements $[\delta\chi^0]_{IJ} + [\delta\chi^0]_{JI}$. An immediate consequence of this correlation is that the Goldstone mode is preserved and that generally the effective scattering of low-energy, long-wavelength magnon modes is weak. If the impurity is on an A-sublattice site, then for the first-order correction we obtain, after summing over nearest-neighbor terms,
$$\delta\lambda^{(1)}_q \equiv \langle q|[\delta\chi^0]|q\rangle = \alpha^2[\delta\chi^0]_{II} + \alpha\beta z\gamma_q[\delta\chi^0]_{IJ} + \beta\alpha z\gamma_q[\delta\chi^0]_{JI} + \beta^2 z[\delta\chi^0]_{JJ}, \qquad (4)$$
where α and β are the magnon amplitudes on the A and B sublattices respectively, and $\gamma_q = (\cos q_x + \cos q_y)/2$. An identical result is obtained when the impurity is on a B-sublattice site, because in this case α and β are simply exchanged in the above equation, and since $[\delta\chi^0]_{II} = z[\delta\chi^0]_{JJ}$, this expression is symmetric under exchange of α and β. Using $\alpha = \sqrt{(1-\omega^0_q)/N}$ and $\beta = -\sqrt{(1+\omega^0_q)/N}$, where $\omega^0_q = \sqrt{1-\gamma_q^2}$ is the host magnon energy in units of 2J for the momentum-q mode, we obtain after summing over contributions from all impurities
$$\delta\lambda^{(1)}_q = x z\,\frac{\delta t}{t}\,(1-\gamma_q^2), \qquad (5)$$
where x is the total impurity concentration, and impurities are assumed to be evenly distributed between the two sublattices. The renormalized magnon energy, given by the pole in the magnon propagator, is now obtained from the solution of the equation $1 - \omega^2 - \gamma_q^2 + \delta\lambda^{(1)}_q = 0$, and up to first order in the effective impurity strength xδt/t we obtain
$$\omega_q = \omega^0_q\left(1 + x z\,\frac{\delta t}{t}\right). \qquad (6)$$
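A quick consistency check of Eq. (5) may be useful here (this intermediate step is not written out in the original; it assumes the amplitude forms quoted above, with N denoting the number of lattice sites and γ_q taken non-negative, as appropriate for the long-wavelength modes of interest). Inserting the matrix elements of Eq. (3) into Eq. (4), the contribution of a single impurity is

$$\delta\lambda^{(1)}_q\Big|_{\rm one\;impurity} = \frac{z}{2}\frac{\delta t}{t}\left(\alpha^2 + 2\alpha\beta\gamma_q + \beta^2\right) = \frac{z}{2}\frac{\delta t}{t}\,\frac{2\left(1-\gamma_q^2\right)}{N} = \frac{z}{N}\frac{\delta t}{t}\left(1-\gamma_q^2\right),$$

since $\alpha^2 + \beta^2 = 2/N$ and $\alpha\beta = -\gamma_q/N$ for the quoted amplitudes. Summing over the xN impurities then reproduces Eq. (5).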
This result agrees exactly with the calculations [9] on the Heisenberg model in that there are no singular corrections to the magnon energy in the case S′ = S, and the correction is proportional to xδt/t = (1/2)xδJ/J. Turning now to the magnon-energy renormalization of the localized, high-energy modes with energy near 2J, which correspond to local spin deviations, we have α = 0, β = 1, so that $\delta\lambda^{(1)} = \frac{1}{2}z\,(\delta t/t)$. This implies that the magnon energy gets shifted from 2J to
$$\omega = 2J\left(1 + \frac{z}{2}\frac{\delta t}{t}\right). \qquad (7)$$
In this case the impurity concentration does not enter the magnon-energy renormalization; rather, it has a bearing on the spectral weight of these high-energy modes. Thus, for δt positive, the magnon spectrum goes up by the energy zJδt/t. This increase is expected from the simple picture of these high-energy modes corresponding to local spin deviations. The energy cost of making a spin deviation on the impurity site is zJ′/2, where J′/2 is the bond strength. With t′ = t(1 + δt/t), to first order in δt/t we have ∆ε = z(J′ − J)/2 = zJδt/t. The exact-eigenstates analysis also shows that precisely one magnon state at the upper end of the spectrum is split off from the magnon energy band. These split-off modes are strongly localized around the impurity sites, and hence correspond to local spin deviations. Furthermore, for different values of the impurity hopping t′/t it is seen from the magnon spectrum that the energy separation of the split-off state from the upper end of the spectrum increases roughly in proportion to δt, as obtained in the perturbative analysis. This exact-eigenstates approach for obtaining magnon energies and wavefunctions from the fermionic eigensolutions in the self-consistent AF state has been described earlier. [17]

III. SPIN-DEPENDENT IMPURITY POTENTIAL

Nonmagnetic impurities in the Mott-Hubbard AF were modelled earlier via a spin-independent impurity potential term, and as a natural extension we therefore consider the following spin-dependent impurity term for magnetic impurities,
$$H^{\rm mag}_{\rm imp} = \sum_I \Psi^{\dagger}_I\left[-\sigma_3 V\right]\Psi_I, \qquad (8)$$
where $\Psi_I = (a_{I\uparrow}\;\; a_{I\downarrow})$. A spin-independent impurity potential $\epsilon_0$ can be included for generality, however, we shall consider the limit $V \gg \epsilon_0$, so that the potential for a spin-σ fermion is $V_\sigma \approx -\sigma V$. We choose V to be positive for impurities on the A-sublattice sites, so that $V_\uparrow$ is very low and $V_\downarrow$ is very high. The sign of V is reversed for impurities on B-sublattice sites. This choice of potential ensures that the magnetization on the impurity sites follows the host AF ordering. Such a spin-dependent impurity potential can arise from a coupling $-\vec{\sigma}\cdot\vec{S}_{\rm imp}$ between the itinerant fermion spin $\vec{\sigma}$ and the static magnetic impurity spin $\vec{S}_{\rm imp}$, resulting from a strong Hubbard interaction.
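For clarity (this small expansion is not displayed in the original), with $\sigma_3 = {\rm diag}(1,-1)$ the impurity term of Eq. (8) reads

$$\Psi^{\dagger}_I\left[-\sigma_3 V\right]\Psi_I = -V\,a^{\dagger}_{I\uparrow}a_{I\uparrow} + V\,a^{\dagger}_{I\downarrow}a_{I\downarrow} = -V\,n_{I\uparrow} + V\,n_{I\downarrow},$$

so that an A-sublattice impurity (V > 0) indeed presents a low potential $V_\uparrow \approx -V$ to spin-up fermions and a high potential $V_\downarrow \approx +V$ to spin-down fermions, as stated above.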
Since experiments on high-T c cuprates show the impurity spin to be antiferromagnetically coupled with the host spins, [3] we take the local field direction to be along the local magnetization direction (ẑ). The low potential (for spin-up) is justified in view of the fact that the ionization energies for both Fe3+ and Ni2+ (i.e., the fourth and the third ionization energies of Fe and Ni, respectively) are much higher than the third ionization energy for Cu: whereas the ionization energies for Fe3+ and Ni2+ are 54.8 eV and 35.17 eV respectively, the ionization energy for Cu2+ is 20.2 eV. We now examine the formation of impurity-induced states due to this spin-dependent impurity potential. Within the T-matrix analysis, used earlier for nonmagnetic impurities, [10] the energies of impurity-induced states are obtained from solutions of $g^{\sigma}_{II}(\omega) = 1/V_\sigma$. For large |V|/U these impurity states are formed at energies ∼ −σV for the two spins, and are essentially site-localized and therefore decoupled from the system. Thus, for the magnetic-impurity case, when the impurity spin is antiferromagnetically coupled to the neighboring host spins, a significant difference from the nonmagnetic-impurity case is that there are no defect states formed in the Hubbard gap. Rather, only impurity states are formed, far removed in energy from the Hubbard bands.
Within the above representation of magnetic impurities in terms of a spin-dependent impurity potential, the fermion number is unchanged, unlike the case of nonmagnetic impurities where one fermion is removed for every added impurity. Hence the impurity sites do not quite act as spin vacancies. Nonetheless, the presence of an impurity potential term which breaks time-reversal symmetry leads to a partial decoupling of the impurity site from the host. This is most easily seen in the limit V → ∞, where the local antiparallel-spin, particle-hole excitations are suppressed by the large energy difference 2V, leading to an absence of the ω term, and therefore to strong magnon scattering. Quite generally, the particle-hole energy difference for antiparallel spins is modified by the spin-dependent impurity potential from 2∆ to 2∆ + 2V, leading to a modification in the ω term. For a spin-independent impurity potential the particle-hole energies are shifted equally, and hence it is the removal of a fermion from the impurity site that is crucial. As a result of this decoupling of magnetic impurity sites, a qualitatively identical impurity-induced perturbation [δχ⁰(ω)] is obtained, leading to similar results for magnon renormalization as in the nonmagnetic-impurity case, where singular corrections were obtained for low-energy magnon modes in two dimensions. [11] The strong impurity scattering of magnons also introduces significant damping, with the ratio of the magnon damping term to its energy being simply proportional to the impurity concentration x for long-wavelength modes.
IV. GENERALIZED HUBBARD-MODEL REPRESENTATION
In order to represent higher-spin magnetic impurities, we now generalize to the situation with N and N ′ orbitals on host and impurity sites respectively. An appropriate model for this case is the generalized Hubbard model with multiple orbitals per site. This model has been used earlier to study quantum corrections in the antiferromagnetic state in a spin-rotationally-symmetric formalism, where a systematic perturbative expansion in powers of 1/N was developed. [15] We introduce a slight extension here in this model which makes it equivalent, in the strong correlation limit, to the spin-S QHAF, where S = N /2. The modification is to allow the NN hopping term to operate between all orbitals, whereas the hopping term considered earlier was diagonal in the orbital index. [15] We therefore consider the following Hamiltonian for the AF host,
$$H = -t\sum_{\langle ij\rangle\sigma\alpha\beta}\left(a^{\dagger}_{i\sigma\alpha}a_{j\sigma\beta} + {\rm h.c.}\right) + \frac{U}{N}\sum_{i\alpha\beta}\left(a^{\dagger}_{i\uparrow\alpha}a_{i\uparrow\alpha}a^{\dagger}_{i\downarrow\beta}a_{i\downarrow\beta} + a^{\dagger}_{i\uparrow\alpha}a_{i\uparrow\beta}a^{\dagger}_{i\downarrow\beta}a_{i\downarrow\alpha}\right) \qquad (9)$$
where α and β are the orbital indices which run from 1 to N, and the two Hubbard interaction terms are respectively direct and exchange type interactions with respect to orbital indices. In the symmetric case when the two interaction strengths are identical, as considered here, the system possesses spin-rotational symmetry. It has been shown earlier that in the symmetric case the two interaction terms can together be written as $H_{\rm int} = -(U/N)\sum_i(\vec{S}_i\cdot\vec{S}_i + n_i^2)$, where $\vec{S}_i$ and $n_i$ are the total spin and charge density operators, respectively. Spin-rotational symmetry is therefore inherent in this impurity representation as well. Furthermore, in the strong correlation limit, a strong Hund's coupling exists which energetically favors the maximum multiplicity case (S = N/2) for the total spin operator $\vec{S}_i$.

Magnetic impurities are represented by introducing N′ ≠ N orbitals at the impurity sites. We first examine the transverse spin fluctuation propagator in the host AF state, written in terms of the total spin operator. Again, at the RPA level the magnon propagator is given in terms of [χ⁰(ω)], which now involves orbital summations. Since each orbital is now connected via hopping to N orbitals on the NN sites, the electronic spectral weights are correspondingly modified. For example, in the strong-correlation limit, the on-site majority and minority spin densities in each orbital are now $1 - Nt^2/\Delta^2$ and $Nt^2/\Delta^2$ respectively. A straightforward extension of the earlier analysis in the strong-correlation limit [15] leads to expressions in which J = 4t²/U enters as usual, and D is the dimensionality of the hypercubic lattice. Since different orbitals on the same site are not directly coupled, the intrasite propagator is diagonal in orbital index, and therefore the leading-order diagonal terms (the 1/U and the ω term) are proportional to N. However, the NN hopping operates between all orbitals, and therefore the off-diagonal term and the next-to-leading-order piece (arising from hopping) in the diagonal term are both proportional to N².

We now introduce a magnetic impurity in the system with N′ ≠ N orbitals at the impurity site I. The resulting modification in the electronic spectral weights leads to corresponding changes in the [χ⁰(ω)]_{ij} matrix elements for i, j in the vicinity of the impurity site I. Since now the local Hubbard interaction strength itself is not uniform but depends on the number of site orbitals, we have to multiply the [χ⁰] matrix with the diagonal interaction matrix [U] containing elements U/N for host sites and U/N′ for the impurity site. We therefore examine the local matrix elements of the matrix product $[U\chi^0]_{ij} = U_{ii}\chi^0_{ij}$ for i, j in the vicinity of the impurity site, and obtain the impurity-induced perturbation in the matrix elements of the product [Uχ⁰]. We now obtain the magnon-energy renormalization by perturbatively obtaining the impurity-induced correction to the eigenvalues of the [Uχ⁰(ω)] matrix. As discussed earlier, [11] we treat δ[Uχ⁰(ω)] as the perturbation matrix, and determine corrections to the eigenvalues of [χ⁰_host(ω)]. Evaluating the first-order correction ⟨q|δ[Uχ⁰(ω)]|q⟩ from the magnon eigenvector |q⟩, and retaining terms to first order only, we obtain, as for the nonmagnetic impurity case, a correction which is linear in energy; this signifies strong impurity scattering of magnons for long-wavelength, low-energy modes, leading to singular corrections in two dimensions and strong magnon damping from second-order scattering processes. [11] The scattering term is explicitly proportional to the difference (N′ − N) between the number of orbitals on the impurity site and the host sites, which arises from the different dynamics of the impurity spin and the host spins. This generally implies that impurity scattering of magnons is strong when the impurity spin S′ = N′/2 is different from the host spin S = N/2, in agreement with earlier studies within the Heisenberg model, [9] and the one-band model with nonmagnetic impurities where N = 1 and N′ = 0. [11]

V. CONCLUSIONS

In conclusion, we have developed a formalism to treat magnetic impurities in a Mott-Hubbard antiferromagnetic insulator within a representation involving multiple orbitals per site. For the case N′ = N, when the impurity spin is identical to the host spin, the magnetic impurity is represented by a locally modified hopping strength, and we find that the effective scattering of long-wavelength magnon modes is weak, leading to a momentum-independent multiplicative renormalization of magnon energies. For positive hopping perturbation δt we find localized, split-off magnon modes corresponding to local spin deviations at impurity sites. These split-off modes will be relevant in two-magnon Raman scattering, which probes high-energy magnetic excitations. In the other case N′ ≠ N, when the impurity spin is different from the host spin, we obtain strong impurity scattering of magnon modes proportional to the difference (N′ − N), leading to singular corrections in two dimensions and strong magnon damping. The impurity-scattering-induced softening of magnon modes implies enhancement in thermal excitation of magnons, and hence a lowering of the Néel temperature in layered or three-dimensional systems. We also find that the process of adding impurity orbitals leads to enhanced impurity magnetization and localization of electronic states at the impurity, indicating partial decoupling of the impurity site from the host. A unique feature of having multiple impurity orbitals is the presence of exactly site-localized eigenstates in the electron spectrum which are completely antisymmetric between impurity orbitals. When the magnetic impurity is represented in terms of a spin-dependent impurity potential, we find that the breaking of time-reversal symmetry leads to a decoupling of the impurity site from the host, and strong magnon scattering similar to the case of spin vacancies is obtained. We also find that when the magnetic impurity spin is antiferromagnetically coupled to the neighboring host spins, only impurity states are formed, and there are no defect states formed within the Hubbard gap. The local moment associated with the magnetic impurity therefore intrinsically arises from the spin-density difference at the impurity site. Using the well-known particle-hole transformation, the problem of magnetic impurities in an AF can be mapped to that of nonmagnetic impurities in a superconductor, which is characterized by the absence of defect states within the superconducting gap and the robustness of the superconducting gap. [19] Conversely, a nonmagnetic impurity in a positive-U Hubbard AF maps onto a magnetic impurity in a negative-U Hubbard superconductor, and here defect states are formed within the gap in both cases. [20,21]

ACKNOWLEDGMENTS

Helpful conversations with S. N. Basu, S. Tewari, D. Sa, V. Subrahmanyam, and V. A. Singh are gratefully acknowledged. This work was supported in part by a Research Grant (No. SP/S2/M-25/95) from the Department of Science and Technology, India. A.S. also acknowledges support from the Alexander von Humboldt Foundation.
[1] For a review, see R. A. Cowley and W. J. L. Buyers, Rev. Mod. Phys. 44, 406 (1972).
[2] J. G. Bednorz and K. A. Müller, Z. Phys. B 64, 188 (1986).
[3] G. Xiao, M. Z. Cieplak, J. Q. Xiao, and C. L. Chien, Phys. Rev. B 42, 8752 (1990).
[4] G. Xiao, M. Z. Cieplak, A. Gavrin, F. H. Streitz, A. Bakhshai, and C. L. Chien, Phys. Rev. Lett. 60, 1446 (1988).
[5] R. E. Walstedt, R. F. Bell, L. F. Schneemeyer, and T. V. Waszcazk, Phys. Rev. B 48, 10 646 (1993).
[6] A. V. Mahajan, H. Alloul, G. Collin, and J. F. Marucco, Phys. Rev. Lett. 72, 3100 (1994).
[7] C.-S. Gee et al., J. Superconductivity 1, 63 (1988).
[8] H. Alloul, P. Mendels, H. Casalta, J. F. Marucco, and J. Arabski, Phys. Rev. Lett. 67, 3140 (1991).
[9] C. C. Wan, A. B. Harris, and D. Kumar, Phys. Rev. B 48, 1036 (1993).
[10] P. Sen, S. Basu, and A. Singh, Phys. Rev. B 50, 10381(R) (1994).
[11] P. Sen and A. Singh, Phys. Rev. B 53, 328 (1996).
[12] N. Bulut, D. Hone, D. J. Scalapino, and E. Y. Loh, Phys. Rev. Lett. 62, 2192 (1989).
[13] W. Brenig and A. Kampf, Phys. Rev. B 43, 12 914 (1991).
[14] D. Poilblanc, D. J. Scalapino, and W. Hanke, Phys. Rev. Lett. 72, 884 (1994).
[15] A. Singh, Phys. Rev. B 43, 3617 (1991).
[16] S. Basu and A. Singh, Phys. Rev. B 55, 12 338 (1997).
[17] S. Basu and A. Singh, Phys. Rev. B 53, 6406 (1996).
[18] S. Basu and A. Singh, Phys. Rev. B 54, 6356 (1996).
[19] P. W. Anderson, J. Phys. Chem. Solids 11, 26 (1959).
[20] H. Shiba, Prog. Theor. Phys. 40, 435 (1968).
[21] K. Maki, in Superconductivity, Vol. II, edited by R. D. Parks (Dekker, New York, 1969).
|
[] |
[
"Fully denaturing two-dimensional electrophoresis of membrane proteins: a critical update",
"Fully denaturing two-dimensional electrophoresis of membrane proteins: a critical update"
] |
[
"Thierry Rabilloud [email protected] \nBiophysique et Biochimie des Systèmes Intégrés\nCEA-DSV/iRTSV/LBBSI\nCEA-Grenoble\n17 rue des martyrsF-38054GRENOBLE CEDEX 9France\n\nBiophysique et Biochimie des Systèmes Intégrés\nUMR 5092\nCNRS\nCEA-Grenoble\n17 rue des martyrsF-38054GRENOBLE CEDEX 9France\n\nCEA-Grenoble\n17 rue des martyrsF-38054GRENOBLE CEDEX 9\n",
"Mireille Chevallet \nBiophysique et Biochimie des Systèmes Intégrés\nCEA-DSV/iRTSV/LBBSI\nCEA-Grenoble\n17 rue des martyrsF-38054GRENOBLE CEDEX 9France\n\nBiophysique et Biochimie des Systèmes Intégrés\nUMR 5092\nCNRS\nCEA-Grenoble\n17 rue des martyrsF-38054GRENOBLE CEDEX 9France\n\nCEA-Grenoble\n17 rue des martyrsF-38054GRENOBLE CEDEX 9\n",
"Sylvie Luche \nBiophysique et Biochimie des Systèmes Intégrés\nCEA-DSV/iRTSV/LBBSI\nCEA-Grenoble\n17 rue des martyrsF-38054GRENOBLE CEDEX 9France\n\nBiophysique et Biochimie des Systèmes Intégrés\nUMR 5092\nCNRS\nCEA-Grenoble\n17 rue des martyrsF-38054GRENOBLE CEDEX 9France\n\nCEA-Grenoble\n17 rue des martyrsF-38054GRENOBLE CEDEX 9\n",
"Cécile Lelong \nBiophysique et Biochimie des Systèmes Intégrés\nCEA-DSV/iRTSV/LBBSI\nCEA-Grenoble\n17 rue des martyrsF-38054GRENOBLE CEDEX 9France\n\nBiophysique et Biochimie des Systèmes Intégrés\nUMR 5092\nCNRS\nCEA-Grenoble\n17 rue des martyrsF-38054GRENOBLE CEDEX 9France\n\nCEA-Grenoble\n17 rue des martyrsF-38054GRENOBLE CEDEX 9\n",
"Thierry Rabilloud ",
"/ Bbsi "
] |
[
"Biophysique et Biochimie des Systèmes Intégrés\nCEA-DSV/iRTSV/LBBSI\nCEA-Grenoble\n17 rue des martyrsF-38054GRENOBLE CEDEX 9France",
"Biophysique et Biochimie des Systèmes Intégrés\nUMR 5092\nCNRS\nCEA-Grenoble\n17 rue des martyrsF-38054GRENOBLE CEDEX 9France",
"CEA-Grenoble\n17 rue des martyrsF-38054GRENOBLE CEDEX 9",
"Biophysique et Biochimie des Systèmes Intégrés\nCEA-DSV/iRTSV/LBBSI\nCEA-Grenoble\n17 rue des martyrsF-38054GRENOBLE CEDEX 9France",
"Biophysique et Biochimie des Systèmes Intégrés\nUMR 5092\nCNRS\nCEA-Grenoble\n17 rue des martyrsF-38054GRENOBLE CEDEX 9France",
"CEA-Grenoble\n17 rue des martyrsF-38054GRENOBLE CEDEX 9",
"Biophysique et Biochimie des Systèmes Intégrés\nCEA-DSV/iRTSV/LBBSI\nCEA-Grenoble\n17 rue des martyrsF-38054GRENOBLE CEDEX 9France",
"Biophysique et Biochimie des Systèmes Intégrés\nUMR 5092\nCNRS\nCEA-Grenoble\n17 rue des martyrsF-38054GRENOBLE CEDEX 9France",
"CEA-Grenoble\n17 rue des martyrsF-38054GRENOBLE CEDEX 9",
"Biophysique et Biochimie des Systèmes Intégrés\nCEA-DSV/iRTSV/LBBSI\nCEA-Grenoble\n17 rue des martyrsF-38054GRENOBLE CEDEX 9France",
"Biophysique et Biochimie des Systèmes Intégrés\nUMR 5092\nCNRS\nCEA-Grenoble\n17 rue des martyrsF-38054GRENOBLE CEDEX 9France",
"CEA-Grenoble\n17 rue des martyrsF-38054GRENOBLE CEDEX 9"
] |
[] |
The quality and ease of proteomics analysis depends on the performance of the analytical tools used, and thus of the performances of the protein separation tools used to deconvolute complex protein samples. Among protein samples, membrane proteins are one of the most difficult sample classes, because of their hydrophobicity and embedment in the lipid bilayers.This review deals with the recent progresses and advances made in the separation of membrane proteins by two-dimensional electrophoresis separating only denatured proteins.Traditional 2D methods, i.e.methods using isoelectric focusing in the first dimension are compared to methods using only zone electrophoresis in both dimensions, i.e. electrophoresis in the presence of cationic or anionic detergents. The overall performances and fields of application of both types of method is critically examined, as are future prospects for this field
|
10.1002/pmic.200800043
|
[
"https://arxiv.org/pdf/0812.4736v1.pdf"
] | 4,792,596 |
0812.4736
|
57eb1d27ffa5d70879453735633c19cd684e8903
|
Fully denaturing two-dimensional electrophoresis of membrane proteins: a critical update
Thierry Rabilloud [email protected], Mireille Chevallet, Sylvie Luche, Cécile Lelong

Biophysique et Biochimie des Systèmes Intégrés, CEA-DSV/iRTSV/LBBSI, CEA-Grenoble, 17 rue des martyrs, F-38054 GRENOBLE CEDEX 9, France
Biophysique et Biochimie des Systèmes Intégrés, UMR 5092, CNRS, CEA-Grenoble, 17 rue des martyrs, F-38054 GRENOBLE CEDEX 9, France

Correspondence: Thierry Rabilloud, BBSI, CEA-Grenoble, 17 rue des martyrs, F-38054 GRENOBLE CEDEX 9, France
The quality and ease of proteomics analysis depend on the performance of the analytical tools used, and thus on the performance of the protein separation tools used to deconvolute complex protein samples. Among protein samples, membrane proteins are one of the most difficult sample classes, because of their hydrophobicity and their embedment in the lipid bilayers. This review deals with the recent progress and advances made in the separation of membrane proteins by two-dimensional electrophoresis separating only denatured proteins. Traditional 2D methods, i.e. methods using isoelectric focusing in the first dimension, are compared to methods using only zone electrophoresis in both dimensions, i.e. electrophoresis in the presence of cationic or anionic detergents. The overall performance and fields of application of both types of method are critically examined, as are future prospects for this field.
1. A historical introduction: the proto-proteomics era
Because of their strategic localization at the interface between the cell and its external environment, which underlies many of their roles (transport, sensing, communication), membrane proteins have received a lot of attention from the entire field of biochemistry, and proteomics is no exception to this rule.
As a matter of fact, the first attempts at separating membrane proteins on 2D gels quickly followed the first detailed descriptions of 2D electrophoresis [1]. Several modifications of the basic 2D electrophoresis protocol were published in the 80's and early 90's, each being described as "optimized" for membrane proteins, but following the basic constraints in protein solubilization [2]. As expected from membrane protein chemistry, these protocols varied mainly in the detergent used in the IEF dimension, as this is a crucial point for the whole 2D electrophoresis protocol. Compared to the initial 2D electrophoresis protocols, which used NP-40 or Triton X100 in combination with urea as the protein solubilization agent, these "improved" protocols used a variety of non-ionic or zwitterionic detergents, including CHAPS [3], linear sulfobetaines [4], amidosulfobetaines [5] or dodecylmaltoside [6].
However, most of these papers were poorly demonstrative, as they just relied on the presence of additional spots in the improved system to claim solubilization of membrane proteins. It must be kept in mind that the protein identification means available at that time were very far from what they are now, especially for direct protein identification. The most sensitive protein identification means was immunoblotting, but this provides a targeted identification (where is protein X?) rather than a naive identification (what protein is in this spot?).
Despite this important difficulty, some direct evidence of the solubilization of membrane proteins could already be gained during this period. Analysis by 2D electrophoresis of the transferrin receptor [7] or the ACTH receptor [8] in complex membrane samples was demonstrated by immunoblotting. Conversely, analysis by 2D electrophoresis of semi-purified membrane preparations allowed some membrane receptors to be identified by classical staining [9][10][11]. But in a few cases, definitive evidence of the poor performance of classical 2D electrophoresis protocols with well-known membrane proteins could also be established [12], and the overall situation of the performance of 2D electrophoresis for membrane proteins using these classical protocols, based on urea-detergent as the solubilization agent in IEF, has been reviewed [13].
2. The proteomics revolution: taking the measure of the problem
The advent of ultrasensitive protein identification, first by Edman sequencing and then by mass spectrometry (with a further important increase in sensitivity), has had a major impact on the whole protein biochemistry field. In the field of the wide-scale analysis of membrane proteins, this revolution was even more dramatic in its impact, as the community became aware rather fast of an important problem in the analysis of membrane proteins by 2D electrophoresis. Within a few years, it became obvious that most of the spots visualized on 2D gels of membrane preparations were soluble contaminants and extrinsic proteins, and that only very few intrinsic membrane proteins, defined as proteins with one or several transmembrane helices, were present on those 2D gels [14,15]. Careful examination of the physico-chemical features of the proteins identified on 2D gels revealed that the general hydropathy of the polypeptide chain had a major impact on its ability to be seen on 2D gels [16]. As intrinsic membrane proteins are generally more hydrophobic than classical intracellular proteins, this explained at least in part why intrinsic membrane proteins were so poorly represented on classical 2D gels. However, a more detailed examination of the features of membrane proteins separated on 2D gels revealed that the average hydropathy index (GRAVY) is not the best predictive index for visualization on 2D gels, and that the ratio of the number of predicted transmembrane domains to the molecular weight [13] is a better predictive index.
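To make these two predictive indices concrete, a minimal sketch is given below; it is our illustration rather than a tool from the cited studies. The Kyte-Doolittle hydropathy values are the standard ones, but the example sequence is hypothetical, the transmembrane-domain count must be supplied by a prediction tool, and an average residue mass of about 110 Da is assumed for the molecular weight.

# Minimal sketch (Python): GRAVY index (Kyte-Doolittle scale) and the ratio of
# predicted transmembrane domains to molecular weight, the two indices discussed above.

KYTE_DOOLITTLE = {
    'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5,
    'Q': -3.5, 'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5,
    'L': 3.8, 'K': -3.9, 'M': 1.9, 'F': 2.8, 'P': -1.6,
    'S': -0.8, 'T': -0.7, 'W': -0.9, 'Y': -1.3, 'V': 4.2,
}

def gravy(sequence: str) -> float:
    """Grand average of hydropathy: mean Kyte-Doolittle value per residue."""
    values = [KYTE_DOOLITTLE[aa] for aa in sequence.upper() if aa in KYTE_DOOLITTLE]
    if not values:
        raise ValueError("sequence contains no standard amino acids")
    return sum(values) / len(values)

def tmd_per_kda(n_predicted_tmd: int, sequence: str) -> float:
    """Predicted transmembrane domains per kDa; the TMD count comes from an
    external predictor, and ~110 Da per residue is assumed for the mass."""
    mw_kda = len(sequence) * 110.0 / 1000.0
    return n_predicted_tmd / mw_kda

if __name__ == "__main__":
    # Hypothetical hydrophobic stretch, just to exercise the two functions.
    seq = "MLLAVLYCLLWSFQTSAGHFPRACVSSKNL"
    print(f"GRAVY     = {gravy(seq):.2f}")
    print(f"TMD / kDa = {tmd_per_kda(1, seq):.2f}")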
It seems rather obvious that the main problem encountered is the solubilization of the hydrophobic membrane proteins under the conditions prevailing in the IEF dimension (low ionic strength, no ionic detergent). However, several experiments suggest that besides this solubilization problem, there is a protein detection problem for membrane proteins. This was clearly suggested by a publication on brain membrane proteins [17], but this can also be seen on figure 1.
However, in this latter example of inner mitochondrial membrane transporters, there is clearly a superposition of a detection problem and a solubilization problem. The detection problem is highlighted by the fact that some membrane proteins are detected by the MS-incompatible protocol and not by the MS-compatible protocol. In this case, excision, on the gels stained with the MS-compatible protocol, of gel pieces that are unstained but superposable to stained areas in the gel stained with the MS-incompatible protocol led to positive protein identifications (e.g. ADT2, SFX3). This is indicative of a detection problem.
However, excision of other unstained gel pieces, in the areas where other missing transporters (e.g. phosphate transporter, glutamate-malate shuttle) should be (from their calculated pI and Mw), did not yield any additional identifications, showing that protein solubility in 2D gels is still a key issue and needs to be improved.
3. Improvements of the standard 2D electrophoresis technique
The constraints induced by the IEF separation are rather strict: low ionic strength, no ionic detergent in the gel, and only low amounts of ionic detergent in the sample. Consequently, there are only two parameters the experimenter can play with to increase the solubility of proteins: the nature and concentration of the chaotropes, and the nature and concentration of the detergents. Both the chaotrope and the detergent can be a single compound or a mixture of various compounds. The historical chaotrope in IEF is urea, as it is an efficient one and the only one compatible with acrylamide polymerization. However, the above-mentioned corpus of knowledge had shown that urea alone, whatever nonionic detergent it was used with, was poorly efficient for the solubilization of membrane proteins.
The situation was improved with the introduction of thiourea as an ancillary chaotrope in addition to urea [19]. While this initial report was purely qualitative and not related to membrane proteins, dedicated studies investigating several types of detergents in combination with a urea-thiourea chaotrope were carried out on various membrane systems.
As a matter of fact, the change in chaotrope dramatically affected the solubilizing power of even "mild" non-ionic detergents, as shown by the work carried out on fat globules in human milk [20]. While the combination of urea and Triton X100 led to poor patterns, a combination of urea-thiourea and Triton X100 solubilized and focused human butyrophilin, which is a protein with a transmembrane domain.
This chaotropic combination was tested with various types of detergents, including amidosulfobetaines [21] and other types of sulfobetaines [22], [23]. For this class of detergents, a detailed structure-efficiency study was carried out [24], and solubilization of bona fide intrinsic membrane proteins was demonstrated, including GPCRs with seven transmembrane domains [17], red blood cell membrane proteins with various transmembrane domains, including the Band III protein with 12 transmembrane domains [22], which had so far resisted solubilization [12], as well as aquaporins and the proton ATPase [21], [23], and mitochondrial transporters [25].
However, sulfobetaines were not the only type of detergent which proved useful. Nonionic detergents of the oligooxyethylene group proved efficient on the same proteins [26].
Detergents of the glucoside type also proved useful [26], and were able to solubilize transmembrane proteins from myelin [27]. Finally zwitterionic detergents of the phosphocholine type were also tested and able to solubilize membrane proteins from muscle [28].
These positive results prompted several groups to investigate the possibility of analyzing membrane proteins by 2D electrophoresis in more complex systems, either from plants (e.g. [29]) or from animals (e.g. [30,31]). However, it is fair to say that the results were generally considered disappointing. While it is true that the improved methods did allow some membrane proteins to be visualized (e.g. in [32]), the general situation is that many membrane proteins are missing [33]. While this impression was first based on previous knowledge (e.g. in [33]), the concomitant use of proteomics strategies not based on 2D gels, such as shotgun strategies [34] or strategies based on 1D SDS gels [35], made it obvious that many hydrophobic proteins are missing on classical 2D gels [36]. To sum up the situation with the example of the P450 cytochromes, the improvements made in protein solubilization for IEF allowed going from none [37] to some (see figure 2), but these are obviously too few.
A happy exception to this rule lies in the bacterial membrane proteins, and especially in the proteins of the outer membrane (OMPs) of Gram-negative bacteria [40]. These proteins are essentially of the porin type, and their transmembrane part is not made of helices but of a beta barrel. Once properly denatured, these proteins are fairly soluble in the conditions prevailing in IEF, and can thus be analyzed with high resolution. This is true for bacterial porins [41] but also for eukaryotic porins [42].
Consequently, proteomics based on 2D gels has been widely applied to bacterial membrane proteomics. While the success has been limited for Gram-positive bacteria [43,44], where there is only one membrane, with most proteins spanning the membrane by helices, much more work has been devoted to Gram-negative bacteria, with greater success, such as the identification of a new OMP [45], the study of E. coli either from the basic microbiology point of view [46] or from the side of pathogenic bacterial strains [47,48], and more generally the study of various Gram-negative organisms of interest in various areas [49][50][51][52][53][54][55][56][57].
However, this exception shall not mask the general rule, which is that proteins having multiple transmembrane helices generally escape analysis by IEF-based two-dimensional electrophoresis [58]. The presence of lipids was claimed to be deleterious [59], but delipidation with organic solvents did not induce a major improvement in the solubilization of membrane proteins in classical 2D PAGE [33]. It was soon demonstrated that isoelectric precipitation is the major phenomenon to blame [60], which led to the attempt to solubilize the isoelectrically-precipitated proteins with hot SDS [61]. Unfortunately, this method did not really solve the problem, so that other solutions had to be sought.
2D electrophoresis without IEF
As it appears that limited solubility of the membrane proteins is the main limiting factor for their analysis by classical 2D PAGE, alternate methods must be found. In this respect, the contrast between the poor solubilizing performances under IEF conditions and the excellent ones in SDS PAGE gives valuable insights into the directions to be followed for dedicated solubilization systems for electrophoretic separation of membrane proteins. Of course, membrane proteins must be separated in the presence of detergents to cover the hydrophobic parts of the protein. But in addition, the electrostatic repulsion between molecules must be maximal to prevent aggregation. To this purpose, it is often advantageous to add extraneous charges to proteins via a charged protein-binding agent. Consequently, two main types of 2D gels can be used for separating membrane proteins:
In the first type, the first dimension uses native electrophoresis of membrane proteins and/or membrane complexes, generally with a charge-modifying agent. This concept, which dates back to the early days of electrophoresis and has been reviewed previously [62], has been refined and further developed more recently [63] and will be reviewed in another article of this issue [64].
In the second type, the first dimension separates denatured proteins, and in this case, the denaturing agent is most often also the charge transfer agent, and is made of an ionic detergent. Of course, it would be of little interest to use twice the simple SDS-PAGE technique, as the proteins would simply lay on the diagonal of the gel. Thus, the optimal system in the first dimension should offer a separation as different as possible from SDS PAGE, while keeping the high loading capacity and high solubilizing power of SDS PAGE.
Among the various electrophoretic systems designed to date, the urea-16BAC system originally devised by MacFarlane [65] has these desirable features. It shows a high loading capacity [66], while also showing a very different migration when compared to SDS [67]. Its ability to separate bona fide integral membrane proteins and thus its utility in membrane proteomics was demonstrated rather early [68], and when it became obvious that IEF-based 2D gels did not show adequate solubilization performances, this system became an obvious choice, as shown for example on bacterial membrane proteins [69]. It therefore received a lot of applications in various fields, spanning from bacterial proteins [70,71], to yeast [72], to mammalian cells or tissues [73,74] or to subcellular membranes [75]. Thus, this system, as well as closely related ones using other cationic detergents such as CTAB instead of 16-BAC [76] have gained increased popularity in membrane proteomics.
Compared to the initial description [65], the newest versions do not stain the proteins between the two dimensions, but rely on a simple equilibration in the SDS-containing buffer [76].
However, the cationic system is not as straightforward to use and not as versatile as SDS PAGE.
For example, the polymerization of gels at low pH cannot be achieved by the classical and robust TEMED/persulfate system, and the more delicate ascorbate/ferrous ion/hydrogen peroxide system is often used. Alternatively, the methylene blue-based photopolymerization system [77], which has been shown to be efficient with other types of acidic, urea-containing gels [78], can be used to polymerize the acidic first-dimension gels [79].
In addition, and in contrast to SDS PAGE, the performance of the urea-cationic detergent systems is very sensitive to the pH of the separating gels, as shown in Figure 3. This suggests that the solubilization is not fully driven by the cationic detergent, but rather both by the native charge of the proteins and by the charges added by the detergent. This suggests in turn that the cationic detergents are less efficient than SDS for the solubilization of proteins, so that it can be expected that some membrane proteins are soluble in SDS-containing media but not in cationic detergent-containing media. This assumption has recently been demonstrated to be true [82].
Thus, if optimal membrane protein solubilization is required, SDS must be used in both dimensions. Consequently, some tricks must be found to obtain somewhat different separations in the two SDS-based dimensions. This can be achieved to some extent by changing the buffer system from one dimension to the other, as this alters the resolution of the system [83]. To further enhance this differential mobility effect, nonionic modifiers can be used, such as glycerol [84] or urea [85]. Both approaches have been shown to solubilize membrane proteins adequately [82], [86], and the urea-based approach has been shown to be superior to the cationic detergent-based approaches in terms of hydrophobic protein solubilization [82].
However, this comparison also showed that the resolution of 2D gels, in terms of spot spreading on the gels, clearly ranks in the order: double SDS techniques < cationic/anionic detergent techniques << IEF-based techniques. While this was easily predictable on the basis of the interdependency of the separation principles used for the two dimensions of the system, it is not without consequences on how we can use these electrophoretic systems optimally in proteomics studies and on how we can improve their use.
Future prospects
One of the great strengths of classical, IEF-based 2D electrophoresis is its very high resolution. It makes it possible to separate many post-translational variants, to spread spots apart enough that image analysis is feasible for following spot variations between the conditions investigated, and finally to make the basic assumption that one spot most often contains a single protein, or at least a single dominating protein, so that changes in spot volume can be attributed to changes in the abundance of this protein.
When switching to 2D systems based only on detergent zone electrophoresis, the loss of resolution impacts these various features quite differently. While the ability to separate simple post-translational variants is irremediably lost because of the separation principles at play, the factors linked to spot crowding are impacted both by the resolution of the gel system and by the complexity of the sample.
Positional variability, i.e. the difficulties encountered because of gel-to-gel variations in spot positions, is of course more severe on crowded gels where only a fraction of the gel space is used to display proteins. These difficulties can be dealt with by multiplexing, i.e. by labeling several samples with different fluorescent probes and comigrating them in a single gel [87]. This approach has been applied successfully to cationic/anionic 2D PAGE (e.g. in [88]).
However, the other problem linked to spot crowding, i.e. the fusion of several proteins into a single, average spot, cannot be solved easily, so that we are currently facing a difficult situation. So-called membrane preparations contain both intrinsic membrane proteins and more soluble proteins associated with the membrane by various mechanisms. As a matter of fact, intrinsic membrane proteins often represent a low percentage (at least in mass) of the proteins present in the biochemical preparation, and conversely most of the proteins present in such preparations are soluble ones.
Consequently, the dilemma that we are facing now is quite simple: we can analyze easily the soluble components of a membrane preparation by classical 2D electrophoresis, but most of the scarce resolution space that we have in double-zone 2D PAGE is wasted to analyze those soluble components, and little is left to analyze the intrinsic proteins.
Thus, our efforts to increase the total performance of proteomics of intrinsic membrane proteins should go into two directions, i) finding double-zone 2D PAGE systems with increased resolution and ii) finding biochemical ways to enrich preparations in intrinsic membrane proteins. However, both directions are likely to be difficult to improve.
As to the electrophoresis systems, we are bound by the protein-binding capacities of detergents. A detergent showing a protein-binding behavior very different from that of SDS is desirable to induce a different migration. However, when this is the case, it is also likely that the detergent/protein ratio will be rather low, resulting in lower solubilizing performances, especially at the high protein concentrations prevailing in electrophoresis. Conversely, a good protein solubilizer will bind at a high detergent/protein ratio, thereby leading to a migration resembling the one obtained with SDS and therefore to limited off-diagonal effects.
As to the enrichment in intrinsic membrane proteins, several solutions have been investigated, with limited success and robustness. The most widely used enrichment consists of washing membranes in high salt and/or high pH solutions. However, this leads to very limited enrichment in intrinsic membrane proteins, which represent ca. 10 percent of the proteins in the unwashed membranes and ca. 20 percent in the washed ones [89]. Enrichment by two-phase partitioning has also been described, but the real gain in performance has not been quantified [90].
Other strategies are based on the selective extraction or purification of defined classes of proteins. For example, classes of surface glycoproteins have been purified prior to proteomic analysis [91]. Besides the fact that the unglycosylated membrane proteins are lost by such an approach, the performances in terms of efficiency, i.e. number of missed glycoproteins / number of selected glycoproteins, are difficult to evaluate.
A direct and straightforward approach could be to select the intrinsic membrane proteins on the basis of their hydrophobicity, e.g. by extracting them into organic solvents [92,93].
Although this method clearly achieves a major enrichment in hydrophobic proteins [93], some more soluble, non-membrane proteins are also extracted [92], and the method has a bias opposite to that of alkaline washes: while alkaline washes let many non-membrane proteins go with the membrane fraction, organic extraction leaves many membrane proteins in the pellet together with soluble ones.
Sometimes, a combination of electrophoretic approaches can be used to further increase the representation of membrane proteins. For example, membrane supercomplexes can be first isolated by native electrophoresis [64], and the isolated supercomplexes can then be analyzed by high-resolution electrophoresis, as shown in [94].

Concluding remarks

Because of the high interest in membrane proteins, the proteomics of membrane proteins is still a hot topic in proteomics, as shown in a recent review in the field [95]. Compared to the initial hopes, it is fair to say that two-dimensional electrophoretic separation of membrane proteins did not show adequate performances, as there seems to be an inverse correlation between resolution and solubilizing power. Consequently, current 2D protocols do not handle adequately the complexity of whole membrane preparations. This does not mean that this type of separation is useless, and the numerous references cited here clearly show that adapted 2D separations, or even better a combination of approaches such as in [96], can indeed lead to the identification of some membrane proteins. However, it is quite clear that we do not have for the moment the same degree of performance, general applicability and robustness on membrane proteins that we have on cytosolic proteins with classical 2D PAGE, and that further research both in electrophoretic systems and in membrane protein enrichment is needed to bring the situation to a level of general application.

Figures

Figure 1: Staining artefacts in membrane proteins. Total mitochondrial proteins from human placenta (150 µg) were separated by two-dimensional electrophoresis. The IPG gradient is a homemade 5.5-12 linear pH gradient, and the proteins were extracted and focused in a 7M urea, 2M thiourea, 2% Brij56 and 0.4% Pharmalytes 3-10 mixture; sample loading by anodic cup loading. The proteins were stained with silver, using either a mass spectrometry-compatible silver ammonia protocol [18] (panel A), or the same protocol in which the initial acid-alcohol fixation is replaced by a 4% formaldehyde, 20% ethanol fixation for 1 hour (panel B). Protein identifications were carried out by MALDI peptide mass fingerprinting on the (A) gel.

Figure 2: Two-dimensional electrophoresis of mouse microsomal proteins. Proteins from mouse liver microsomes were separated by two-dimensional electrophoresis. The IPG gradient is a homemade 3-10.5 linear pH gradient [38], and the proteins were extracted and focused in a 7M urea, 2M thiourea, 2% Brij56 and 0.4% Pharmalytes 3-10 mixture; sample loading (150 µg) by in-gel rehydration. Silver staining by silver nitrate [39] with formaldehyde developer. Protein identifications were carried out by MALDI peptide mass fingerprinting. Note the presence of cytochrome P450 isoforms in the 2D gel.

Figure 3: Cationic/anionic 2D PAGE of bacterial membrane proteins. Membrane proteins from B. subtilis (50 µg) were separated by 16BAC/SDS PAGE. The first-dimension 16BAC gels were run either at pH 2 in the phosphate/glycine system [65] (panel A), at pH 5 in the acetate/beta-alanine system [80] (panel B), or at pH 7 in the Hepes/histidine system [81] (panel C). Protein detection by silver ammonia staining [18]. Note the increased streaking with increasing running pH.
References

[1] Ames, G.F., Nikaido, K., Two-dimensional gel electrophoresis of membrane proteins. Biochemistry 1976, 15, 616-623.
[2] Rabilloud, T., Solubilization of proteins for electrophoretic analyses. Electrophoresis 1996, 17, 813-829.
[3] Perdew, G.H., Schaup, H.W., Selivonchick, D.P., The use of a zwitterionic detergent in two-dimensional gel electrophoresis of trout liver microsomes. Anal Biochem. 1983, 135, 453-455.
[4] Gyenes, T., Gyenes, E., Effect of "stacking" on the resolving power of ultrathin-layer two-dimensional gel electrophoresis. Anal Biochem. 1987, 165, 155-160.
[5] Rabilloud, T., Gianazza, E., Cattò, N., Righetti, P.G., Amidosulfobetaines, a family of detergents with improved solubilization properties: application for isoelectric focusing under denaturing conditions. Anal Biochem. 1990, 185, 94-102.
[6] Witzmann, F., Jarnot, B., Parker, D., Dodecyl maltoside detergent improves resolution of hepatic membrane proteins in two-dimensional gels. Electrophoresis 1991, 12, 687-688.
[7] Felsted, R.L., Gupta, S.K., Glover, C.J., Fischkoff, S.A., Gallagher, R.E., Cell surface membrane protein changes during the differentiation of cultured human promyelocytic leukemia HL-60 cells. Cancer Res. 1983, 43, 2754-2761.
[8] Lüddens, H., Havsteen, B., Characterization of the porcine ACTH receptor with the aid of a monoclonal antibody. Biol Chem Hoppe Seyler 1986, 367, 539-547.
[9] Sweetnam, P., Nestler, E., Gallombardo, P., Brown, S. et al., Comparison of the molecular structure of GABA/benzodiazepine receptors purified from rat and human cerebellum. Brain Res. 1987, 388, 223-233.
[10] Alla, S.A., Buschko, J., Quitterer, U., Maidhof, A. et al., Structural features of the human bradykinin B2 receptor probed by agonists, antagonists, and anti-idiotypic antibodies. J Biol Chem. 1993, 268, 17277-17285.
[11] Englund, A.K., Lundahl, P., The isoelectric point of the human red cell glucose transporter. Biochim Biophys Acta 1991, 1065, 185-194.
[12] Rubin, R.W., Milikowski, C., Over two hundred polypeptides resolved from the human erythrocyte membrane. Biochim Biophys Acta 1978, 509, 100-110.
[13] Santoni, V., Molloy, M., Rabilloud, T., Membrane proteins and proteomics: un amour impossible? Electrophoresis 2000, 21, 1054-1070.
[14] Santoni, V., Doumas, P., Rouquié, D., Mansion, M. et al., Large scale characterization of plant plasma membrane proteins. Biochimie 1999, 81, 655-661.
[15] Adessi, C., Miege, C., Albrieux, C., Rabilloud, T., Two-dimensional electrophoresis of membrane proteins: a current challenge for immobilized pH gradients. Electrophoresis 1997, 18, 127-135.
[16] Wilkins, M.R., Gasteiger, E., Sanchez, J.C., Bairoch, A., Hochstrasser, D.F., Two-dimensional gel electrophoresis for proteome projects: the effects of protein hydrophobicity and copy number. Electrophoresis 1998, 19, 1501-1505.
[17] Henningsen, R., Gale, B.L., Straub, K.M., DeNagel, D.C., Application of zwitterionic detergents to the solubilization of integral membrane proteins for two-dimensional gel electrophoresis and mass spectrometry. Proteomics 2002, 2, 1479-1488.
[18] Chevallet, M., Diemer, H., Luche, S., van Dorsselaer, A. et al., Improved mass spectrometry compatibility is afforded by ammoniacal silver staining. Proteomics 2006, 6, 2350-2354.
[19] Rabilloud, T., Adessi, C., Giraudel, A., Lunardi, J., Improvement of the solubilization of proteins in two-dimensional electrophoresis with immobilized pH gradients. Electrophoresis 1997, 18, 307-316.
[20] Quaranta, S., Giuffrida, M.G., Cavaletto, M., Giunta, C. et al., Human proteome enhancement: high-recovery method and improved two-dimensional map of colostral fat globule membrane proteins. Electrophoresis 2001, 22, 1810-1818.
[21] Chevallet, M., Santoni, V., Poinas, A., Rouquié, D. et al., New zwitterionic detergents improve the analysis of membrane proteins by two-dimensional electrophoresis. Electrophoresis 1998, 19, 1901-1909.
[22] Rabilloud, T., Blisnick, T., Heller, M., Luche, S. et al., Analysis of membrane proteins by two-dimensional electrophoresis: comparison of the proteins extracted from normal or Plasmodium falciparum-infected erythrocyte ghosts. Electrophoresis 1999, 20, 3603-3610.
[23] Santoni, V., Kieffer, S., Desclaux, D., Masson, F., Rabilloud, T., Membrane proteomics: use of additive main effects with multiplicative interaction model to classify plasma membrane proteins according to their solubility and electrophoretic properties. Electrophoresis 2000, 21, 3329-3344.
[24] Tastet, C., Charmont, S., Chevallet, M., Luche, S., Rabilloud, T., Structure-efficiency relationships of zwitterionic detergents as protein solubilizers in two-dimensional electrophoresis. Proteomics 2003, 3, 111-121.
[25] Lescuyer, P., Strub, J.M., Luche, S., Diemer, H. et al., Progress in the definition of a reference human mitochondrial proteome. Proteomics 2003, 3, 157-167.
[26] Luche, S., Santoni, V., Rabilloud, T., Evaluation of nonionic and zwitterionic detergents as membrane protein solubilizers in two-dimensional electrophoresis. Proteomics 2003, 3, 249-253.
[27] Taylor, C.M., Pfeiffer, S.E., Enhanced resolution of glycosylphosphatidylinositol-anchored and transmembrane proteins from the lipid-rich myelin membrane by two-dimensional gel electrophoresis. Proteomics 2003, 3, 1303-1312.
[28] Babu, G.J., Wheeler, D., Alzate, O., Periasamy, M., Solubilization of membrane proteins for two-dimensional gel electrophoresis: identification of sarcoplasmic reticulum membrane proteins. Anal Biochem. 2004, 325, 121-125.
[29] Valot, B., Gianinazzi, S., Dumas-Gaudot, E., Sub-cellular proteomic analysis of a Medicago truncatula root microsomal fraction. Phytochemistry 2004, 65, 1721-1732.
[30] Friso, G., Wikström, L., Analysis of proteins from membrane-enriched cerebellar preparations by two-dimensional gel electrophoresis and mass spectrometry. Electrophoresis 1999, 20, 917-927.
[31] Stanley, B.A., Neverova, I., Brown, H.A., Van Eyk, J.E., Optimizing protein solubility for two-dimensional gel electrophoresis analysis of human myocardium. Proteomics 2003, 3, 815-820.
[32] Pedersen, S.K., Harry, J.L., Sebastian, L., Baker, J. et al., Unseen proteome: mining below the tip of the iceberg to find low abundance and membrane proteins. J Proteome Res. 2003, 2, 303-311.
[33] Carboni, L., Piubelli, C., Righetti, P.G., Jansson, B., Domenici, E., Proteomic analysis of rat brain tissue: comparison of protocols for two-dimensional gel electrophoresis analysis based on different solubilizing agents. Electrophoresis 2002, 23, 4132-4141.
[34] Speers, A.E., Wu, C.C., Proteomics of integral membrane proteins -- theory and application. Chem Rev. 2007, 107, 3687-3714.
[35] Low, T.Y., Seow, T.K., Chung, M.C., Separation of human erythrocyte membrane associated proteins with one-dimensional and two-dimensional gel electrophoresis followed by identification with matrix-assisted laser desorption/ionization-time of flight mass spectrometry. Proteomics 2002, 2, 1229-1239.
[36] Klein, C., Garcia-Rizo, C., Bisle, B., Scheffer, B. et al., The membrane proteome of Halobacterium salinarum. Proteomics 2005, 5, 180-197.
[37] Galeva, N., Altermann, M., Comparison of one-dimensional and two-dimensional gel electrophoresis as a separation tool for proteomic analysis of rat liver microsomes: cytochromes P450 and other membrane proteins. Proteomics 2002, 2, 713-722.
[38] Gianazza, E., Celentano, F., Magenes, S., Ettori, C., Righetti, P.G., Formulations for immobilized pH gradients including pH extremes. Electrophoresis 1989, 10, 806-808.
[39] Richert, S., Luche, S., Chevallet, M., Van Dorsselaer, A. et al., About the mechanism of interference of silver staining with peptide mass spectrometry. Proteomics 2004, 4, 909-916.
[40] Molloy, M.P., Phadke, N.D., Maddock, J.R., Andrews, P.C., Two-dimensional electrophoresis and peptide mass fingerprinting of bacterial outer membrane proteins. Electrophoresis 2001, 22, 1686-1696.
[41] Molloy, M.P., Herbert, B.R., Slade, M.B., Rabilloud, T. et al., Proteomic analysis of the Escherichia coli outer membrane. Eur J Biochem. 2000, 267, 2871-2881.
[42] Liberatori, S., Canas, B., Tani, C., Bini, L. et al., Proteomic approach to the identification of voltage-dependent anion channel protein isoforms in guinea pig brain synaptosomes. Proteomics 2004, 4, 1335-1340.
[43] Nandakumar, R., Nandakumar, M.P., Marten, M.R., Ross, J.M., Proteome analysis of membrane and cell wall associated proteins from Staphylococcus aureus. J Proteome Res. 2005, 4, 250-257.
[44] Gatlin, C.L., Pieper, R., Huang, S.T., Mongodin, E. et al., Proteomic profiling of cell envelope-associated proteins from Staphylococcus aureus. Proteomics 2006, 6, 1530-1549.
[45] Veith, P.D., Talbo, G.H., Slakeski, N., Reynolds, E.C., Identification of a novel heterodimeric outer membrane protein of Porphyromonas gingivalis by two-dimensional gel electrophoresis and peptide mass fingerprinting. Eur J Biochem. 2001, 268, 4748-4757.
[46] Huang, C.Z., Lin, X.M., Wu, L.N., Zhang, D.F. et al., Systematic identification of the subproteome of Escherichia coli cell envelope reveals the interaction network of membrane proteins and membrane-associated peripheral proteins. J Proteome Res. 2006, 5, 3268-3276.
[47] Berlanda Scorza, F., Doro, F., Rodríguez-Ortega, M.J., Stella, M. et al., Proteomic characterization of outer membrane vesicles from the extraintestinal pathogenic Escherichia coli tolR IHE3034 mutant. Mol Cell Proteomics, in press.
[48] Hagan, E.C., Mobley, H.L., Uropathogenic Escherichia coli outer membrane antigens expressed during urinary tract infection. Infect Immun. 2007, 75, 3941-3949.
[49] Jun, H.S., Qi, M., Gong, J., Egbosimba, E.E., Forsberg, C.W., Outer membrane proteins of Fibrobacter succinogenes with potential roles in adhesion to cellulose and in cellulose digestion. J Bacteriol. 2007, 189, 6806-6815.
[50] Phadke, N.D., Molloy, M.P., Steinhoff, S.A., Ulintz, P.J. et al., Analysis of the outer membrane proteome of Caulobacter crescentus by two-dimensional electrophoresis and mass spectrometry. Proteomics 2001, 1, 705-720.
[51] Huang, F., Fulda, S., Hagemann, M., Norling, B., Proteomic screening of salt-stress-induced changes in plasma membranes of Synechocystis sp. strain PCC 6803. Proteomics 2006, 6, 910-920.
[52] Ying, T., Wang, H., Li, M., Wang, J. et al., Immunoproteomics of outer membrane proteins and extracellular proteins of Shigella flexneri 2a 2457T. Proteomics 2005, 5, 4777-4793.
[53] Connolly, J.P., Comerci, D., Alefantis, T.G., Walz, A. et al., Proteomic analysis of Brucella abortus cell envelope and identification of immunogenic candidate proteins for vaccine development. Proteomics 2006, 6, 3767-3780.
[54] Siroy, A., Cosette, P., Seyer, D., Lemaître-Guillier, C. et al., Global comparison of the membrane subproteomes between a multidrug-resistant Acinetobacter baumannii strain and a reference strain. J Proteome Res. 2006, 5, 3385-3398.
[55] Peng, X., Xu, C., Ren, H., Lin, X. et al., Proteomic analysis of the sarcosine-insoluble outer membrane fraction of Pseudomonas aeruginosa responding to ampicilin, kanamycin, and tetracycline resistance. J Proteome Res. 2005, 4, 2257-2265.
[56] Berven, F.S., Karlsen, O.A., Straume, A.H., Flikka, K. et al., Analysing the outer membrane subproteome of Methylococcus capsulatus (Bath) using proteomics and novel biocomputing tools. Arch Microbiol. 2006, 184, 362-377.
[57] Aivaliotis, M., Haase, W., Karas, M., Tsiotis, G., Proteomic analysis of chlorosome-depleted membranes of the green sulfur bacterium Chlorobium tepidum. Proteomics 2006, 6, 217-232.
[58] Peirce, M.J., Wait, R., Begum, S., Saklatvala, J., Cope, A.P., Expression profiling of lymphocyte plasma membrane proteins. Mol Cell Proteomics 2004, 3, 56-65.
[59] Mastro, R., Hall, M., Protein delipidation and precipitation by tri-n-butylphosphate, acetone, and methanol treatment for isoelectric focusing and two-dimensional gel electrophoresis. Anal Biochem. 1999, 273, 313-315.
[60] Coughenour, H.D., Spaulding, R.S., Thompson, C.M., The synaptic vesicle proteome: a comparative study in membrane protein identification. Proteomics 2004, 4, 3141-3155.
[61] McDonough, J., Marban, E., Optimization of IPG strip equilibration for the basic membrane protein mABC1. Proteomics 2005, 5, 2892-2895.
[62] Hjelmeland, L.M., Chrambach, A., Electrophoresis and electrofocusing in detergent-containing media: a discussion of basic concepts. Electrophoresis 1981, 2, 1-11.
[63] Schägger, H., von Jagow, G., Blue native electrophoresis for isolation of membrane protein complexes in enzymatically active form. Anal Biochem. 1991, 199, 223-231.
[64] Schägger, H., et al., this volume.
[65] MacFarlane, D.E., Use of benzyldimethyl-n-hexadecylammonium chloride ("16-BAC"), a cationic detergent, in an acidic polyacrylamide gel electrophoresis system to detect base labile protein methylation in intact cells. Anal Biochem. 1983, 132, 231-235.
[66] MacFarlane, D.E., Two dimensional benzyldimethyl-n-hexadecylammonium chloride-sodium dodecyl sulfate preparative polyacrylamide gel electrophoresis: a high capacity high resolution technique for the purification of proteins from complex mixtures. Anal Biochem. 1989, 176, 457-463.
[67] Lopez, M.F., Patton, W.F., Utterback, B.L., Chung-Welch, N. et al., Effect of various detergents on protein migration in the second dimension of two-dimensional gels. Anal Biochem. 1991, 199, 35-44.
[68] Hartinger, J., Stenius, K., Högemann, D., Jahn, R., 16-BAC/SDS-PAGE: a two-dimensional gel electrophoresis system suitable for the separation of integral membrane proteins. Anal Biochem. 1996, 240, 126-133.
[69] Bunai, K., Yamane, K., Effectiveness and limitation of two-dimensional gel electrophoresis in bacterial membrane protein proteomics and perspectives. J Chromatogr B 2005, 815, 227-236.
[70] Bunai, K., Ariga, M., Inoue, T., Nozaki, M. et al., Profiling and comprehensive expression analysis of ABC transporter solute-binding proteins of Bacillus subtilis membrane based on a proteomic approach. Electrophoresis 2004, 25, 141-155.
[71] Bisle, B., Schmidt, A., Scheibe, B., Klein, C. et al., Quantitative profiling of the membrane proteome in a halophilic archaeon. Mol Cell Proteomics 2006, 5, 1543-1558.
[72] Navarre, C., Degand, H., Bennett, K.L., Crawford, J.S. et al., Subproteomics: identification of plasma membrane proteins from the yeast Saccharomyces cerevisiae. Proteomics 2002, 2, 1706-1714.
[73] Moebius, J., Zahedi, R.P., Lewandrowski, U., Berger, C. et al., The human platelet membrane proteome reveals several new potential membrane proteins. Mol Cell Proteomics 2005, 4, 1754-1761.
[74] Bierczynska-Krzysik, A., Kang, S.U., Silberrring, J., Lubec, G., Mass spectrometrical identification of brain proteins including highly insoluble and transmembrane proteins. Neurochem Int. 2006, 49, 245-255.
[75] Zahedi, R.P., Meisinger, C., Sickmann, A., Two-dimensional benzyldimethyl-n-hexadecylammonium chloride/SDS-PAGE for membrane proteomics. Proteomics 2005, 5, 3581-3588.
[76] Helling, S., Schmitt, E., Joppich, C., Schulenborg, T. et al., 2-D differential membrane proteome analysis of scarce protein samples. Proteomics 2006, 6, 4506-4513.
[77] Lyubimova, T., Caglio, S., Gelfi, C., Righetti, P.G., Rabilloud, T., Photopolymerization of polyacrylamide gels with methylene blue. Electrophoresis 1993, 14, 40-50.
[78] Rabilloud, T., Girardot, V., Lawrence, J.J., One- and two-dimensional histone separations in acidic gels: usefulness of methylene blue-driven photopolymerization. Electrophoresis 1996, 17, 67-73.
[79] Buxbaum, E., Cationic electrophoresis and electrotransfer of membrane glycoproteins. Anal Biochem. 2003, 314, 70-76.
[80] Mócz, G., Bálint, M., Use of cationic detergents for polyacrylamide gel electrophoresis in multiphasic buffer systems. Anal Biochem. 1984, 143, 283-292.
[81] Paulson, J.R., Mesner, P.W., Delrow, J.J., Mahmoud, N.N., Ciesielski, W.A., Rapid analysis of mitotic histone H1 phosphorylation by cationic disc electrophoresis at neutral pH in minigels. Anal Biochem. 1992, 203, 227-234.
[82] Burré, J., Beckhaus, T., Schägger, H., Corvey, C. et al., Analysis of the synaptic vesicle proteome using three gel-based protein separation techniques. Proteomics 2006, 6, 6250-6262.
[83] Patton, W.F., Chung-Welch, N., Lopez, M.F., Cambria, R.P. et al., Tris-tricine and Tris-borate buffer systems provide better estimates of human mesothelial cell intermediate filament protein molecular weights than the standard Tris-glycine system. Anal Biochem. 1991, 197, 25-33.
[84] Williams, T.I., Combs, J.C., Thakur, A.P., Strobel, H.J., Lynn, B.C., A novel Bicine running buffer system for doubled sodium dodecyl sulfate-polyacrylamide gel electrophoresis of membrane proteins. Electrophoresis 2006, 27, 2984-2995.
[85] Rais, I., Karas, M., Schägger, H., Two-dimensional electrophoresis for the isolation of integral membrane proteins and mass spectrometric identification. Proteomics 2004, 4, 2567-2571.
[86] Williams, T.I., Combs, J.C., Lynn, B.C., Strobel, H.J., Proteomic profile changes in membranes of ethanol-tolerant Clostridium thermocellum. Appl Microbiol Biotechnol. 2007, 74, 422-432.
[87] Unlü, M., Morgan, M.E., Minden, J.S., Difference gel electrophoresis: a single gel method for detecting changes in protein extracts. Electrophoresis 1997, 18, 2071-2077.
[88] Burré, J., Beckhaus, T., Corvey, C., Karas, M. et al., Synaptic vesicle proteins under conditions of rest and activation: analysis by 2-D difference gel electrophoresis. Electrophoresis 2006, 27, 3488-3496.
[89] Schindler, J., Jung, S., Niedner-Schatteburg, G., Friauf, E., Nothwang, H.G., Enrichment of integral membrane proteins from small amounts of brain tissue. J Neural Transm. 2006, 113, 995-1013.
[90] Everberg, H., Sivars, U., Emanuelsson, C., Persson, C. et al., Protein pre-fractionation in detergent-polymer aqueous two-phase systems for facilitated proteomic studies of membrane proteins. J Chromatogr A. 2004, 1029, 113-124.
[91] Watarai, H., Hinohara, A., Nagafune, J., Nakayama, T. et al., Plasma membrane-focused proteomics: dramatic changes in surface expression during the maturation of human dendritic cells. Proteomics 2005, 5, 4001-4011.
[92] Molloy, M.P., Herbert, B.R., Williams, K.L., Gooley, A.A., Extraction of Escherichia coli proteins with organic solvents prior to two-dimensional electrophoresis. Electrophoresis 1999, 20, 701-704.
[93] Ferro, M., Seigneurin-Berny, D., Rolland, N., Chapel, A. et al., Organic solvent extraction as a versatile procedure to identify hydrophobic chloroplast membrane proteins. Electrophoresis 2000, 21, 3517-3526.
[94] Werhahn, W., Braun, H.P., Biochemical dissection of the mitochondrial proteome from Arabidopsis thaliana by three-dimensional gel electrophoresis. Electrophoresis 2002, 23, 640-646.
[95] Braun, R.J., Kinkl, N., Beer, M., Ueffing, M., Two-dimensional electrophoresis of membrane proteins. Anal Bioanal Chem. 2007, 389, 1033-1045.
[96] Aivaliotis, M., Karas, M., Tsiotis, G., An alternative strategy for the membrane proteome analysis of the green sulfur bacterium Chlorobium tepidum using blue native PAGE and 2-D PAGE on purified membranes. J Proteome Res. 2007, 6, 1048-1058.
The Golden Ratio of Learning and Momentum

Stefan Jaeger
[email protected]
National Library of Medicine, National Institutes of Health, Bethesda, MD 20894, USA

Abstract
Gradient descent has been a central training principle for artificial neural networks from the early beginnings to today's deep learning networks. The most common implementation is the backpropagation algorithm for training feed-forward neural networks in a supervised fashion. Backpropagation involves computing the gradient of a loss function, with respect to the weights of the network, to update the weights and thus minimize loss. Although the mean square error is often used as a loss function, the general stochastic gradient descent principle does not immediately connect with a specific loss function. Another drawback of backpropagation has been the search for optimal values of two important training parameters, learning rate and momentum weight, which are determined empirically in most systems. The learning rate specifies the step size towards a minimum of the loss function when following the gradient, while the momentum weight considers previous weight changes when updating current weights. Using both parameters in conjunction with each other is generally accepted as a means to improving training, although their specific values do not follow immediately from standard backpropagation theory. This paper proposes a new information-theoretical loss function motivated by neural signal processing in a synapse. The new loss function implies a specific learning rate and momentum weight, leading to empirical parameters often used in practice. The proposed framework also provides a more formal explanation of the momentum term and its smoothing effect on the training process. All results taken together show that loss, learning rate, and momentum are closely connected. To support these theoretical findings, experiments for handwritten digit recognition show the practical usefulness of the proposed loss function and training parameters.Preprint. Under review.
Introduction
Artificial neural networks (ANNs) have been at the center of machine learning and artificial intelligence from the early beginnings. However, the basic training principle of ANNs has not changed since the inception of feed-forward, multilayer networks [Rumelhart et al., 1986]. While their structure and depth have been constantly developed further, leading to the modern deep learning networks, training of parameters is traditionally based on gradient descent and backpropagation. This paper revisits this common way of training ANNs with backpropagation, arguing that training should be regarded as a two-way relationship between input and output rather than a one-sided teaching process. Following this train of thought, and motivated by biological learning systems, this paper develops a computational model in which learning is tantamount to making an observed input identical to the actual input, implying a difference between perception and reality. This boils down to finding the optimal gradient of an information-theoretical loss function, where the search space includes points for which the synaptic input coincides with the output. The theory developed in this paper will show that the golden ratio plays a central role in this learning process, defining points of minimum uncertainty, which define reality. Furthermore, the golden ratio makes it possible to derive theoretical values for the learning rate and momentum weight. The paper shows that these values closely match the values traditionally used in the literature, which are determined empirically. To provide further evidence that the presented theoretical framework is of practical significance, a practical test is carried out in the last section of this paper. For this test, a deep learning network is applied to handwritten digit recognition, using the proposed loss function and learning parameters.
The paper is structured as follows: Section 2 briefly summarizes the basic principle of backpropagation, including the momentum term. Section 3 highlights the basic mechanism of natural synaptic signal processing, based on which Section 4 then derives a computational learning model. In Section 5, the golden ratio and its mathematical definition are highlighted. Section 6 presents the information-theoretical loss function and develops the regularization of momentum. Finally, Section 7 shows experimental results before a conclusion summarizes the paper.
Backpropagation
In more than thirty years, backpropagation has established itself as the commonly used optimization principle for supervised learning with ANNs, including modern deep learning networks [Rumelhart et al., 1986, Bengio, 2012]. Backpropagation adjusts the weights in ANNs so that the difference between the network output and the teaching input becomes smaller.
Basic principle
The backpropagation algorithm is a gradient descent method that starts by computing the gradient of the loss function defining the difference between the network output and the teaching input [LeCun et al., 2012]. A commonly used loss function L is the sum of the squared error (SSE) between the network predictions Y and training targets T [Widrow et al., 2019]:
$$L = \frac{1}{N} \sum_{n=1}^{N} \sum_{k=1}^{K} (Y_{nk} - T_{nk})^2, \qquad (1)$$
where N is the number of observations and K is the number of classes. The gradient is computed with respect to each network weight. It can be computed one layer at a time, using the chain rule, starting at the output layer and then iterating backwards. Propagating the gradient back through the network for each weight is what gives this method its name. An important term in this backpropagation process is the derivative of the loss function with respect to the predictions Y . For the SSE loss function, this derivative takes the following form for an observation vector Y and a training vector T :
$$\frac{dL}{dY} = \frac{2 \cdot (Y - T)}{N} \qquad (2)$$
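To make Eqs. 1 and 2 concrete, the following minimal sketch (hypothetical NumPy code, not taken from the paper) evaluates the SSE loss and its derivative with respect to the predictions for a small batch; the array shapes, variable names, and example values are illustrative assumptions.

```python
import numpy as np

def sse_loss(Y, T):
    """SSE loss of Eq. 1: summed squared errors averaged over the N observations.
    Y, T: arrays of shape (N, K) with predictions and one-hot targets."""
    N = Y.shape[0]
    return np.sum((Y - T) ** 2) / N

def sse_loss_grad(Y, T):
    """Derivative of the SSE loss with respect to the predictions Y (Eq. 2)."""
    N = Y.shape[0]
    return 2.0 * (Y - T) / N

# Tiny example with N = 2 observations and K = 3 classes.
Y = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.6, 0.3]])
T = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
print(sse_loss(Y, T))       # scalar loss value
print(sse_loss_grad(Y, T))  # gradient with the same shape as Y
```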
The last step in the backpropagation process is to move along the gradient towards the optimum by adjusting each weight w_ij between two nodes, i and j, in the network. This is achieved by adding a Δw_ij to each weight w_ij, which is the partial derivative of the loss function L multiplied by the so-called learning rate η (multiplied by −1 to move towards the minimum):
\Delta w_{ij} = -\eta\,\frac{\partial L}{\partial w_{ij}} (3)
The learning rate η is a parameter that has so far been determined largely empirically [Bengio, 2012].
In setting a learning rate, there is a trade-off between the rate of convergence and the risk of passing over the optimal value. A typical value for the learning rate used in practice appears to be 0.01, although reported values have ranged across several orders of magnitude.
Momentum term
Updating weights by adding the delta of Eq. 3 does not guarantee reaching the global optimum. In practical experiments, adding a so-called momentum term has proved effective in improving performance. The momentum term for a weight w_ij in the network is its delta from the previous iteration, t − 1, multiplied by a weighting factor α. With momentum, the delta term in Eq. 3 becomes
\Delta w_{ij}(t) = -\eta\,\frac{\partial L}{\partial w_{ij}}(t) + \alpha \cdot \Delta w_{ij}(t-1) (4)
The common understanding is that the momentum term helps in accelerating stochastic gradient descent (SGD) by dampening oscillations. However, it introduces another parameter α for which only empirical guidelines exist but no theoretical derivation of its value, although second-order methods have been tried [Bengio, 2012, Sutskever et al., 2013].
Similar to the learning rate η, several values have been tried out in practice for the momentum weight α, although values around 0.9 seem to be more commonly used than others.
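To make the update rule concrete, the following minimal Python sketch applies Eqs. 3 and 4 to a single weight. The quadratic toy loss and the values η = 0.01 and α = 0.9 are illustrative assumptions only, chosen to mirror the empirical values mentioned above; they are not yet the theoretically derived values of this paper.

```python
# Minimal sketch of the delta rule with momentum (Eqs. 3 and 4).
# The toy loss L(w) = (w - 3)^2 and the values eta = 0.01, alpha = 0.9
# are illustrative assumptions, not values prescribed by this paper.

def grad_loss(w):
    """Gradient of the toy loss L(w) = (w - 3)^2 with respect to w."""
    return 2.0 * (w - 3.0)

eta, alpha = 0.01, 0.9     # learning rate and momentum weight (empirical values)
w, delta_prev = 0.0, 0.0   # initial weight and previous update

for t in range(200):
    delta = -eta * grad_loss(w) + alpha * delta_prev   # Eq. 4
    w += delta                                         # weight update
    delta_prev = delta

print(round(w, 4))  # converges towards the minimizer w = 3
```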
Synaptic Transmission Process
The basic building blocks of the human brain are neurons, which are connected and communicate with each other via synapses. It is estimated that the human brain has around 8.6 × 10^10 neurons [Herculano-Houzel, 2009], which can each have several thousand synaptic connections to other neurons, and that the number of synapses ranges from 10^14 to 5 × 10^14 for an adult brain [Drachman, 2005]. A neuron sends a signal to another neuron through its axon, which is a protrusion with potentially thousands of synapses and which can extend to other neurons in distant parts of the body. The other neuron receives the signal via so-called dendrites that conduct the received signal to the neuron's body.
A synapse is a membrane-to-membrane interface between two neurons that allows either chemical or electrical signal transmission [Lodish et al., 2000]. In the case of a chemical synapse, the signal is transmitted by molecular means from the presynaptic axon terminal of the sending neuron to the postsynaptic dendritic terminal of the receiving neuron. This is accomplished by neurotransmitters, which can bridge the synaptic cleft, a small gap between the membranes of two neurons, as illustrated in Figure 1. The small volume of the synaptic cleft allows neurotransmitter concentration to increase and decrease rapidly.

Figure 1: Signal transmission at a chemical synapse [Julien, 2005] (Source: Wikipedia)
Although synaptic processes are not fully understood, it is believed that they are at the heart of learning and memorizing patterns. It is known that the signal transmission at a chemical synapse happens in several steps. Except for the last step, each step takes no more than a fraction of a millisecond. The transmission is first triggered by an electrochemical excitation (action potential) at the presynaptic terminal. This excitation then causes calcium channels to open, allowing calcium ions (Ca++) to flow into the presynaptic terminal. The increased concentration of calcium ions in the presynaptic terminal causes the release of neurotransmitters into the synaptic cleft. Some of these neurotransmitters then bind to receptors of the postsynaptic terminal, which opens ion channels in the postsynaptic membrane, allowing ions to flow into or out of the postsynaptic neuron. This changes the transmembrane potential, leading to an excitation or inhibition of the postsynaptic neuron. Eventually, the docked neurotransmitters will break away from the postsynaptic receptors. Some of them will be reabsorbed by the presynaptic neuron to initiate another transmission cycle.
The computational model developed in the next section is based on the assumption that the concentration of calcium ions p largely defines the strength of the signal transmitted. Furthermore, the assumption is that the strength of the signal is defined by the relation between the concentrations of calcium ions inside the presynaptic terminal (1 − p) and outside the terminal (p). For example, when all calcium ion channels are closed, the outside concentration will be p = 1 and there will be no signal transmitted: (1 − p)/p = 0. On the other hand, when the ion channels open up because of an excitation of the presynaptic terminal, calcium ions rush into the terminal due to a greater extracellular ion concentration. The maximum signal strength of 1 will be reached for p = 0.5, when the concentrations of calcium ions inside and outside the terminal are in equilibrium.
Computational Model
The computational model set forth here is based on the expected information that is either received or generated by the presynaptic neuron [Hodgkin and Huxley, 1990]. This section proceeds from the common conception that information is a measure for uncertainty, and is related to energy, without going into details about how information relates to energy in physics. The terms information and energy are used synonymously in the following. In a traditional information-theoretical approach, the informational content of a probabilistic event is measured by the negative logarithm of its probability, which is then used to compute the expected information or entropy [Shannon, 1948]. This section will follow along these lines.
Building on Section 3, the information conveyed by the signal (1 − p)/p is computed as − ln((1 − p)/p). The expected information or energy for this signal is then computed as the product of its information and "probability," where the latter can be viewed as the likelihood of a calcium ion entering the presynaptic terminal through an ion channel:
E = -p \cdot \ln\frac{1-p}{p}, (5)
with p ∈ [1/2, 1] [Jaeger, 2013]. Developing the model further, the signal (1 − p)/p can be regarded as the signal perceived by the presynaptic neuron, whereas p is the actual signal coming from outside the neuron. Continuing this thought leads to the observation that reality is defined by the agreement between the perceived signal and the actual signal. Both signals are identical when (1 − p)/p equals p, which is the case when p is equal to the golden ratio [Livio, 2002, Weiss and Weiss, 2003]. Therefore, the golden ratio defines points for which the perceived and the actual signal coincide. Section 5 will discuss the golden ratio in more detail, as it is central to this paper.
Assuming that the perceived signal (1 − p)/p is equal to the actual signal p, the formula for the expected information, as given by Eq. 5, can be transformed into a symmetric equation as follows:
E = -p \cdot \ln\frac{1-p}{p} \;\Leftrightarrow\; -p \cdot \ln\left(1 - p^2\right) (6)
\Leftrightarrow\; -p \cdot \ln\sqrt{1 - p^2} \cdot 2 (7)
\Leftrightarrow\; -\sin(\varphi) \cdot \ln\cos(\varphi) \cdot 2, (8)
where the last expression holds for an angle φ ∈ (0, π/2) with p = sin(φ). (The step to Eq. 6 uses the fact that, at such points, p(1 + p) = 1 and hence (1 − p)/p = p = 1 − p².) Note that this last expression is symmetric in that we can swap sine and cosine to obtain the expected information for a signal sent in the opposite direction. The signal in the forward direction corresponds to the traditional forward pass in a feed-forward neural network, whereas the signal in the opposite direction represents the signal generated by the teaching input.
Based on this observation, one can define an information loss function similar to entropy, which measures the expected information for both directions of signal transmission:
L_I(d) = -\left(\sin(d) \cdot \ln\cos(d) + \cos(d) \cdot \ln\sin(d)\right) (9)
The information loss function in Eq. 9 assumes its minimum for an angle of 45°, or d = π/4, when the signals in both directions are identical, and in equilibrium, as shown in Figure 2. Section 6 will develop this function into the proposed loss function for training.

Figure 2: Information loss as defined by Eq. 9
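As a quick numerical illustration of the claim above, the following Python sketch evaluates the information loss of Eq. 9 on a grid and locates its minimum; the grid resolution is an arbitrary choice, and the function is coded in the non-negative sign convention used here.

```python
import math

def information_loss(d):
    # Information loss of Eq. 9: non-negative and symmetric in sin/cos.
    return -(math.sin(d) * math.log(math.cos(d)) + math.cos(d) * math.log(math.sin(d)))

# Evaluate on a grid strictly inside (0, pi/2) and locate the minimum.
grid = [i * (math.pi / 2) / 1000 for i in range(1, 1000)]
d_min = min(grid, key=information_loss)

print(round(d_min, 4), round(math.pi / 4, 4))     # the minimum is attained near pi/4
print(round(information_loss(math.pi / 4), 4))    # minimum value sqrt(2)*ln(sqrt(2)) ~ 0.49
```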
Golden ratio
The golden ratio is a specific ratio between two quantities, which has been known since ancient times, and which has been observed, for example in nature, architecture, and art [Livio, 2002]. Informally, two quantities are said to be in the golden ratio if their ratio is the same as the ratio of their sum to the larger of the two quantities. Mathematically, the golden ratio can be derived from the equation developed in the previous section for which the perceived signal is equal to the actual or true signal:
p = \frac{1-p}{p} (10)
A straightforward transformation into the following expression provides one of two common ways of defining the golden ratio as zeros of a quadratic equation:
p^2 + p - 1 = 0 (11)
This equation has two irrational solutions, namely
p_1 = \frac{\sqrt{5} - 1}{2} \approx 0.618, (12)
and
p_2 = -\frac{\sqrt{5} + 1}{2} \approx -1.618 (13)
Note that the following identity holds for both solutions, p 1 and p 2 , which will become important later in the paper when terms for the learning rate and momentum weight are developed:
1 - p = p^2 (14)
The second way of defining the golden ratio is to replace p by −p in Eq. 11, which leads to this quadratic equation:
p^2 - p - 1 = 0 (15)
Again, this equation has two solutions, which are the negatively signed solutions of Eq. 11:
-p_1 \approx -0.618 \quad \text{and} \quad -p_2 \approx 1.618 (16)
However, for the second quadratic equation, Eq. 15, the complement 1 − p is given by:
1 - p = -\frac{1}{p} (17)
Similar to Eq. 14, this property will be used later in the paper for the development of regularization terms. It turns out that Eq. 14 and Eq. 17 are closely connected with each other.
Another property of Eq. 11 and Eq. 15 is that the absolute sum of their solutions equals 1, respectively:
p_1 + p_2 = -1 \quad \text{and} \quad -p_1 - p_2 = 1 (18)
In fact, this property is one of the main motivations for this paper. Section 6.2 will argue that exploiting this property can improve machine learning performance.
Note that most literature refers to the specific number ϕ ≈ 1.618 as the golden ratio, which was defined by Eq. 15 above. However, this paper will refer to all solutions of Eq. 11 and Eq. 15 as the golden ratio, without singling out a specific number.
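The following short Python check, included only for illustration, verifies Eqs. 11-18 numerically for the solutions p₁ and p₂.

```python
import math

# Solutions of p^2 + p - 1 = 0 (Eqs. 11-13).
p1 = (math.sqrt(5) - 1) / 2      # ~  0.618
p2 = -(math.sqrt(5) + 1) / 2     # ~ -1.618

for p in (p1, p2):
    assert abs(p**2 + p - 1) < 1e-12          # Eq. 11
    assert abs((1 - p) - p**2) < 1e-12        # Eq. 14

# Solutions of p^2 - p - 1 = 0 (Eq. 15) are -p1 and -p2 (Eq. 16).
for p in (-p1, -p2):
    assert abs(p**2 - p - 1) < 1e-12          # Eq. 15
    assert abs((1 - p) - (-1 / p)) < 1e-12    # Eq. 17

assert abs((p1 + p2) - (-1)) < 1e-12          # Eq. 18
print("all golden-ratio identities verified")
```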
Loss function
This section will transform the information loss introduced in Section 4 into a sigmoidal loss function, which will be used for training. Implementing a sigmoidal function is beneficial for a biological system because the output is within fixed limits.
Information loss
Starting from the information loss function given by Eq. 9 in Section 4, let its input d be computed as follows:
d = (y - t + 1) \cdot \frac{\pi}{4} (19)
The input d then denotes the angular difference between the network output y and the teaching input t, with d ∈ [0, π/2]. This difference is extreme, meaning either 0 or π/2, when the difference between the network output and the teaching input is maximum, with |y − t| = 1. For inputs approaching this maximum difference, the information loss function in Eq. 9 tends towards infinity. On the other hand, the information loss function attains its minimum when the output and teaching input are identical, which means d is π/4. Eq. 5 in Section 4 computes the expected information based on the true signal p and the observed signal (1 − p)/p. Resolving this equation for p leads to the following sigmoidal function for p:
p = \frac{1}{1 + \exp(-E/p)} (20)
Inserting the information loss L I (d) from Eq. 9 for E into Eq. 20, and using the equilibrium value 1/ √ 2 for p, with φ = π/4 in Eq. 8, produces the following loss function:
L(d) = \frac{1}{1 + \exp\left(-L_I(d)/\sqrt{2}\right)} (21)
This loss function reaches its maximum of 1 for d = 0 or d = π/2, when the distance between the network output and the teaching input is maximum. Its minimum is reached when the output equals the teaching input, for y − t = 0 and d = π/4.
The derivative of the loss function L with respect to the prediction Y , dL/dY , can be computed by applying the chain rule. This leads to the following equation when multiplying the derivative of the outer sigmoid function, which is L · (1 − L), and the derivative of L I (d), and ∂d/∂y = π/4:
\frac{dL}{dy} = \frac{\pi}{\sqrt{2}^{\,5}} \cdot L(d) \cdot \left(1 - L(d)\right) \cdot \left(\frac{\sin^2(d)}{\cos(d)} - \frac{\cos^2(d)}{\sin(d)} + \sin(d)\cdot\ln\sin(d) - \cos(d)\cdot\ln\cos(d)\right) (22)
This derivative can then be used for descending the gradient in the traditional backpropagation process. The next section introduces the corresponding learning rate and momentum weight to be used in combination with the loss function in Eq. 21 and its derivative in Eq. 22.
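A minimal Python sketch of Eqs. 19, 21, and 22 is given below; the finite-difference comparison at the end is only a sanity check of the derivative and is not part of the method.

```python
import math

SQRT2 = math.sqrt(2)

def L_I(d):
    # Information loss (Eq. 9).
    return -(math.sin(d) * math.log(math.cos(d)) + math.cos(d) * math.log(math.sin(d)))

def loss(y, t):
    # Sigmoidal loss (Eq. 21) with angular input d from Eq. 19.
    d = (y - t + 1) * math.pi / 4
    return 1.0 / (1.0 + math.exp(-L_I(d) / SQRT2))

def dloss_dy(y, t):
    # Derivative of the loss with respect to the prediction y (Eq. 22).
    d = (y - t + 1) * math.pi / 4
    L = 1.0 / (1.0 + math.exp(-L_I(d) / SQRT2))
    bracket = (math.sin(d)**2 / math.cos(d) - math.cos(d)**2 / math.sin(d)
               + math.sin(d) * math.log(math.sin(d)) - math.cos(d) * math.log(math.cos(d)))
    return math.pi / SQRT2**5 * L * (1.0 - L) * bracket

# Sanity check against a central finite difference at an arbitrary point.
y, t, h = 0.3, 0.8, 1e-6
numeric = (loss(y + h, t) - loss(y - h, t)) / (2 * h)
print(round(dloss_dy(y, t), 6), round(numeric, 6))
```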
Regularization
The computational model outlined in Section 4 implies specific values for the learning rate η and momentum weight α in Eq. 4 that are compatible with the model. The reasoning is as follows: The loss function given by Eq. 21 returns the true signal, which is the gradient of the computational model defined by Eq. 5 when regarding the information of the observed signal, − ln((1 − p)/p), as input. Furthermore, according to Eq. 8, this true signal corresponds to the signal observed in the opposite direction. For the minimum uncertainty, or equilibrium, this signal is equal to 1/√2. However, for the minimum uncertainty, this signal should be the golden ratio, which is the solution to the model defined by Eq. 5. Therefore, the signal needs to be regularized, using the momentum weight α, so that it satisfies the following requirement:
\frac{\alpha}{\sqrt{2}} = p_1 \approx 0.618 \;\Longrightarrow\; \alpha \approx 0.874 (23)
This provides a specific value for the momentum weight, namely α ≈ 0.874.
On the other hand, the observed signal also needs to be regulated. This can be accomplished by choosing the learning rate η appropriately, again following a similar reasoning: The observed signal and the true signal, which is the signal observed in the opposite/backwards direction, are in a negatively inverse relation, p = −1/p. When viewed from the opposite direction, using Eq. 17, the observed signal in forward direction is equal to one minus the true signal. Therefore, the complement of the signal observed in backward direction needs to be computed in order to obtain the signal in forward direction. According to Eq. 14, this means squaring the signal, which amounts to computing one minus the signal. For this reason, the loss function in Eq. 21, which computes the observed signal p in backward direction based on Eq. 5, needs to be squared, leading to the ultimate loss function proposed in this paper:
Loss(d) = \left(\frac{1}{1 + \exp\left(-\left(L_I(d) - \min\right)/\sqrt{2}\right)}\right)^2, (24)
where min is the minimum of the information loss function in Eq.9, which the function attains for d = π/4. The derivative in Eq. 22 needs to be adjusted accordingly by replacing L(d) with Loss(d) and multiplying by 2 · Loss(d), again using the chain rule. Applying the same processing steps to the momentum weight α then leads to the following expression for the learning rate η:
\eta = (1 - \alpha)^2 \approx 0.016 (25)
This provides the value for the second regularization term, namely the learning rate η, with η ≈ 0.016.
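For completeness, a minimal Python sketch of the final loss in Eq. 24 together with the derived regularization values of Eqs. 23 and 25 could look as follows; the printed checkpoint values are only illustrations of the formulas above.

```python
import math

SQRT2 = math.sqrt(2)
ALPHA = (math.sqrt(5) - 1) / 2 * SQRT2   # momentum weight, Eq. 23 (~0.874)
ETA = (1 - ALPHA)**2                     # learning rate,   Eq. 25 (~0.016)

def L_I(d):
    # Information loss (Eq. 9).
    return -(math.sin(d) * math.log(math.cos(d)) + math.cos(d) * math.log(math.sin(d)))

L_I_MIN = L_I(math.pi / 4)               # minimum of the information loss, attained at d = pi/4

def proposed_loss(y, t):
    # Squared sigmoidal loss of Eq. 24 with angular input d from Eq. 19.
    d = (y - t + 1) * math.pi / 4
    s = 1.0 / (1.0 + math.exp(-(L_I(d) - L_I_MIN) / SQRT2))
    return s * s

print(round(ALPHA, 3), round(ETA, 3))    # ~0.874 and ~0.016
print(round(proposed_loss(0.9, 0.9), 3)) # value at y = t, where the loss attains its minimum
```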
From the above discussion, it follows that the delta learning rule with momentum, as given by Eq. 4, adds two regularized gradients, each seen from a different direction. This interpretation of the delta rule and momentum differs from the common understanding that applying Eq. 4 smoothens gradient descent by giving more weight to weight changes in previous iterations. Instead, here it is argued that the delta learning rule with momentum considers gradients for two different directions, ensuring that one is not improved at the expense of the other, while descending to an optimum, until the equilibrium is reached. In fact, for the observed reality, gradient descent follows the relationship given in Eq. 18, with the sum of both gradients becoming one when the minimum uncertainty is reached in the state of equilibrium. According to the theory laid out here, it is this balancing of gradients that makes the delta rule with momentum so successful in practical applications. In both directions, gradients contribute with the same weight.
Experimental Evaluation
To show that the proposed loss function in Eq. 24 works in conjunction with the derived learning rate η and momentum weight α, a practical experiment is performed on public data. For handwritten digit classification, a deep learning network is trained on a dataset containing 10,000 handwritten, artificially rotated digits, and evaluated by averaging ten runs for each fold in 10-fold cross-validation [MathWorks, accessed May 10, 2020]. Each digit is a 28-by-28 gray-scale image, with a corresponding label denoting which digit the image represents (MNIST database [LeCun et al., accessed May 10, 2020]). Figure 3 shows the network architecture used in the experiment, with a sample input digit 3 and a correct output result [Krizhevsky et al., 2012]. The first layer is the image input layer, with a size of 28-by-28, followed by a convolution layer with 20 5-by-5 filters. The next layers are a batch normalization layer, a ReLU layer, and a fully-connected layer with an output size of 10. Finally, a softmax layer and a classification layer are the last two layers of the network, with the latter computing the proposed loss function in Eq. 24. For training, the learning rate given by Eq. 25 and the momentum weight given by Eq. 23 are used. Table 1 shows the classification results for training with the common loss function defined by the sum of squared errors, and with the proposed loss function defined by Eq. 24. All results have been achieved after 30 training epochs, using ten-fold cross-validation. The results show that training with SSE loss benefits significantly from using a momentum term, which increases the accuracy from 77.4% to 98.9%. The proposed loss function in Eq. 24 with momentum performs best, with an accuracy of 99.4%. It is also worth noting that the standard deviation improves by an order of magnitude each time, decreasing from 10 for SSE without momentum to 0.1 for the proposed loss function, learning rate, and momentum weight.
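As an illustration only, a network with the layer structure of Figure 3 could be written as the following PyTorch sketch; the original experiment was not necessarily implemented this way, and the custom classification layer computing Eq. 24 is omitted here.

```python
import torch
import torch.nn as nn

# Layer structure following Figure 3: 28x28 input, 20 filters of size 5x5,
# batch normalization, ReLU, and a fully-connected layer with 10 outputs.
# The classification stage computing Eq. 24 would be attached on top.
model = nn.Sequential(
    nn.Conv2d(in_channels=1, out_channels=20, kernel_size=5),  # 28x28 -> 24x24
    nn.BatchNorm2d(20),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(20 * 24 * 24, 10),
    nn.Softmax(dim=1),
)

x = torch.zeros(1, 1, 28, 28)   # one dummy gray-scale digit
print(model(x).shape)           # torch.Size([1, 10])
```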
Conclusion
For training of ANNs, this paper presents an information-theoretical loss function that is motivated by biomedical signal processing via neurotransmitters in a biological synapse. Learning is based on the principle of minimizing uncertainty, as measured by the loss function. Another important aspect is that learning is considered to be a two-directional process between network input and teaching input, with either input becoming the output in one direction. Both inputs become identical in points defined by the golden ratio. Therefore, the golden ratio, which has gained little attention in the machine learning literature so far, takes center stage here. Network weights are adjusted so that the absolute sum of gradients in both directions equals one. This helps in dampening oscillations, while the network is approaching a state of minimum energy. Technically, this is achieved by setting the learning rate and the momentum weight to specific values, thus explaining the generally accepted usefulness of the momentum term in a formal framework. This also confirms empirical values generally used in the literature for learning rate and momentum weight.
To validate this information-theoretical approach further, classification results for a handwritten digit recognition task are presented, showing that the proposed loss function in conjunction with the derived learning rate and momentum weight works in practice.
Broader Impact
The broader impact of this paper lies in the theoretical explanation of the learning rate and momentum term, and in how their values can be determined, in a learning process based on backpropagation. The theoretical findings confirm and specify more precisely the empirical values often used in the literature for practical experiments. This provides a guideline for future experiments, relieving researchers and developers of a tedious parameter search. The paper derives this result by studying the basic neurotransmitter mechanisms in the synaptic cleft between the presynaptic and postsynaptic neuron of a biological synapse. This study also revealed that the golden ratio plays a key role in neural signal transduction, particularly in synaptic transmission/neurotransmission. Therefore, a full understanding of neural information processing may not be possible without considering the golden ratio.
Figure 3: Network architecture

Table 1: Experimental results with 10-fold cross-validation

Loss                  | avg. accuracy (%) | std
SSE without momentum  | 77.4              | 10
SSE with momentum     | 98.9              | 1
Eq. 24 with momentum  | 99.4              | 0.1
Acknowledgments and Disclosure of Funding
Y. Bengio. Practical recommendations for gradient-based training of deep architectures. In Neural Networks: Tricks of the Trade, pages 437-478. Springer, 2012.

D.A. Drachman. Do we have brain to spare? Neurology, 64(12), 2005.

S. Herculano-Houzel. The human brain in numbers: a linearly scaled-up primate brain. Frontiers in Human Neuroscience, 3(31), 2009.

A.L. Hodgkin and A.F. Huxley. A quantitative description of membrane current and its application to conduction and excitation in nerve. Bulletin of Mathematical Biology, 52(1-2):25-71, 1990.

S. Jaeger. The neurological principle: how traditional chinese medicine unifies body and mind. International Journal of Functional Informatics and Personalised Medicine, 4(2):84-102, 2013.

R.M. Julien. A Primer of Drug Action: A comprehensive guide to the actions, uses, and side effects of psychoactive drugs, chapter The neuron, synaptic transmission, and neurotransmitters, pages 60-88. Worth Publishers, New York, 2005.

A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097-1105, 2012.

Y. LeCun, L. Bottou, G. Orr, and K.R. Müller. Efficient backprop. In Neural Networks: Tricks of the Trade, pages 9-48. Springer, 2012.

Y. LeCun, C. Cortes, and C.J.C. Burges. The MNIST Database, accessed May 10, 2020. URL http://yann.lecun.com/exdb/mnist/.

M. Livio. The Golden Ratio. Random House, Inc., 2002.

H. Lodish, A. Berk, L. Zipursky, P. Matsudaira, D. Baltimore, and J. Darnell. Neurotransmitters, synapses, and impulse transmission. In Molecular Cell Biology, 4th edition. WH Freeman, 2000.

MathWorks. Data Sets for Deep Learning, accessed May 10, 2020. URL https://www.mathworks.com/help/deeplearning/ug/data-sets-for-deep-learning.html.

D.E. Rumelhart, G.E. Hinton, and R.J. Williams. Learning representations by back-propagating errors. Nature, 323(6088):533-536, 1986.

C.E. Shannon. A mathematical theory of communication. Bell System Technical Journal, 27(3):379-423, 1948.

I. Sutskever, J. Martens, G. Dahl, and G. Hinton. On the importance of initialization and momentum in deep learning. In International Conference on Machine Learning, pages 1139-1147, 2013.

H. Weiss and V. Weiss. The golden mean as clock cycle of brain waves. Chaos, Solitons & Fractals, 18(4):643-652, 2003.

B. Widrow, Y. Kim, D. Park, and J. Perin. Nature's learning rule: The Hebbian-LMS algorithm. In Artificial Intelligence in the Age of Neural Networks and Brain Computing, pages 1-30. Elsevier, 2019.
Relative Edge Density of the Underlying Graphs Based on Proportional-Edge Proximity Catch Digraphs for Testing Bivariate Spatial Patterns

Elvan Ceyhan
Department of Mathematics, Koç University, 34450 Sarıyer, Istanbul, Turkey

June 30, 2009 (arXiv:0906.5481)

Keywords: association, asymptotic efficiency, clustering, complete spatial randomness, random graphs and digraphs, segregation, U-statistic
The use of data-random graphs in statistical testing of spatial patterns is introduced recently. In this approach, a random directed graph is constructed from the data using the relative positions of the points from various classes. Different random graphs result from different definitions of the proximity region associated with each data point and different graph statistics can be employed for pattern testing. The approach used in this article is based on underlying graphs of a family of data-random digraphs which is determined by a family of parameterized proximity maps. The relative edge density of the AND-and OR-underlying graphs is used as the summary statistic, providing an alternative to the relative arc density and domination number of the digraph employed previously. Properly scaled, relative edge density of the underlying graphs is a U -statistic, facilitating analytic study of its asymptotic distribution using standard U -statistic central limit theory. The approach is illustrated with an application to the testing of bivariate spatial clustering patterns of segregation and association. Knowledge of the asymptotic distribution allows evaluation of the Pitman asymptotic efficiency, hence selection of the proximity map parameter to optimize efficiency. Asymptotic efficiency and Monte Carlo simulation analysis indicate that the AND-underlying version is better (in terms of power and efficiency) for the segregation alternative, while the OR-underlying version is better for the association alternative. The approach presented here is also valid for data in higher dimensions.
Introduction
Classification and clustering have received considerable attention in the statistical literature. In this article, a graph-based approach for testing bivariate spatial clustering patterns is introduced. The analysis of spatial point patterns in natural populations has been extensively studied and has important implications in epidemiology, population biology, and ecology. The patterns of points from one class with respect to points from other classes, rather than the pattern of points from one class with respect to the ground, are investigated. The spatial relationships among two or more classes have important implications especially for plant species. See, for example, Pielou (1961) and Dixon (1994, 2002).
The goal of this article is to derive the asymptotic distribution of the relative edge density of underlying graphs based on a particular digraph family and use it to test the spatial pattern of complete spatial randomness against spatial segregation or association. Complete spatial randomness (CSR) is roughly defined as the lack of spatial interaction between the points in a given study area. Segregation is the pattern in which points of one class tend to cluster together, i.e., form one-class clumps. In association, the points of one class tend to occur more frequently around points from the other class. For convenience and generality, we call the different types of points "classes", but the class can be replaced by any characteristic of an observation at a particular location. For example, the pattern of spatial segregation has been investigated for plant species (Diggle 1983), age classes of plants (Hamill and Wright (1986)) and sexes of dioecious plants (Nanami et al. (1999)).
In recent years, the use of mathematical graphs has also gained popularity in spatial analysis (Roberts et al. (2000)). In spatial pattern analysis graph theoretic tools provide a way to move beyond Euclidean metrics for spatial analysis. For example, graph-based approaches have been proposed to determine paths among habitats at various scales and dispersal movement distances, and balance data requirements with information content (Fall et al. (2007)). Although only recently introduced to landscape ecology, graph theory is well suited to ecological applications concerned with connectivity or movement (Minor and Urban (2007)). However, conventional graphs do not explicitly maintain geographic reference, reducing utility of other geo-spatial information. Fall et al. (2007) introduce spatial graphs that integrate a geometric reference system that ties patches and paths to specific spatial locations and spatial dimensions thereby preserving the relevant spatial information. After a graph is constructed using spatial data, usually the scale is lost (see for instance, Su et al. (2007)). Many concepts in spatial ecology depend on the idea of spatial adjacency which requires information on the close vicinity of an object. Graph theory conveniently can be used to express and communicate adjacency information allowing one to compute meaningful quantities related to spatial point pattern. Adding vertex and edge properties to graphs extends the problem domain to network modeling (Keitt (2007)). Wu and Murray (2008) propose a new measure based on graph theory and spatial interaction, which reflects intra-patch and inter-patch relationships by quantifying contiguity within patches and potential contiguity among patches. Friedman and Rafsky (1983) also propose a graph-theoretic method to measure multivariate association, but their method is not designed to analyze spatial interaction between two or more classes; instead it is an extension of generalized correlation coefficient (such as Spearman's ρ or Kendall's τ ) to measure multivariate (possibly nonlinear) correlation.
A new type of spatial clustering test using directed graphs (i.e., digraphs), based on the relative positions of the data points from various classes, has also been developed recently. Data-random digraphs are directed graphs in which each vertex corresponds to a data point, and directed edges (i.e., arcs) are defined in terms of some bivariate function on the data. For example, nearest neighbor digraphs are defined by placing an arc between each vertex and its nearest neighbor. Priebe et al. (2001) introduced the class cover catch digraphs (CCCDs) in R and gave the exact and the asymptotic distribution of the domination number of the CCCDs. DeVinney et al. (2002), Marchette and Priebe (2003), Priebe et al. (2003a), Priebe et al. (2003b), and DeVinney and Priebe (2006) applied the concept in higher dimensions and demonstrated relatively good performance of CCCDs in classification. Their methods involve data reduction (i.e., condensing) by using approximate minimum dominating sets as prototype sets, since finding the exact minimum dominating set is an NP-hard problem in general (e.g., for CCCDs in multiple dimensions; see DeVinney and Priebe (2006)). Furthermore, the exact and the asymptotic distribution of the domination number of the CCCDs are not analytically tractable in multiple dimensions. For the domination number of CCCDs for one-dimensional data, a SLLN result is proved in DeVinney and Wierman (2003), and this result is extended by Wierman and Xiang (2008); furthermore, a generalized SLLN result is provided by Wierman and Xiang (2008), and a CLT is also proved by Xiang and Wierman (2009). The asymptotic distribution of the domination number of CCCDs for non-uniform data in R is also calculated in a rather general setting (Ceyhan (2008)). Ceyhan (2005) generalized CCCDs to what is called proximity catch digraphs (PCDs). The first PCD family is introduced by Ceyhan and Priebe (2003); the parametrized version of this PCD is developed by Ceyhan et al. (2007), where the relative arc density of the PCD is calculated and used for spatial pattern analysis. Ceyhan and Priebe (2005) introduced another digraph family called proportional-edge PCDs and calculated the asymptotic distribution of its domination number and used it for the same purpose. The relative arc density of this PCD family is also computed and used in spatial pattern analysis (Ceyhan et al. (2006)). Ceyhan and Priebe (2007) derived the asymptotic distribution of the domination number of proportional-edge PCDs for two-dimensional uniform data.
The underlying graphs based on digraphs are obtained by replacing arcs in the digraph by edges based on bivariate relations. If symmetric arcs are replaced by edges, then we obtain the AND-underlying graph; and if all arcs are replaced by edges without allowing multi-edges, then we obtain the OR-underlying graph. The statistical tool utilized in this article is the asymptotic theory of U-statistics. Properly scaled, we demonstrate that the relative edge density of the underlying graphs of proportional-edge PCDs is a U-statistic, which has asymptotic normality by the general central limit theory of U-statistics. For the digraphs introduced by Priebe et al. (2001), whose relative arc density is also of the U-statistic form, the asymptotic mean and variance of the relative density are not analytically tractable, due to geometric difficulties encountered. However, for the PCDs introduced in Ceyhan and Priebe (2003), Ceyhan et al. (2006), and Ceyhan et al. (2007), the relative arc density has tractable asymptotic mean and variance.
We define the underlying graphs of proportional-edge PCDs and their relative edge density in Section 2, provide the asymptotic distribution of the relative edge density under the null hypothesis in Section 3.1, and describe the alternatives of segregation and association in Section 3.2. We prove the consistency of the relative edge density in Section 4.1, and provide Pitman asymptotic efficiency in Section 4.2. We present the Monte Carlo simulation analysis for finite sample performance in Section 5, in particular, provide the Monte Carlo power analysis under segregation in Section 5.1, and under association in Section 5.2. We treat the multiple triangle case in Section 6, provide extension to higher dimensions in Section 6.4. We provide the discussion and conclusions in Section 7, and the tedious calculations and long proofs are deferred to the Appendix.
2 Relative Edge Density of Underlying Graphs
Preliminaries
The main difference between a graph and a digraph is that the edges of a digraph are directed, and hence are called arcs. The arcs are denoted as ordered pairs, while edges are denoted as unordered pairs. The underlying graph of a digraph is the graph obtained by replacing each arc uv ∈ A, or each symmetric arc {uv, vu} ⊂ A, by the edge (u, v). The former underlying graph will be referred to as the OR-underlying graph, while the latter will be referred to as the AND-underlying graph. That is, the AND-underlying graph for a digraph D = (V, A) is the graph G_and(D) = (V, E_and), where E_and is the set of edges such that (u, v) ∈ E_and iff uv ∈ A and vu ∈ A. The OR-underlying graph for D = (V, A) is the graph G_or(D) = (V, E_or), where E_or is the set of edges such that (u, v) ∈ E_or iff uv ∈ A or vu ∈ A.
The relative edge density of a graph G = (V, E) of order |V| = n, denoted ρ(G), is defined as
\rho(G) = \frac{2\,|E|}{n(n-1)}
where | · | denotes the set cardinality function (Janson et al. (2000)). Thus ρ(G) represents the ratio of the number of edges in the graph G to the number of edges in the complete graph of order n, which is n(n − 1)/2. Let (Ω, M) be a measurable space and consider N : Ω → ℘(Ω), where ℘(·) represents the power set functional. Then given Y m ⊂ Ω, the proximity map N Y (·) associates with each point x ∈ Ω a proximity region N Y (x) ⊆ Ω. The Γ 1 -region Γ 1 (·, N ) : Ω → ℘(Ω) associates the region Γ 1 (x, N Y ) := {z ∈ Ω : x ∈ N Y (z)} with each point x ∈ Ω. If X 1 , X 2 , . . . , X n are Ω-valued random variables, then the N Y (X i ) (and Γ 1 (X i , N Y )), i = 1, 2, . . . , n are random sets. If the X i are independent and identically distributed, then so are the random sets N Y (X i ) (and Γ 1 (X i , N Y )).
Consider the data-random PCD D with vertex set V = {X 1 , X 2 , . . . , X n } and arc set A defined by X i X j ∈ A ⇐⇒ X j ∈ N Y (X i ). The AND-underlying graph, G and , of D with the vertex set V and the edge set E and is defined by (X i , X j ) ∈ E and iff X i X j ∈ A and X j X i ∈ A. Likewise, the OR-underlying graph, G or , of D with the vertex set V and the edge set E or is defined by (X i , X j ) ∈ E or ⇐⇒ X i X j ∈ A or X j X i ∈ A. Then (X i , X j ) ∈ E and iff X j ∈ N Y (X i ) and
X i ∈ N Y (X j ) iff X j ∈ N Y (X i ) and X j ∈ Γ 1 (X i , N Y ) iff X j ∈ N Y (X i ) ∩ Γ 1 (X i , N Y ).
Similarly, (X i , X j ) ∈ E or iff X j ∈ N Y (X i ) ∪ Γ 1 (X i , N Y ). Since the random digraph D depends on the (joint) distribution of the X i and on the map N Y , so do the underlying graphs. The adjective proximity -for the catch digraph D and for the map N Y -comes from thinking of the region N Y (x) as representing those points in Ω "close" to x (Toussaint (1980) and Jaromczyk and Toussaint (1992)).
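For concreteness, the following Python sketch computes the relative edge densities of the AND- and OR-underlying graphs from an arc indicator matrix; the matrix representation (arc[i][j] is True iff the arc X_iX_j is present, i.e., X_j ∈ N_Y(X_i)) is simply a convenient encoding chosen here, not notation from the paper.

```python
def relative_edge_densities(arc):
    """Relative edge densities of the AND- and OR-underlying graphs.

    arc[i][j] is True iff the arc X_i X_j is present, i.e., X_j lies in N_Y(X_i).
    Returns (rho_and, rho_or) = (2|E_and|/(n(n-1)), 2|E_or|/(n(n-1))).
    """
    n = len(arc)
    e_and = e_or = 0
    for i in range(n):
        for j in range(i + 1, n):
            if arc[i][j] and arc[j][i]:
                e_and += 1            # symmetric arc -> edge of G_and
            if arc[i][j] or arc[j][i]:
                e_or += 1             # at least one arc -> edge of G_or
    pairs = n * (n - 1) / 2
    return e_and / pairs, e_or / pairs

# Example with four vertices.
A = [[False, True,  False, True ],
     [True,  False, True,  False],
     [False, False, False, False],
     [True,  False, False, False]]
print(relative_edge_densities(A))    # (0.333..., 0.5)
```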
Relative Edge Density of the AND-Underlying Graphs
The relative edge density of G_and(D), the AND-underlying graph based on the digraph D, is denoted ρ_and(D). For X_i iid ∼ F, ρ_and(D) is a U-statistic,
\rho_{and}(D) = \frac{2}{n(n-1)} \sum_{i<j} h^{and}_{ij},
where
h^{and}_{ij} = h_{and}(X_i, X_j; N) = I((X_i, X_j) \in E_{and}) = I(X_iX_j \in A) \cdot I(X_jX_i \in A) = I(X_i \in N(X_j)) \cdot I(X_j \in N(X_i)) = I(X_j \in N(X_i) \cap \Gamma_1(X_i, N))
is the number of symmetric arcs between X_i and X_j in D, or equivalently the number of edges between X_i and X_j in G_and(D). Note that h_ij^and is a symmetric kernel with finite variance, since 0 ≤ h_and(X_i, X_j; N) ≤ 1. Moreover, ρ_and(D) is a random variable that depends on n, F, and N(·) (i.e., on Y), but E[ρ_and(D)] depends only on F and N(·). Then
0 \le E[\rho_{and}(D)] = \frac{2}{n(n-1)} \sum_{i<j} E[h^{and}_{ij}] = E[h^{and}_{12}] \le 1,
where E[h_12^and] = P(X_1X_2 ∈ A, X_2X_1 ∈ A) = P(X_2 ∈ N(X_1) ∩ Γ_1(X_1, N)) = μ_and(N) is the symmetric arc probability. Note that
\mu_{and}(N) = P(X_j \in N(X_i) \cap \Gamma_1(X_i, N)) \text{ for } i \ne j. (2)
Expanding the product moment, we have
E[h^{and}_{12}\, h^{and}_{13}] = P(X_2 \in N(X_1) \cap \Gamma_1(X_1, N),\; X_3 \in N(X_1) \cap \Gamma_1(X_1, N)) = P(\{X_2, X_3\} \subset N(X_1) \cap \Gamma_1(X_1, N)).
Thus
Cov[h^{and}_{12}, h^{and}_{13}] = P(\{X_2, X_3\} \subset N(X_1) \cap \Gamma_1(X_1, N)) - [\mu_{and}(N)]^2.
The pair (h_12^and, h_13^and) is a discrete random vector with four possible values:
(h^{and}_{12}, h^{and}_{13}) \in \{(0, 0), (1, 0), (0, 1), (1, 1)\}.
Then finding the joint distribution of (h_12^and, h_13^and) is equivalent to finding its joint probability mass function. First, note that (h_12^and, h_13^and) = (0, 0) iff h_12^and = h_13^and = 0 iff I(A_12 ∩ A_21) = I(A_13 ∩ A_31) = 0 iff I(X_2 ∈ N(X_1) ∩ Γ_1(X_1, N)) = I(X_3 ∈ N(X_1) ∩ Γ_1(X_1, N)) = 0 iff I(X_2 ∈ T(Y_3) \ [N(X_1) ∩ Γ_1(X_1, N)]) = I(X_3 ∈ T(Y_3) \ [N(X_1) ∩ Γ_1(X_1, N)]) = 1 iff I({X_2, X_3} ⊂ T(Y_3) \ [N(X_1) ∩ Γ_1(X_1, N)]) = 1.
Hence P((h^{and}_{12}, h^{and}_{13}) = (0, 0)) = P(\{X_2, X_3\} \subset T(Y_3) \setminus [N(X_1) \cap \Gamma_1(X_1, N)]).
Next, note that (h_12^and, h_13^and) = (1, 1) iff {X_2, X_3} ⊂ N(X_1) ∩ Γ_1(X_1, N); the probabilities of the remaining values (1, 0) and (0, 1) follow similarly.
Relative Edge Density of OR-Underlying Graphs
The relative edge density of G_or(D), the OR-underlying graph of the digraph D, is denoted ρ_or(D). For X_i iid ∼ F, ρ_or(D) is a U-statistic,
\rho_{or}(D) = \frac{2}{n(n-1)} \sum_{i<j} h^{or}_{ij},
where
h^{or}_{ij} = h_{or}(X_i, X_j; N) = I((X_i, X_j) \in E_{or}) = \max\left(I(X_iX_j \in A), I(X_jX_i \in A)\right) = I(X_j \in N(X_i) \cup \Gamma_1(X_i, N))
is the number of edges between X_i and X_j in G_or(D). Note that h_ij^or is a symmetric kernel with finite variance, since 0 ≤ h_or(X_i, X_j; N) ≤ 1. Moreover, ρ_or(D) is a random variable that depends on n, F, and N(·) (i.e., on Y), but E[ρ_or(D)] depends only on F and N(·). Then
0 \le E[\rho_{or}(D)] = \frac{2}{n(n-1)} \sum_{i<j} E[h^{or}_{ij}] = E[h^{or}_{12}] = P(X_2 \in N(X_1) \cup \Gamma_1(X_1, N)),
where E[h_12^or] = P(X_1X_2 ∈ A or X_2X_1 ∈ A), which we denote by μ_or(N) for brevity of notation. Similar to the AND-underlying case,
P((h^{or}_{12}, h^{or}_{13}) = (1, 0)) = P((h^{or}_{12}, h^{or}_{13}) = (0, 1)) = \frac{1}{2}\left(1 - \left[P((h^{or}_{12}, h^{or}_{13}) = (0, 0)) + P((h^{or}_{12}, h^{or}_{13}) = (1, 1))\right]\right).
Proportional-Edge Proximity Maps and the Associated Regions
Let Ω = R 2 and Y 3 = {y 1 , y 2 , y 3 } ⊂ R 2 be three non-collinear points. Denote by T (Y 3 ) the triangle (including the interior) formed by these three points. For r ∈ [1, ∞] define N r P E (x) to be the proportional-edge proximity map with parameter r and Γ r 1 (x) := Γ 1 (x, N r P E ) to be the corresponding Γ 1 -region as follows; see also Figures 1 and 2. Let "vertex regions" R(y 1 ), R(y 2 ), R(y 3 ) partition T (Y 3 ) using segments from the center of mass of T (Y 3 ) to the edge midpoints. For x ∈ T (Y 3 ) \ Y 3 , let v(x) ∈ Y 3 be the vertex whose region contains x; x ∈ R(v(x)). If x falls on the boundary of two vertex regions, or at the center of mass, we assign v(x) arbitrarily. Let e(x) be the
edge of T (Y 3 ) opposite v(x). Let ℓ(v(x), x) be the line parallel to e(x) through x. Let d(v(x), ℓ(v(x), x)) be the Euclidean (perpendicular) distance from v(x) to ℓ(v(x), x). For r ∈ [1, ∞) let ℓ r (v(x), x) be the line parallel to e(x) such that d(v(x), ℓ r (v(x), x)) = rd(v(x), ℓ(v(x), x)) and d(ℓ(v(x), x), ℓ r (v(x), x)) < d(v(x), ℓ r (v(x), x)
). Let T_r(x) be the triangle similar to, and with the same orientation as, T(Y_3), having v(x) as a vertex and ℓ_r(v(x), x) as the opposite edge. Then the proportional-edge proximity region N_PE^r(x) is defined to be T_r(x) ∩ T(Y_3). Furthermore, let ξ_i(x) be the line such that ξ_i(x) ∩ T(Y_3) ≠ ∅ and r·d(y_i, ξ_i(x)) = d(y_i, ℓ(y_i, x)) for i = 1, 2, 3. Then Γ_1^r(x) ∩ R(y_i) = {z ∈ R(y_i) : d(y_i, ℓ(y_i, z)) ≥ d(y_i, ξ_i(x))} for i = 1, 2, 3. Hence Γ_1^r(x) = ∪_{i=1}^{3} (Γ_1^r(x) ∩ R(y_i)). Notice that r ≥ 1 implies x ∈ N_PE^r(x) and x ∈ Γ_1^r(x). Furthermore, lim_{r→∞} N_PE^r(x) = T(Y_3) for all x ∈ T(Y_3) \ Y_3, and so we define N_PE^∞(x) = T(Y_3) for all such x. For x ∈ Y_3, we define N_PE^r(x) = {x} for all r ∈ [1, ∞].
Then, for x ∈ R(y_i), lim_{r→∞} Γ_1^r(x) = T(Y_3) \ {y_j, y_k} for distinct i, j, and k. Notice that X_i iid ∼ F, with the additional assumption that the non-degenerate two-dimensional probability density function f exists with support in T(Y_3), implies that the special cases in the construction of N_PE^r (X falls on the boundary of two vertex regions, or at the center of mass, or X ∈ Y_3) occur with probability zero. Note that for such an F, N_PE^r(x) is a triangle a.s. and Γ_1^r(x) is a convex or nonconvex polygon.

Figure 1: Construction of the proportional-edge proximity region N_PE^{r=2}(x) (shaded region) for an x ∈ R(y_1).

Figure 2: Construction of the Γ_1-region Γ_1^{r=2}(x) (shaded region) for an x ∈ R(y_1).
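To make the construction concrete, the following Python sketch checks membership in N_PE^r(x) for the standard equilateral triangle and uses it to estimate, by Monte Carlo, the probabilities of AND- and OR-edges between two independent uniform points. It relies on two observations, stated here without proof: for the center-of-mass vertex regions, v(x) is the vertex whose barycentric coordinate of x is largest, and z ∈ N_PE^r(x) iff 1 − λ_{v(x)}(z) ≤ r(1 − λ_{v(x)}(x)). The sample size and the choice r = 2 are arbitrary.

```python
import math, random

# Standard equilateral triangle T(Y_3).
Y = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]

def barycentric(p):
    """Barycentric coordinates of p with respect to the vertices in Y."""
    (x1, y1), (x2, y2), (x3, y3) = Y
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    l1 = ((y2 - y3) * (p[0] - x3) + (x3 - x2) * (p[1] - y3)) / det
    l2 = ((y3 - y1) * (p[0] - x3) + (x1 - x3) * (p[1] - y3)) / det
    return (l1, l2, 1.0 - l1 - l2)

def in_NPE(x, z, r):
    """Is z in N_PE^r(x)? Both points are assumed to lie in T(Y_3)."""
    lam_x, lam_z = barycentric(x), barycentric(z)
    v = max(range(3), key=lambda i: lam_x[i])        # vertex region of x
    return 1.0 - lam_z[v] <= r * (1.0 - lam_x[v])

def uniform_in_triangle():
    u, v = random.random(), random.random()
    if u + v > 1.0:
        u, v = 1.0 - u, 1.0 - v
    a, b, c = Y
    return (a[0] + u * (b[0] - a[0]) + v * (c[0] - a[0]),
            a[1] + u * (b[1] - a[1]) + v * (c[1] - a[1]))

# Monte Carlo estimate of the AND- and OR-edge probabilities for r = 2.
r, trials, n_and, n_or = 2.0, 200_000, 0, 0
for _ in range(trials):
    x1, x2 = uniform_in_triangle(), uniform_in_triangle()
    a12, a21 = in_NPE(x1, x2, r), in_NPE(x2, x1, r)
    n_and += a12 and a21
    n_or += a12 or a21
print(n_and / trials, n_or / trials)   # estimates of mu_and(2) and mu_or(2)
```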
Relative Edge Density of the Underlying Graphs of Proportional-Edge PCDs
Consider the underlying graphs of the data-random PCD D with vertex set V = {X 1 , X 2 , . . . , X n } and arc set
A defined by (X_i, X_j) ∈ A ⇐⇒ X_j ∈ N_Y(X_i). Recall that (X_i, X_j) ∈ E_and iff X_j ∈ N_PE^r(X_i) ∩ Γ_1(X_i, N_PE^r) and (X_i, X_j) ∈ E_or iff X_j ∈ N_PE^r(X_i) ∪ Γ_1(X_i, N_PE^r). Let h_ij^and(r) := h_and(X_i, X_j; N_PE^r) = I(X_j ∈ N_PE^r(X_i) ∩ Γ_1^r(X_i)) and h_ij^or(r) := h_or(X_i, X_j; N_PE^r) = I(X_j ∈ N_PE^r(X_i) ∪ Γ_1^r(X_i)) for i ≠ j.
The random variable ρ_n^and(r) := ρ_and(X_n; h, N_PE^r) depends on n explicitly, and on F and N_PE^r implicitly. The expectation E[ρ_n^and(r)], however, is independent of n and depends only on F and N_PE^r. Let μ_and(r) := E[h_12^and(r)] and ν_and(r) := Cov[h_12^and(r), h_13^and(r)]. Then 0 ≤ E[ρ_n^and(r)] = E[h_12^and(r)] ≤ 1. The variance Var[ρ_n^and(r)] simplifies to
0 \le Var[\rho^{and}_n(r)] = \frac{2}{n(n-1)} Var[h^{and}_{12}(r)] + \frac{4(n-2)}{n(n-1)} Cov[h^{and}_{12}(r), h^{and}_{13}(r)] \le 1/4.
A central limit theorem for U-statistics (Lehmann (1999)) yields
\sqrt{n}\left(\rho^{and}_n(r) - \mu_{and}(r)\right) \xrightarrow{L} N(0, 4\,\nu_{and}(r)) (5)
provided ν_and(r) > 0. The asymptotic variance of ρ_n^and(r), 4ν_and(r), depends only on F and N_PE^r. Thus we need determine only μ_and(r) and ν_and(r) in order to obtain the normal approximation
\rho^{and}_n(r) \overset{approx}{\sim} N\!\left(\mu_{and}(r), \frac{4\,\nu_{and}(r)}{n}\right). (6)
The above paragraph also holds for ρ_n^or(r) = ρ_or(X_n; h, N_PE^r), with ρ_n^and(r) replaced by ρ_n^or(r), and h_12^and(r) and h_13^and(r) replaced by h_12^or(r) and h_13^or(r), respectively.
For r = 1, N_PE^{r=1}(x) ∩ Γ_1^{r=1}(x) = ℓ(v(x), x), which has zero R²-Lebesgue measure. Then we have E[ρ_n^and(r = 1)] = E[h_12^and(r = 1)] = μ_and(r = 1) = P(X_2 ∈ N_PE^{r=1}(X_1) ∩ Γ_1^{r=1}(X_1)) = 0. Similarly, P({X_2, X_3} ⊂ N_PE^{r=1}(X_1) ∩ Γ_1^{r=1}(X_1)) = 0. Thus, ν_and(r = 1) = 0. Furthermore, for r = ∞, N_PE^{r=∞}(x) ∩ Γ_1^{r=∞}(x) = T(Y_3) for all x ∈ T(Y_3) \ Y_3. Then E[ρ_n^and(r = ∞)] = E[h_12^and(r = ∞)] = μ_and(r = ∞) = P(X_2 ∈ N_PE^{r=∞}(X_1) ∩ Γ_1^{r=∞}(X_1)) = P(X_2 ∈ T(Y_3)) = 1. Similarly, P({X_2, X_3} ⊂ N_PE^{r=∞}(X_1) ∩ Γ_1^{r=∞}(X_1)) = 1. Hence ν_and(r = ∞) = 0. Therefore, the CLT result in Equation (6) holds only for r ∈ (1, ∞). Furthermore, ρ_n^and(r = 1) = 0 a.s. and ρ_n^and(r = ∞) = 1 a.s.

For r = 1, N_PE^{r=1}(x) ∪ Γ_1^{r=1}(x) has positive R²-Lebesgue measure. Then P({X_2, X_3} ⊂ N_PE^{r=1}(X_1) ∪ Γ_1^{r=1}(X_1)) > 0. Thus, ν_or(r = 1) ≠ 0. On the other hand, for r = ∞, N_PE^{r=∞}(X_1) ∪ Γ_1^{r=∞}(X_1) = T(Y_3) for all X_1 ∈ T(Y_3). Then E[ρ_n^or(r = ∞)] = E[h_12^or(r = ∞)] = P(X_2 ∈ N_PE^{r=∞}(X_1) ∪ Γ_1^{r=∞}(X_1)) = μ_or(r = ∞) = P(X_2 ∈ T(Y_3)) = 1. Similarly, P({X_2, X_3} ⊂ N_PE^{r=∞}(X_1) ∪ Γ_1^{r=∞}(X_1)) = 1. Hence ν_or(r = ∞) = 0. Therefore, the CLT result for the OR-underlying case holds only for r ∈ [1, ∞). Moreover, ρ_n^or(r = ∞) = 1 a.s.

Remark 2.2. Relative Arc Density of PCDs: The relative arc density of the digraph D is denoted as ρ(D).
For X_i iid ∼ F, ρ(D) is also shown to be a U-statistic (Ceyhan et al. (2006)),
\rho(D) = \frac{1}{n(n-1)} \sum_{i<j} h_{ij},
where h_ij = h(X_i, X_j; N) = I(X_iX_j ∈ A) + I(X_jX_i ∈ A) = I(X_j ∈ N(X_i)) + I(X_i ∈ N(X_j)) is the number of arcs between X_i and X_j in D. Here 0 ≤ E[ρ(D)] = \frac{1}{n(n-1)} \sum_{i<j} E[h_{ij}] = E[h_{12}]/2. Moreover, Cov[h_12, h_13] = P({X_2, X_3} ⊂ N(X_1)) − [E[h_12]]².
Let h_ij(r) := h(X_i, X_j; N_PE^r) for i ≠ j, and let the random variable ρ_n(r) := ρ(X_n; h, N_PE^r). Let μ(r) := E[ρ_n(r)] and ν(r) := Cov[h_12(r), h_13(r)]. A central limit theorem for U-statistics (Lehmann (1999)) yields
\sqrt{n}\,(\rho_n(r) - \mu(r)) \xrightarrow{L} N(0, \nu(r)) (7)
provided ν(r) > 0. The explicit forms of the asymptotic mean μ(r) and variance ν(r) are provided in Ceyhan et al. (2006).
Relative Edge Density under Null and Alternative Patterns
Null Distribution of Relative Edge Density
The null hypothesis is generally some form of complete spatial randomness; thus we consider
H o : X i iid ∼ U(T (Y 3 )).
If it is desired to have the sample size be a random variable, we may consider a spatial Poisson point process on T (Y 3 ) as our null hypothesis.
We first present a "geometry invariance" result which will simplify our subsequent analysis by allowing us to consider the special case of the equilateral triangle. Let ρ and n (r) := ρ and (n; U(T (Y 3 )), N r P E ) and ρ or n (r) := ρ or (n; U(T (Y 3 )), N r P E ).
Theorem 3.1. Geometry Invariance: Let Y 3 = {y 1 , y 2 , y 3 } ⊂ R 2 be three non-collinear points. For i = 1, 2, . . . , n let X i iid ∼ F = U(T (Y 3 )), the uniform distribution on the triangle T (Y 3 ). Then for any r ∈ [1, ∞] the distribution of ρ and n (r) and ρ or n (r) is independent of Y 3 , and hence the geometry of T (Y 3 ).
Proof: A composition of translation, rotation, reflections, and scaling will take any given triangle T_o = T(y_1, y_2, y_3) to the "basic" triangle T_b = T((0, 0), (1, 0), (c_1, c_2)) with 0 < c_1 ≤ 1/2, c_2 > 0, and (1 − c_1)² + c_2² ≤ 1, preserving uniformity. The transformation φ : R² → R² given by
\phi(u, v) = \left(u + \frac{1 - 2c_1}{\sqrt{3}}\,v,\; \frac{\sqrt{3}}{2\,c_2}\,v\right)
takes T_b to the equilateral triangle T_e = T((0, 0), (1, 0), (1/2, √3/2)). Investigation of the Jacobian shows that φ also preserves uniformity. Furthermore, the composition of φ with the rigid motion transformations and scaling maps the boundary of the original triangle T_o to the boundary of the equilateral triangle T_e, the median lines of T_o to the median lines of T_e, and lines parallel to the edges of T_o to lines parallel to the edges of T_e. Since the joint distribution of any collection of the h_ij^and(r) and h_ij^or(r) involves only the probability content of unions and intersections of regions bounded by precisely such lines, and the probability content of such regions is preserved since uniformity is preserved, the desired result follows.
Based on Theorem 3.1, for our proportional-edge proximity map and the uniform null hypothesis, we may
assume that T (Y 3 ) is a standard equilateral triangle with Y 3 = {(0, 0), (1, 0), (1/2, √ 3/2)} henceforth.
In the case of this (proportional-edge proximity map, uniform null hypothesis) pair, the asymptotic null distribution of ρ_n^and(r) and ρ_n^or(r) as a function of r can be derived. Recall that μ_and(r) = E[h_12^and(r)] = P(X_2 ∈ N_PE^r(X_1) ∩ Γ_1^r(X_1)) and μ_or(r) = E[h_12^or(r)] = P(X_2 ∈ N_PE^r(X_1) ∪ Γ_1^r(X_1)) are the probabilities of an edge occurring between any two vertices in the AND- and OR-underlying graphs, respectively.
Theorem 3.2. Asymptotic Normality: For r ∈ (1, ∞),
\frac{\sqrt{n}\,\left(\rho^{and}_n(r) - \mu_{and}(r)\right)}{\sqrt{4\,\nu_{and}(r)}} \xrightarrow{L} N(0, 1),
and for r ∈ [1, ∞),
\frac{\sqrt{n}\,\left(\rho^{or}_n(r) - \mu_{or}(r)\right)}{\sqrt{4\,\nu_{or}(r)}} \xrightarrow{L} N(0, 1),
where
\mu_{and}(r) =
\begin{cases}
-\dfrac{1}{54}\,\dfrac{(r-1)\left(5r^5 - 148r^4 + 245r^3 - 178r^2 - 232r + 128\right)}{r^2(r+2)(r+1)} & \text{for } r \in [1, 4/3),\\[6pt]
-\dfrac{1}{216}\,\dfrac{101r^5 - 801r^4 + 1302r^3 - 732r^2 - 536r + 672}{r(r+2)(r+1)} & \text{for } r \in [4/3, 3/2),\\[6pt]
\dfrac{1}{8}\,\dfrac{r^8 - 13r^7 + 30r^6 + 148r^5 - 448r^4 + 264r^3 + 288r^2 - 368r + 96}{r^4(r+2)(r+1)} & \text{for } r \in [3/2, 2),\\[6pt]
\dfrac{(r^3 + 3r^2 + 2r - 2)(r-1)^2}{r^4(r+1)} & \text{for } r \in [2, \infty),
\end{cases} (8)

\mu_{or}(r) =
\begin{cases}
\dfrac{47r^6 - 195r^5 + 860r^4 - 846r^3 - 108r^2 + 720r - 256}{108\,r^2(r+2)(r+1)} & \text{for } r \in [1, 4/3),\\[6pt]
\dfrac{175r^5 - 579r^4 + 1450r^3 - 732r^2 - 536r + 672}{216\,r(r+2)(r+1)} & \text{for } r \in [4/3, 3/2),\\[6pt]
-\dfrac{3r^8 - 7r^7 - 30r^6 + 84r^5 - 264r^4 + 304r^3 + 144r^2 - 368r + 96}{8\,r^4(r+1)(r+2)} & \text{for } r \in [3/2, 2),\\[6pt]
\dfrac{r^5 + r^4 - 6r + 2}{r^4(r+1)} & \text{for } r \in [2, \infty),
\end{cases} (9)

\nu_{and}(r) = \sum_{i=1}^{11} \vartheta^{and}_i(r)\, I(I_i), (10)

\nu_{or}(r) = \sum_{i=1}^{11} \vartheta^{or}_i(r)\, I(I_i), (11)
where ϑ and i (r) and ϑ or i (r) are provided in Appendix Sections 1 and 2, and the derivations of µ and (r) and ν and (r) are provided in Appendix 3, while those of µ or (r) and ν or (r) are provided in Appendix 4.
Notice that µ and (r = 1) = 0 and lim r→∞ µ and (r) = 1 (at rate O(r −1 )); and µ or (r = 1) = 37/108 and lim r→∞ µ or (r) = 1 (at rate O(r −1 )).
To illustrate the limiting distribution, for example, r = 2 yields
\frac{\sqrt{n}\,\left(\rho^{and}_n(2) - \mu_{and}(2)\right)}{\sqrt{4\,\nu_{and}(2)}} \xrightarrow{L} N(0, 1).

Figure 3: Result of Theorem 3.2: asymptotic null means μ(r), μ_and(r), and μ_or(r) (left) and variances ν(r), 4ν_and(r), and 4ν_or(r) (right), from Equations (8), (9), and (10), (11), respectively. Some values of note: μ(1) = 37/216, μ_and(1) = 0, and μ_or(1) = 37/108; lim_{r→∞} μ(r) = lim_{r→∞} μ_and(r) = lim_{r→∞} μ_or(r) = 1; ν_and(r = 1) = 0 and lim_{r→∞} ν_and(r) = 0; ν_or(r = 1) = 1/3240 and lim_{r→∞} ν_or(r) = 0.

By construction of the underlying graphs, there is a natural ordering of the means of relative arc and edge densities.
Lemma 3.3. The means of the relative edge densities and arc density have the following ordering: µ and (r) < µ(r) < µ or (r) for all r ∈ [1, ∞). Furthermore, for r = ∞ we have µ and (r) = µ(r) = µ or (r) = 1.
Proof: Recall that µ and (r) = E[ρ and n (r)] = P (X 2 ∈ N r P E (X 1 ) ∩ Γ r 1 (X 1 )), µ(r) = E[ρ n (r)] = P (X 2 ∈ N r P E (X 1 )), and µ or (r) = E[ρ or n (r)] = P (X 2 ∈ N r P E (X 1 )∪Γ r 1 (X 1 )). And N r P E (X 1 )∩Γ r 1 (X 1 ) ⊆ N r P E (X 1 ) ⊆ N r P E (X 1 )∪Γ r 1 (X 1 ) with probability 1 for all r ≥ 1 with equality holding for r = ∞ only. Then the desired result follows. See also Figure 3.
Note that the above lemma holds for all X i that has a continuous distribution on T (Y 3 ). There is also a stochastic ordering for the relative edge and arc densities as follows.
Theorem 3.4. For sufficiently small r, ρ_n^and(r) <_ST ρ_n(r) <_ST ρ_n^or(r) as n → ∞.

Proof: Above we have proved that μ_and(r) < μ(r) < μ_or(r) for all r ∈ [1, ∞). For small r (roughly r ≤ 1.8), the asymptotic variances have the same ordering, 4ν_and(r) < ν(r) < 4ν_or(r). Since ρ_n^and(r), ρ_n(r), and ρ_n^or(r) are asymptotically normal, the desired result follows. See also Figure 3.
Figures 4 and 5 indicate that, for r = 2, the normal approximation is accurate even for small n, although kurtosis may be indicated for n = 10 in the AND-underlying case, and skewness may be indicated for n = 10 in the OR-underlying case. Figures 6 and 7 demonstrate, however, that severe skewness obtains for some values of n and r. The finite sample variance and skewness may be derived analytically in much the same way as was 4ν_and(r) (and 4ν_or(r)) for the asymptotic variance. In fact, the exact distribution of ρ_n^and(r) (and ρ_n^or(r)) is, in principle, available by successively conditioning on the values of the X_i. Alas, while the joint distribution of (h_12^and(r), h_13^and(r)) (and (h_12^or(r), h_13^or(r))) is available, the joint distribution of {h_ij^and(r)}_{1≤i<j≤n} (and {h_ij^or(r)}_{1≤i<j≤n}), and hence the calculation for the exact distribution of ρ_n^and(r) (and ρ_n^or(r)), is extraordinarily tedious and lengthy for even small values of n.
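As an illustration of how the asymptotic normality can be used for testing, the following Python sketch standardizes an observed relative edge density using the mean and asymptotic variance from Theorem 3.2 and returns approximate normal tail probabilities; as a convention for the sketch, the upper tail is taken as the rejection region against segregation and the lower tail against association. The numerical inputs in the example are placeholders, not values from the paper.

```python
import math

def edge_density_test(rho_hat, n, mu, nu):
    """Standardize an observed relative edge density using Theorem 3.2.

    mu and nu are mu_and(r), nu_and(r) (or the OR versions) for the chosen r.
    Returns the z-score and the approximate upper- and lower-tail p-values.
    """
    z = math.sqrt(n) * (rho_hat - mu) / math.sqrt(4.0 * nu)
    upper = 0.5 * math.erfc(z / math.sqrt(2.0))    # P(Z >= z)
    lower = 0.5 * math.erfc(-z / math.sqrt(2.0))   # P(Z <= z)
    return z, upper, lower

# Placeholder example: observed density 0.50 with n = 100, and hypothetical
# null mean/variance parameters mu = 0.46, nu = 0.01 for some fixed r.
print(edge_density_test(0.50, 100, 0.46, 0.01))
```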
Let γ n (r) be the domination number of the proportional-edge PCD based on X n which is a random sample from U(T (Y 3 )). Additionally, let γ and n (r) and γ or n (r) be the domination number of the AND-and OR-underlying graphs based on the proportional-edge PCD, respectively. Then we have the following stochastic ordering for the domination numbers.
Theorem 3.5. For all r ∈ [1, ∞) and n > 1, γ or n (r) < ST γ n (r) < ST γ and n (r).
Proof: For all $x \in T(\mathcal{Y}_3)$, we have $N^r_{PE}(x) \cap \Gamma^r_1(x) \subseteq N^r_{PE}(x) \subseteq N^r_{PE}(x) \cup \Gamma^r_1(x)$. For $X \sim U(T(\mathcal{Y}_3))$, we have $N^r_{PE}(X) \cap \Gamma^r_1(X) \subsetneq N^r_{PE}(X) \subsetneq N^r_{PE}(X) \cup \Gamma^r_1(X)$ a.s. Moreover, $\gamma_n(r) = 1$ iff $\mathcal{X}_n \subset N^r_{PE}(X_i)$ for some $i$; $\gamma^{\text{and}}_n(r) = 1$ iff $\mathcal{X}_n \subset N^r_{PE}(X_i) \cap \Gamma^r_1(X_i)$ for some $i$; and $\gamma^{\text{or}}_n(r) = 1$ iff $\mathcal{X}_n \subset N^r_{PE}(X_i) \cup \Gamma^r_1(X_i)$ for some $i$. So it follows that $P(\gamma^{\text{and}}_n(r) = 1) < P(\gamma_n(r) = 1) < P(\gamma^{\text{or}}_n(r) = 1)$. In a similar fashion, we have $P(\gamma^{\text{and}}_n(r) \le 2) < P(\gamma_n(r) \le 2) < P(\gamma^{\text{or}}_n(r) \le 2)$. Since $P(\gamma_n(r) \le 3) = 1$ (Ceyhan and Priebe (2005)), it follows that $P(\gamma^{\text{or}}_n(r) \le 3) = 1$ also holds, as $P(\gamma_n(r) \le 3) \le P(\gamma^{\text{or}}_n(r) \le 3)$. Hence the desired stochastic ordering follows.
Note the stochastic ordering in the above theorem holds for any continuous distribution F with support being in T (Y 3 ).
Alternatives: Segregation and Association
The phenomenon known as segregation involves observations from different classes having a tendency to repel each other; in our case, this means the $X_i$ tend to fall away from all elements of $\mathcal{Y}_3$. Association involves observations from different classes having a tendency to attract one another, so that the $X_i$ tend to fall near an element of $\mathcal{Y}_3$. See, for instance, Dixon (1994) and Coomes et al. (1999). We define two simple classes of alternatives, $H^S_\varepsilon$ and $H^A_\varepsilon$ with $\varepsilon \in (0, \sqrt{3}/3)$, for segregation and association, respectively. For $y \in \mathcal{Y}_3$, let $e(y)$ denote the edge of $T(\mathcal{Y}_3)$ opposite vertex $y$, and for $x \in T(\mathcal{Y}_3)$ let $\ell_y(x)$ denote the line parallel to $e(y)$ through $x$.
Then define $T(y, \varepsilon) = \{x \in T(\mathcal{Y}_3) : d(y, \ell_y(x)) \le \varepsilon\}$. Let $H^S_\varepsilon$ be the model under which $X_i \stackrel{iid}{\sim} U(T(\mathcal{Y}_3) \setminus \cup_{y \in \mathcal{Y}_3} T(y, \varepsilon))$ and $H^A_\varepsilon$ be the model under which $X_i \stackrel{iid}{\sim} U(\cup_{y \in \mathcal{Y}_3} T(y, \sqrt{3}/3 - \varepsilon))$. Thus the segregation model excludes the possibility of any $X_i$ occurring near a $y_j$, and the association model requires that all $X_i$ occur near a $y_j$. The $\sqrt{3}/3 - \varepsilon$ in the definition of the association alternative is so that $\varepsilon = 0$ yields $H_o$ under both classes of alternatives.
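To simulate from these alternatives, a rejection sampler over the standard equilateral triangle suffices. The Python sketch below is our own illustration (not the authors' code); it uses the fact that $d(y, \ell_y(x))$ equals the triangle height $\sqrt{3}/2$ times one minus the barycentric coordinate of $x$ at $y$.

```python
import numpy as np

# vertices of the standard equilateral triangle T(Y_3)
Y = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
H = np.sqrt(3) / 2  # triangle height

def bary(p):
    """Barycentric coordinates of p with respect to the rows of Y."""
    T = np.column_stack((Y[0] - Y[2], Y[1] - Y[2]))
    b12 = np.linalg.solve(T, p - Y[2])
    return np.array([b12[0], b12[1], 1.0 - b12.sum()])

def runif_triangle(rng):
    """One uniform draw from T(Y_3) via the reflection trick."""
    u, v = rng.random(2)
    if u + v > 1:
        u, v = 1 - u, 1 - v
    return Y[0] + u * (Y[1] - Y[0]) + v * (Y[2] - Y[0])

def d_to_vertex_lines(p):
    """d(y, ell_y(p)) for each vertex y: H * (1 - barycentric coordinate at y)."""
    return H * (1.0 - bary(p))

def sample_alternative(n, eps, kind, rng=None):
    """Rejection sampler for H^S_eps ('seg') or H^A_eps ('assoc')."""
    rng = rng if rng is not None else np.random.default_rng()
    out = []
    while len(out) < n:
        p = runif_triangle(rng)
        d = d_to_vertex_lines(p)
        if kind == "seg" and np.all(d > eps):                       # outside every T(y, eps)
            out.append(p)
        if kind == "assoc" and np.any(d <= np.sqrt(3)/3 - eps):      # inside some T(y, sqrt(3)/3 - eps)
            out.append(p)
    return np.array(out)

pts = sample_alternative(200, np.sqrt(3) / 8, "seg")
```

Setting eps = 0 reproduces the null model $H_o$ under either kind, consistent with the remark on $\sqrt{3}/3 - \varepsilon$ above.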
Remark 3.6. These definitions of the alternatives are given for the standard equilateral triangle. The geometry invariance result of Theorem 3.1 still holds under the alternatives $H^S_\varepsilon$ and $H^A_\varepsilon$. In particular, the segregation alternative with $\varepsilon \in (0, \sqrt{3}/4)$ in the standard equilateral triangle corresponds to the case in which, in an arbitrary triangle, $\delta \times 100\%$ of the area is carved away as forbidden from the vertices using line segments parallel to the opposite edge, where $\delta = 4\,\varepsilon^2$ (which implies $\delta \in (0, 3/4)$). But the segregation alternative with $\varepsilon \in (\sqrt{3}/4, \sqrt{3}/3)$ in the standard equilateral triangle corresponds to the case in which, in an arbitrary triangle, $\delta \times 100\%$ of the area is carved away as forbidden around the vertices using line segments parallel to the opposite edge, where $\delta = 1 - 4\,(1 - \sqrt{3}\,\varepsilon)^2$ (which implies $\delta \in (3/4, 1)$). This argument is for the segregation alternative; a similar construction is available for the association alternative.
The asymptotic normality of the relative edge density under the alternatives follows as in the null case.
Theorem 3.7. Asymptotic Normality under the Alternatives: Let $\mu_{\text{and}}(r, \varepsilon)$ be the mean and $\nu_{\text{and}}(r, \varepsilon)$ be the variance of $\rho_n^{\text{and}}(r)$ under the alternatives for $r \in [1, \infty)$ and $\varepsilon \in (0, \sqrt{3}/3)$. Then under $H^S_\varepsilon$ and $H^A_\varepsilon$, $\sqrt{n}\,(\rho_n^{\text{and}}(r) - \mu_{\text{and}}(r, \varepsilon)) \xrightarrow{\mathcal{L}} N(0,\, 4\,\nu_{\text{and}}(r, \varepsilon))$ for the values of the pair $(r, \varepsilon)$ for which $\nu_{\text{and}}(r, \varepsilon) > 0$. A similar result holds for $\rho_n^{\text{or}}(r)$.
Proof: Under the alternatives, i.e., $\varepsilon > 0$, $\rho_n^{\text{and}}(r)$ is a $U$-statistic with the same symmetric kernel $h^{\text{and}}_{ij}(r)$ as in the null case. Let $\mathbf{E}_\varepsilon[\cdot]$ be the expectation with respect to the uniform distribution under the alternatives with $\varepsilon \in (0, \sqrt{3}/3)$. The mean $\mu_{\text{and}}(r, \varepsilon) = \mathbf{E}_\varepsilon[\rho_n^{\text{and}}(r)] = \mathbf{E}_\varepsilon[h^{\text{and}}_{12}(r)]$, now a function of both $r$ and $\varepsilon$, is again in $[0, 1]$. The asymptotic variance, $4\,\nu_{\text{and}}(r, \varepsilon) = 4\,\mathbf{Cov}[h^{\text{and}}_{12}(r), h^{\text{and}}_{13}(r)]$, also a function of both $r$ and $\varepsilon$, is bounded above by $1/4$, as before. Thus asymptotic normality obtains provided $\nu_{\text{and}}(r, \varepsilon) > 0$; otherwise $\rho_n^{\text{and}}(r)$ is degenerate. Under $H^S_\varepsilon$, $\nu_{\text{and}}(r, \varepsilon) > 0$ for $(r, \varepsilon)$ in $(1, \sqrt{3}/(2\varepsilon)) \times (0, \sqrt{3}/4]$ or $(1, \sqrt{3}/\varepsilon - 2) \times (\sqrt{3}/4, \sqrt{3}/3)$, and under $H^A_\varepsilon$, $\nu_{\text{and}}(r, \varepsilon) > 0$ for $(r, \varepsilon)$ in $(1, \infty) \times (0, \sqrt{3}/3)$. Also, under $H^S_\varepsilon$, $\nu_{\text{or}}(r, \varepsilon) > 0$ for $(r, \varepsilon)$ in $[1, \sqrt{3}/(2\varepsilon)) \times (0, \sqrt{3}/4]$ or $[1, \sqrt{3}/\varepsilon - 2) \times (\sqrt{3}/4, \sqrt{3}/3)$, and under $H^A_\varepsilon$, $\nu_{\text{or}}(r, \varepsilon) > 0$ for $(r, \varepsilon)$ in $(1, \infty) \times (0, \sqrt{3}/3)$ or $\{1\} \times (0, \sqrt{3}/12)$.
Notice that for the association class of alternatives any $r \in (1, \infty)$ yields asymptotic normality for all $\varepsilon \in (0, \sqrt{3}/3)$ in both the AND- and OR-underlying cases, while for the segregation class of alternatives only $r = 1$ yields this universal asymptotic normality in the OR-underlying case, and such an $r$ does not exist for the AND-underlying case.
The relative edge density of the underlying graphs based on the PCD is a test statistic for the segregation/association alternative; rejecting for extreme values of $\rho_n^{\text{and}}(r)$ is appropriate since under segregation we expect $\rho_n^{\text{and}}(r)$ to be large, while under association we expect $\rho_n^{\text{and}}(r)$ to be small. The same holds for $\rho_n^{\text{or}}(r)$. Using the test statistics
$$R_n^{\text{and}}(r) = \frac{\sqrt{n}\,\big(\rho_n^{\text{and}}(r) - \mu_{\text{and}}(r)\big)}{\sqrt{4\,\nu_{\text{and}}(r)}} \quad \text{and} \quad R_n^{\text{or}}(r) = \frac{\sqrt{n}\,\big(\rho_n^{\text{or}}(r) - \mu_{\text{or}}(r)\big)}{\sqrt{4\,\nu_{\text{or}}(r)}}$$
for the AND- and OR-underlying cases, respectively, the asymptotic critical value for the one-sided level $\alpha$ test against segregation is given by
$$z_\alpha = \Phi^{-1}(1 - \alpha) \tag{13}$$
where $\Phi(\cdot)$ is the standard normal distribution function. The test rejects for $R_n^{\text{and}}(r) > z_\alpha$ against segregation, and rejects for $R_n^{\text{and}}(r) < z_{1-\alpha}$ against association. The same holds for the test statistic $R_n^{\text{or}}(r)$.
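As a small illustration of how these statistics would be applied in practice, the following Python sketch (ours; the function name and arguments are not from the original article) standardizes an observed relative edge density and returns one-sided p-values against segregation and association.

```python
from math import sqrt
from scipy.stats import norm

def edge_density_test(rho_obs, n, mu_null, nu_null):
    """Standardize an observed relative edge density (AND- or OR-underlying) and
    return one-sided p-values based on the normal critical values of Eq. (13).

    rho_obs : observed relative edge density
    n       : number of X points
    mu_null : asymptotic null mean, e.g. mu_and(r) or mu_or(r)
    nu_null : asymptotic null covariance term, e.g. nu_and(r) or nu_or(r)
    """
    R = sqrt(n) * (rho_obs - mu_null) / sqrt(4.0 * nu_null)
    p_segregation = 1.0 - norm.cdf(R)   # reject against segregation for large R
    p_association = norm.cdf(R)         # reject against association for small R
    return R, p_segregation, p_association
```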
Asymptotic Performance of Relative Edge Density
Consistency
Theorem 4.1. The test against $H^S_\varepsilon$ which rejects for $R_n^{\text{and}}(r) > z_\alpha$ and the test against $H^A_\varepsilon$ which rejects for $R_n^{\text{and}}(r) < z_{1-\alpha}$ are consistent for $r \in (1, \infty)$ and $\varepsilon \in (0, \sqrt{3}/3)$. The same holds for $R_n^{\text{or}}(r)$ with $r \in [1, \infty)$.

Proof: Since the variance of the asymptotically normal test statistic, under both the null and the alternatives, converges to 0 as $n \to \infty$ (or might be zero for $n < \infty$), it remains to show that the mean under the null, $\mu_{\text{and}}(r) = \mathbf{E}[\rho_n^{\text{and}}(r)]$, is less than (greater than) the mean under the alternative, $\mu_{\text{and}}(r, \varepsilon) = \mathbf{E}_\varepsilon[\rho_n^{\text{and}}(r)]$, against segregation (association) for $\varepsilon > 0$; whence it will follow that power converges to 1 as $n \to \infty$. Let $P_\varepsilon(\cdot)$ be the probability with respect to the uniform distribution under the alternatives with $\varepsilon \in (0, \sqrt{3}/3)$. Then against segregation we have $\mu_{\text{and}}(r) < \mu_{\text{and}}(r, \varepsilon)$ (and likewise in the OR-underlying case), while against association the inequality is reversed; hence consistency follows.

Pitman Asymptotic Efficiency

The Pitman asymptotic efficiency (PAE) scores are based on the second derivatives $(\mu_{\text{and}})''(r, \varepsilon = 0)$ and $(\mu_{\text{or}})''(r, \varepsilon = 0)$, normalized by $\nu_{\text{and}}(r)$ and $\nu_{\text{or}}(r)$, since $(\mu_{\text{and}})'(r, \varepsilon = 0) = (\mu_{\text{or}})'(r, \varepsilon = 0) = 0$. Equations (10) and (11) provide the denominators; the numerators require a bit of additional work, but $\mu_{\text{and}}(r, \varepsilon)$ and $\mu_{\text{or}}(r, \varepsilon)$ are available for small enough $\varepsilon$, which is all we need here. See Appendix 5 for explicit forms of $\mu_{\text{and}}(r, \varepsilon)$ and $\mu_{\text{or}}(r, \varepsilon)$ under segregation and association; the derivations of $\mu_{\text{and}}(r, \varepsilon)$ and $\mu_{\text{or}}(r, \varepsilon)$ are provided in Appendix 6.
Let $\mathrm{PAE}^S(r)$ and $\mathrm{PAE}^A(r)$ denote the PAE scores against the segregation and association alternatives, respectively, for the relative arc density of the PCD based on $N^r_{PE}$ (see Ceyhan et al. (2006) for more detail). Figure 8 presents the PAE as a function of $r$ for both segregation and association in the digraph, AND-, and OR-underlying graph cases. For large $n$ and small $\varepsilon$, PAE analysis suggests choosing $r$ large for testing against segregation in all three cases and choosing $r$ small for testing against association, arbitrarily close to 1 for the AND- and OR-underlying cases, but around 1.1 for the digraph case. Furthermore, against segregation, $\mathrm{PAE}^S_{\text{or}}(r) < \mathrm{PAE}^S(r) < \mathrm{PAE}^S_{\text{and}}(r)$, suggesting the use of the AND-underlying version. Against association, $\max\{\mathrm{PAE}^A_{\text{and}}(r), \mathrm{PAE}^A(r)\} < \mathrm{PAE}^A_{\text{or}}(r)$, implying the use of the OR-underlying version.

Unlike PAE, the Hodges-Lehmann asymptotic efficiency (HLAE) does not involve the limit as $\varepsilon \to 0$. Since this requires the mean and, especially, the asymptotic variance of $\rho_n^{\text{and}}(r)$ under the alternative, we avoid the explicit investigation of HLAE. HLAE for OR-underlying graphs can be defined similarly. The ordering of HLAE seems to be the same as that of PAE.
Remark 4.3. Asymptotic Power Function Analysis: The asymptotic power function (Kendall and Stuart (1979)) allows investigation of power as a function of $r$, $n$, and $\varepsilon$ using the asymptotic critical value and an appeal to normality. Under a specific segregation alternative $H^S_\varepsilon$, the asymptotic power function for AND-underlying graphs is given by
$$\Pi^S_{\text{and}}(r, n, \varepsilon) = 1 - \Phi\!\left( \frac{z_\alpha\, \sqrt{\nu_{\text{and}}(r)}}{\sqrt{\nu_{\text{and}}(r, \varepsilon)}} + \frac{\sqrt{n}\,\big(\mu_{\text{and}}(r) - \mu_{\text{and}}(r, \varepsilon)\big)}{\sqrt{\nu_{\text{and}}(r, \varepsilon)}} \right).$$
Under $H^A_\varepsilon$, we have
$$\Pi^A_{\text{and}}(r, n, \varepsilon) = \Phi\!\left( \frac{z_{1-\alpha}\, \sqrt{\nu_{\text{and}}(r)}}{\sqrt{\nu_{\text{and}}(r, \varepsilon)}} + \frac{\sqrt{n}\,\big(\mu_{\text{and}}(r) - \mu_{\text{and}}(r, \varepsilon)\big)}{\sqrt{\nu_{\text{and}}(r, \varepsilon)}} \right).$$
For OR-underlying graphs, the asymptotic power functions $\Pi^S_{\text{or}}(r, n, \varepsilon)$ and $\Pi^A_{\text{or}}(r, n, \varepsilon)$ are defined similarly. However, they are not investigated in this article.
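The following Python sketch (ours, for illustration) evaluates the asymptotic power function of Remark 4.3 for the AND-underlying case; the callables mu_and, nu_and, mu_and_eps, and nu_and_eps are assumed to supply the null quantities of Equations (8) and (10) and the alternative-case quantities referenced in this section.

```python
from math import sqrt
from scipy.stats import norm

def power_segregation_and(r, n, eps, alpha, mu_and, nu_and, mu_and_eps, nu_and_eps):
    """Asymptotic power of the AND-underlying test against H^S_eps (Remark 4.3).

    mu_and, nu_and         : callables r -> null mean / covariance term
    mu_and_eps, nu_and_eps : callables (r, eps) -> alternative mean / covariance term
    """
    z_alpha = norm.ppf(1.0 - alpha)
    shift = (z_alpha * sqrt(nu_and(r)) / sqrt(nu_and_eps(r, eps))
             + sqrt(n) * (mu_and(r) - mu_and_eps(r, eps)) / sqrt(nu_and_eps(r, eps)))
    return 1.0 - norm.cdf(shift)
```

The association power function is obtained analogously by replacing $z_\alpha$ with $z_{1-\alpha}$ and dropping the complement.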
Monte Carlo Simulation Analysis for Finite Sample Performance
We implement the Monte Carlo simulations under the above described null and alternatives for r ∈ {1, 11/10, 6/5, 4/3, √ 2, 3/2, 2, 3, 5}.
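To make the simulation design concrete, the following self-contained Python sketch (our own illustration, not the authors' code) computes the relative arc density and the AND-/OR-underlying relative edge densities for one uniform sample in the standard equilateral triangle. It relies on two facts implied by the definitions: $z \in \Gamma^r_1(x)$ iff $x \in N^r_{PE}(z)$, and, with median-based vertex regions, $z \in N^r_{PE}(x)$ exactly when $d(v(x), \ell_{v(x)}(z)) \le r\, d(v(x), \ell_{v(x)}(x))$, where $v(x)$ is the vertex whose barycentric coordinate at $x$ is largest; the coding details are ours.

```python
import numpy as np

# vertices of the standard equilateral triangle T(Y_3)
Y = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])

def bary_coords(P):
    """Barycentric coordinates (n x 3) of the points P (n x 2) w.r.t. the rows of Y."""
    T = np.column_stack((Y[0] - Y[2], Y[1] - Y[2]))
    b12 = np.linalg.solve(T, (P - Y[2]).T).T
    return np.column_stack((b12, 1.0 - b12.sum(axis=1)))

def arc_matrix(P, r):
    """A[i, j] = 1 iff X_j lies in N^r_PE(X_i), via the height-coordinate test."""
    B = bary_coords(P)
    v = B.argmax(axis=1)                    # vertex region of each point (median-based)
    hx = 1.0 - B[np.arange(len(P)), v]      # height coordinate of x_i w.r.t. v(x_i)
    hz = 1.0 - B[:, v]                      # hz[j, i]: height coordinate of x_j w.r.t. v(x_i)
    A = (hz.T <= r * hx[:, None]).astype(int)
    np.fill_diagonal(A, 0)
    return A

def densities(P, r):
    """Relative arc density and AND-/OR-underlying relative edge densities."""
    n = len(P)
    A = arc_matrix(P, r)
    rho = A.sum() / (n * (n - 1))               # arcs / n(n-1)
    rho_and = (A & A.T).sum() / (n * (n - 1))   # = 2|E_and| / (n(n-1))
    rho_or = (A | A.T).sum() / (n * (n - 1))    # = 2|E_or| / (n(n-1))
    return rho, rho_and, rho_or

def runif_triangle(n, rng):
    u = rng.random((n, 2))
    flip = u.sum(axis=1) > 1
    u[flip] = 1.0 - u[flip]
    return Y[0] + u[:, [0]] * (Y[1] - Y[0]) + u[:, [1]] * (Y[2] - Y[0])

rng = np.random.default_rng(1)
print(densities(runif_triangle(100, rng), r=2.0))
```

Wrapping this in a loop over replicates, under the null or under one of the alternative samplers, yields the Monte Carlo critical values and empirical power estimates discussed below.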
Monte Carlo Power Analysis under Segregation
In Figure 9, we present a Monte Carlo investigation against the segregation alternative $H^S_{\sqrt{3}/8}$ for $r = 1.1$ and $n = 10$ (left) and $n = 100$ (right). The empirical power estimates are calculated based on the Monte Carlo critical values. Let $\beta^S_{mc}(\rho_n^{\text{and}}(r))$ and $\beta^S_{mc}(\rho_n^{\text{or}}(r))$ stand for the corresponding empirical power estimates for the AND- and OR-underlying cases. With $n = 10$, the null and alternative probability density functions for $\rho_{10}^{\text{and}}(1.1)$ and $\rho_{10}^{\text{or}}(1.1)$ are very similar, implying small power (10,000 Monte Carlo replicates yield empirical power values $\beta^S_{mc}(\rho_{10}^{\text{and}}) = 0.1318$ and $\beta^S_{mc}(\rho_{10}^{\text{or}}) = 0.0539$). Among the 10,000 Monte Carlo replicates under $H_o$, we find the 95th percentile value and use it as the Monte Carlo critical value at the .05 level for the segregation alternative, and use the 5th percentile value for the association alternative. With $n = 100$, there is more separation between the null and alternative probability density functions in the underlying cases, although the separation is much less pronounced in the OR-underlying case; 1000 Monte Carlo replicates yield $\beta^S_{mc}(\rho_{100}^{\text{and}}) = 0.994$ and $\beta^S_{mc}(\rho_{100}^{\text{or}}) = 0.298$, where the empirical power estimates are based on Monte Carlo critical values. Notice also that the probability density functions are skewed right for $n = 10$ in both underlying cases, while approximate normality holds for $n = 100$.
For a given alternative and sample size we may consider optimizing the empirical power of the test as a function of the proximity factor $r$. Figure 10 presents such a Monte Carlo investigation of empirical power based on Monte Carlo critical values against $H^S_{\sqrt{3}/8}$ and $H^S_{\sqrt{3}/4}$ as a function of $r$ for $n = 10$; the corresponding empirical power estimates are given in Table 1. Our Monte Carlo estimates of $r^*_\varepsilon$, the value of $r$ which maximizes the power against $H^S_\varepsilon$, are $r^*_{\sqrt{3}/8} = 3$ and $r^*_{\sqrt{3}/4} \in [4/3, 3]$ in the AND-underlying case, and $r^*_{\sqrt{3}/8} = 2$ and $r^*_{\sqrt{3}/4} \in [4/3, 2]$ in the OR-underlying case. That is, more severe segregation (larger $\varepsilon$) suggests a smaller choice of $r$ in both cases. For both $\varepsilon$ values, smaller $r$ values are suggested in the OR-underlying case compared to the AND-underlying case.
For a given alternative and sample size we may also consider analyzing the power of the test, using the asymptotic critical value, as a function of the proximity factor $r$. Let $\alpha_n(r)$ denote the empirical significance level and $\beta_n(r)$ the empirical power estimate based on the asymptotic critical value. Figure 11 presents a Monte Carlo investigation of empirical power based on the asymptotic critical value against $H^S_{\sqrt{3}/8}$ and $H^S_{\sqrt{3}/4}$ as a function of $r$ for $n = 10$. The corresponding empirical power estimates are given in Table 2. In the AND-underlying case, the empirical significance level $\alpha_{n=10}(r)$ is closest to .05 for $r = 2$ and $3$, which have empirical power $\beta_{10}(2) = .3846$ and $\beta_{10}(3) = .5767$ for $\varepsilon = \sqrt{3}/8$, and $\beta_{10}(2) = \beta_{10}(3) = 1$ for $\varepsilon = \sqrt{3}/4$. In the OR-underlying case, the empirical significance level $\alpha_{n=10}(r)$ is closest to .05 for $r = 2$ (though larger than .05 for all $r$ values), which has empirical power $\beta_{10}(2) = .1594$ for $\varepsilon = \sqrt{3}/8$, and $\beta_{10}(2) = 1$ for $\varepsilon = \sqrt{3}/4$. So, for small sample sizes, moderate values of $r$ are more appropriate for the normal approximation, as they yield the desired significance level, and the more severe the segregation, the higher the power estimate. Furthermore, the AND-underlying version seems to perform better than the OR-underlying version for segregation alternatives.
Monte Carlo Power Analysis under Association
In Figure 12, we present a Monte Carlo investigation against the association alternative $H^A_{\sqrt{3}/12}$ for $r = 1.1$ and $n = 10$ (left) and $n = 100$ (right). The empirical power estimates are calculated based on the Monte Carlo critical values. Let $\beta^A_{mc}(\rho_n^{\text{and}}(r))$ and $\beta^A_{mc}(\rho_n^{\text{or}}(r))$ stand for the corresponding empirical power estimates for the AND- and OR-underlying cases. As above, with $n = 10$, the null and alternative probability density functions for $\rho_{10}^{\text{and}}(1.1)$ and $\rho_{10}^{\text{or}}(1.1)$ are very similar, implying small power (in fact, virtually no power): 10,000 Monte Carlo replicates yield the empirical power estimates $\beta^A_{mc}(\rho_{10}^{\text{and}}) = 0.0$ and $\beta^A_{mc}(\rho_{10}^{\text{or}}) = 0.0$ based on Monte Carlo critical values. With $n = 100$, there is more separation between the null and alternative probability density functions in the underlying cases, although the separation is much less pronounced in the AND-underlying case; for this case, 1000 Monte Carlo replicates yield the empirical power estimates $\beta^A_{mc}(\rho_{100}^{\text{and}}) = 0.009$ and $\beta^A_{mc}(\rho_{100}^{\text{or}}) = 0.939$ based on Monte Carlo critical values. Notice also that the probability density functions are skewed right for $n = 10$ in both underlying cases, with more skewness in the OR-underlying case, while approximate normality holds for $n = 100$ in both cases.
In Figure 13, we also present a Monte Carlo investigation of empirical power based on Monte Carlo critical values against H A √ 3/12 and H A 5 √ 3/24 as a function of r for n = 10 with 1000 replicates. The corresponding empirical power estimates are presented in Table 3. Our Monte Carlo estimates of r * ε are r * √ 3/12 = 2 and r * 5 √ 3/24 = 3 in both underlying cases. That is, more severe association (larger ε) suggests a larger choice of r in both cases.
In Figure 14, we present a Monte Carlo investigation of power based on asymptotic critical values against $H^A_{\sqrt{3}/12}$ and $H^A_{5\sqrt{3}/24}$ as a function of $r$ for $n = 10$. In the AND-underlying case, the empirical significance level $\alpha_{n=10}(r)$ is about .05 for $r = 2$ and $3$, which have empirical power $\beta_{10}(2) \approx .2$ (with maximum power at $r = 2$) for $\varepsilon = \sqrt{3}/12$, and $\beta_{10}(3) = 1$ for $\varepsilon = 5\sqrt{3}/24$. In the OR-underlying case, the empirical significance level $\alpha_{n=10}(r)$ is closest to .05 for $r = 1.5$, which has empirical power $\beta_{10}(1.5) \approx .45$ for $\varepsilon = \sqrt{3}/12$, and $\beta_{10}(1.5) = 1$ for $\varepsilon = 5\sqrt{3}/24$. So, for small sample sizes, moderate values of $r$ are more appropriate for the normal approximation, as they yield the desired significance level, and the more severe the association, the higher the power estimate. Furthermore, the OR-underlying version seems to perform better than the AND-underlying version for association alternatives. The empirical significance levels and empirical power values $\beta^A_n(r, \varepsilon)$ based on asymptotic critical values under $H^A_\varepsilon$ for $\varepsilon = \sqrt{3}/12, 5\sqrt{3}/24$ are given in Table 4.

Table 2: The empirical significance levels, $\alpha^S(n)$, and empirical power values, $\beta^S_n(r, \varepsilon)$, based on asymptotic critical values under $H^S_\varepsilon$ for $\varepsilon = \sqrt{3}/8, \sqrt{3}/4$, $N_{mc} = 10000$, and $n = 10$ at $\alpha = .05$.
Multiple Triangle Case
Suppose $\mathcal{Y}_m$ is a finite collection of $m > 3$ points in $\mathbb{R}^2$. Consider the Delaunay triangulation (assumed to exist) of $\mathcal{Y}_m$. Let $T_i$ denote the $i$th Delaunay triangle, $J_m$ denote the number of triangles, and $C_H(\mathcal{Y}_m)$ denote the convex hull of $\mathcal{Y}_m$. We wish to investigate $H_o: X_i \stackrel{iid}{\sim} U(C_H(\mathcal{Y}_m))$ against segregation and association alternatives using the relative edge densities of the associated underlying graphs. The underlying graphs are constructed using the PCD $D$, which is constructed using $N^r_{PE}(\cdot)$ as described in Section 2.4, where for $X_i \in T_j$, the three points in $\mathcal{Y}_m$ defining the Delaunay triangle $T_j$ are used as $\mathcal{Y}_{[j]}$. We consider various versions of the relative edge density as a test statistic in the multiple triangle case.
First Version of Relative Edge Density in the Multiple Triangle Case
For $J_m > 1$, as in Section 2.5, let $\rho^{\text{and}}_{I,n}(r) = 2\,|E^{\text{and}}|/(n(n-1))$ and $\rho^{\text{or}}_{I,n}(r) = 2\,|E^{\text{or}}|/(n(n-1))$. Let $E^{\text{and}}_i$ be the number of edges and $\rho^{\text{and}}_{[i]}(r)$ be the relative edge density for triangle $i$ in the AND-underlying case, and let $E^{\text{or}}_i$ and $\rho^{\text{or}}_{[i]}(r)$ be similarly defined for the OR-underlying case. Let $n_i$ be the number of $X$ points in $T_i$ for $i = 1, 2, \ldots, J_m$. Letting $w_i = A(T_i)/A(C_H(\mathcal{Y}_m))$ with $A(\cdot)$ being the area functional, we obtain the following as a corollary to Theorem 3.2.
Corollary 6.1. The asymptotic null distribution for $\rho^{\text{and}}_{I,n}(r)$ conditional on $\mathcal{Y}_m$ for $r \in (1, \infty)$ is given by
$$\sqrt{n}\,\big(\rho^{\text{and}}_{I,n}(r) - \widetilde{\mu}_{\text{and}}(r)\big) \xrightarrow{\mathcal{L}} N\big(0,\, 4\,\widetilde{\nu}_{\text{and}}(r)\big), \tag{14}$$
where $\widetilde{\mu}_{\text{and}}(r) = \mu_{\text{and}}(r) \sum_{i=1}^{J_m} w_i^2$ and $\widetilde{\nu}_{\text{and}}(r) = \nu_{\text{and}}(r) \sum_{i=1}^{J_m} w_i^3 + (\mu_{\text{and}}(r))^2 \Big[ \sum_{i=1}^{J_m} w_i^3 - \big( \sum_{i=1}^{J_m} w_i^2 \big)^2 \Big]$, with $\mu_{\text{and}}(r)$ and $\nu_{\text{and}}(r)$ being as in Equations (8) and (10), respectively. The asymptotic null distribution of $\rho^{\text{or}}_{I,n}(r)$ with $r \in [1, \infty)$ is similar.
The proof is provided in Appendix 7. By an appropriate application of Jensen's inequality, we see that $\sum_{i=1}^{J_m} w_i^3 \ge \big(\sum_{i=1}^{J_m} w_i^2\big)^2$. So the covariance above is zero iff $\nu_{\text{and}}(r) = 0$ and $\sum_{i=1}^{J_m} w_i^3 = \big(\sum_{i=1}^{J_m} w_i^2\big)^2$, so asymptotic normality may hold even though $\nu_{\text{and}}(r) = 0$. The same holds for the OR-underlying case.
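To see how the triangulation enters, the short Python sketch below (ours, for illustration) computes the adjusted mean and covariance term of Corollary 6.1 from the Delaunay-triangle area weights $w_i$; mu_and and nu_and denote the single-triangle quantities of Equations (8) and (10) at the chosen $r$.

```python
import numpy as np

def multi_triangle_moments(weights, mu_and, nu_and):
    """Adjusted asymptotic mean and covariance term of Corollary 6.1.

    weights : array of w_i = A(T_i) / A(C_H(Y_m)), summing to 1
    mu_and, nu_and : single-triangle null mean and covariance term
    """
    w = np.asarray(weights, dtype=float)
    s2, s3 = (w**2).sum(), (w**3).sum()
    mu_tilde = mu_and * s2
    nu_tilde = nu_and * s3 + mu_and**2 * (s3 - s2**2)  # nonnegative since sum w^3 >= (sum w^2)^2
    return mu_tilde, nu_tilde

# e.g. three equal-area triangles: w = (1/3, 1/3, 1/3)
mu_t, nu_t = multi_triangle_moments([1/3, 1/3, 1/3], mu_and=0.5, nu_and=0.01)
```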
Under the segregation (association) alternatives in which $\delta \times 100\%$ of the area around the vertices of each triangle, with $\delta = 4\,\varepsilon^2/3$, is forbidden (allowed), we obtain the above asymptotic distribution of $\rho^{\text{and}}_{I,n}(r)$ with $\mu_{\text{and}}(r)$ replaced by $\mu_{\text{and}}(r, \varepsilon)$ and $\nu_{\text{and}}(r)$ by $\nu_{\text{and}}(r, \varepsilon)$. The OR-underlying case is similar.
Other Versions of Relative Edge Density in the Multiple Triangle Case
Let $\Xi^{\text{and}}_n(r) := \sum_{i=1}^{J_m} \frac{n_i(n_i - 1)}{n(n-1)}\, \rho^{\text{and}}_{[i]}(r)$. Then $\Xi^{\text{and}}_n(r) = \rho^{\text{and}}_{I,n}(r)$, since
$$\Xi^{\text{and}}_n(r) = \sum_{i=1}^{J_m} \frac{n_i(n_i - 1)}{n(n-1)}\, \rho^{\text{and}}_{[i]}(r) = \sum_{i=1}^{J_m} \frac{2\,|E^{\text{and}}_i|}{n(n-1)} = \frac{2\,|E^{\text{and}}|}{n(n-1)} = \rho^{\text{and}}_{I,n}(r).$$
Similarly, $\Xi^{\text{or}}_n(r) = \rho^{\text{or}}_{I,n}(r)$.
Furthermore, let $\widetilde{\Xi}^{\text{and}}_n(r) := \sum_{i=1}^{J_m} w_i^2\, \rho^{\text{and}}_{[i]}(r)$, where $w_i$ is as above. So $\widetilde{\Xi}^{\text{and}}_n(r)$ and $\rho^{\text{and}}_{I,n}(r)$ are asymptotically equivalent, and since the $\rho^{\text{and}}_{[i]}(r)$'s are asymptotically independent, $\widetilde{\Xi}^{\text{and}}_n(r)$ and $\rho^{\text{and}}_{I,n}(r)$ are asymptotically normal; i.e., for large $n$ their distribution is approximately $N(\widetilde{\mu}_{\text{and}}(r),\, 4\,\widetilde{\nu}_{\text{and}}(r)/n)$. A similar result holds for the OR-underlying case.
In Section 6.1, the denominator of $\rho^{\text{and}}_{I,n}(r)$ has $n(n-1)/2$ as the maximum number of edges possible. However, by definition, given the $n_i$'s we can at most have a graph with $J_m$ complete components, each with order $n_i$ for $i = 1, 2, \ldots, J_m$. Then the maximum number of edges possible is $n_t := \sum_{i=1}^{J_m} n_i(n_i - 1)/2$, which suggests another version of relative edge density: $\rho^{\text{and}}_{II,n}(r) := |E^{\text{and}}|/n_t$. Then
$$\rho^{\text{and}}_{II,n}(r) = \sum_{i=1}^{J_m} \frac{|E^{\text{and}}_i|}{n_t} = \sum_{i=1}^{J_m} \frac{n_i(n_i - 1)}{2\,n_t}\, \rho^{\text{and}}_{[i]}(r).$$
Since $\frac{n_i(n_i - 1)}{2\,n_t} \ge 0$ for each $i$ and $\sum_{i=1}^{J_m} \frac{n_i(n_i - 1)}{2\,n_t} = 1$, $\rho^{\text{and}}_{II,n}(r)$ is a mixture of the $\rho^{\text{and}}_{[i]}(r)$'s.
Theorem 6.2. The asymptotic null distribution for $\rho^{\text{and}}_{II,n}(r)$ conditional on $\mathcal{Y}_m$ for $r \in (1, \infty)$ is given by
$$\sqrt{n}\,\big(\rho^{\text{and}}_{II,n}(r) - \widehat{\mu}_{\text{and}}(r)\big) \xrightarrow{\mathcal{L}} N\big(0,\, 4\,\widehat{\nu}_{\text{and}}(r)\big), \tag{15}$$
where $\widehat{\mu}_{\text{and}}(r) = \mu_{\text{and}}(r)$ and
$$\widehat{\nu}_{\text{and}}(r) = \nu_{\text{and}}(r)\, \frac{\sum_{i=1}^{J_m} w_i^3}{\big(\sum_{i=1}^{J_m} w_i^2\big)^2},$$
with $\mu_{\text{and}}(r)$ and $\nu_{\text{and}}(r)$ being as in Equations (8) and (10), respectively. The asymptotic null distribution of $\rho^{\text{or}}_{II,n}(r)$ with $r \in [1, \infty)$ is similar.
The proof is provided in Appendix 8. Notice that the covariance $\widehat{\nu}_{\text{and}}(r)$ is zero iff $\nu_{\text{and}}(r) = 0$. Under the segregation (association) alternatives, we obtain the above asymptotic distribution of $\rho^{\text{and}}_{II,n}(r)$ with $\mu_{\text{and}}(r)$ replaced by $\mu_{\text{and}}(r, \varepsilon)$ and $\nu_{\text{and}}(r)$ by $\nu_{\text{and}}(r, \varepsilon)$. The OR-underlying case is similar.
Remark 6.3. Comparison of Versions of Relative Edge Density in the Multiple Triangle Case: Among the versions of the relative edge density we considered, $\Xi^{\text{and}}_n(r) = \rho^{\text{and}}_{I,n}(r)$ for all $n > 1$, and $\widetilde{\Xi}^{\text{and}}_n(r)$ and $\rho^{\text{and}}_{I,n}(r)$ are asymptotically equivalent (i.e., they have the same asymptotic distribution in the limit). However, $\rho^{\text{and}}_{I,n}(r)$ and $\rho^{\text{and}}_{II,n}(r)$ do not have the same distribution for finite or infinite $n$. But we have $\rho^{\text{and}}_{I,n}(r) = \frac{2\,n_t}{n(n-1)}\, \rho^{\text{and}}_{II,n}(r)$ and $\widetilde{\mu}_{\text{and}}(r) < \widehat{\mu}_{\text{and}}(r) = \mu_{\text{and}}(r)$, since $\sum_{i=1}^{J_m} w_i^2 < 1$. Furthermore, since $\frac{2\,n_t}{n(n-1)} = \sum_{i=1}^{J_m} \frac{n_i(n_i - 1)}{n(n-1)} \longrightarrow \sum_{i=1}^{J_m} w_i^2$, we have $\lim_{n_i \to \infty} \mathbf{Var}[\sqrt{n}\,\rho^{\text{and}}_{I,n}(r)] = \big(\sum_{i=1}^{J_m} w_i^2\big)^2 \lim_{n_i \to \infty} \mathbf{Var}[\sqrt{n}\,\rho^{\text{and}}_{II,n}(r)]$. Hence $\widehat{\nu}_{\text{and}}(r) \ge \widetilde{\nu}_{\text{and}}(r)$. Therefore, we choose $\rho^{\text{and}}_{I,n}(r)$ for further analysis in the multiple triangle case. Moreover, asymptotic normality might hold for $\rho^{\text{and}}_{I,n}(r)$ even if $\nu_{\text{and}}(r) = 0$.
Power Analysis for the Multiple Triangle Case
Let $S^{\text{and}}_n(r) := \rho^{\text{and}}_{I,n}(r)$ and $S^{\text{or}}_n(r) := \rho^{\text{or}}_{I,n}(r)$. Thus in the case of $J_m > 1$ (i.e., $m > 3$), we have a (conditional) test of $H_o: X_i \stackrel{iid}{\sim} U(C_H(\mathcal{Y}_m))$ which once again rejects against segregation for large values of $S^{\text{and}}_n(r)$ and rejects against association for small values of $S^{\text{and}}_n(r)$. The same holds for $S^{\text{or}}_n(r)$.

Table 4: The empirical significance level and empirical power estimates based on asymptotic critical values under $H^A_\varepsilon$ for $\varepsilon = \sqrt{3}/12, 5\sqrt{3}/24$, $N_{mc} = 10000$, and $n = 10$ at $\alpha = .05$.

Depicted in Figures 15 and 16 are realizations of $n = 100$ and $n = 1000$ points independent and identically distributed according to the segregation with $\delta = 1/16$, the null, and the association with $\delta = 1/4$ (from left to right) for $|\mathcal{Y}_m| = 10$ and $J_{10} = 13$.
With n = 100 , for the null realization, the p-value is greater than 0.1 for all r except r = 1, 4/3, √ 2 for both alternatives in the AND-underlying case, and for all r values and both alternatives in the OR-underlying case. For the segregation realization with δ = 1/16, we obtain p < 0.018 for all r values except r = 1 in the AND-underlying case and p < 0.02 for all r values in the OR-underlying case. For the association realization with δ = 1/4, we obtain p < 0.043 for r = 2, 3 in the AND-underlying case and p < 0.05 for r = 4/3, √ 2, 1.5, 2 in the OR-underlying case.
With n = 1000, in the AND-underlying case under the null distribution, p > .05 for all r values relative to segregation and association. Under segregation with δ = 1/16, p < .01 for all r values considered. Under association with δ = 1/4, p < .01 for r ∈ {4/3, √ 2, 1.5, 2, 3, 5} and p > .05 for the other r values considered. In the OR-underlying case under the null distribution, p > .05 for all r values relative to segregation and association. Under segregation with δ = 1/16, p < .01 for r ∈ {1.1, 1.2, 4/3, √ 2, 1.5, 2, 3, 5} and p > .05 for the other r values considered. Under association with δ = 1/4, p < .01 for r ∈ {1.1, 1.2, 4/3, √ 2, 1.5, 2, 3} and p > .05 for the other r values considered.
We repeat the null realization 1000 times for $n = 100$ and find the estimated significance level to be above 0.05 for the AND-underlying case relative to both alternatives, the smallest being 0.12 at $r = 2$ relative to segregation and 0.099 at $r = 2$ relative to association. The associated empirical size and power estimates are presented in Figures 17 and 18. These results indicate that $n = 100$ (i.e., an average of about 8 points per triangle) is not enough for the normal approximation in the AND-underlying case. For the OR-underlying case, the estimated significance level relative to segregation closest to 0.05 is 0.03, attained at $r = 5$, and it is much different from 0.05 at all other $r$ values. The estimated significance levels relative to association are larger than 0.25 for all $r$ values. Again the number of points per triangle is not large enough for the normal approximation. With $n = 500$ (i.e., an average of about 40 points per triangle), the estimated significance levels get closer to 0.05; however, they are still all above 0.05, hence for moderate sample sizes the tests using the relative edge density of the underlying graphs are liberal in rejecting $H_o$. The empirical power analysis suggests the choice of $r = 2$, a moderate $r$ value, for both alternatives in both underlying cases. Note also that the AND-underlying case seems to perform better for segregation.
The PAE is given for $J_m = 1$ in Section 4.2. For $J_m > 1$, the analysis will depend on both the number of triangles and the sizes of the triangles. So the optimal $r$ values suggested for the $J_m = 1$ case do not necessarily hold for $J_m > 1$ and need to be updated given the $\mathcal{Y}_m$ points. The conditional test presented here is appropriate when the $\mathcal{Y}_m$ are fixed. An unconditional version requires the joint distribution of the number and size of Delaunay triangles when $\mathcal{Y}_m$ is, for instance, a Poisson point pattern. Alas, this joint distribution is not available (Okabe et al. (2000)).
Extension to Higher Dimensions
The extension to $\mathbb{R}^d$ for $d > 2$ is straightforward. Let $\mathcal{Y}_{d+1} = \{y_1, y_2, \ldots, y_{d+1}\}$ be $d + 1$ points in general position. Denote the simplex formed by these $d + 1$ points as $S(\mathcal{Y}_{d+1})$. A simplex is the simplest polytope in $\mathbb{R}^d$, having $d + 1$ vertices, $d(d+1)/2$ edges, and $d + 1$ faces of dimension $(d-1)$. For $r \in [1, \infty]$, define the proportional-edge proximity map as follows. Given a point $x$ in $S(\mathcal{Y}_{d+1})$, let $y := \arg\min_{y \in \mathcal{Y}_{d+1}} \mathrm{volume}(Q_y(x))$, where $Q_y(x)$ is the polytope with vertices being the $d(d+1)/2$ midpoints of the edges, the vertex $y$, and $x$. That is, the vertex region for vertex $v$ is the polytope with vertices given by $v$ and the midpoints of the edges.
Let $v(x)$ be the vertex in whose region $x$ falls. (If $x$ falls on the boundary of two vertex regions or at the center of mass, we assign $v(x)$ arbitrarily.) Let $\varphi(x)$ be the face opposite to vertex $v(x)$, and $\eta(v(x), x)$ be the hyperplane parallel to $\varphi(x)$ which contains $x$. Let $d(v(x), \eta(v(x), x))$ be the (perpendicular) Euclidean distance from $v(x)$ to $\eta(v(x), x)$. For $r \in [1, \infty)$, let $\eta_r(v(x), x)$ be the hyperplane parallel to $\varphi(x)$ such that $d(v(x), \eta_r(v(x), x)) = r\, d(v(x), \eta(v(x), x))$ and $d(\eta(v(x), x), \eta_r(v(x), x)) < d(v(x), \eta_r(v(x), x))$. Let $S_r(x)$ be the polytope similar to and with the same orientation as $S$ having $v(x)$ as a vertex and $\eta_r(v(x), x)$ as the opposite face. Then the proportional-edge proximity region is $N^r_{PE}(x) := S_r(x) \cap S(\mathcal{Y}_{d+1})$. Furthermore, let $\zeta_i(x)$ be the hyperplane parallel to the face opposite $y_i$ such that $\zeta_i(x) \cap S(\mathcal{Y}_{d+1}) \ne \emptyset$ and $r\, d(y_i, \zeta_i(x)) = d(y_i, \eta(y_i, x))$ for $i = 1, 2, \ldots, d+1$. Then $\Gamma^r_1(x) \cap R(y_i) = \{z \in R(y_i) : d(y_i, \eta(y_i, z)) \ge d(y_i, \zeta_i(x))\}$ for $i = 1, 2, \ldots, d+1$. Hence $\Gamma^r_1(x) = \cup_{i=1}^{d+1} \big( \Gamma^r_1(x) \cap R(y_i) \big)$. Notice that $r \ge 1$ implies $x \in N^r_{PE}(x)$ and $x \in \Gamma^r_1(x)$. Theorem 3.1 generalizes, so that any simplex $S$ in $\mathbb{R}^d$ can be transformed into a regular polytope (with edges being equal in length and faces being equal in volume) preserving uniformity. Delaunay triangulation becomes Delaunay tessellation in $\mathbb{R}^d$, provided no more than $d + 1$ points are cospherical (lying on the boundary of the same sphere). In particular, with $d = 3$, the general simplex is a tetrahedron (4 vertices, 4 triangular faces, and 6 edges), which can be mapped into a regular tetrahedron (whose 4 faces are equilateral triangles) with vertices $(0, 0, 0)$, $(1, 0, 0)$, $(1/2, \sqrt{3}/2, 0)$, $(1/2, \sqrt{3}/4, \sqrt{3}/2)$.
Asymptotic normality of the U -statistic and consistency of the tests hold for d > 2 in both underlying cases.
Discussion and Conclusions
In this article, we consider the asymptotic distribution of the relative edge density of the underlying graphs based on (parametrized) proportional-edge proximity catch digraphs (PCDs), for testing bivariate spatial point patterns of segregation and association. To our knowledge, the PCD-based methods are the only graph-theoretic methods for testing spatial patterns in the literature (Ceyhan and Priebe (2005), Ceyhan et al. (2006), and Ceyhan et al. (2007)). The proportional-edge PCDs lend themselves to such a purpose because of the geometry invariance property for uniform data on Delaunay triangles. Let the two samples of sizes $n$ and $m$ be from classes $\mathcal{X}$ and $\mathcal{Y}$, respectively, with $\mathcal{X}$ points being used as the vertices of the PCDs and $\mathcal{Y}$ points being used in the construction of the Delaunay triangulation. For the relative density approach to be appropriate, $n$ should be much larger than $m$. This implies that $n$ tends to infinity while $m$ is assumed to be fixed. That is, the difference in the relative abundance of the two classes should be large for this method. Such an imbalance usually confounds the results of other spatial interaction tests. Furthermore, we can perform Monte Carlo randomization to remove the conditioning on $\mathcal{Y}_m$.
Previously, Ceyhan et al. (2006) employed the relative (arc) density of the proportional-edge PCDs for testing bivariate spatial patterns. In this work, we consider the AND- and OR-underlying graphs based on this PCD; in particular, we demonstrate that the relative edge density of these underlying graphs is a $U$-statistic and, employing the asymptotic normality of $U$-statistics, we derive the asymptotic distribution of the relative edge density. We then use the relative edge density as a test statistic for testing segregation and association. The null hypothesis is assumed to be CSR of $\mathcal{X}$ points, i.e., the uniformness of $\mathcal{X}$ points in the convex hull of $\mathcal{Y}$ points. Although we have two classes here, the null pattern is not CSR independence, since for finite $m$ we condition on $m$ and the areas of the Delaunay triangles based on $\mathcal{Y}$ points, as long as they are not co-circular.
There are many types of parametrizations for the alternatives. The particular parametrization of the alternatives in this article is chosen so that the distribution of the relative edge density under the alternatives would be geometry invariant (i.e., independent of the geometry of the support triangles). The more natural alternatives (i.e., the alternatives that are more likely to be found in practice) can be similar to, or might be approximated by, our parametrization, because in any segregation alternative the $\mathcal{X}$ points will tend to be farther away from the $\mathcal{Y}$ points, and in any association alternative the $\mathcal{X}$ points will tend to cluster around the $\mathcal{Y}$ points. Such patterns can be detected by the test statistics based on the relative edge density, since under segregation (whether it is parametrized as in Section 3.2 or not) we expect them to be larger, and under association (regardless of the parametrization) they tend to be smaller.
Our Monte Carlo simulation analysis and asymptotic efficiency analysis based on Pitman asymptotic efficiency reveal that the AND-underlying graph has better power performance against segregation compared to the digraph and the OR-underlying version. On the other hand, the OR-underlying graph has better power performance against association compared to the digraph and the AND-underlying version. When the number of $\mathcal{X}$ points per triangle is less than 30, we recommend the use of Monte Carlo randomization; otherwise we recommend the use of the normal approximation as $n \to \infty$. Furthermore, when testing against segregation we recommend the parameter $r \approx 2$, while for testing against association we recommend parameters $r \in (2, 3)$, as they exhibit the better performance in terms of size and power.

The variance term is
$$\mathbf{Var}[h^{\text{and}}_{12}(r)] = \varphi^{\text{and}}_{1,1}(r)\,\mathbf{I}(r \in [1, 4/3)) + \varphi^{\text{and}}_{1,2}(r)\,\mathbf{I}(r \in [4/3, 3/2)) + \varphi^{\text{and}}_{1,3}(r)\,\mathbf{I}(r \in [3/2, 2)) + \varphi^{\text{and}}_{1,4}(r)\,\mathbf{I}(r \in [2, \infty))$$
where ϕ and 1,1 (r) = − (5 r 6 −153 r 5 +393 r 4 −423 r 3 −54 r 2 +360 r−128)(447 r 4 −261 r 3 +54 r 2 +5 r 6 −153 r 5 +360 r−128)
2916 r 4 (r+2) 2 (r+1) 2 , ϕ and 1,2 (r) = − (101 r 5 −801 r 4 +1302 r 3 −732 r 2 −536 r+672)(1518 r 3 −84 r 2 −104 r+101 r 5 −801 r 4 +672) 46656 r 2 (r+2) 2 (r+1) 2 , ϕ and 1,3 (r) = − (r 8 −13 r 7 +30 r 6 +148 r 5 −448 r 4 +264 r 3 +288 r 2 −368 r+96)(22 r 6 +124 r 5 −464 r 4 +r 8 −13 r 7 +264 r 3 +288 r 2 −368 r+96) 64 r 8 (r+2) 2 (r+1) 2 , ϕ and 1,4 (r) = (r 5 +r 4 −3 r 3 −3 r 2 +6 r−2)(3 r 3 +3 r 2 −6 r+2) where ϑ and 1 (r) = − 1 58320 (2 r 2 + 1)(r + 2) 2 (r + 1) 3 r 6 ((r − 1) 2 (972 r 19 + 8748 r 18 + 44456 r 17 + 140328 r 16 + 121371 r 15 − 412117 r 14 − 27145 r 13 − 4503501 r 12 + 1336147 r 11 + 10640999 r 10 − 982009 r 9 − 6677105 r 8 − 2274458 r 7 − 1150162 r 6 + 249126 r 5 + 1232530 r 4 + 1234372 r 3 + 226776 r 2 − 184944 r − 81920)) ϑ and 2 (r) = − 1 116640 (2 r 2 + 1)(r + 2) 2 (r + 1) 3 r 6 (486 r 21 + 3402 r 20 − 269 r 19 − 45155 r 18 − 118850 r 17 + 443518 r 16 + 3251855 r 15 − 13836295 r 14 + 13434672 r 13 + 11140788 r 12 − 27667544 r 11 + 13293088 r 10 + 7159710 r 9 − 13013598 r 8 + 4185440 r 7 + 3262952 r 6 + 586636 r 5 − 1616444 r 4 − 680120 r 3 − 55952 r 2 + 219936 r + 49152) ϑ and 3 (r) = − 1 116640 (2 r 2 + 1)(r + 2) 2 (r + 1) 3 r 6 (486 r 21 + 3402 r 20 − 269 r 19 − 45155 r 18 − 118850 r 17 + 443518 r 16 + 2751855 r 15 − 13736295 r 14 + 18084672 r 13 + 8770788 r 12 − 43009544 r 11 + 24604048 r 10 + 27137438 r 9 − 30889822 r 8 − 2832544 r 7 + 11101160 r 6 − 4168820 r 5 + 2364868 r 4 + 2305864 r 3 − 3041936 r 2 + 219936 r + 49152) ϑ and 4 (r) = − 1 58320 (r + 2) 3 (r 2 − 2)(2 r 2 + 1)(r + 1) 3 r 6 (3632 r 22 + 25632 r 21 − 60328 r 20 − 441888 r 19 + 1353430 r 18 − 297666 r 17 − 4791125 r 16 + 12849927 r 15 − 10894618 r 14 − 26295324 r 13 + 62283823 r 12 − 2280753 r 11 − 81700012 r 10 +32551926 r 9 +39974410 r 8 −11284026 r 7 −5806580 r 6 −9167580 r 5 −2004944 r 4 +4646688 r 3 +1931776 r 2 −489024 r−98304) ϑ and 5 (r) = ϑ and 6 (r) = − 1 58320 (r + 2) 3 (2 r 2 + 1)(r 2 + 1)(r + 1) 3 r 6 (3632 r 22 +25632 r 21 −49432 r 20 −364992 r 19 +958940 r 18 − 1167012 r 17 + 1200518 r 16 + 5424126 r 15 − 23566328 r 14 + 23837088 r 13 + 11797395 r 12 − 41623065 r 11 + 39261953 r 10 −8239197 r 9 −30178496 r 8 +27901506 r 7 −4936170 r 6 +61038 r 5 +4719720 r 4 −5513952 r 3 +340736 r 2 +23328 r+65536) ϑ and 7 (r) = 1 466560 (r + 2) 3 (2 r 2 + 1)(r 2 + 1)(r + 1) 3 r 5 (1562 r 21 − 11142 r 20 − 103099 r 19 + 2105697 r 18 − 9774118 r 17 + 10220280 r 16 + 27825711 r 15 − 69243129 r 14 + 81624200 r 13 − 76052574 r 12 − 65530400 r 11 + 262451196 r 10 − 178092280 r 9 −69106464 r 8 +158439568 r 7 −97568688 r 6 +12246288 r 5 +17591952 r 4 −21111616 r 3 +15628032 r 2 −2545664 r+993024) ϑ and 8 (r) = − 1 1920 (r + 2) 3 (r 2 + 1)(2 r 2 + 1)(r + 1) 3 r 10 (2 r 26 −30 r 25 −2395 r 23 +281 r 24 +8770 r 22 +29528 r 21 −268053 r 20 + 245667 r 19 + 2066216 r 18 − 5313494 r 17 − 1589216 r 16 + 18512684 r 15 − 18946136 r 14 − 2665248 r 13 + 22789584 r 12 − 32987760 r 11 +20482512 r 10 +13109584 r 9 −28084416 r 8 +17326976 r 7 −3864576 r 6 −4579328 r 5 +6666240 r 4 −3576320 r 3 + 635904 r 2 − 116736 r + 61440) ϑ and 9 (r) = − 1 1920 (r + 2) 3 (r 2 + 1)(2 r 2 + 1)(r + 1) 3 r 10 (2 r 26 −30 r 25 −2395 r 23 281 r 24 +8258 r 22 +31064 r 21 −262677 r 20 + 225443 r 19 + 2052136 r 18 − 5219030 r 17 − 1608928 r 16 + 18337836 r 15 − 18837080 r 14 − 2598688 r 13 + 22736336 r 12 − 32858736 r 11 +20384720 r 10 +12930896 r 9 −27988416 r 8 +17416832 r 7 −3862784 r 6 −4575488 r 5 +6638848 r 4 −3603200 r 3 + 640512 r 2 − 107520 r + 63488) ϑ and 10 (r) = − 1 1920 (r + 2) 3 (r − 1)(r + 1) 3 (2 r 2 − 1)r 10 (2 r 25 +307 r 
23 −32 r 24 −2612 r 22 +11572 r 21 +21934 r 20 −328867 r 19 + 524994 r 18 + 2446870 r 17 − 8676180 r 16 − 437020 r 15 + 36944680 r 14 − 40677696 r 13 − 44860384 r 12 + 106256352 r 11 − 15515040 r 10 − 98636848 r 9 + 66358080 r 8 + 27142272 r 7 − 42614272 r 6 + 7781120 r 5 + 7327232 r 4 − 3388672 r 3 + 430592 r 2 − 171008 r + 63488) ϑ and 11 (r) = 1 15 (2 r 2 − 1)(r + 1) 3 r 10 (30 r 13 + 90 r 12 − 127 r 11 − 621 r 10 + 320 r 9 + 1568 r 8 − 858 r 7 − 1370 r 6 + 909 r 5 + 295 r 4 − 292 r 3 + 44 r 2 + 6 r − 2) and I 1 = [1, 2/ The variance term is Var [h or 12 (r)] = ϕ or 1,1 (r)I(r ∈ [1, 4/3)) + ϕ or 1,2 (r)I(r ∈ [4/3, 3/2)) + ϕ or 1,3 (r)I(r ∈ [3/2, 2)) + ϕ or 1,4 (r)I(r ∈ [2, ∞))
where
$$\varphi^{\text{or}}_{1,1}(r) = -\frac{(47r^6 - 195r^5 + 860r^4 - 846r^3 - 108r^2 + 720r - 256)\,(47r^6 - 195r^5 + 752r^4 - 1170r^3 - 324r^2 + 720r - 256)}{11664\, r^4 (r+2)^2 (r+1)^2},$$
$$\varphi^{\text{or}}_{1,2}(r) = -\frac{(175r^5 - 579r^4 + 1450r^3 - 732r^2 - 536r + 672)\,(175r^5 - 579r^4 + 1234r^3 - 1380r^2 - 968r + 672)}{46656\, r^2 (r+2)^2 (r+1)^2},$$
$$\varphi^{\text{or}}_{1,3}(r) = -\frac{(3r^8 - 7r^7 - 30r^6 + 84r^5 - 264r^4 + 304r^3 + 144r^2 - 368r + 96)\,(3r^8 - 7r^7 - 22r^6 + 108r^5 - 248r^4 + 304r^3 + 144r^2 - 368r + 96)}{64\, r^8 (r+2)^2 (r+1)^2},$$
$$\varphi^{\text{or}}_{1,4}(r) = \frac{2\,(r^5 + r^4 - 6r + 2)(3r - 1)}{r^8 (r+1)^2}.$$
See Figure 19.
Note that Var or (r = 1) = 2627/11664 and lim r→∞ Var or (r) = 0 (at rate O(r −4 )), and argsup r∈[1,∞) Var or (r) ≈ 1.44 with sup Var or (r) ≈ .25. where ϑ or 1 (r) = − 1 58320 (r 2 + 1)(2 r 2 + 1)(r + 1) 3 (r + 2) 3 r 6 (1458 r 22 +13122 r 21 +50731 r 20 −84225 r 19 −19193 r 18 −1823223 r 17 + 5576151 r 16 + 2978697 r 15 − 33432692 r 14 + 37427862 r 13 + 15883834 r 12 − 60944766 r 11 + 49876417 r 10 − 1754523 r 9 − 36606859 r 8 + 32338215 r 7 − 10290256 r 6 − 2234754 r 5 + 7085471 r 4 − 5608569 r 3 + 1645826 r 2 − 132876 r + 30824) ϑ or 2 (r) = ϑ or 3 (r) = − 1 116640 (r 2 + 1)(2 r 2 + 1)(r + 1) 3 (r + 2) 3 r 6 (1458 r 22 +13122 r 21 +62825 r 20 −175011 r 19 +156014 r 18 − 3300900 r 17 + 11053023 r 16 + 5055135 r 15 − 67685050 r 14 + 75243552 r 13 + 33155180 r 12 − 120628524 r 11 + 99831906 r 10 − 4883958 r 9 −74801558 r 8 +64360782 r 7 −19812000 r 6 −3667716 r 5 +14541630 r 4 −11254002 r 3 +3070468 r 2 −413208 r+28880) ϑ or 4 (r) = − 1 58320 (r 2 + 1)(2 r 2 + 1)(r 2 − 2)(r + 2) 3 (r + 1) 3 r 6 (972 r 24 + 8748 r 23 + 29590 r 22 − 149106 r 21 − 36820 r 20 − 986280 r 19 +5942884 r 18 +2883672 r 17 −47189711 r 16 +43450125 r 15 +85975304 r 14 −156173934 r 13 +27378901 r 12 +123606417 r 11 −152209261 r 10 +64653597 r 9 +56621894 r 8 −88962768 r 7 +43754559 r 6 −5940597 r 5 −13006396 r 4 +17019366 r 3 −7037340 r 2 + 413208 r − 28880) ϑ or 5 (r) = − 1 58320 (r 2 + 1)(2 r 2 + 1)(r + 1) 3 (r + 2) 3 r 6 (972 r 22 +8748 r 21 +31534 r 20 −131610 r 19 +261546 r 18 −1552026 r 17 + 3745643 r 16 + 4573731 r 15 − 29416804 r 14 + 26163354 r 13 + 19600850 r 12 − 43126062 r 11 + 31497249 r 10 − 7381467 r 9 − 22237963 r 8 + 26778663 r 7 − 9107024 r 6 − 115074 r 5 + 3136927 r 4 − 5055609 r 3 + 2292994 r 2 + 14580 r − 1944) ϑ or 6 (r) = 1 233280 (r 2 + 1)(2 r 2 + 1)(r + 1) 3 (r + 2) 3 r 6 (486 r 22 −7290 r 21 −181459 r 20 +1024401 r 19 −2691213 r 18 +3921057 r 17 + 1844321 r 16 − 33347697 r 15 + 80028903 r 14 − 29292735 r 13 − 98093906 r 12 + 125034492 r 11 − 46658244 r 10 − 57216612 r 9 + 88057996 r 8 − 26383068 r 7 − 12851392 r 6 + 14179848 r 5 − 8656508 r 4 + 1593828 r 3 + 134136 r 2 − 58320 r + 7776) ϑ or 7 (r) = 1 233280 (r + 2) 3 (r 2 + 1)(2 r 2 + 1)(r + 1) 3 (r − 1)r 6 (486 r 23 −7776 r 22 −174169 r 21 +1205860 r 20 −4656806 r 19 + 8763566 r 18 +7460036 r 17 −63559490 r 16 +91134324 r 15 +18516450 r 14 −122708655 r 13 +18577230 r 12 +80410332 r 11 −19357704 r 10 − 39129236 r 9 +75311048 r 8 −77449360 r 7 +4053376 r 6 +48283912 r 5 −40690240 r 4 +17736336 r 3 −4315680 r 2 +544320 r−31104) ϑ or 8 (r) = 1 960 (r + 2) 3 (r 2 + 1)(2 r 2 + 1)(r + 1) 3 r 8 (2 r 24 − 30 r 23 − 161 r 22 + 107 r 21 + 4137 r 20 − 10685 r 19 + 8367 r 18 + 78713 r 17 − 450859 r 16 + 697707 r 15 + 517846 r 14 − 3723120 r 13 + 6565124 r 12 − 1468692 r 11 − 8695792 r 10 + 9535720 r 9 − 6773160 r 8 + 526744 r 7 + 10691376 r 6 − 7797264 r 5 + 1137696 r 4 + 523712 r 3 − 2687872 r 2 + 1701888 r − 245760) ϑ or 9 (r) = 1 960 (2 r 2 + 1)(r + 1) 2 (r + 2) 3 (r 2 + 1)r 10 ( In the standard equilateral triangle, let y 1 = (0, 0), y 2 = (1, 0), y 3 = 1/2, √ 3/2 , M C be the center of mass, M i be the midpoints of the edges e i for i = 1, 2, 3. Then M C = 1/2, √ 3/6 , M 1 = 3/4, √ 3/4 , M 2 = 1/4, √ 3/4 , M 3 = (1/2, 0). Let X n be a random sample of size n from U(T (Y 3 )). For x 1 = (u, v), ℓ r (x 1 ) = r v+r √ 3 u− √ 3 x. Next, let N 1 := ℓ r (x 1 ) ∩ e 3 and N 2 := ℓ r (x 1 ) ∩ e 2 .
Derivation of $\mu_{\text{and}}(r)$ in Theorem 3.2

First we find $\mu_{\text{and}}(r)$ for $r \in (1, \infty)$. Observe that, by symmetry,
$$\mu_{\text{and}}(r) = P\big(X_2 \in N^r_{PE}(X_1) \cap \Gamma^r_1(X_1)\big) = 6\, P\big(X_2 \in N^r_{\mathcal{Y}}(X_1) \cap \Gamma^r_1(X_1),\, X_1 \in T_s\big)$$
where $T_s$ is the triangle with vertices $y_1$, $M_3$, and $M_C$. Let $\ell_s(r, x)$ be the line such that $r\, d(y_1, \ell_s(r, x)) = d(y_1, e_1)$, so $\ell_s(r, x) = \sqrt{3}\,(1/r - x)$. Then if $x_1 \in T_s$ is above $\ell_s(r, x)$, then $N^r_{PE}(x_1) = T(\mathcal{Y}_3)$; otherwise, $N^r_{PE}(x_1) \subsetneq T(\mathcal{Y}_3)$.
To compute µ and (r), we need to consider various cases for N r P E (X 1 ) and Γ r 1 (X 1 ) given X 1 = (x, y) ∈ T s . See Figures 21 and 22. For any x = (u, v) ∈ T (Y), Γ r 1 (x) is a convex or nonconvex polygon. Let ξ i (r, x) be the line between x and the vertex y i parallel to the edge e i such that r d(y i , ξ i (r, x)) = d(y i , ℓ r (x)) for i = 1, 2, 3. Then Γ r 1 (x) ∩ R(y i ) is bounded by ξ i (r, x) and the median lines. For
x = (u, v), ξ 1 (r, x) = − √ 3 x + (v + √ 3 u)/r, ξ 2 (r, x) = (v + √ 3r (x − 1) + √ 3(1 − u))/r and ξ 3 (r, x) = ( √ 3(r − 1) + 2 v)/(2 r). For r ∈ 6/5, √ 5 − 1),
there are six cases regarding Γ r 1 (x) and one case for N r P E (x). See Figure 22 for the prototypes of these six cases of Γ 1 x, N r Y . For the AND-underlying version, we determine the possible types of N r P E (x 1 ) ∩ Γ r 1 (x 1 ) for x 1 ∈ T s . Depending on the location of x 1 and the value of the parameter r, N r P E (x 1 ) ∩ Γ r 1 (x 1 ) regions are polygons with various vertices. See Figure 24 for the illustration of these vertices and below for their explicit forms.
G 1 = √ 3y+3 x 3r , 0 , G 2 = − √ 3y−3 r+3−3 x 3r , 0 , G 3 = − √ 3y−6 r+3−3 x 6r , − √ 3(− √ 3y−3+3 x) 6r , G 4 = ( √ 3r+ √ 3−2 y) √ 3 6r , √ 3(3 r−3+2 √ 3y) 6r , G 5 = ( √ 3r− √ 3+2 y) √ 3 6r , √ 3(3 r−3+2 √ 3y) 6r , G 6 = √ 3y+3 x 6r , √ 3( √ 3y+3 x)6r
;
P 1 = 1/2, √ 3/6 2 √ 3r y + 6 r x − 3 , and P 2 = −1/2 + ( √ 3r y + 3 r x)/2, − √ 3/6 −3 + √ 3r y + 3 r x ; L 1 = 1/2, √ 3(2 √ 3y+6 x−3 r) 6r , L 2 = 1/2, − (−2 √ 3y−6+6 x+3 r) √ 3 6r , L 3 = − √ 3y−3 r+3−3 x 2r , √ 3(3 r− √ 3y−3+3 x) 6r , L 4 = 3 r−3+2 √ 3y 2r , √ 3(3 r−3+2 √ 3y) 6r , L 5 = − r−3+2 √ 3y 2r , √ 3(3 r−3+2 √ 3y) 6r , and L 6 = −r+ √ 3y+3 x 2r , − √ 3( √ 3y+3 x−3 r)6r
; N 1 = √ 3r y/3 + r x, 0 , N 2 = √ 3r y/6 + r x/2, √ 3 √ 3y/6 + 3 x r , and N 3 = √ 3r y/4 + 3 r x/4, √ 3 √ 3y/12 + 3 x r ; and Q 1 =
√ 3r 2 y+3 r 2 x− √ 3y+3 r−3+3 x 6r , ( √ 3r 2 y+3 r 2 x+ √ 3y−3 r+3−3 x) √ 3 6r , and Q 2 = 2 √ 3r 2 y+6 r 2 x−3 r+3−2 √ 3y 6r , √ 3(3 r−3+2 √ 3y) 6r .
Let P(a 1 , a 2 , . . . , a n ) denote the polygon with vertices a 1 , a 2 , . . . , a n . For r ∈ 1, 4/3 , there are 14 cases to consider for calculation of µ and (r) in the AND-underlying version. Each of these cases correspond to the regions in Figure 26, where Case 1 corresponds to R i for i = 1, 2, 3, 4, and Case j for j > 1 corresponds to R j+3 for j = 1, 2, . . . , 14. These regions are bounded by various combinations of the lines defined below.
Let ℓ am (x) be the line joining y 1 to M C , then ℓ am (x) = √ 3x/3. Let also r 1 (
x) = √ 3 (2 r + 3 x − 3) /3, r 2 (x) = √ 3/2 − √ 3r/3, r 3 (x) = (2 x − 2 + r) √ 3/2, r 4 (x) = √ 3/2 − √ 3r/4, r 5 (x) = − √ 3(2 r x−1) 2r , r 6 (x) = − √ 3(−2+3 r x) 3r , r 7 (x) = − (1+r 2 x−r−x) √ 3 r 2 +1 , r 8 (x) = − (r 2 x−1+x) √ 3 r 2 −1 , r 9 (x) = − (r 2 x−1) √ 3 r 2 +2 , r 10 (x) = − (−2 r+2+r 2 x) √ 3 −4+r 2 , r 11 (x) = − (−2 r+2−2 x+r 2 x) √ 3 r 2 +2 , r 12 (x) = − (2 x − r)
√ 3/2, and r 13 (x) = − (−1 + x) √ 3/3. Furthermore, to determine the integration limits, we specify the x-coordinate of the boundaries of these regions using s k for k = 0, 1, . . . , 14. See also Figure 26 for an illustration of these points whose explicit forms are provided below. s 0 = 1 − 2 r/3, s 1 = 3/2 − r, s 2 = 3/(8 r), s 3 = −3 r+2 r 2 +3 6r , s 4 = 1 − r/2, s 5 = 2 r−r 2 +1 4r , s 6 = 1/(2 r), s 7 = 3 2 (2 r 2 +1) , s 8 = 9−3 r 2 +2 r 3 −2 r 6(r 2 +1) , s 9 = 1/ (r + 1), s 10 = −3 r+2 r 2 +4 6r , s 11 = 3 r/8, s 12 = 6 r−3 r 2 +4 12r , s 13 = 3/2 − 5 r/6, and s 14 = r − 1/2 − r 3 /8.
Below, we compute P (X 2 ∈ N r P E (X 1 ) ∩ Γ r 1 (X 1 ), X 1 ∈ T s ) for each of the 14 cases: Case 1:
P (X2 ∈ N r P E (X1) ∩ Γ r 1 (X1), X1 ∈ Ts) = Z s 2 0 Z ℓam(x) 0 + Z s 6 s 2 Z r 5 (x) 0 ! A(P(G1, N1, N2, G6)) A(T (Y3)) 2 dydx = (r − 1) (r + 1)`r 2 + 16 4 r 6 where A(P(G1, N1, N2, G6)) = √ 3/36`√3y + 3 x´2 r 2 − √ 3( √ 3y+3 x) 2 36 r 2
Figure 22: The prototypes of the six cases of $\Gamma^r_1(x)$ for $x \in T_s$ for $r \in [1, 4/3)$.
Case 2:
P (X2 ∈ N r P E (X1) ∩ Γ r 1 (X1), X1 ∈ Ts) = Z s 6 s 5 Z r 7 (x) r 5 (x) + Z s 9 s 6 Z r 7 (x) 0 ! A(P(G1, N1, P2, M3, G6))
A(T (Y3)) 2 dydx = 9 r 5 + 23 r 4 + 24 r 3 + 24 r 2 + 13 r + 3´(r − 1) 4 96 r 6 (r + 1) 3
where A(P(G1, N1, P2, M3, G6)) = − √ 3(−4 r 3 √ 3y−12 r 3 x+2 r 4 y 2 +4 r 4 √ 3y x+6 r 4 x 2 +3 r 2 +2 y 2 +4 √ 3y x+6 x 2 ) 24 r 2 .
Case 3: where A(P(G1, G2, Q1, P2, M3, G6)) = − h √ 3`−4 √ 3r y − 12 x + 4 y 2 + 4 r 2 y 2 − 12 r + 9 r 2 + 12 r x + 4 r 4 y 2 − 12 x 2 r 2 − 24 r 3 x + 12 r 4 x 2 + 8 r 4 √ 3y x + 12 x 2 + 12 r 2
P (X2 ∈ N r P E (X1) ∩ Γ r 1 (X1), X1 ∈ Ts) = Z s 9 s 5 Z r 3 (x)x + 6 − 8 r 3 √ 3y + 4 √ 3y + 4 √ 3r 2 y´i .h 24 r 2 i .
Case 4:
P (X2 ∈ N r P E (X1) ∩ Γ r 1 (X1), X1 ∈ Ts) = Z s 5 s 8 Z r 2 (x) r 8 (x) + Z s 10 s 5 Z r 2 (x) r 3 (x) + Z s 12 s 10 Z r 6 (x) r 3 (x)
! A(P(G1, M1, L2, Q1, P2, M3, G6)) A(T (Y3)) 2 dydx = h 512 + 138240 r 7 + 3654 r 12 − 255 r 8 + 43008 r 3 − 12369 r 2 − 86387 r 4 − 193581 r 6 + 148224 r 5 − 100608 r 9 + 94802 r 10 − 35328 r 11
i.h 7776`r 2 + 1´3 r 6 i where A(P(G1, M1, L2, Q1, P2, M3, G6)) = − h √ 3`6 x + 3 r 2 − 2 √ 3y + 2 √ 3r 2 y + 2 r 4 y 2 − 4 r 3 √ 3y + 4 √ 3y x + 2 r 2 y 2 + 4 r 4 √ 3y x − 6 x 2 r 2 − 12 r 3 x + 6 r 4 x 2 + 6 r 2 x − 3´i
.h 12 r 2 i .
Case 5:
P (X2 ∈ N r P E (X1) ∩ Γ r 1 (X1), X1 ∈ Ts) = Z s 8 s 3 Z r 2 (x) r 5 (x) + Z s 5 s 8 Z r 8 (x) r 5 (x) ! A(P(G1, M1, P1, P2, M3, G6)) A(T (Y3)) 2 dydx =
−`1 77 r 8 − 648 r 7 + 570 r 6 − 360 r 5 + 28 r 4 − 24 r 3 + 174 r 2 + 72 r + 27´`−12 r + 7 r 2 + 3´2 7776 (r 2 + 1) 3 r 6
where A(P(G1, M1, L2, Q1, P2, M3, G6)) = − √ 3(−4 r 3 √ 3y−12 r 3 x+3 r 2 +6 r 4 √ 3y x+9 r 4 x 2 +3 r 4 y 2 + y 2 +2 √ 3y x+3 x 2 ) 12 r 2 .
Case 6: h`1 5552 r 2 + 1´3`2 r 2 + 1´3 r 6 i where A(P(G1, M1, P1, P2, M3, G6)) = − √ 3(−4 r 3 √ 3y−12 r 3 x+3 r 2 +6 r 4 √ 3y x+9 r 4 x 2 +3 r 4 y 2 + y 2 +2 √ 3y x+3 x 2 ) 12 r 2 .
P (X2 ∈ N r P E (X1) ∩ Γ r 1 (X1), X1 ∈ Ts) = Z s 3 s 2 Z ℓam (x) r 5 (x) + Z s 7 s 3 Z ℓam(x) r 2 (x) + Z s 8 s 7 Z r 8 (x) r 2 (x) ! A(P(G1,
Case 7:
P (X2 ∈ N r P E (X1) ∩ Γ r 1 (X1), X1 ∈ Ts) = Z s 8 s 7 Z r 9 (x) r 8 (x) + Z s 10 s 8 Z r 9 (x) r 2 (x)
! A(P(G1, M1, L2, Q1, P2, M3, G6)) A(T (Y3)) 2 dydx = − 4`100 r 11 − 408 r 10 + 454 r 9 − 564 r 8 + 283 r 7 − 108 r 6 − 34 r 5 + 204 r 4 − r 3 + 132 r 2 + 26 r + 24´(2 r − 1) 2 (r − 1) 2 243 (r 2 + 1) 3 r 3 (2 r 2 + 1) 3
where A(P(G1, M1, L2, Q1, P2, M3, G6)) = − h √ 3`6 x + 3 r 2 − 2 √ 3y + 2 √ 3r 2 y + 2 r 4 y 2 − 4 r 3 √ 3y + 4 √ 3y x + 2 r 2 y 2 + 4 r 4 √ 3y x − 6 x 2 r 2 − 12 r 3 x + 6 r 4 x 2 + 6 r 2 x − 3´i
.h 12 r 2 i .
Case 8: i where A(P(G1, G2, Q1, N3, MC , M3, G6)) = − h √ 3`4 √ 3r 2 y − 12 x − 12 r + 5 r 2 + 12 r x + 4 y 2 − 12 x 2 r 2 + 4 r 2 y 2 + r 4 y 2 + 2 r 4 √ 3y x − 4 r 3 √ 3y + 6 − 12 r 3 x + 3 r 4 x 2 + 12 x 2 + 12 r 2 x − 4 √ 3r y + 4 √ 3y´i
P (X2 ∈ N r P E (X1) ∩ Γ r 1 (X1), X1 ∈ Ts) = Z s 13 s 12 Z r 3 (x) r 6 (x) + Z 1/2 s 13 Z r 2 (x)
.h 24 r 2 i .
Case 9:
P (X2 ∈ N r P E (X1)∩Γ r 1 (X1), X1 ∈ Ts) = Z s 12 s 10 Z r 2 (x) r 6 (x) + Z s 13 s 12 Z r 2 (x) r 3 (x)
! A(P(G1, M1, L2, Q1, N3, MC , M3, G6)) A(T (Y3)) 2 dydx = −`4 9 r 8 − 168 r 7 + 354 r 6 − 528 r 5 + 236 r 4 − 96 r 3 − 224 r 2 + 384 r + 64´`−12 r + 7 r 2 + 4´2 15552 r 6
where A(P(G1, M1, L2, Q1, N3, MC , M3, G6)) = − h √ 3`8 √ 3y x + 4 √ 3r 2 y + 12 x + 2 r 2 − 12 x 2 r 2 − 4 r 3 √ 3y − 12 r 3 x + 3 r 4 x 2 + r 4 y 2 + 2 r 4 √ 3y x + 12 r 2 x − 6 − 4 √ 3y + 4 r 2 y 2´i
.h 24 r 2 i .
Case 10: where A(P(G1, M1, L2, Q1, N3, L4, L5, M3, G6)) = − h √ 3`4 √ 3r 2 y +8 √ 3y x+4 r 2 y 2 −16 √ 3r y −4 r 3 √ 3y −24 y 2 +12 x+ 24 r − 6 r 2 − 12 x 2 r 2 − 12 r 3 x + 3 r 4 x 2 + 12 r 2 x + 20 √ 3y + 2 r 4 √ 3y x + r 4 y 2 − 24´i
P (X2 ∈ N r P E (X1) ∩ Γ r 1 (X1), X1 ∈ Ts) = Z s 14 s 10 Z r 10 (x) r 2 (x) + Z s 13 s 14 Z r 12 (x) r 2 (x) + Z 1/2 s 13 Z r 12 (x) r 3 (x)
.h 24 r 2 i .
Case 11: i where A(P(G1, M1, L2, Q1, Q2, L5, M3, G6)) = − h √ 3`6 x+3 r 2 −4 r 2 x √ 3y −4 y 2 −6 x 2 r 2 +2 r 4 √ 3y x+4 √ 3y x−2 r 2 y 2 − 4 r 3 √ 3y + r 4 y 2 − 12 r 3 x + 3 r 4 x 2 + 12 r 2 x − 6 + 4 √ 3r 2 y + 2 √ 3y´i
P (X2 ∈ N r P E (X1) ∩ Γ r 1 (X1), X1 ∈ Ts) = Z s 11 s 7 Z ℓam (x) r 9 (x) + Z s 10 s 11 Z r 12 (x) r 9 (x) + Z s 14 s 10 Z r 12 (x) r 10 (x) ! A(P(G1, M1, L2, Q1, Q2, L5, M3, G6)) A(T (Y3)) 2 dydx = h (r − 1
.h 12 r 2 i .
Case 12:
P (X2 ∈ N r P E (X1) ∩ Γ r 1 (X1), X1 ∈ Ts) = Z 1/2 s 13 Z r 3 (x) r 2 (x)
A(P(G1, G2, Q1, N3, L4, L5, M3, G6)) A(T (Y3)) 2 dydx = −`4 9 r 6 − 204 r 5 + 476 r 4 − 768 r 3 − 8 r 2 + 768 r − 288´(−6 + 5 r) 2 7776 r 2
where A(P(G1, G2, Q1, N3, L4, L5, M3, G6)) = − h √ 3`−12 x + 12 r − 3 r 2 + 12 r x − 20 √ 3r y − 12 x 2 r 2 + 4 √ 3r 2 y − 12 r 3 x + 3 r 4 x 2 + 28 √ 3y + 12 x 2 + 12 r 2 x − 12 − 20 y 2 + 4 r 2 y 2 − 4 r 3 √ 3y + r 4 y 2 + 2 r 4 √ 3y x´i
.h 24 r 2 i .
Case 13:
P (X2 ∈ N r P E (X1) ∩ Γ r 1 (X1), X1 ∈ Ts) = Z 1/2 s 14 Z r 10 (x) r 12 (x)
A(P(L1, L2, Q1, N3, L4, L5, L6)) A(T (Y3)) 2 dydx = 4 r 7 + 8 r 6 − 37 r 5 − 58 r 4 − 84 r 3 + 168 r 2 + 336 r − 352´(−2 + r)`r 2 + 2 r − 4´2 384 (r + 2) 2 r 2 where A(P(L1, L2, Q1, N3, L4, L5, L6)) = − h √ 3`−4 r 3 √ 3y −8 √ 3r y +12 x+24 r −8 √ 3y x−12 r 2 +24 r x−24−12 x 2 r 2 + 4 √ 3r 2 y − 32 y 2 − 12 r 3 x + 3 r 4 x 2 + 20 √ 3y − 24 x 2 + 12 r 2 x + 2 r 4 √ 3y x + r 4 y 2 + 4 r 2 y 2´.
h 24 r 2 i .
Case 14: i.h 10368 (r + 2) 2 r 2 i where A(P(L1, L2, Q1, Q2, L5, L6)) = − h √ 3`−4 r 3 √ 3y + 4 √ 3r y + r 4 y 2 + 6 x − 4 √ 3y x + 2 r 4 √ 3y x + 12 r x − 4 r 2 x √ 3y − 6 x 2 r 2 + 4 √ 3r 2 y − 12 r 3 x + 3 r 4 x 2 + 2 √ 3y − 12 x 2 + 12 r 2 x − 6 − 8 y 2 − 2 r 2 y 2´i .h 12 r 2 i .
P (X2 ∈ N r P E (X1) ∩ Γ r 1(
Adding up the $P(X_2 \in N^r_{PE}(X_1) \cap \Gamma^r_1(X_1),\, X_1 \in T_s)$ values in the 14 possible cases above, and multiplying by 6, we get for $r \in [1, 4/3)$,
$$\mu_{\text{and}}(r) = -\frac{(r-1)\,\big(5r^5 - 148r^4 + 245r^3 - 178r^2 - 232r + 128\big)}{54\, r^2 (r+2)(r+1)}.$$
The µ and (r) values for the other intervals can be calculated similarly. For r = ∞, µ and (r) = 1 follows trivially.
Derivation of ν and (r) in Theorem 3.2
By symmetry, P ({X 2 , X 3 } ⊂ N r P E (X 1 ) ∩ Γ r 1 (X 1 )) = 6 P ({X 2 , X 3 } ⊂ N r P E (X 1 ) ∩ Γ r 1 (X 1 ), X 1 ∈ T s ). For r ∈ 6/5, √ 5 − 1 , there are 14 cases to consider for calculation of ν and (r) in the AND-underlying version:
Case 1:
P ({X2, X3} ⊂ N r P E (X1) ∩ Γ r 1 (X1), X1 ∈ Ts) = Z s 2 0 Z ℓam(x) 0 + Z s 6 s 2 Z r 5 (x) 0 ! A(P(G1, N1, N2, G6)) 2 A(T (Y3)) 3 dydx = r 2 + 1´2 (r + 1) 2 (r − 1) 2 384 r 10 where A(P(G1, N1, N2, G6)) = √ 3`√3y + 3 x´2 r 2 /36 − ( √ 3y+3 x) 2 √ 3 36 r 2 .
Case 2:
P ({X2, X3} ⊂ N r P E (X1) ∩ Γ r 1 (X1), X1 ∈ Ts) = Z s 6 s 5 Z r 7 (x) r 5 (x) + Z s 9 s 6 Z r 7 (x) 0 ! A(P(G1, N1, P2, M3, G6)) 2
A(T (Y3)) 3 dydx = 5 + 38 r + 137 r 2 + 320 r 3 + 552 r 4 + 736 r 5 + 792 r 6 + 640 r 7 + 407 r 8 + 178 r 9 + 35 r 10´( −1 + r) 5 960 r 10 (r + 1) 5
where A(P(G1, N1, P2, M3, G6)) = − √ 3(−4 r 3 √ 3y−12 r 3 x+2 r 4 y 2 +4 r 4 √ 3y x+6 r 4 x 2 +3 r 2 +2 y 2 +4 √ 3y x+6 x 2 ) 24 r 2 .
Case 3: i.h 2099520 (r + 1) 5 r 10 i where A(P(G1, G2, Q1, P2, M3, G6)) = − h √ 3`4 √ 3r 2 y − 8 r 3 √ 3y + 4 r 2 y 2 + 4 r 4 y 2 + 4 y 2 + 8 r 4 √ 3y x + 6 − 12 x 2 r 2 − 12 x − 12 r − 24 r 3 x + 12 r 4 x 2 + 9 r 2 + 12 r x − 4 √ 3r y + 12 x 2 + 4 √ 3y + 12 r 2 x´i .h 24 r 2 i . i.h 2099520`r 2 + 1´5 r 10 i where A(P(G1, M1, L2, Q1, P2, M3, G6)) = − h √ 3`−6 x 2 r 2 − 3 + 6 x − 12 r 3 x + 6 r 4 x 2 − 4 r 3 √ 3y + 4 √ 3y x + 4 r 4 √ 3y x + 2 r 4 y 2 + 3 r 2 + 2 √ 3r 2 y − 2 √ 3y + 2 r 2 y 2 + 6 r 2 x´i .h 12 r 2 i .
P ({X2, X3} ⊂ N r P E (X1) ∩ Γ r 1 (X1), X1 ∈ Ts) = Z s 9 s 5 Z r 3 (x)
Case 5:
P ({X2, X3} ⊂ N r P E (X1) ∩ Γ r 1 (X1), X1 ∈ Ts) = Z s 8 s 3 Z r 2 (x) r 5 (x) + Z s 5 s 8 Z r 8 (x) r 5 (x) ! A(P(G1, M1, P1, P2, M3, G6)) 2 A(T (Y3)) 3 dydx =
h`3 5361 r 16 −229392 r 15 +602820 r 14 −858384 r 13 +778848 r 12 −460368 r 11 +277740 r 10 −258768 r 9 +160594 r 8 −62256 r 7 − 5892 r 6 − 17712 r 5 + 19224 r 4 + 11664 r 3 + 5076 r 2 + 1296 r + 405´`−12 r + 7 r 2 + 3´2
i.h 699840 r 10`r2 + 1´5
i where A(P(G1, M1, P1, P2, M3, G6)) = − √ 3(−4 r 3 √ 3y−12 r 3 x+3 r 2 +6 r 4 √ 3y x+9 r 4 x 2 +3 r 4 y 2 +y 2 +2 √ 3y x+3 x 2 ) 12 r 2 .
Case 6: 19534445 r 14 −18170472 r 13 +15507752 r 12 −13150464 r 11 +9987958 r 10 −7448736 r 9 +5016464 r 8 −2991768 r 7 +1857485 r 6 − 749160 r 5 + 481804 r 4 − 96720 r 3 + 76160 r 2 − 4032 r + 4320´(2 r − 1) 2 (r − 1) 2 i.h 32805`r 2 + 1´5 r 6`2 r 2 + 1´5
P ({X2, X3} ⊂ N r P E (X1) ∩ Γ r 1 (X1), X1 ∈ Ts) = Z s 3 s 2 Z ℓam(x) r 5 (x) + Z s 7 s 3 Z ℓam(x) r 2 (x) + Z s 8 s 7 Z r 8 (x) r 2 (x) ! A(P(G1,
i where A(P(G1, M1, L2, Q1, P2, M3, G6)) = − h √ 3`−6 x 2 r 2 − 3 + 6 x − 12 r 3 x + 6 r 4 x 2 − 4 r 3 √ 3y + 4 √ 3y x + 4 r 4 √ 3y x + 2 r 4 y 2 + 3 r 2 + 2 √ 3r 2 y − 2 √ 3y + 2 r 2 y 2 + 6 r 2 x´i .h 12 r 2 i .
Case 8: i where A(P(G1, G2, Q1, N3, MC , M3, G6)) = − h √ 3`−12 x 2 r 2 − 12 x − 12 r − 12 r 3 x + 3 r 4 x 2 + 4 √ 3r 2 y + 5 r 2 + 12 r x + 12 x 2 + 2 r 4 √ 3y x + 4 r 2 y 2 − 4 r 3 √ 3y + 6 + 4 y 2 + r 4 y 2 + 4 √ 3y + 12
P ({X2, X3} ⊂ N r P E (X1) ∩ Γ r 1 (X1), X1 ∈ Ts) = Z s 13 s 12 Z r 3 (x) r 6 (x) + Z 1/2 s 13 Z r 2 (x)r 2 x − 4 √ 3r y´i .h 24 r 2 i .
Case 9:
P ({X2, X3} ⊂ N r P E (X1) ∩ Γ r 1 (X1), X1 ∈ Ts) = Z s 12 s 10 Z r 2 (x) r 6 (x) + Z s 13 s 12 Z r 2 (x) r 3 (x) ! A(P(G1,) = − h √ 3`−12 x 2 r 2 − 6 + 12 x − 12 r 3 x + 3 r 4 x 2 + 2 r 2 + 2 r 4 √ 3y x + r 4 y 2 + 8 √ 3y x + 4 r 2 y 2 − 4 √ 3y + 4 √ 3r 2 y + 12 r 2 x − 4 r 3 √ 3y´i .h 24 r 2 i .
Case 10:
P ({X2, X3} ⊂ N r P E (X1) ∩ Γ r 1 (X1), X1 ∈ Ts) = Z s 14 s 10 Z r 10 (x) r 2 (x) + Z s 13 s 14 Z r 12 (x) r 2 (x) + Z 1/2 s 13 Z r 12 (x) r 3 (x) ! A(P(G1,) = − h √ 3`−16 √ 3r y + 20 √ 3y − 24 y 2 − 12 x 2 r 2 + 12 x + 24 r − 12 r 3 x + 3 r 4 x 2 − 6 r 2 − 24 + 4 √ 3r 2 y + 8 √ 3y x − 4 r 3 √ 3y + 4 r 2 y 2 + r 4 y 2 + 2 r 4 √ 3y x + 12 r 2 x´i .h 24 r 2 i .
Case 11:
P ({X2, X3} ⊂ N r P E (X1) ∩ Γ r 1 (X1), X1 ∈ Ts) = Z s 11) = − h √ 3`4 √ 3r 2 y + 4 √ 3y x − 2 r 2 y 2 − 4 r 3 √ 3y − 4 y 2 − 4 √ 3r 2 y x − 6 x 2 r 2 + 6 x − 12 r 3 x + 3 r 4 x 2 + 3 r 2 + 2 r 4 √ 3y x + r 4 y 2 + 2 √ 3y + 12 r 2 x − 6´i .h 12 r 2 i .
Case 12:
P ({X2, X3} ⊂ N r P E (X1) ∩ Γ r 1 (X1), X1 ∈ Ts) = Z 1/2 s 13 Z r 3 (x) r 2 (x) A(P(G1, G2, Q1, N3, L4, L5, M3, G6)) 2 A(T (Y3)) 3 dydx =
h`2 322432 − 7554816 r + 9510912 r 2 + 1046068 r 8 − 558720 r 9 + 2444224 r 4 − 5799360 r 3 − 2134656 r 5 − 1608672 r 7 + 2169696 r 6 + 216300 r 10 − 55440 r 11 + 7095 r 12´( −6 + 5 r) 2 i.h 4199040 r 4 i where A(P(G1, G2, Q1, N3, L4, L5, M3, G6)) = − h √ 3`−12 x 2 r 2 − 12 x + 12 r − 12 r 3 x + 3 r 4 x 2 − 3 r 2 + 12 r x + 28 √ 3y + 12 x 2 − 20 y 2 + 12 r 2 x + r 4 y 2 + 4 r 2 y 2 − 4 r 3 √ 3y + 2 r 4 √ 3y x + 4 √ 3r 2 y − 20 √ 3r y − 12´i
.h 24 r 2 i .
Case 13:
P ({X2, X3} ⊂ N r P E (X1) ∩ Γ r 1 (X1), X1 ∈ Ts) = Z 1/2 s 14 Z r 10 (x) r 12 (x)
A(P(L1, L2, Q1, N3, L4, L5, L6)) 2 A(T (Y3)) 3 dydx = − h`9 r 14 + 36 r 13 − 132 r 12 − 576 r 11 + 164 r 10 + 2512 r 9 + 4976 r 8 − 1536 r 7 − 13888 r 6 − 17536 r 5 − 3072 r 4 + 79360 r 3 + 9216 r 2 − 120832 r + 61440´(−2 + r)`r 2 + 2 r − 4´2
i.h 7680 (r + 2) 3 r 4 i where A(P(L1, L2, Q1, N3, L4, L5, L6)) = − h √ 3`r 4 y 2 −8 √ 3r y −8 √ 3y x+4 r 2 y 2 −4 r 3 √ 3y −32 y 2 +2 r 4 √ 3y x−12 x 2 r 2 + 12 x + 24 r − 12 r 3 x + 3 r 4 x 2 − 12 r 2 + 4 √ 3r 2 y + 24 r x − 24 x 2 − 24 + 20 √ 3y + 12 r 2 x´i .h 24 r 2 i .
Case 14:
P ({X2, X3} ⊂ N r P E (X1) ∩ Γ r 1 (X1), X1 ∈ Ts) = Z s 14 s 11 Z ℓam(x) r 12 (x) + Z 1/2 s 14 Z ℓam(x) r 10 (x) ! A(P(L1, L2, Q1, Q2, L5, L6)) 2 A(T (Y3)) 3 dydx = h
(r − 1)`3483 r 18 + 24381 r 17 − 34830 r 16 − 529416 r 15 − 265680 r 14 + 4274208 r 13 + 4999320 r 12 − 15227352 r 11 − 25751336 r 10 + 19466488 r 9 + 62834064 r 8 + 17452256 r 7 − 53339200 r 6 − 117114624 r 5 − 51206656 r 4 + 270430208 r 3 + 58073088 r 2 − 296222720 r + 122159104´i
.h 1866240 (r + 2) 3 r 4 i where A(P(L1, L2, Q1, Q2, L5, L6)) = − h √ 3`−4 √ 3y x − 2 r 2 y 2 + 4 √ 3r y − 4 r 3 √ 3y − 8 y 2 − 4 √ 3r 2 y x − 6 x 2 r 2 + 6 x − 12 r 3 x + 3 r 4 x 2 + 4 √ 3r 2 y + 12 r x − 12 x 2 + 2 r 4 √ 3y x + r 4 y 2 + 2 √ 3y + 12 r 2 x − 6´i .h 12 r 2 i .
Adding up the P ({X 2 , X 3 } ⊂ N r P E (X 1 ) ∩ Γ r 1 (X 1 ), X 1 ∈ T s ) values in the 14 possible cases above, and multiplying by 6 we get for r ∈ 6/5, √ 5 − 1 , ν and (r) = − 219936 r−3041936 r 2 −30889822 r 8 +18084672 r 13 +27137438 r 9 +2364868 r 4 +2305864 r 3 −4168820 r 5 − 2832544 r 7 +486 r 21 −118850 r 17 −45155 r 18 −269 r 19 +3402 r 20 +11101160 r 6 +24604048 r 10 −43009544 r 11 +8770788 r 12 − 13736295 r 14 + 2751855 r 15 + 443518 r 16 + 49152 116640 r 6 (r + 2) 2 2 r 2 + 1 (r + 1) 3 .
The ν and (r) values for the other intervals can be calculated similarly.
Appendix 4: Derivation of µ or (r) and ν or (r) under the Null Case
Derivation of µ or (r) in Theorem 3.2
First we find µ or (r) for r ∈ [1, ∞). Observe that, by symmetry,
µ or (r) = P X 2 ∈ N r P E (X 1 ) ∪ Γ r 1 (X 1 ) = 6 P X 2 ∈ N r Y (X 1 ) ∪ Γ r 1 (X 1 ), X 1 ∈ T s .
For r ∈ [1, 4/3), there are 17 cases to consider for the calculation of µ or (r) in the OR-underlying version. Each Case j corresponds to R i for i = 1, 2, . . . , 17 in Figure 26. Case 1:
P (X2 ∈ N r P E (X1) ∪ Γ r 1 (X1), X1 ∈ Ts) = Z s 0 0 Z ℓam(x) 0 + Z s 1 s 0 Z ℓam (x) r 1 (x) ! A(P(A, M1, MC, M3)) A(T (Y3)) 2 dydx = 4 27 r 2 − 4 r/9 + 1/3
where A(P(A, M1, MC , M3)) = √ 3/12.
Case 2:
P (X2 ∈ N r P E (X1) ∪ Γ r 1 (X1), X1 ∈ Ts) = Z s 1 s 0 Z r 1 (x) 0 + Z s 3 s 1 Z r 2 (x) 0 + Z s 4 s 3 Z r 5 (x) 0 + Z s 5 s 4 Z r 5 (x)= √ 3(−4 √ 3r y−12 r+12 r x+5 r 2 +3 y 2 +6 √ 3y−6 √ 3y x+9−18 x+9 x 2 ) 12 r 2 .
Case 3:
P (X2 ∈ N r P E (X1) ∪ Γ r 1 (X1), X1 ∈ Ts) = Z s 5 s 4 Z r 3 (x) 0 + Z s 6 s 5 Z r 5 (x) 0 ! A(P(A, G2, G3, M2, MC , M3)) A(T (Y3)) 2 dydx = 13 r 4 − 4 r 3 + 4 r − 1 − 2 r 2´( r − 1) 4 96 r 6 where A(P(A, G2, G3, M2, MC , M3)) = − √ 3( y 2 +2 √ 3y−2 √ 3y x+3−6 x+3 x 2 −2 r 2 ) 12 r 2 .
Case 4:
P (X2 ∈ N r P E (X1) ∪ Γ r 1 (X1), X1 ∈ Ts) = Z s 2 s 1 Z ℓam(x) r 2 (x) + Z s 3 s 2 Z r 5 (x) r 2 (x) ! A(P(A, M1, L2, L3, L4, L5, M3)) A(T (Y3)) 2 dydx = 9 − 72 r + 192 r 2 − 192 r 3 + 76 r 4´`4 r − 3 + √ 3´2`4 r − 3 − √ 3´2 10368 r 6 where A(P(A, M1, L2, L3, L4, L5, M3)) = √ 3(4 √ 3r y+9 r 2 −24 r+12 r x+15 y 2 −6 √ 3y−6 √ 3y x+18−18 x+9 x 2 ) 12 r 2 .
Case 5:
P (X2 ∈ N r P E (X1) ∪ Γ r 1 (X1), X1 ∈ Ts) = Z s 6 s 5 Z r 7 (x) r 5 (x) + Z s 9 s 6 Z r 7 (x) 0 ! A(P(A, G2, G3, M2, MC , P2, N2)) A(T (Y3)) 2 dydx = −1 + 2 r + 6 r 2 − 6 r 3 + 22 r 5 + 17 r 6´( r − 1) 3 96 r 6 (r + 1) 3 where A(P(A, G2, G3, M2, MC , P2, N2)) = h √ 3`−2 y 2 − 4 √ 3y + 4 √ 3y x − 6 + 12 x − 6 x 2 + 7 r 2 − 4 r 3 √ 3y − 12 r 3 x + 8 r 4 √ 3y x + 12 r 4 x 2 + 4 r 4 y 2´i .h 24 r 2 i .
Case 6: where A(P(A, N1, Q1, G3, M2, MC , P2, N2)) = h √ 3`4 r y 2 − 4 √ 3y + 12 x + 13 r − 12 + 18 r 3 x 2 + 12 r x − 12 r x 2 − 8 √ 3r 2 y + 4 √ 3r y − 24 r 2 x + 12 √ 3r 3 y x + 6 r 3 y 2´i
P (X2 ∈ N r P E (X1) ∪ Γ r 1 (X1), X1 ∈ Ts) = Z s 9 s 5 Z r 3 (x)
.h 24 r i .
Case 7:
P (X2 ∈ N r P E (X1) ∪ Γ r 1 (X1), X1 ∈ Ts) = Z s 5 s 8 Z r 2 (x) r 8 (x) + Z s 10 s 5 Z r 2 (x) r 3 (x) + Z s 12 s 10 Z r 6 (x) r 3 (x) ! A(P(A, N1, Q1, L3, MC , P2, N2)) A(T (Y3)) 2 dydx = − h
128 − 1536 r − 302592 r 7 + 11753 r 12 + 346171 r 8 − 28416 r 3 + 8384 r 2 + 69760 r 4 + 220201 r 6 − 135936 r 5 − 305664 r 9 + 186683 r 10 − 69120 r 11 i.h 1944`r 2 + 1´3 r 6 i where A(P(A, N1, Q1, L3, MC , P2, N2)) = h √ 3`−4 √ 3r y +2 √ 3r 2 y −12 x−12 r +8 r 2 +12 r x−6 x 2 r 2 +2 r 2 y 2 −4 √ 3y x+ 3 r 4 y 2 − 4 r 3 √ 3y − 12 r 3 x + 9 r 4 x 2 + 4 √ 3y + 6 r 4 √ 3y x + 6 x 2 + 6 r 2 x + 6 + 2 y 2´i
.h 12 r 2 i .
Case 8:
P (X2 ∈ N r P E (X1) ∪ Γ r 1 (X1), X1 ∈ Ts) = Z s 8 s 3 Z r 2 (x) r 5 (x) + Z s 5 s 8 Z r 8 (x) r 5 (x)
! A(P(A, N1, P1, L2, L3, MC , P2, N2)) A(T (Y3)) 2 dydx = 895 r 8 − 2472 r 7 + 3363 r 6 − 2880 r 5 + 2220 r 4 − 1296 r 3 + 675 r 2 − 216 r + 27´`−12 r + 7 r 2 + 3´2 7776 (r 2 + 1) 3 r 6
where A(P(A, N1, P1, L2, L3, MC , P2, N2)) = h √ 3`4 r 4 y 2 + 8 r 4 √ 3y x + 12 r 4 x 2 − 4 r 3 √ 3y − 12 r 3 x − 4 √ 3r y − 12 r + 12 r x + 8 r 2 + 3 y 2 + 6 √ 3y − 6 √ 3y x + 9 − 18 x + 9 x 2´i
.h 12 r 2 i .
Case 9: i.h 7776`r 2 + 1´3`2 r 2 + 1´3 r 6 i where A(P(A, N1, P1, L2, L3, L4, L5, P2, N2)) = h √ 3`18 + 4 √ 3r y − 18 x − 24 r + 12 r 2 + 12 r x − 6 √ 3y + 8 r 4 √ 3y x − 12 r 3 x + 12 r 4 x 2 + 9 x 2 + 15 y 2 + 4 r 4 y 2 − 4 r 3 √ 3y − 6 √ 3y x´i .h 12 r 2 i .
P (X2 ∈ N r P E (X1) ∪ Γ r 1 (X1), X1 ∈ Ts) = Z s 3 s 2 Z ℓam(x) r 5 (x) + Z s 7 s 3 Z ℓam(x) r 2 (x) + Z s 8 s 7 Z r 8 (x) r 2 (x) ! A(P(A,
Case 10:
P (X2 ∈ N r P E (X1) ∪ Γ r 1 (X1), X1 ∈ Ts) = Z s 8 s 7 Z r 9 (x) r 8 (x) + Z s 10 s 8 Z r 9 (x) r 2 (x) ! A(P(A, N1, Q1, L3, L4, L5, P2, N2)) A(T (Y3)) 2 dydx =
h 8`288 r 12 − 864 r 11 + 1486 r 10 − 1896 r 9 + 2056 r 8 − 1608 r 7 + 1189 r 6 − 654 r 5 + 317 r 4 − 132 r 3 + 44 r 2 − 12 r + 2´(2 r − 1) 2
(r − 1) 2 i.h 243`r 2 + 1´3`2 r 2 + 1´3 r 4 i where A(P(A, N1, Q1, L3, L4, L5, P2, N2)) = h √ 3`4 √ 3r y + 2 √ 3r 2 y − 8 √ 3y
− 12 x − 24 r + 12 r 2 + 12 r x − 6 x 2 r 2 + 15 − 12 r 3 x + 9 r 4 x 2 + 6 x 2 + 6 r 2 x + 6 r 4 √ 3y x + 2 r 2 y 2 − 4 √ 3y x + 3 r 4 y 2 − 4 r 3 √ 3y + 14 y 2´i
.h 12 r 2 i .
Case 11: where A(P(A, N1, Q1, G3, M2, N3, N2)) = h √ 3`4 r y 2 + 12 x + 9 r − 12 + 9 r 3 x 2 + 12 r x − 12 r x 2 − 4 √ 3r 2 y + 4 √ 3r y + 6 √ 3r 3 y x + 3 r 3 y 2 − 12 r 2 x − 4 √ 3y´i
P (X2 ∈ N r P E (X1) ∪ Γ r 1 (X1), X1 ∈ Ts) = Z s 13 s 12 Z r 3 (x)
.h 24 r i .
Case 12:
P (X2 ∈ N r P E (X1) ∪ Γ r 1 (X1), X1 ∈ Ts) = Z s 13 s 10 Z r 2 (x) r 6 (x) + Z s 13 s 12 Z r 2 (x) r 3 (x) ! A(P(A, N1, Q1, L3, N3, N2)) A(T (Y3)) 2 dydx =
147 r 8 − 504 r 7 + 530 r 6 − 336 r 5 + 876 r 4 − 1056 r 3 + 896 r 2 − 384 r + 64´`−12 r + 7 r 2 + 4´2 15552 r 6
where A(P(A, N1, Q1, L3, N3, N2)) = h √ 3`4 y 2 − 8 √ 3y x − 24 x − 24 r + 8 √ 3y + 12 r 2 + 4 √ 3r 2 y + 6 r 4 √ 3y x + 24 r x − 4 r 3 √ 3y + 3 r 4 y 2 − 8 √ 3r y − 12 x 2 r 2 − 12 r 3 x + 9 r 4 x 2 + 12 x 2 + 12 r 2 x + 4 r 2 y 2 + 12´i
.h 24 r 2 i .
Case 13: i where A(P(A, N1, Q1, L3, N3, N2)) = h √ 3`4 y 2 − 8 √ 3y x − 24 x − 24 r + 8 √ 3y + 12 r 2 + 4 √ 3r 2 y + 6 r 4 √ 3y x + 24 r x − 4 r 3 √ 3y + 3 r 4 y 2 − 8 √ 3r y − 12 x 2 r 2 − 12 r 3 x + 9 r 4 x 2 + 12 x 2 + 12 r 2 x + 4 r 2 y 2 + 12´i
P (X2 ∈ N r P E (X1) ∪ Γ r 1(
.h 24 r 2 i .
Case 14: .h 5184`2 r 2 + 1´3 r 4 i where A(P(A, N1, Q1, L3, L4, Q2, N2)) = h √ 3`−6 x − 12 r + 6 r 2 + 6 r x + 2 √ 3r 2 y − r 2 y 2 − 2 √ 3y x + r 4 y 2 + 5 y 2 − 2 r 2 x √ 3y + 2 r 4 √ 3y x + 2 √ 3r y − 2 r 3 √ 3y − 3 x 2 r 2 − 6 r 3 x + 3 r 4 x 2 − 2 √ 3y + 3 x 2 + 6 r 2 x + 6´i
P (X2 ∈ N r P E (X1) ∪ Γ r 1 (X1), X1 ∈ Ts) = Z s 11 s 7 Z ℓam(x) r 9 (x) + Z s 10 s 11 Z r 12 (x)
.h 6 r 2 i .
Case 15:
P (X2 ∈ N r P E (X1) ∪ Γ r 1 (X1), X1 ∈ Ts) = Z 1/2 s 13 Z r 3 (x) r 2 (x)
A(P(A, N1, Q1, G3, M2, N3, N2)) A(T (Y3)) 2 dydx = 147 r 5 − 612 r 4 + 980 r 3 − 768 r 2 + 744 r − 288´(−6 + 5 r) 2 7776 r
where A(P(A, N1, Q1, L3, L4, Q2, N2)) = h √ 3`4 r y 2 + 12 x + 9 r − 12 + 9 r 3 x 2 + 12 r x − 12 r x 2 − 4 √ 3r 2 y + 4 √ 3r y + 6 √ 3r 3 y x + 3 r 3 y 2 − 12 r 2 x − 4 √ 3y´i
.h 24 r i .
Case 16:
P (X2 ∈ N r P E (X1) ∪ Γ r 1 (X1), X1 ∈ Ts) = Z 1/2 s 14 Z r 10 (x) r 12 (x)
A(P(A, N1, Q1, L3, N3, N2)) A(T (Y3)) 2 dydx = −`1 3 r 8 + 52 r 7 + 10 r 6 − 184 r 5 + 60 r 4 + 624 r 3 − 48 r 2 − 832 r + 448´(−2 + r)`r 2 + 2 r − 4´2 384 (r + 2) 3 r 2
where A(P(A, N1, Q1, L3, N3, N2)) = h √ 3`4 y 2 − 8 √ 3y x − 24 x − 24 r + 8 √ 3y + 12 r 2 + 4 √ 3r 2 y + 6 r 4 √ 3y x + 24 r x − 4 r 3 √ 3y + 3 r 4 y 2 − 8 √ 3r y − 12 x 2 r 2 − 12 r 3 x + 9 r 4 x 2 + 12 x 2 + 12 r 2 x + 4 r 2 y 2 + 12´i
.h 24 r 2 i .
Case 17:
P (X2 ∈ N r P E (X1) ∪ Γ r 1 (X1), X1 ∈ Ts) = Z s 14 s 11 Z ℓam(x) r 12 (x) + Z 1/2 s 14 Z ℓam(x) r 10 (x) ! A(P(A, N1, Q1, L3, L4, Q2, N2)) A(T (Y3)) 2 dydx =
h`1 89 r 12 + 1323 r 11 + 1026 r 10 − 10692 r 9 − 14364 r 8 + 51732 r 7 + 64664 r 6 − 183952 r 5 − 153504 r 4 + 398080 r 3 + 124928 r 2 − 470528 r + 197632´(r − 1)
i.h 5184 r 2 (r + 2) 3 i where A(P(A, N1, Q1, L3, N3, N2)) = h √ 3`−6 x − 12 r + 6 r 2 + 6 r x + 2 √ 3r 2 y − r 2 y 2 − 2 √ 3y x + r 4 y 2 + 5 y 2 − 2 r 2 x √ 3y + 2 r 4 √ 3y x + 2 √ 3r y − 2 r 3 √ 3y − 3 x 2 r 2 − 6 r 3 x + 3 r 4 x 2 − 2 √ 3y + 3 x 2 + 6 r 2 x + 6´i
.h 6 r 2 i .
Adding up the P (X 2 ∈ N r P E (X 1 ) ∪ Γ r 1 (X 1 ), X 1 ∈ T s ) values in the 17 possible cases above, and multiplying by 6, we get for r ∈ [1, 4/3), µ or (r) = (47 r 6 − 195 r 5 + 860 r 4 − 846 r 3 − 108 r 2 + 720 r − 256) / (108 r 2 (r + 2) (r + 1)).
The µ or (r) values for the other intervals can be calculated similarly.
Derivation of ν or (r) in Theorem 3.2
By symmetry, P ({X 2 , X 3 } ⊂ N r P E (X 1 ) ∪ Γ r 1 (X 1 )) = 6 P ({X 2 , X 3 } ⊂ N r P E (X 1 ) ∪ Γ r 1 (X 1 ), X 1 ∈ T s ).
For r ∈ [6/5, √ 5 − 1), there are 17 cases to consider for the calculation of ν or (r) in the OR-underlying version (see also Figure 26). Case 1:
P ({X2, X3} ⊂ N r P E (X1) ∪ Γ r 1 (X1), X1 ∈ Ts) = Z s 0 0 Z ℓam(x) 0 + Z s 1 s 0 Z ℓam(x) r 1 (x) ! A(P(A, M1, MC , M3)) 2 A(T (Y3)) 3 dydx = 4 81 r 2 − 4 27 r + 1/9
where A(P(A, M1, MC , M3)) = 1/12 √ 3.
Case 2: Case 3:
P ({X2, X3} ⊂ N r P E (X1) ∪ Γ r 1 (X1), X1 ∈ Ts) = Z s 1 s 0 Z r 1 (x) 0 + Z s 3 s 1 Z r 2 (x) 0 + Z s 4 s 3 Z r 5 (x) 0 + Z s 5 s 4 Z r 5 (x) r 3 (x) ! A(P(A, M1, L2, L3, MC , M3)) 2 A(T (Y3)) 3 dydx = − h (r − 1)`P ({X2, X3} ⊂ N r P E (X1) ∪ Γ r 1 (X1), X1 ∈ Ts) = Z s 5 s 4 Z r 3 (x) 0 + Z s 6 s 5 Z r 5 (x) 0 ! A(P(A, G2, G3, M2, MC, M3)) 2 A(T (Y3)) 3 dydx = 215 r 8 − 136 r 7 − 56 r 6 + 172 r 5 − 55 r 4 − 60 r 3 + 66 r 2 − 24 r + 3´(r − 1) 4 2880 r 10 where A(P(A, G2, G3, M2, MC , M3)) = − √ 3(y 2 +2 √ 3y−2 √ 3y x+3−6 x+3 x 2 −2 r 2 ) 12 r 2 .
Case 4:
P ({X2, X3} ⊂ N r P E (X1) ∪ Γ r 1 (X1), X1 ∈ Ts) = Z s 2 s 1 Z ℓam(x) r 2 (x) + Z s 3 s 2 Z r 5 (x) r 2 (x) ! A(P(A, M1, L2, L3, L4, L5, M3)) 2 A(T (Y3)) 3 dydx =
h`3 7072 r 8 − 195072 r 7 + 453120 r 6 − 589248 r 5 + 460728 r 4 − 217728 r 3 + 60480 r 2 − 9072 r + 567"
4 r − 3 + √ 3 " 2 " 4 r − 3 − √ 3 " 2 i.h 1866240 r 10 i
where A(P(A, M1, L2, L3, L4, L5, M3)) = √ 3(4 √ 3r y+9 r 2 −24 r+12 r x+15 y 2 −6 √ 3y−6 √ 3y x+18−18 x+9 x 2 ) 12 r 2 .
Case 5:
P ({X2, X3} ⊂ N r P E (X1) ∪ Γ r 1 (X1), X1 ∈ Ts) = Z s 6 s 5 Z r 7 (x) r 5 (x) + Z s 9 s 6 Z r 7 (x) 0 ! A(P(A, G2, G3, M2, MC , P2, N2)) 2 A(T (Y3)) 3 dydx =
3 − 12 r − 15 r 2 + 84 r 3 + 18 r 4 − 232 r 5 + 130 r 6 + 504 r 7 − 108 r 8 − 288 r 9 + 623 r 10 + 920 r 11 + 373 r 12´( r − 1) 3 2880 r 10 (r + 1) 5
where A(P(A, G2, G3, M2, MC , P2, N2)) = h √ 3`−2 y 2 − 4 √ 3y + 4 √ 3y x − 6 + 12 x − 6 x 2 + 7 r 2 − 4 r 3 √ 3y − 12 r 3 x + 8 r 4 √ 3y x + 12 r 4 x 2 + 4 r 4 y 2´i
.h 24 r 2 i .
Case 6: i where A(P(A, N1, Q1, G3, M2, MC , P2, N2)) = h √ 3`4 r y 2 +12 x+13 r +12 r x−4 √ 3y −12+4 √ 3r y −8 √ 3r 2 y +18 x 2 r 3 − 12 r x 2 + 6 r 3 y 2 − 24 r 2 x + 12 √ 3r 3 y x´i .h 24 r i .
P ({X2, X3} ⊂ N r P E (X1) ∪ Γ r 1(
Case 7:
P ({X2, X3} ⊂ N r P E (X1) ∪ Γ r 1 (X1), X1 ∈ Ts) = Z s 5 s 8 Z r 2 (x) r 8 (x) + Z s 10 s 5 Z r 2 (x) r 3 (x) + Z s 12 s 10 Z r 6 (x) r 3 (x) ! A(P(A, N1, Q1, L3, MC , P2, N2)) 2 A(T (Y3)) 3 dydx = −
h 6144−110592 r −310846464 r 7 +2127553557 r 12 +570050560 r 8 −5031936 r 3 +936960 r 2 +19526656 r 4 +147203072 r 6 + 7627473 r 20 + 1419072042 r 16 − 762467328 r 17 + 288811029 r 18 − 68327424 r 19 − 59166720 r 5 − 923627520 r 9 + 1340817105 r 10 − 1765251072 r 11 − 2350015488 r 13 + 2339575338 r 14 − 2016377856 r 15
i.h 262440`r 2 + 1´5 r 10 i where A(P(A, N1, Q1, L3, MC , P2, N2)) = h √ 3`−4 √ 3r y + 2 √ 3r 2 y − 6 x 2 r 2 − 12 x − 12 r − 12 r 3 x + 9 r 4 x 2 + 8 r 2 + 12 r x + 6 x 2 + 6 r 4 √ 3y x + 2 r 2 y 2 − 4 √ 3y x + 3 r 4 y 2 − 4 r 3 √ 3y + 4 √ 3y + 2 y 2 + 6 r 2 x + 6´i
.h 12 r 2 i .
Case 8:
P ({X2, X3} ⊂ N r P E (X1) ∪ Γ r 1 (X1), X1 ∈ Ts) = Z s 8 s 3 Z r 2 (x) r 5 (x) + Z s 5 s 8 Z r 8 (x) r 5 (x) ! A(P(A, N1, P1, L2, L3, MC , P2, N2)) 2 A(T (Y3)) 3 dydx =
h`4 26497 r 16 − 2443992 r 15 + 6726107 r 14 − 11753232 r 13 + 15220771 r 12 − 16367448 r 11 + 15754449 r 10 − 13773024 r 9 + 10839672 r 8 − 7552440 r 7 + 4592889 r 6 − 2374272 r 5 + 1018899 r 4 − 344088 r 3 + 81891 r 2 − 11664 r + 729− 12 r + 7 r 2 + 3´2
i.h 699840`r 2 + 1´5 r 10 i where A(P(A, N1, P1, L2, L3, MC , P2, N2)) = h √ 3`−4 r 3 √ 3y − 12 r 3 x + 8 r 4 √ 3y x + 12 r 4 x 2 + 4 r 4 y 2 − 4 √ 3r y − 12 r + 12 r x + 3 y 2 + 6 √ 3y − 6 √ 3y x + 8 r 2 + 9 − 18 x + 9 x 2´i
.h 12 r 2 i .
Case 9: i.h 1399680`r 2 + 1´5`2 r 2 + 1´5 r 10 i where A(P(A, N1, P1, L2, L3, L4, L5, P2, N2)) = h √ 3`18 − 18 x − 24 r − 12 r 3 x + 12 r 4 x 2 + 12 r 2 + 12 r x + 4 √ 3r y − 4 r 3 √ 3y + 4 r 4 y 2 − 6 √ 3y x + 8 r 4 √ 3y x + 9 x 2 + 15 y 2 − 6 √ 3y´i
P ({X2, X3} ⊂ N r P E (X1) ∪ Γ r 1 (X1), X1 ∈ Ts) = Z s 3 s 2 Z ℓam(x) r 5 (x) + Z s 7 s 3 Z ℓam(x) r 2 (x) + Z s 8 s 7 Z r 8 (x) r 2 (x)
.h 12 r 2 i .
Case 10:
P ({X2, X3} ⊂ N r P E (X1) ∪ Γ r 1(15 +18708475 r 16´( r − 1) 2 (2 r − 1) 2 i.h 32805`r 2 + 1´5`2 r 2 + 1´5 r 8 i
where A(P(A, N1, Q1, L3, L4, L5, P2, N2)) = h √ 3`2 √ 3r 2 y + 15 − 6 x 2 r 2 − 12 x − 24 r − 12 r 3 x + 9 r 4 x 2 + 12 r 2 + 12 r x − 8 √ 3y + 6 x 2 + 6 r 4 √ 3y x + 14 y 2 − 4 √ 3y x + 2 r 2 y 2 − 4 r 3 √ 3y + 3 r 4 y 2 + 6 r 2 x + 4 √ 3r y´i
.h 12 r 2 i .
Case 11:
P ({X2, X3} ⊂ N r P E (X1) ∪ Γ r 1 (X1), X1 ∈ Ts) = Z s 13 s 12 Z r 3 (x) r 6 (x) + Z 1/2 s 13 Z r 2 (x) r 6 (x) ! A(P(A, N1, Q1, G3, M2, N3, N2)) 2 A(T (Y3)) 3 dydx = −
h −253952 + 1529856 r 2 + 601574256 r 8 − 385780320 r 13 − 776518272 r 9 + 7803648 r 4 − 70917120 r 5 − 396524160 r 7 + 209710080 r 6 +869661288 r 10 −845940960 r 11 +668092108 r 12 +147067614 r 14 −32610600 r 15 +3173067 r 16
i.h 8398080 r 6 i where A(P(A, N1, Q1, G3, M2, N3, N2)) = h √ 3`4 r y 2 + 12 x + 4 √ 3r y + 9 r − 4 √ 3y + 12 r x − 12 + 9 x 2 r 3 + 6 √ 3r 3 y x − 12 r x 2 − 4 √ 3r 2 y − 12 r 2 x + 3 r 3 y 2´i
.h 24 r i .
Case 12:
P ({X2, X3} ⊂ N r P E (X1) ∪ Γ r 1 (X1), X1 ∈ Ts) = Z s 12 s 10 Z r 2 (x) r 6 (x) + Z s 13 s 12 Z r 2 (x) r 3 (x) ! A(P(A, N1, Q1, L3, N3, N2)) 2 A(T (Y3)) 3 dydx =
h`6 4827 r 16 − 444528 r 15 + 1223334 r 14 − 1793232 r 13 + 1839416 r 12 − 2003712 r 11 + 2286224 r 10 − 2421504 r 9 + 3095088 r 8 − 4428288 r 7 + 5889152 r 6 − 6093312 r 5 + 4557056 r 4 − 2340864 r 3 + 774144 r 2 − 147456 r + 12288− 12 r + 7 r 2 + 4´2
i.h 8398080 r 10 i where A(P(A, N1, Q1, L3, N3, N2)) = h √ 3`−12 x 2 r 2 − 24 x − 24 r − 12 r 3 x + 9 r 4 x 2 + 4 y 2 − 8 √ 3r y + 6 r 4 √ 3y x + 8 √ 3y + 12 r 2 + 24 r x + 12 x 2 − 8 √ 3y x + 4 r 2 y 2 − 4 r 3 √ 3y + 3 r 4 y 2 + 4 √ 3r 2 y + 12 r 2 x + 12´i
.h 24 r 2 i .
Case 13: i where A(P(A, N1, Q1, L3, N3, N2)) = h √ 3`−12 x 2 r 2 − 24 x − 24 r − 12 r 3 x + 9 r 4 x 2 + 4 y 2 − 8 √ 3r y + 6 r 4 √ 3y x + 8 √ 3y + 12 r 2 + 24 r x + 12 x 2 − 8 √ 3y x + 4 r 2 y 2 − 4 r 3 √ 3y + 3 r 4 y 2 + 4 √ 3r 2 y + 12 r 2 x + 12´i
P ({X2, X3} ⊂ N r P E (X1) ∪ Γ r 1(
.h 24 r 2 i .
Case 14: i where A(P(A, N1, Q1, L3, L4, Q2, N2)) = h √ 3`−3 x 2 r 2 − 6 x − 12 r − 6 r 3 x + 3 r 4 x 2 + 2 √ 3r y + 6 r 2 + 6 r x + 3 x 2 − 2 √ 3y − 2 √ 3r 2 y x + 2 r 4 √ 3y x + 2 √ 3r 2 y − r 2 y 2 + 5 y 2 − 2 r 3 √ 3y + r 4 y 2 − 2 √ 3y x + 6 + 6 r 2 x´i .h 6 r 2 i .
P ({X2, X3} ⊂ N r P E (X1) ∪ Γ r 1 (X1), X1 ∈ Ts) = Z s 11
Case 15:
P ({X2, X3} ⊂ N r P E (X1) ∪ Γ r 1 (X1), X1 ∈ Ts) = Z 1/2 s 13 Z r 3 (x) r 2 (x)
A(P(A, N1, Q1, G3, M2, N3, N2)) 2 A(T (Y3)) 3 dydx = h`6 3855 r 10 − 498960 r 9 + 1650060 r 8 − 3036960 r 7 + 3703292 r 6 − 3657696 r 5 + 3268368 r 4 − 2419200 r 3 + 1550448 r 2 − 725760 r + 155520´(−6 + 5 r) 2
i.h 4199040 r 2 i where A(P(A, N1, Q1, G3, M2, N3, N2)) = h √ 3`4 r y 2 + 12 x + 4 √ 3r y + 9 r − 4 √ 3y + 12 r x − 12 + 9 x 2 r 3 + 6 √ 3r 3 y x − 12 r x 2 − 4 √ 3r 2 y − 12 r 2 x + 3 r 3 y 2´i
.h 24 r i .
Case 16:
P ({X2, X3} ⊂ N r P E (X1) ∪ Γ r 1 (X1), X1 ∈ Ts) = Z 1/2 s 14 Z r 10 (x) r 12 (x) A(P(A, N1, Q1, L3, N3, N2)) 2 A(T (Y3)) 3 dydx = −
h`2 93 r 16 + 2344 r 15 + 4662 r 14 − 9088 r 13 − 32320 r 12 + 42976 r 11 + 175408 r 10 − 119680 r 9 − 544144 r 8 + 372352 r 7 + 1216512 r 6 − 882688 r 5 − 1564672 r 4 + 1373184 r 3 + 924672 r 2 − 1314816 r + 380928(
−2 + r)`r 2 + 2 r − 4´2 i.h 23040 (r + 2) 5 r 4 i
where A(P(A, N1, Q1, L3, N3, N2)) = h √ 3`−12 x 2 r 2 − 24 x − 24 r − 12 r 3 x + 9 r 4 x 2 + 4 y 2 − 8 √ 3r y + 6 r 4 √ 3y x + 8 √ 3y + 12 r 2 + 24 r x + 12 x 2 − 8 √ 3y x + 4 r 2 y 2 − 4 r 3 √ 3y + 3 r 4 y 2 + 4 √ 3r 2 y + 12 r 2 x + 12´i
.h 24 r 2 i .
Case 17: 39916108 r 11 +343932568 r 10 +108508576 r 9 −906967296 r 8 −96480192 r 7 +1702951296 r 6 −293251072 r 5 −1994987520 r 4 + 981590016 r 3 + 1118830592 r 2 − 1135919104 r + 287604736´(r − 1)
P ({X2, X3} ⊂ N r P E (X1) ∪ Γ r 1(
i.h 466560 r 4 (r + 2) 5 i where A(P(A, N1, Q1, L3, N3, N2)) = h √ 3`−3 x 2 r 2 − 6 x − 12 r − 6 r 3 x + 3 r 4 x 2 + 2 √ 3r y + 6 r 2 + 6 r x + 3 x 2 − 2 √ 3y − 2 √ 3r 2 y x + 2 r 4 √ 3y x + 2 √ 3r 2 y − r 2 y 2 + 5 y 2 − 2 r 3 √ 3y + r 4 y 2 − 2 √ 3y x + 6 + 6 r 2 x´i .h 6 r 2 i .
Adding up the P ({X 2 , X 3 } ⊂ N r P E (X 1 ) ∪ Γ r 1 (X 1 ), X 1 ∈ T s ) values in the 17 possible cases above, and multiplying by 6 we get, for r ∈ 6/5, √ 5 − 1 , ν or (r) = − −413208 r+3070468 r 2 −74801558 r 8 +75243552 r 13 −4883958 r 9 +14541630 r 4 +28880−11254002 r 3 − 3667716 r 5 +64360782 r 7 +13122 r 21 −3300900 r 17 +156014 r 18 −175011 r 19 +62825 r 20 +1458 r 22 −19812000 r 6 + 99831906 r 10 − 120628524 r 11 + 33155180 r 12 − 67685050 r 14 + 5055135 r 15 + 11053023 r 16 116640 r 6 r 2 + 1 2 r 2 + 1 (r + 2) 3 (r + 1) 3 .
The ν or (r) values for the other intervals can be calculated similarly.
Appendix 5: The Asymptotic Means of Relative Edge Density Under Segregation and Association Alternatives
Let µ S and (r, ε) and µ A and (r, ε) be the means of relative edge density for the AND-underlying graph under the segregation and association alternatives. Define µ S or (r, ε) and µ A or (r, ε) similarly. Derivation of µ S and (r, ε) involves detailed geometric calculations and partitioning of the space of (r, ε, x) for r ∈ [1, ∞), ε ∈ 0, √ 3/3 , and x ∈ T e . See Appendix 6 for the derivation of µ(r, ε) at a demonstrative interval.
µ S and (r, ε) Under Segregation Alternatives
Under segregation, we compute µ S and (r, ε) and µ S or (r, ε) explicitly. For ε ∈ (0, √ 3/8], µ S and (r, ε) = Σ_{i=1}^{4} ̟ and i (r, ε) I(r ∈ I i ), where ̟ and 1 (r, ε) = − (r − 1) (5 r 5 + 288 r 5 ε 4 + 1152 r 4 ε 4 − 148 r 4 + 1440 r 3 ε 4 + 245 r 3 − 178 r 2 + 576 r 2 ε 4 − 232 r + 128) / (54 r 2 (2 ε − 1) 2 (2 ε + 1) 2 (r + 2) (r + 1)),
̟ and 2 (r, ε) = − h 1152 r 5 ε 4 +101 r 5 +3456 r 4 ε 4 −801 r 4 +1302 r 3 +1152 r 3 ε 4 −732 r 2 −3456 r 2 ε 4 −536 r−2304 rε 4 +672
i.
h 216 (r + 2) r`16 ε 4 − 8 ε 2 + 1´(r + 1) i.h 24 r 4`1 6 ε 4 − 8 ε 2 + 1´(r + 1) (r + 2) i ̟ and 4 (r, ε) = − 16 r 7 ε 4 + 16 r 6 ε 4 − 3 r 5 − 16 r 5 ε 4 − 3 r 4 − 16 r 4 ε 4 + 9 r 3 + 9 r 2 − 18 r + 6 3 (r + 1) r 4 (4 ε 2 − 1) 2 with the corresponding intervals I 1 = 1, 4/3 , I 2 = 4/3, 3/2 , I 3 = 3/2, 2 , and I 4 = 2, ∞ .
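For numerical work it can be convenient to assemble µ S and (r, ε) exactly as specified above, as an indicator-weighted sum of the pieces over I 1 , . . . , I 4 . The sketch below is only illustrative: only ̟ and 1 is transcribed from the closed form above, the functions w2, w3, w4 are hypothetical placeholders for the remaining pieces, and the breakpoints 1, 4/3, 3/2, 2 follow the interval list just given.

```python
# Minimal sketch: mu^S_and(r, eps) = sum_i varpi^and_i(r, eps) * I(r in I_i).
# Only varpi^and_1 is transcribed from the text; w2, w3, w4 are hypothetical
# placeholders to be filled in with the remaining closed-form pieces.

def w1(r, eps):
    # varpi^and_1(r, eps), valid on I_1 = [1, 4/3)
    num = (r - 1) * (5*r**5 + 288*r**5*eps**4 + 1152*r**4*eps**4 - 148*r**4
                     + 1440*r**3*eps**4 + 245*r**3 - 178*r**2 + 576*r**2*eps**4
                     - 232*r + 128)
    den = 54 * r**2 * (2*eps - 1)**2 * (2*eps + 1)**2 * (r + 2) * (r + 1)
    return -num / den

def w2(r, eps):  # placeholder for varpi^and_2 on I_2 = [4/3, 3/2)
    raise NotImplementedError
def w3(r, eps):  # placeholder for varpi^and_3 on I_3 = [3/2, 2)
    raise NotImplementedError
def w4(r, eps):  # placeholder for varpi^and_4 on I_4 = [2, infinity)
    raise NotImplementedError

PIECES = [(1.0, 4/3, w1), (4/3, 1.5, w2), (1.5, 2.0, w3), (2.0, float("inf"), w4)]

def mu_S_and(r, eps):
    """Indicator-weighted sum of the pieces over I_1, ..., I_4."""
    for lo, hi, piece in PIECES:
        if lo <= r < hi:
            return piece(r, eps)
    raise ValueError("r must lie in [1, infinity)")

# Example: mu_S_and(1.2, 0.1) evaluates varpi^and_1 at (r, eps) = (1.2, 0.1).
```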
For ε ∈ 0, √ 3/8 , µ S or (r, ε) = ̟ or 1 (r, ε) = h 47 r 6 − 195 r 5 + 576 r 4 ε 4 − 288 r 4 ε 2 + 860 r 4 − 846 r 3 + 1728 r 3 ε 4 − 864 r 3 ε 2 − 108 r 2 − 576 r 2 ε 2 + 1152 r 2 ε 4 + 720 r − 256
i.h 108 r 2`1 6 rε 4 − 8 rε 2 + r − 16 ε 2 + 2 + 32 ε 4´( r + 1) i ̟ or 2 (r, ε) = h 175 r 5 −579 r 4 +1450 r 3 +1152 r 3 ε 4 −576 r 3 ε 2 +3456 r 2 ε 4 −1728 r 2 ε 2 −732 r 2 +2304 rε 4 −536 r −1152 rε 2 + 672 i.h 216 (r + 2) r (2 ε − 1) 2 (2 ε + 1) 2 (r + 1) i ̟ and 3 (r, ε) = − h 27 r 8 −63 r 7 −270 r 6 +1728 r 6 ε 2 −384 r 6 ε 4 +1024 ε 3 √ 3r 5 −1152 r 5 ε 4 +576 r 5 ε 2 +756 r 5 +1536 r 4 ε 3 √ 3− 2376 r 4 −6912 r 4 ε 2 −2560 √ 3ε 3 r 3 +2304 r 3 ε 4 +2736 r 3 +1152 r 3 ε 2 +1296 r 2 −3072 r 2 ε 3 √ 3+1536 r 2 ε 4 +6912 r 2 ε 2 −3312 r+ 864 i.h 72 r 4 (r + 1)`16 rε 4 − 8 rε 2 + r − 16 ε 2 + 2 + 32 ε 4´i ̟ and 4 (r, ε) = − h −18 − 48 r 5 ε 4 − 48 r 4 ε 4 + 72 r 4 ε 2 − 144 r 2 ε 2 − 9 r 4 − 32 r 3 ε 4 − 144 r 3 ε 2 + 72 r 5 ε 2 − 9 r 5 − 32 r 2 ε 4 + 54 r+ 64 r 2 ε 3 √ 3 + 64 √ 3ε 3 r 3 i.h 9 r 4`4 ε 2 − 1´2 (r + 1) i with the corresponding intervals I i are same as before. (r, ε) = − h −128+768 r 6 √ 3ε 3 +360 r+8640 ε 4 +5760 ε 2 +393 r 4 −54 r 2 +6912 r 4 ε 2 +5 r 6 −153 r 5 −423 r 3 −4608 r 4 √ 3ε 3 + 6912 √ 3r 2 ε 3 +1728 ε 2 r−3072 √ 3ε 3 −7776 r 2 ε 4 −864 r 6 ε 4 −2592 r 5 ε 4 −18144 ε 4 r 3 +12960 ε 4 r−576 r 6 ε 2 −3456 r 3 ε 2 +1728 r 5 ε 2 − 7776 r 4 ε 4 − 12096 r 2 ε 2 i.h 6 " √ 3 + 6 ε " 2 " −6 ε + √ 3 " 2 (r + 2) r 2 (r + 1) i ς and 2 (r, ε) = h −672 r+20736 ε 4 +13824 ε 2 −1302 r 4 +536 r 2 −101 r 6 +801 r 5 +732 r 3 −3072 r 6 √ 3ε 3 +18432 r 4 √ 3ε 3 −9216 √ 3rε 3 − 19968 √ 3r 2 ε 3 +4608 √ 3r 3 ε 3 +31104 r 4 ε 4 +4608 r 2 ε 2 −17280 r 4 ε 2 +58752 ε 4 r 2 −6912 ε 2 r+3456 r 6 ε 4 +10368 r 5 ε 4 +72576 ε 4 r 3 + 31104 ε 4 r + 2304 r 6 ε 2 + 17280 r 3 ε 2 − 6912 r 5 ε 2 i.h 216 (r + 2) r 2 (r + 1)`−1 + 12 ε 2´2 i ς and 3 (r, ε) = h 9(r 8 −13 r 7 +30 r 6 −192 r 6 ε 2 +1152 r 6 ε 4 +148 r 5 +3456 r 5 ε 4 −576 r 5 ε 2 −448 r 4 +2688 r 4 ε 4 −128 r 4 ε 2 +1152 ε 4 r 3 + 264 r 3 + 768 r 3 ε 2 + 512 r 2 ε 2 + 768 ε 4 r 2 + 288 r 2 − 368 r + 96)
i.h 8 r 4 " −6 ε + √ 3 " 2 " √ 3 + 6 ε " 2 (r + 1) (r + 2)
i ς and 4 (r, ε) = 9(r 5 + 6 r + r 4 − 3 r 3 − 3 r 2 − 2 + 144 r 5 ε 4 + 144 r 4 ε 4 + 48 ε 4 r 3 + 48 ε 4 r 2 − 24 r 5 ε 2 − 24 r 4 ε 2 + 32 r 3 ε 2 + 32 r 2 ε 2 ) r 4 (r + 1)`− √ 3 + 6 ε´2`√3 + 6 ε´2
where the corresponding intervals I i are the same as before.
For ε ∈ 0, 7 √ 3 − 3 √ 15 /12 ≈ .042 , µ A or (r, ε) = ς or 1 (r, ε) = h −256+720 r−13824 ε 4 −9216 ε 2 +860 r 4 −108 r 2 +47 r 6 −195 r 5 −846 r 3 +12096 r 4 ε 4 +6912 r 2 ε 2 +1152 r 4 ε 2 + 31104 ε 4 r 2 −6144 √ 3ε 3 +3072 r 6 √ 3ε 3 −6144 r 4 √ 3ε 3 +13824 √ 3r 2 ε 3 +4608 √ 3r 5 ε 3 +13824 ε 2 r −10368 r 5 ε 4 +57024 ε 4 r 3 − 20736 ε 4 r − 2304 r 6 ε 2 − 17280 r 3 ε 2 − 3456 r 6 ε 4 i.h 12 (r + 2) " −6 ε + √ 3 " 2 " √ 3 + 6 ε " 2 r 2 (r + 1) i ς or 2 (r, ε) = − h −672+579 r 4 −1450 r 3 +536 r+20736 r 4 ε 4 +32832 r 2 ε 2 −114048 ε 4 r 2 −7488 r 3 ε 2 +8064 ε 2 r−175 r 5 +6912 r 5 ε 4 + 4608 r 5 ε 2 − 24192 ε 4 r 3 − 76032 ε 4 r + 12288
√ 3r 3 ε 3 − 9216 r 4 √ 3ε 3 + 4608 √ 3r 2 ε 3 + 732 r 2 − 6144 √ 3r 5 ε 3 − 9216 √ 3ε 3 − 19968 √ 3rε 3 − 27648 ε 2 i.h 216 r (r + 2) (r + 1)`−1 + 12 ε 2´2 i ς or 3 (r, ε) = −
h 9(96+384 r 4 ε 2 +192 r 6 ε 2 −2304 r 4 ε 4 −30 r 6 −1152 r 6 ε 4 +84 r 5 +576 r 5 ε 2 +3 r 8 −7 r 7 −368 r+304 r 3 +144 r 2 − 3456 r 5 ε 4 − 264 r 4 )
i.h 8 r 4 (r + 2) (r + 1) " √ 3 + 6 ε " 2 " −6 ε + √ 3 " 2 i ς or 4 (r, ε) = 9(−6 r + r 4 + r 5 + 2 + 144 r 5 ε 4 + 144 r 4 ε 4 − 24 r 5 ε 2 − 24 r 4 ε 2 ) r 4 (r + 1)`−6 ε + √ 3´2`√3 + 6 ε´2
where the corresponding intervals I i are the same as before.
Appendix 6: Derivation of µ S and (r, ε) and µ S or (r, ε)
We demonstrate the derivation of µ S (r, ε) for segregation with ε ∈ (0, √ 3/8] and for the intervals of r whose expressions do not vanish as ε → 0, so that the resultant expressions can be used in the PAE analysis.
Derivation of µ S and (r, ε)
By symmetry, µ S and (r, ε) = P X 2 ∈ N r P E (X 1 , ε) ∩ Γ r 1 (X 1 , ε) = 6 P X 2 ∈ N r Y (X 1 , ε) ∩ Γ r 1 (X 1 , ε), X 1 ∈ T s \ T (y, ε) .
Let q(y i , x) be the line parallel to e i and crossing T (Y 3 ) such that d(y i , q(y i , x)) = ε for i = 1, 2, 3. Furthermore,
let T ε := T (Y 3 ) \ ∪ 3 j=1 T (y j , ε). Then q(y 1 , x) = 2 ε − √ 3 x, q(y 2 , x) = √ 3 x − √ 3 + 2 ε, and q(y 3 , x) = √ 3/2 − ε. Now, let V 1 = q(y 1 , x) ∩ y 1 y 2 = (2 ε/ √ 3, 0), V 2 = q(y 2 , x) ∩ y 1 y 2 = (1 − 2 ε/ √ 3, 0), V 3 = q(y 2 , x) ∩ y 2 y 3 = (1 − ε/ √ 3, ε), V 4 = q(y 3 , x) ∩ y 2 y 3 = (1/2 + ε/ √ 3, √ 3/2 − ε), V 5 = q(y 3 , x) ∩ y 1 y 3 = (1/2 − ε/ √ 3, √ 3/2 − ε), V 6 = q(y 1 , x) ∩ y 1 y 3 = (ε/ √ 3, ε).
See Figure 23.
The points G i , for i = 1, 2, . . . , 6, P i , for i = 1, 2, L i , for i = 1, 2, . . . , 6, N i , for i = 1, 2, 3, Q i , for i = 1, 2 and the lines r i (x), for i = 1, 2, . . . , 11 are as in Appendix 3. s 0 = −2 r/3 + 1, s 1 = −r + 3/2, s 2 = 3/(8 r), s 3 = 1 − r/2, s 4 = 3 2 (2 r 2 +1) , s 5 = 3−3 r+2 r 2 6 r , s 6 = 1/(2 r), s 7 = 1/(2 r), s 8 = − −2 r 2 −6+r 3 +2 r 4 (r 2 +1) , s 9 = − −4−6 r+3 r 2 12 r , s 10 = 1/ (r + 1), s 11 = − −2 r+r 2 −1 4 r , s 12 = −3 r+2 r 2 +4 6 r , s 13 = 9−3 r 2 +2 r 3 −2 r 6 (r 2 +1) , s 14 = 3 r/8, s 15 = r − r 3 /8 − 1/2
ℓ 1 (x) = 1/3 √ 3 −3 x + 2 ε √ 3 , ℓ 2 (x) = −1/3 √ 3(3 x r−3+2 ε √ 3) r , ℓ 3 (x) = − √ 3(x r−1) r , ℓ 4 (x) = 1/3 √ 3 −3 x + 2 ε √ 3r q 1 = 1/2 ε √ 3, q 2 = 2/3 ε √ 3, q 3 = −1/4 −3+2 ε √ 3 r , q 4 = 3/4 r −1 , q 7 = 1/2 ε √ 3r, and q 8 = 2/3 ε √ 3r
Then T (y 1 , ε) = T (y 1 , V 1 , V 6 ), T (y 2 , ε) = T (V 2 , y 2 , V 3 ), and T (y 3 , ε) = T (V 4 , V 5 , y 3 ), and for ε ∈ (0, √ 3/4), T ε is the hexagon with vertices V i , i = 1, . . . , 6. So we have A(T ε ) = √ 3/4 − √ 3 ε 2 .
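Since A(T ε ) enters each of the integrands below through A(T ε ) 2 , the value √ 3/4 − √ 3 ε 2 is worth a quick numerical sanity check. The following sketch (an added arithmetic check, not part of the original derivation) applies the shoelace formula to the hexagon with vertices V 1 , . . . , V 6 as defined above.

```python
from math import sqrt, isclose

def shoelace(pts):
    """Polygon area via the shoelace formula (vertices listed in order)."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2.0

eps = 0.05  # any value in (0, sqrt(3)/4)
V = [(2*eps/sqrt(3), 0.0),                  # V_1
     (1 - 2*eps/sqrt(3), 0.0),              # V_2
     (1 - eps/sqrt(3), eps),                # V_3
     (0.5 + eps/sqrt(3), sqrt(3)/2 - eps),  # V_4
     (0.5 - eps/sqrt(3), sqrt(3)/2 - eps),  # V_5
     (eps/sqrt(3), eps)]                    # V_6

# A(T_eps) = sqrt(3)/4 - sqrt(3) * eps^2
assert isclose(shoelace(V), sqrt(3)/4 - sqrt(3)*eps**2)
```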
For r ∈ [1, 4/3), since ε is small enough that q 2 (x) ∩ T e = ∅, then N (x, ε) T ε for all x ∈ T e \ T (y 1 , ε). There are 14 cases to consider for the AND-underlying version: Case 1:
P (X2 ∈ N r P E (X1, ε) ∩ Γ r 1 (X1, ε), X1 ∈ Ts \ T (y, ε)) = Z q 7 q 1 Z ℓam(x) ℓ 1 (x) + Z q 2 q 7 Z ℓ 4 (x) ℓ 1 (x) + Z q 8 q 2 Z ℓ 4 (x) 0 ! A(P(V1, N1, N2, V6)) A(Tε) 2 dydx = 4 ε 4`− 3 r 2 + 2 + r 69 (4 ε 2 − 1) 2 where A(P(V1, N1, N2, V6)) = − 4 (−ε 2 √ 3+1/4 √ 3) 2 √ 3(−r 2 y 2 −2 r 2 y √ 3x−3 r 2 x 2 +4 ε 2 ) 9 (4 ε 2 −1) 2 .
Case 2:
P (X2 ∈ N r P E (X1, ε) ∩ Γ r 1 (X1, ε), X1 ∈ Ts \ T (y, ε)) = Z q 8 q 7 Z ℓam(x) ℓ 4 (x) + Z s 2 q 8 Z ℓam(x) 0 + Z s 6 s 2 Z r 5 (x) 0 ! A(P(G1, N1, N2, G6)) A(Tε) 2 dydx = − 256 ε 4 r 12 − 256 ε 4 r 8 − 9 r 4 + 9 576 r 6 (4 ε 2 − 1) 2 where A(P(G1, N1, N2, G6)) = 4 (−ε 2 √ 3+1/4 √ 3) 2 (y+ √ 3x) 2 √ 3(r 4 −1)
9 r 2 (4 ε 2 −1) 2 .
Case 3:
P (X2 ∈ N r P E (X1, ε) ∩ Γ r 1 (X1, ε), X1 ∈ Ts \ T (y, ε)) = Z s 6 s 11 Z r 7 (x) r 5 (x) + Z s 10 s 6 Z r 7 (x) 0 !
A(P(G1, N1, P2, M3, G6)) A(Tε) 2 dydx = 9 r 9 − 13 r 8 − 14 r 7 + 30 r 6 − 22 r 5 + 22 r 4 − 6 r 3 − 10 r 2 + r + 3 96 (4 ε 2 − 1) 2 r 6 (r + 1) 3
where A(P(G1, N1, P2, M3, G6)) = − 2 (−ε 2 √ 3+1/4 √ 3)
2 (−12 r 3 y+2 r 4 √ 3 y 2 +12 r 4 y x−12 r 3 √ 3x+6 r 4 √ 3x 2 +3 √ 3r 2 +2 √ 3y 2 +12 y x+6 √ 3x 2 ) 9 r 2 (4 ε 2 −1) 2 .
Case 4:
P (X2 ∈ N r P E (X1, ε) ∩ Γ r 1 (X1, ε), X1 ∈ Ts \ T (y, ε)) = Z s 5 s 2 Z ℓam(x) r 5 (x) + Z s 4 s 5 Z ℓam(x) r 2 (x) + Z s 13 s 4 Z r 8 (x) r 2 (x) ! A(P(G1, M1, P1, P2, M3, G6)) A(Tε) 2 dydx = h
243+7022682 r 12 −1296 r+36612 r 4 −952704 r 17 +137472 r 18 −578976 r 7 +7057828 r 14 −5116608 r 15 +2792712 r 16 −7725792 r 13 − 5484816 r 11 + 3631995 r 10 − 2213712 r 9 + 1213271 r 8 + 3051 r 2 − 11664 r 3 − 101952 r 5 + 292518 r 6
i.h 15552`r 2 + 1´3
2 r 2 + 1´3 r 6`4 ε 2 − 1´2 i where A(P(G1, M1, P1, P2, M3, G6)) = − 4 (−ε 2 √ 3+1/4 √ 3) 2 (−12 r 3 y−12 r 3 √ 3x+3 √ 3r 2 +3 r 4 √ 3 y 2 +18 r 4 y x+9 r 4 √ 3x 2 + √ 3y 2 +6 y x+3 √ 3x 2 ) 9 (4 ε 2 −1) 2 r 2 .
Case 5:
P (X2 ∈ N r P E (X1, ε)∩Γ r 1 (X1, ε), X1 ∈ Ts\T (y, ε)) = Z s 13 s 4 Z r 9 (x) r 8 (x) + Z s 12 s 13 Z r 9 (x) r 2 (x) ! A(P(G1, M1, L2, Q1, P2, M3, G6)) A(Tε) 2 dydx = −
h 4(400 r 15 −2832 r 14 +8012 r 13 −13608 r 12 +16350 r 11 −14292 r 10 +8677 r 9 −2442 r 8 −1963 r 7 +3288 r 6 −2751 r 5 +1710 r 4 −743 r 3 + 288 r 2 − 118 r + 24)
i.h 243 r 3`2 r 2 + 1´3`r 2 + 1´3`16 ε 4 − 8 ε 2 + 1´i
where A(P(G1, M1, L2, Q1, P2, M3, G6)) = − h 4`−ε 2 √ 3 + 1/4 √ 3´2`−9 + 42 √ 3y x − 45 x 2 + 36 x − 15 y 2 + 21 r 2 y 2 + 2 r 4 y 4 −12 r 4 x 2 y 2 +12 r 4 y 2 x+18 x 3 −12 √ 3y +42 y 2 x−24 r 3 y 2 −6 r 2 y √ 3x+4 √ 3 y 3 x+12 y x 3 √ 3 +54 r 2 x 3 +4 r 4 √ 3 y 3 + 12 r 4 y √ 3x − 12 r 4 √ 3x 2 y + 18 r 4 x 2 + 6 r 4 y 2 − 36 r 4 x 3 + 18 r 4 x 4 − 18 r 2 √ 3x 2 y + 12 r 3 √ 3x 2 y + 12 r 2 x 3 √ 3y − 4 r 2 √ 3 y 3 x + 12 r 2 y √ 3 − 45 r 2 x 2 + 9 r 2 − 12 r 3 y √ 3 − 4 r 3 √ 3 y 3 − 18 r 2 y 2 x + 12 r 3 y 2 x − 42 y x 2 √ 3 + 6 r 2 √ 3 y 3 + 2 r 2 y 4 − 24
y 2 x 2 − 18 r 2 x 4 − 36 r 3 x 3 − 36 r 3 x + 72 r 3 x 2 − 2 √ 3 y 3´i .h 3 r 2`− √ 3y − 3 + 3 x´`−y − √ 3 + √ 3x´`4 ε 2 − 1´2 i .
Case 6:
P (X2 ∈ N r P E (X1, ε) ∩ Γ r 1 (X1, ε), X1 ∈ Ts \ T (y, ε)) = Z s 10 s 11 Z r 3 (x) r 7 (x) + Z s 9 s 10 Z r 3 (x) 0 + Z 1/2 s 9
Z r 6 (x) 0 ! A(P(G1, G2, Q1, P2, M3, G6)) A(Tε) 2 dydx = 324 r 11 − 1620 r 10 − 618 r 9 + 4626 r 8 + 990 r 7 − 2454 r 6 + 2703 r 5 − 5571 r 4 − 3827 r 3 + 1455 r 2 + 3072 r + 1024 7776 r 6 (r + 1) 3 (16 ε 4 − 8 ε 2 + 1)
where A(P(G1, G2, Q1, P2, M3, G6)) = − h 2`−ε 2 √ 3 + 1/4 √ 3´2`−9 √ 3r 2 − 24 √ 3r x − 21 r 2 y − 8 r 2 √ 3 y 2 + 24 r 2 √ 3x 2 − 3 r 2 √ 3x + 24 r y + 24 y x − 24 √ 3x 2 − 8 √ 3 y 2 − 6 √ 3 + 18 √ 3x − 4 y 3 + 12 √ 3r + 12 √ 3r x 2 + 4 √ 3 y 2 r − 18 y + 12 r 4 x 2 y − 24 √ 3r 3 x 2 + 8 √ 3r 3 y 2 + 12 r 4 x 3 √ 3 − 24 y r x − 4 r 2 y 3 + 24 r 3 y − 4 r 4 y 2 √ 3x − 12 x 2 y + 12 r 2 x 2 y − 12 r 2 x 3 √ 3 + 4 y 2 √ 3x − 4 r 4 √ 3 y 2 − 24 r 4 y x + 24 r 3 √ 3x − 12 r 4 √ 3x 2 − 4 r 4 y 3 + 4 r 2 y 2 √ 3x + 12
x 3 √ 3´i .h 3 r 2`− √ 3y − 3 + 3 x´`4 ε 2 − 1´2 i .
Case 7:
P (X2 ∈ N r P E (X1, ε) ∩ Γ r 1 (X1, ε), X1 ∈ Ts \ T (y, ε)) = Z s 14 s 4 Z ℓam (x) r 9 (x) + Z s 12 s 14 Z r 12 (x) r 9 (x) + Z s 15 s 12 Z r 12 (x) r 10 (x) ! A(P(G1, M1, L2, Q1, Q2, L5, M3, G6)) A(Tε) 2 dydx = h
1080 r 17 −18900 r 15 +17280 r 14 +65934 r 13 −112320 r 12 +152361 r 11 −367200 r 10 +491051 r 9 −409872 r 8 +282224 r 7 −60864 r 6 − 86886 r 5 + 70560 r 4 − 44672 r 3 + 30720 r 2 − 16640 r + 6144
i.h 10368 r 3`2 r 2 + 1´3`16 ε 4 − 8 ε 2 + 1´i
where A(P(G1, M1, L2, Q1, Q2, L5, M3, G6)) = h 4`−ε 2 √ 3 + 1/4 √ 3´2`−18 + 24 √ 3y x − 54 x 2 + 54 x − 6 y 2 + 21 r 2 y 2 + r 4 y 4 − 6 r 4 x 2 y 2 + 6 r 4 y 2 x − 4 y 4 + 18 x 3 − 6 √ 3y + 42 y 2 x − 24 r 3 y 2 − 18 r 2 y √ 3x + 12 √ 3 y 3 x + 12 y x 3 √ 3 + 72 r 2 x 3 + 2 r 4 √ 3 y 3 + 6 r 4 y √ 3x − 6 r 4 √ 3x 2 y + 9 r 4 x 2 + 3 r 4 y 2 − 18 r 4 x 3 + 9 r 4 x 4 + 12 r 3 √ 3x 2 y + 12 r 2 x 2 y 2 + 18 r 2 y √ 3 + 18 r 2 x − 81 r 2 x 2 + 9 r 2 − 12 r 3 y √ 3 − 4 r 3 √ 3 y 3 − 24 r 2 y 2 x + 12 r 3 y 2 x − 30 y x 2 √ 3 − 2 r 2 y 4 − 36 y 2 x 2 − 18 r 2 x 4 − 36 r 3 x 3 − 36
r 3 x + 72 r 3 x 2 − 6 √ 3y 3´i .h 3 r 2`√ 3y + 3 − 3 x´`−y − √ 3 + √ 3x´`4 ε 2 − 1´2 i .
Case 8:
P (X2 ∈ N r P E (X1, ε) ∩ Γ r 1 (X1, ε), X1 ∈ Ts \ T (y, ε)) = Z 1/2 s 9 Z r 3 (x) r 6 (x)
A(P(G1, G2, Q1, N3, MC , M3, G6)) A(Tε) 2 dydx = − 81 r 12 + 2048 + 384 r 4 − 810 r 10 + 1296 r 8 − 3072 r 2 + 96 r 6 15552 r 6 (16 ε 4 − 8 ε 2 + 1)
where A(P(G1, G2, Q1, N3, MC, M3, G6)) = − h 2`−ε 2 √ 3 + 1/4 √ 3´2`−5 √ 3r 2 −24 √ 3r x−17 r 2 y−8 r 2 √ 3 y 2 +24 r 2 √ 3x 2 − 7 r 2 √ 3x + 24 r y + 24 y x − 24 √ 3x 2 − 8 √ 3 y 2 − 6 √ 3 + 18 √ 3x − 4 y 3 + 12 √ 3r + 12 √ 3r x 2 + 4 √ 3 y 2 r − 18 y + 3 r 4 x 2 y − 12 √
3r 3 x 2 + 4 √ 3r 3 y 2 + 3 r 4 x 3 √ 3 − 24 y r x − 4 r 2 y 3 + 12 r 3 y − r 4 y 2 √ 3x − 12 x 2 y + 12 r 2 x 2 y − 12 r 2 x 3 √ 3 + 4 y 2 √ 3x − r 4 √ 3 y 2 − 6 r 4 y x + 12 r 3 √ 3x − 3 r 4 √ 3x 2 − r 4 y 3 + 4 r 2 y 2 √ 3x + 12
x 3 √ 3´i .h 3 r 2`− √ 3y − 3 + 3 x´`4 ε 2 − 1´2 i .
Case 9:
P (X2 ∈ N r P E (X1, ε)∩Γ r 1 (X1, ε), X1 ∈ Ts\T (y, ε)) = Z s 13 s 5 Z r 2 (x) r 5 (x) + Z s 11 s 13 Z r 8 (x) r 5 (x) ! A(P(G1, M1, P1, P2, M3, G6)) A(Tε) 2 dydx = − h
243+8673 r 12 −1296 r+23571 r 4 −119712 r 7 −61488 r 11 +169716 r 10 −246672 r 9 +216121 r 8 +1404 r 2 −3888 r 3 −35424 r 5 + 48816 r 6
i.h 7776 r 6`4 ε 2 − 1´2`r 2 + 1´3 i where A(P(G1, M1, P1, P2, M3, G6)) = − 4 (−ε 2 √ 3+1/4 √ 3) 2 (−12 r 3 y−12 r 3 √ 3x+3 √ 3r 2 +3 r 4 √ 3 y 2 +18 r 4 y x+9 r 4 √ 3x 2 + √ 3y 2 +6 y x+3 √ 3x 2 ) 9 r 2 (4 ε 2 −1) 2 .
Case 10:
P (X2 ∈ N r P E (X1, ε) ∩ Γ r 1 (X1, ε), X1 ∈ Ts \ T (y, ε)) = Z s 15 s 12 Z r 10 (x) r 2 (x) + Z 1/2 s 15 Z r 12 (x) r 2 (x) ! A(P(G1, M1, L2, Q1, N3, L4, L5, M3, G6)) A(Tε) 2 dydx = −
324 r 11 − 6949 r 9 + 7248 r 8 + 26896 r 7 − 24960 r 6 + 2160 r 5 − 259200 r 4 + 645760 r 3 − 552960 r 2 + 155648 r + 6144 31104 r 3 (16 ε 4 − 8 ε 2 + 1)
where A(P(G1, M1, L2, Q1, N3, L4, L5, M3, G6)) = h 2`−ε 2 √ 3 + 1/4 √ 3´2`−72 − 24 √ 3y x − 144 x 2 − 144 x r + 180 x + 24 y 2 + 72 r + 30 r 2 y 2 + r 4 y 4 − 6 r 4 x 2 y 2 + 6 r 4 y 2 x − 24 y 4 + 36 x 3 + 12 √ 3y + 84 y 2 x − 24 r 3 y 2 + 12 r 2 y √ 3x + 56 √ 3 y 3 x + 24 y x 3 √ 3 + 108 r 2 x 3 + 2 r 4 √ 3 y 3 + 6 r 4 y √ 3x − 6 r 4 √ 3x 2 y + 9 r 4 x 2 + 3 r 4 y 2 − 18 r 4 x 3 + 9 r 4 x 4 − 36 r 2 √ 3x 2 y + 12 r 3 √ 3x 2 y + 24 r 2 x 3 √ 3y − 8 r 2 √ 3y 3 x − 72 r y 2 + 96 r y 2 x + 72 r 2 x − 126 r 2 x 2 − 18 r 2 + 72 r x 2 − 12 r 3 y √ 3 − 4 r 3 √ 3 y 3 + 48 r y √ 3x − 48 r y x 2 √ 3 − 36 r 2 y 2 x + 12 r 3 y 2 x − 12 y x 2 √ 3 + 12 r 2 √ 3 y 3 + 4 r 2 y 4 − 120 y 2 x 2 − 36 r 2 x 4 − 36 r 3 x 3 − 36 r 3 x + 72
r 3 x 2 − 28 √ 3 y 3 − 16 r √ 3 y 3´i .h 3 r 2`√ 3y + 3 − 3 x´`−y − √ 3 + √ 3x´`4 ε 2 − 1´2 i .
Case 11:
P (X2 ∈ N r P E (X1, ε) ∩ Γ r 1 (X1, ε), X1 ∈ Ts \ T (y, ε)) = Z 1/2 s 15 Z r 10 (x) r 12 (x)
A(P(L1, L2, Q1, N3, L4, L5, L6)) A(Tε) 2 dydx = 4 r 12 + 16 r 11 − 69 r 10 − 260 r 9 + 372 r 8 + 1248 r 7 + 112 r 6 − 2624 r 5 − 8256 r 4 + 12288 r 3 + 13568 r 2 − 27648 r + 11264 384 (16 r 2 ε 4 − 8 r 2 ε 2 + r 2 + 64 r ε 4 − 32 r ε 2 + 4 r + 64 ε 4 − 32 ε 2 + 4) r 2
where A(P(L1, L2, Q1, N3, L4, L5, L6)) = h 2`−ε 2 √ 3 + 1/4 √ 3´2`−72 + 24 √ 3y r − 72 √ 3y x − 216 x 2 − 72 x r + 180 x + 72 r + 24 r 2 y 2 + r 4 y 4 − 6 r 4 x 2 y 2 + 6 r 4 y 2 x − 32 y 4 − 72 x 4 + 180 x 3 + 12 √ 3y + 36 y 2 x − 24 r 3 y 2 + 24 r 2 y √ 3x + 56 √ 3 y 3 x + 24 y x 3 √ 3 + 108 r 2 x 3 + 72 r x 3 + 2 r 4 √ 3 y 3 + 6 r 4 y √ 3x − 6 r 4 √ 3x 2 y + 9 r 4 x 2 + 3 r 4 y 2 − 18 r 4 x 3 + 9 r 4 x 4 − 36 r 2 √ 3x 2 y + 12 r 3 √ 3x 2 y + 24 r 2 x 3 √ 3y − 8 r 2 √ 3 y 3 x − 24 r y 2 + 72 r y 2 x − 12 r 2 y √ 3 + 108 r 2 x − 144 r 2 x 2 − 36 r 2 − 72 r x 2 − 12 r 3 y √ 3 − 4 r 3 √ 3 y 3 +48 r y √ 3x−72 r y x 2 √ 3−36 r 2 y 2 x+12 r 3 y 2 x+36 y x 2 √ 3 +12 r 2 √ 3 y 3 +4 r 2 y 4 −72 y 2 x 2 −36 r 2 x 4 −36 r 3 x 3 − 36 r 3 x + 72
r 3 x 2 − 44 √ 3 y 3 − 8 r √ 3 y 3´i .h 3 r 2`√ 3y + 3 − 3 x´`−y − √ 3 + √ 3x´`4 ε 2 − 1´2 i .
Case 12: where A(P(L1, L2, Q1, Q2, L5, L6)) = − h 4`−ε 2 √ 3 + 1/4 √ 3´2`−18 + 12 √ 3y r − 90 x 2 + 36 x r + 54 x − 18 y 2 + 18 r 2 y 2 + r 4 y 4 − 6 r 4 x 2 y 2 + 6 r 4 y 2 x − 8 y 4 − 36 x 4 + 90 x 3 − 6 √ 3y + 18 y 2 x − 24 r 3 y 2 − 12 r 2 y √ 3x + 12 √ 3 y 3 x + 12 y x 3 √ 3 + 72 r 2 x 3 + 36 r x 3 +2 r 4 √ 3 y 3 +6 r 4 y √ 3x−6 r 4 √ 3x 2 y+9 r 4 x 2 +3 r 4 y 2 −18 r 4 x 3 +9 r 4 x 4 +12 r 3 √ 3x 2 y+12 r 2 x 2 y 2 +24 r y 2 −12 r y 2 x+ 12 r 2 y √ 3+36 r 2 x−90 r 2 x 2 −72 r x 2 −12 r 3 y √ 3−4 r 3 √ 3 y 3 −12 r y x 2 √ 3−24 r 2 y 2 x+12 r 3 y 2 x−6 y x 2 √ 3−2 r 2 y 4 −12 y 2 x 2 − 18 r 2 x 4 − 36 r 3 x 3 − 36 r 3 x + 72
P (X2 ∈ N r P E (X1, ε)∩Γ r 1 (X1, ε), X1 ∈ Ts\T (y, ε)) = Zsr 3 x 2 − 14 √ 3 y 3 + 4 r √ 3y 3´i .h 3 r 2`− √ 3y − 3 + 3 x´`−y − √ 3 + √ 3x´`4 ε 2 − 1´2 i .
Case 13:
P (X2 ∈ N r P E (X1, ε) ∩ Γ r 1 (X1, ε), X1 ∈ Ts \ T (y, ε)) = Z s 11 s 13 Z r 2 (x) r 8 (x) + Z s 12 s 11 Z r 2 (x) r 3 (x) + Z s 9 s 12 Z r 6 (x) r 3 (x)
! A(P(G1, M1, L2, Q1, P2, M3, G6)) A(Tε) 2 dydx = 3654 r 12 − 35328 r 11 + 94802 r 10 − 100608 r 9 − 255 r 8 + 138240 r 7 − 193581 r 6 + 148224 r 5 − 86387 r 4 + 43008 r 3 − 12369 r 2 + 512 7776 r 6 (r 2 + 1) 3 (16 ε 4 − 8 ε 2 + 1)
where A(P(G1, M1, L2, Q1, P2, M3, G6)) = − h 4`−ε 2 √ 3 + 1/4 √ 3´2`−9 + 42 √ 3y x − 45 x 2 + 36 x − 15 y 2 + 21 r 2 y 2 + 2 r 4 y 4 −12 r 4 x 2 y 2 +12 r 4 y 2 x+18 x 3 −12 √ 3y +42 y 2 x−24 r 3 y 2 −6 r 2 y √ 3x+4 √ 3 y 3 x+12 y x 3 √ 3 +54 r 2 x 3 +4 r 4 √ 3 y 3 + 12 r 4 y √ 3x − 12 r 4 √ 3x 2 y + 18 r 4 x 2 + 6 r 4 y 2 − 36 r 4 x 3 + 18 r 4 x 4 − 18 r 2 √ 3x 2 y + 12 r 3 √ 3x 2 y + 12 r 2 x 3 √ 3y − 4 r 2 √ 3 y 3 x + 12 r 2 y √ 3 − 45 r 2 x 2 + 9 r 2 − 12 r 3 y √ 3 − 4 r 3 √ 3 y 3 − 18 r 2 y 2 x + 12 r 3 y 2 x − 42 y x 2 √ 3 + 6 r 2 √ 3 y 3 + 2 r 2 y 4 − 24
y 2 x 2 − 18 r 2 x 4 − 36 r 3 x 3 − 36 r 3 x + 72 r 3 x 2 − 2 √ 3 y 3´i .h 3 r 2`− √ 3y − 3 + 3 x´`−y − √ 3 + √ 3x´`4 ε 2 − 1´2 i .
Case 14:
P (X2 ∈ N r P E (X1, ε)∩Γ r 1 (X1, ε), X1 ∈ Ts\T (y, ε)) = Z s 9 s 12 Z r 2 (x) r 6 (x) + Z 1/2 s 9 Z r 2 (x) r 3 (x)
! A(P(G1, M1, L2, Q1, N3, MC , M3, G6)) A(Tε) 2 dydx = 49 r 12 + 124288 r 4 + 50688 r 7 + 384 r 11 − 3562 r 10 + 13440 r 9 − 36948 r 8 + 27648 r 2 − 86016 r 3 − 1024 − 89088 r 5 + 160 r 6 15552 r 6 (16 ε 4 − 8 ε 2 + 1)
where A(P(G1, M1, L2, Q1, N3, MC , M3, G6)) = − h 2`−ε 2 √ 3 + 1/4 √ 3´2`−18+84 √ 3y x−90 x 2 +72 x−30 y 2 +38 r 2 y 2 + r 4 y 4 − 6 r 4 x 2 y 2 + 6 r 4 y 2 x + 36 x 3 − 24 √ 3y + 84 y 2 x − 24 r 3 y 2 − 4 r 2 y √ 3x + 8 √ 3 y 3 x + 24 y x 3 √ 3 + 108 r 2 x 3 + 2 r 4 √ 3 y 3 + 6 r 4 y √ 3x − 6 r 4 √ 3x 2 y + 9 r 4 x 2 + 3 r 4 y 2 − 18 r 4 x 3 + 9 r 4 x 4 − 36 r 2 √ 3x 2 y + 12 r 3 √ 3x 2 y + 24 r 2 x 3 √ 3y − 8 r 2 √ 3 y 3 x + 16 r 2 y √ 3 + 24 r 2 x − 102 r 2 x 2 + 6 r 2 − 12 r 3 y √ 3 − 4 r 3 √ 3 y 3 − 36 r 2 y 2 x + 12 r 3 y 2 x − 84 y x 2 √ 3 + 12 r 2 √ 3y 3 + 4 r 2 y 4 − 48 y 2 x 2 − 36 r 2 x 4 − 36 r 3 x 3 − 36 r 3 x + 72
r 3 x 2 − 4 √ 3 y 3´i .h 3 r 2`− √ 3y − 3 + 3 x´`−y − √ 3 + √ 3x´`4 ε 2 − 1´2 i .
Adding up the P (X 2 ∈ N r P E (X 1 , ε) ∩ Γ r 1 (X 1 , ε), X 1 ∈ T s \ T (y 1 , ε)) values in the 14 possible cases above, and multiplying by 6, we get for r ∈ [1, 4/3), µ S and (r, ε) = − (r − 1) (5 r 5 + 288 r 5 ε 4 + 1152 r 4 ε 4 − 148 r 4 + 1440 r 3 ε 4 + 245 r 3 + 576 r 2 ε 4 − 178 r 2 − 232 r + 128) / (54 r 2 (2 + r) (2 ε − 1) 2 (2 ε + 1) 2 (r + 1)).
The µ S and (r, ε) values for the other intervals can be calculated similarly.
Derivation of µ S or (r, ε)
For r ∈ [1, 4/3), there are 16 cases to consider for the OR-underlying version: Case 1:
P (X2 ∈ N r P E (X1, ε) ∪ Γ r 1 (X1, ε), X1 ∈ Ts \ T (y, ε)) = Z q 2 q 1 Z ℓam(x) ℓ 1 (x) + Z s 0 q 2 Z ℓam (x) 0 + Z s 1 s 0 Z ℓam(x) r 1 (x)
! A(P(V1, M1, MC , M3, V6)) A(Tε) 2 dydx = 6 ε 2 − 4 r 2 + 12 r − 9 27 (4 ε 2 − 1)
where A(P(V1, M1, MC, M3, V6)) = − 4 (−ε 2 √ 3+1/4 √ 3) 2 √ 3 9 (4 ε 2 −1)
.
Case 2:
P (X2 ∈ N r P E (X1, ε) ∪ Γ r 1 (X1, ε), X1 ∈ Ts \ T (y, ε)) = Z s 1 s 0 Z r 1 (x) 0 + Z s 5 s 1 Z r 2 (x) 0 + Z s 3 s 5 Z r 5 (x) 0 + Z s 11 s 3 Z r 5 (x) r 3 (x) ! A(P(V1, M1, L2, L3, MC, M3, V6)) A(Tε) 2 dydx = h −2304 r 5 ε 2 +432 r−21960 r 4 −27+9624 r 7 +5952 r 6 ε 2 +288 r 4 ε 2 +1824 r 8 ε 2 −1817 r 8 −2880 r 2 +10368 r 3 +28224 r 5 −5760 r 7 ε 2 − 21964 r 6 i.h 864 r 6`1 6 ε 4 − 8 ε 2 + 1´i
where A(P(V1, M1, L2, L3, MC , M3, V6)) = − h −27 + 12 ε 2 r 2 x 2 + 36 √ 3y r + 108 √ 3y x − 162 x 2 − 108 x r − 8 ε 2 √ 3r 2 y x + 108 x − 54 y 2 + 36 r − 5 r 2 y 2 − 3 y 4 − 27 x 4 + 108 x 3 − 36 √ 3y + 108 y 2 x + 10 r 2 y √ 3x + 12 √ 3 y 3 x + 36 y x 3 √ 3 − 36 r x 3 + 36 r y 2 − 36 r y 2 x − 10 r 2 y √ 3 + 30 r 2 x − 15 r 2 x 2 − 15 r 2 + 108 r x 2 − 72 r y √ 3x + 36 r y x 2 √ 3 + 12 r 2 ε 2 − 108 y x 2 √ 3 − 54 y 2 x 2 − 12 √ 3y 3 + 4 r √ 3 y 3 + 4 ε 2 r 2 y 2 − 24 ε 2 r 2 x + 8 ε 2 √ 3r 2 y i.h 4 r 2`− √ 3y − 3 + 3 x´`−y − √ 3 + √ 3x´i.
Case 3:
P (X2 ∈ N r P E (X1, ε)∪Γ r 1 (X1, ε), X1 ∈ Ts\T (y, ε)) = Z s 2 s 1 Z ℓam(x) r 2 (x) + Z s 5 s 2 Z r 5 (x) r 2 (x) ! A(P(V1, M1, L2, L3, L4, L5, M3, V6)) A(Tε) 2 dydx = −
h −3456 r 5 ε 2 + 1296 r − 65772 r 4 + 26880 r 7 + 9216 r 6 ε 2 + 432 r 4 ε 2 + 3072 r 8 ε 2 − 4864 r 8 − 8640 r 2 + 31104 r 3 + 83808 r 5 − 9216 r 7 ε 2 − 63744 r 6 − 81
i.h 2592 r 6`1 6 ε 4 − 8 ε 2 + 1´i
where A(P(V1, M1, L2, L3, L4, L5, M3, V6)) = − h −54 + 12 ε 2 r 2 x 2 + 36 √ 3y r + 54 √ 3y x − 189 x 2 − 180 x r − 8 ε 2 √ 3r 2 y x + 162 x − 27 y 2 + 72 r − 9 r 2 y 2 − 15 y 4 − 27 x 4 + 108 x 3 − 18 √ 3y + 108 y 2 x + 18 r 2 y √ 3x + 36 √ 3 y 3 x + 36 y x 3 √ 3 − 36 r x 3 + 12 r y 2 x − 18 r 2 y √ 3 + 54 r 2 x − 27 r 2 x 2 − 27 r 2 + 144 r x 2 − 48 r y √ 3x + 12 r y x 2 √ 3 + 12 r 2 ε 2 − 72 y
x 2 √ 3 − 90 y 2 x 2 − 24 √ 3 y 3 − 4 r √ 3 y 3 + 4 ε 2 r 2 y 2 − 24 ε 2 r 2 x + 8 ε 2 √ 3r 2 y i.h 4 r 2`− √ 3y − 3 + 3 x´`−y − √ 3 + √ 3x´i.
Case 4:
P (X2 ∈ N r P E (X1, ε)∪Γ r 1 (X1, ε), X1 ∈ Ts\T (y, ε)) = Z s 11 s 3 Z r 3 (x) 0 + Z s 6 s 11 Z r 5 (x) 0 ! A(P(V1, G2, G3, M2, MC , M3, V6)) A(Tε) 2 dydx = −
−8 r + 24 r 2 + 56 r 7 − 32 r 3 − 13 r 8 + 32 r 4 ε 2 + 32 r 8 ε 2 − 128 r 5 ε 2 + 192 r 6 ε 2 − 128 r 7 ε 2 + 64 r 5 − 92 r 6 + 1 96 r 6 (4 ε 2 − 1) 2
where A(P(V1, G2, G3, M2, MC , M3, V6)) = − √ 3 y 2 +6 y−6 y x+3 √ 3−6 √ 3x+3 √ 3x 2 −2 √ 3r 2 +4 √ 3r 2 ε 2 12 r 2 .
Case 5:
P (X2 ∈ N r P E (X1, ε)∪Γ r 1 (X1, ε), X1 ∈ Ts\T (y, ε)) = Z s 6 s 11 Z r 7 (x) r 5 (x) + Z s 10 s 6 Z r 7 (x) 0 !
A(P(V1, G2, G3, M2, MC , P2, N2, V6)) A(Tε) 2 dydx = − −1 + 32 r 5 ε 2 + 5 r + 34 r 4 + 15 r 7 + 64 r 6 ε 2 − 32 r 4 ε 2 − 32 r 8 ε 2 − 17 r 9 + 29 r 8 − 3 r 2 − 17 r 3 − 2 r 5 − 64 r 7 ε 2 − 43 r 6 + 32 r 9 ε 2 96 (r + 1) 3 (4 ε 2 − 1) 2 r 6
where A(P(V1, G2, G3, M2, MC, P2, N2, V6)) = − h 2`−ε 2 √ 3 + 1/4 √ 3´2`2 √ 3 y 2 +12 y−12 y x+6 √ 3−12 √ 3x+6 √ 3x 2 − 7 √ 3r 2 + 12 r 3 y + 12 r 3 √ 3x − 4 r 4 √ 3 y 2 − 24 r 4 y x − 12 r 4 √ 3x 2 + 8 √ 3r 2 ε 2´i
.h 9 r 2`4 ε 2 − 1´2 i .
Case 6:
P (X2 ∈ N r P E (X1, ε) ∪ Γ r 1 (X1, ε), X1 ∈ Ts \ T (y, ε)) = Z s 5 s 2 Z ℓam(x) r 5 (x) + Z s 4 s 5 Z ℓam(x) r 2 (x) + Z s 13 s 4 Z r 8 (x) r 2 (x) ! A(P(V1, N1, P1, L2, L3, L4, L5, P2, N2, V6)) A(Tε) 2 dydx =
h −243−28578916 r 12 −1147392 r 15 ε 2 −1344384 r 11 ε 2 −1734912 r 13 ε 2 −304128 r 17 ε 2 +989424 r 10 ε 2 −10368 r 5 ε 2 +3888 r−438777 r 4 + 2204160 r 17 − 355328 r 18 + 5753232 r 7 + 39312 r 6 ε 2 + 1296 r 4 ε 2 + 296208 r 8 ε 2 − 20639832 r 14 + 13254912 r 15 − 6591792 r 16 + 1693728 r 12 ε 2 +1507392 r 14 ε 2 +637056 r 16 ε 2 +26417664 r 13 +26760576 r 11 −21960774 r 10 +15877152 r 9 −10180620 r 8 −28107 r 2 + 128304 r 3 +1222128 r 5 −120960 r 7 ε 2 −2856483 r 6 +92160 r 18 ε 2 −563328 r 9 ε 2 i.h 7776 r 6`r2 + 1´3`16 ε 4 − 8 ε 2 + 1´`2 r 2 + 1´3
i where A(P(V1, N1, P1, L2, L3, L4, L5, P2, N2, V6)) = − h 4`−ε 2 √ 3 + 1/4 √ 3´2`−54 + 12 ε 2 r 2 x 2 + 36 √ 3y r + 54 √ 3y x − 189 x 2 − 180 x r − 8 ε 2 √ 3r 2 y x + 162 x − 27 y 2 + 72 r − 12 r 2 y 2 − 4 r 4 y 4 + 24 r 4 x 2 y 2 − 24 r 4 y 2 x − 15 y 4 − 27 x 4 + 108 x 3 − 18 √ 3y + 108 y 2 x + 24 r 3 y 2 + 24 r 2 y √ 3x + 36 √ 3 y 3 x + 36 y x 3 √ 3 − 36 r x 3 − 8 r 4 √ 3 y 3 − 24 r 4 y √ 3x + 24 r 4 √ 3x 2 y − 36 r 4 x 2 − 12 r 4 y 2 + 72 r 4 x 3 − 36 r 4 x 4 − 12 r 3 √ 3x 2 y + 12 r y 2 x − 24 r 2 y √ 3 + 72 r 2 x − 36 r 2 x 2 − 36 r 2 + 144 r x 2 + 12 r 3 y √ 3 + 4 r 3 √ 3 y 3 − 48 r y √ 3x + 12 r y x 2 √ 3 − 12 r 3 y 2 x + 12 r 2 ε 2 − 72 y x 2 √ 3 − 90 y 2 x 2 + 36 r 3 x 3 + 36 r 3 x − 72 r 3 x 2 − 24 √ 3 y 3 − 4 r √ 3 y 3 + 4 ε 2 r 2 y 2 − 24 ε 2 r 2 x + 8 ε 2 √ 3r 2 y´i
.h 3 r 2`− √ 3y − 3 + 3 x´`−y − √ 3 + √ 3x´`4 ε 2 − 1´2 i .
Case 7: V1, N1, Q1, L3, L4, L5, P2, N2, V6)) A(Tε) 2 dydx = − h 8(−2−55766 r 12 −864 r 15 ε 2 −4104 r 11 ε 2 −3024 r 13 ε 2 +3690 r 10 ε 2 −108 r 5 ε 2 +24 r−1833 r 4 +21576 r 7 +342 r 6 ε 2 +18 r 4 ε 2 + 1710 r 8 ε 2 −20056 r 14 +6912 r 15 −1152 r 16 +3816 r 12 ε 2 +1800 r 14 ε 2 +288 r 16 ε 2 +38376 r 13 +65532 r 11 −63642 r 10 +52020 r 9 −36277 r 8 − 142 r 2 + 576 r 3 + 4848 r 5 − 864 r 7 ε 2 − 10994 r 6 − 2700 r 9 ε 2 )
P (X2 ∈ N r P E (X1, ε)∪Γ r 1 (X1, ε), X1 ∈ Ts\T (y, ε)) = Z s 13 s 4 Z r 9 (x) r 8 (x) + Z s 12 s 13 Z r 9 (x) r 2 (x) ! A(P(
i.h 243 r 4`2 r 2 + 1´3`r 2 + 1´3`16 ε 4 − 8 ε 2 + 1´i
where A(P(V1, N1, Q1, L3, L4, L5, P2, N2, V6)) = −
h 4`−ε 2 √ 3 + 1/4 √ 3´2`−36 √ 3r 2 − 180 √ 3r x + 12 √ 3r 2 ε 2 − 90 r 2 y − 30 r 2 √ 3 y 2 + 18 r 2 √ 3x 2 + 54 r 2 √ 3x + 108 r y + 54 y x − 135 √ 3x 2 − 9 √ 3 y 2 − 45 √ 3 + 72 r 2 y x + 126 √ 3x − 60 y 3 + 72 √ 3r + 144 √ 3r x 2 − 18 y + 18 x 4 r 2 √ 3 − 36 x 3 √ 3r + 36 x 3 r 3 √ 3 − 3 r 4 √ 3 y 4 + 54 r 4 x 2 y − 72 √
3r 3 x 2 + 24 √ 3r 3 y 2 + 54 r 4 x 3 √ 3 − 144 y r x − 36 r 3 x 2 y − 12 y 3 r + 96 y 3 x − 18 r 2 y 3 − 18 x 4 √ 3 + 12 r 3 y 3 + 36 r 3 y − 18 r 4 y 2 √ 3x − 108 x 2 y + 12 √ 3 y 2 r x − 12 r 3 √ 3x y 2 + 54 r 2 x 2 y − 54 r 2 x 3 √ 3 + 72 y 2 √ 3x − 9 r 4 √ 3 y 2 − 54 r 4 y x + 36 r 3 √ 3x − 27 r 4 √ 3x 2 − 2 y 4 r 2 √ 3 − 18 r 4 y 3 + 18 r 2 y 2 √ 3x+72 x 3 √ 3−72 y 2 √ 3x 2 +24 ε 2 r 2 y−27 r 4 √ 3x 4 +12 r 2 y 3 x−36 r 2 x 3 y−14 y 4 √ 3+72 x 3 y+36 x 2 r y+18 r 4 √ 3 y 2 x 2 + 4 ε 2 √ 3r 2 y 2 + 12 ε 2 √ 3r 2 x 2 − 24 ε 2 √ 3r 2 x − 24 ε 2 r 2 y x´i
.h 3 r 2`− √ 3y − 3 + 3 x´2`4 ε 2 − 1´2 i .
Case 8:
P (X2 ∈ N r P E (X1, ε) ∪ Γ r 1 (X1, ε), X1 ∈ Ts \ T (y, ε)) = Z s 10 s 11 Z r 3 (x) r 7 (x) + Z s 9 s 10 Z r 3 (x) 0 + Z 1/2 s 9 Z r 6 (x) 0 ! A(P(V1, N1, Q1, G3, M2, MC , P2, N2, V6)) A(Tε) 2 dydx =
h −81 r 9 +189 r 8 −561 r 7 +1008 r 7 ε 2 +45 r 6 −432 r 6 ε 2 +1894 r 5 −3120 r 5 ε 2 +18 r 4 −144 r 4 ε 2 −1912 r 3 +2304 r 3 ε 2 −224 r 2 + 768 r 2 ε 2 + 384 r + 128
i.h 1296 r 4`1 6 ε 4 − 8 ε 2 + 1´(r + 1) 3 i where A(P(V1, N1, Q1, G3, M2, MC , P2, N2, V6)) = h 2`−ε 2 √ 3 + 1/4 √ 3´2`− √ 3r x − 24 r 2 y − 8 r 2 √ 3 y 2 + 24 r 2 √ 3x 2 − 24 r 2 √ 3x + 36 r 3 y x + 8 ε 2 √ 3r x − 8 ε 2 √ 3r + 25 r y + 24 y x − 12 √ 3x 2 − 4 √ 3y 2 − 8 ε 2 r y − 12 √ 3 + 24 √ 3x + 13 √ 3r − 24 √ 3r x 2 + 8 √ 3 y 2 r − 24 y + 12 x 3 √ 3r − 18 x 3 r 3 √ 3 + 18 √ 3r 3 x 2 + 6 √ 3r 3 y 2 − 18 r 3 x 2 y + 4 y 3 r + 6 r 3 y 3 − 4 √ 3y 2 r x + 6 r 3 √ 3x y 2 − 12 x 2 r y´i .h 3 r`√3y + 3 − 3 x´`4 ε 2 − 1´2 i .
Case 9: P (X2 ∈ N r P E (X1, ε) ∪ Γ r 1 (X1, ε), X1 ∈ Ts \ T (y, ε)) = ! A(P(V1, N1, Q1, L3, L4, Q2, N2, V6)) A(Tε) 2 dydx = h −512+81297 r 12 +55296 r 11 ε 2 −51264 r 10 ε 2 +6912 r 5 ε 2 +6144 r+72576 r 4 −1512 r 18 −798720 r 7 −35424 r 6 ε 2 −9216 r 4 ε 2 − 45792 r 8 ε 2 −83538 r 14 −17280 r 15 +18252 r 16 −51840 r 12 ε 2 +6912 r 14 ε 2 +167616 r 13 −565920 r 11 +888957 r 10 −1023600 r 9 + 998852 r 8 − 7424 r 2 − 6144 r 3 − 262080 r 5 + 41472 r 7 ε 2 + 533036 r 6 + 82944 r 9 ε 2 i.h 5184 r 4`2 r 2 + 1´3`16 ε 4 − 8 ε 2 + 1´i
where A(P(V1, N1, Q1, L3, L4, Q2, N2, V6)) = − h 8`−ε 2 √ 3 + 1/4 √ 3´2`−18 √ 3r 2 −90 √ 3r x+6 √ 3r 2 ε 2 −54 r 2 y−15 r 2 √ 3 y 2 + 27 r 2 √ 3x 2 + 18 r 2 √ 3x + 54 r y + 54 y x − 63 √ 3x 2 − 9 √ 3 y 2 − 18 √ 3 + 54 r 2 y x + 54 √ 3x − 24 y 3 + 36 √ 3r + 72 √ 3r x 2 − 18 y + 9 x 4 r 2 √ 3 − 18 x 3 √ 3r + 18 x 3 r 3 √ 3 − r 4 √ 3 y 4 + 18 r 4 x 2 y − 36 √ 3r 3 x 2 + 12 √ 3r 3 y 2 + 18 r 4 x 3 √ 3 − 72 y r x − 18 r 3 x 2 y − 6 y 3 r + 36 y 3 x − 9 x 4 √ 3 + 6 r 3 y 3 + 18 r 3 y − 6 r 4 y 2 √ 3x − 72 x 2 y − 6 √ 3r 2 y 2 x 2 + 6 √ 3 y 2 r x − 6 r 3 √ 3x y 2 − 36 r 2 x 3 √ 3 + 36 y 2 √ 3x − 3 r 4 √ 3 y 2 −18 r 4 y x+18 r 3 √ 3x−9 r 4 √ 3x 2 + y 4 r 2 √ 3−6 r 4 y 3 +12 r 2 y 2 √ 3x+36 x 3 √ 3−30 y 2 √ 3x 2 +12 ε 2 r 2 y −9 r 4 √ 3x 4 − 5 y 4 √ 3+36 x 3 y+18 x 2 r y+6 r 4 √ 3 y 2 x 2 +2 ε 2 √ 3r 2 y 2 +6 ε 2 √ 3r 2 x 2 −12 ε 2 √ 3r 2 x−12 ε 2 r 2 y x´i .h 3 r 2`√ 3y + 3 − 3 x´2`4 ε 2 − 1´2 i .
Case 10: P (X2 ∈ N r P E (X1, ε) ∪ Γ r 1 (X1, ε), X1 ∈ Ts \ T (y, ε)) = Z 1/2 s 9 Z r 3 (x) r 6 (x)
A(P(V1, N1, Q1, G3, M2, N3, N2, V6)) A(Tε) 2 dydx = − 2496 r 4 + 1728 r 6 ε 2 − 4608 r 4 ε 2 + 512 − 81 r 10 + 270 r 8 − 2176 r 2 + 3072 r 2 ε 2 − 1080 r 6 5184 r 4 (16 ε 4 − 8 ε 2 + 1)
where A(P(V1, N1, Q1, G3, M2, N3, N2, V6)) = − h 2`−ε 2 √ 3 + 1/4 √ 3´2`3 √ 3r x − 12 r 2 y − 4 r 2 √ 3 y 2 + 12 r 2 √ 3x 2 − 12 r 2 √ 3x + 18 r 3 y x + 8 ε 2 √ 3r x − 8 ε 2 √ 3r + 21 r y + 24 y x − 12 √ 3x 2 − 4 √ 3 y 2 − 8 ε 2 r y − 12 √ 3 + 24 √ 3x + 9 √ 3r − 24 √ 3r x 2 + 8 √ 3 y 2 r − 24 y + 12 x 3 √ 3r − 9 x 3 r 3 √ 3 + 9 √ 3r 3 x 2 + 3 √ 3r 3 y 2 − 9 r 3 x 2 y + 4 y 3 r + 3 r 3 y 3 − 4 √ 3y 2 r x + 3 r 3 √ 3x y 2 − 12 x 2 r y´i .h 3 r`− √ 3y − 3 + 3 x´`4 ε 2 − 1´2 i .
Case 11: P (X2 ∈ N r P E (X1, ε) ∪ Γ r 1 (X1, ε), X1 ∈ Ts \ T (y, ε)) = Z s 13 s 5 Z r 2 (x)
r 5 (x) + Z s 11 s 13 Z r 8 (x)
r 5 (x) ! A(P(V1, N1, P1, L2, L3, MC , P2, N2, V6)) A(Tε) 2 dydx = − h −43855 r 12 +14112 r 12 ε 2 +271488 r 11 −48384 r 11 ε 2 −746553 r 10 +81792 r 10 ε 2 −117504 r 9 ε 2 +1230336 r 9 −1404177 r 8 + 123840 r 8 ε 2 +1236528 r 7 −89856 r 7 ε 2 −901350 r 6 +58752 r 6 ε 2 −20736 r 5 ε 2 +550800 r 5 −276453 r 4 +2592 r 4 ε 2 +104976 r 3 − 26649 r 2 + 3888 r − 243
i.h 7776 r 6`1 6 ε 4 − 8 ε 2 + 1´`r 2 + 1´3
i where A(P(V1, N1, P1, L2, L3, MC , P2, N2, V6)) = − h 4`−ε 2 √ 3 + 1/4 √ 3´2`−27 + 12 ε 2 r 2 x 2 + 36 √ 3y r + 108 √ 3y x − 162 x 2 − 108 x r − 8 ε 2 √ 3r 2 y x + 108 x − 54 y 2 + 36 r − 8 r 2 y 2 − 4 r 4 y 4 + 24 r 4 x 2 y 2 − 24 r 4 y 2 x − 3 y 4 − 27 x 4 + 108 x 3 − 36 √ 3y + 108 y 2 x + 24 r 3 y 2 + 16 r 2 y √ 3x + 12 √ 3 y 3 x + 36 y x 3 √ 3 − 36 r x 3 − 8 r 4 √ 3 y 3 − 24 r 4 y √ 3x + 24 r 4 √ 3x 2 y − 36 r 4 x 2 − 12 r 4 y 2 + 72 r 4 x 3 − 36 r 4 x 4 − 12 r 3 √ 3x 2 y + 36 r y 2 − 36 r y 2 x − 16 r 2 y √ 3 + 48 r 2 x − 24 r 2 x 2 − 24 r 2 + 108 r x 2 + 12 r 3 y √ 3 + 4 r 3 √ 3 y 3 − 72 r y √ 3x + 36 r y x 2 √ 3 − 12 r 3 y 2 x + 12 r 2 ε 2 − 108 y x 2 √ 3 − 54 y 2 x 2 + 36 r 3 x 3 + 36 r 3 x − 72 r 3 x 2 − 12 √ 3 y 3 + 4 r √ 3y 3 + 4 ε 2 r 2 y 2 − 24 ε 2 r 2 x + 8 ε 2 √ 3r 2 y´i .h 3 r 2`− √ 3y − 3 + 3 x´`−y − √ 3 + √ 3x´`4 ε 2 − 1´2 i .
Case 12:
P (X2 ∈ N r P E (X1, ε)∪Γ r 1 (X1, ε), X1 ∈ Ts\T (y, ε)) = Z s 15 s 12 Z r 10 (x)
r 2 (x) + Z 1/2 s 15 Z r 12 (x) r 2 (x)
! A(P(V1, N1, Q1, L3, N3, N2, V6)) A(Tε) 2 dydx = − h 5184 r 8 ε 2 −71424 r 6 ε 2 +138240 r 5 ε 2 −73728 r 4 ε 2 −1053 r 12 +16230 r 10 −17856 r 9 −68908 r 8 +104448 r 7 +276688 r 6 −916608 r 5 + 1032192 r 4 − 516096 r 3 + 80128 r 2 + 12288 r − 1024
i.h 31104 r 4`1 6 ε 4 − 8 ε 2 + 1´i
where A(P(V1, N1, Q1, L3, N3, N2, V6)) = − h 2`−ε 2 √ 3 + 1/4 √ 3´2`−36 √ 3r 2 −216 √ 3r x+24 √ 3r 2 ε 2 −108 r 2 y−48 r 2 √ 3 y 2 + 72 r 2 √ 3x 2 +36 r 2 √ 3x+216 r y +432 y x−216 √ 3x 2 −72 √ 3 y 2 −36 √ 3+72 r 2 y x+144 √ 3x−48 y 3 +72 √ 3r +216 √ 3r x 2 + 72 √ 3y 2 r − 144 y + 36 x 4 r 2 √ 3 − 72 x 3 √ 3r + 36 x 3 r 3 √ 3 − 3 r 4 √ 3 y 4 + 54 r 4 x 2 y − 72 √ 3r 3 x 2 + 24 √ 3r 3 y 2 + 54 r 4 x 3 √ 3 − 432 y r x − 36 r 3 x 2 y + 24 y 3 r + 48 y 3 x − 36 r 2 y 3 − 36 x 4 √ 3 + 12 r 3 y 3 + 36 r 3 y − 18 r 4 y 2 √ 3x − 432 x 2 y − 72 √ 3y 2 r x − 12 r 3 √ 3x y 2 + 108 r 2 x 2 y − 108 r 2 x 3 √ 3 + 144 y 2 √ 3x − 9 r 4 √ 3y 2 − 54 r 4 y x + 36 r 3 √ 3x − 27 r 4 √ 3x 2 − 4 y 4 r 2 √ 3 − 18 r 4 y 3 + 36 r 2 y 2 √ 3x + 144 x 3 √ 3 − 72 y 2 √ 3x 2 + 48 ε 2 r 2 y − 27 r 4 √ 3x 4 + 24 r 2 y 3 x − 72 r 2 x 3 y − 4 y 4 √ 3 + 144 x 3 y + 216 x 2 r y + 18 r 4 √ 3 y 2 x 2 + 8 ε 2 √ 3r 2 y 2 + 24 ε 2 √ 3r 2 x 2 − 48 ε 2 √ 3r 2 x − 48 ε 2 r 2 y x´i .h 3 r 2`− √ 3y − 3 + 3 x´2`4 ε 2 − 1´2 i .
Case 13: P (X2 ∈ N r P E (X1, ε) ∪ Γ r 1 (X1, ε), X1 ∈ Ts \ T (y, ε)) = Z 1/2 s 15 Z r 10 (x) r 12 (x) A(P(V1, N1, Q1, L3, N3, N2, V6)) A(Tε) 2 dydx = h −13 r 13 −78 r 12 +42 r 11 +892 r 10 +64 r 9 ε 2 +220 r 9 −4952 r 8 +384 r 8 ε 2 −768 r 7 −3072 r 6 ε 2 +18048 r 6 −3136 r 5 −2048 r 5 ε 2 + 8192 r 4 ε 2 − 39296 r 4 + 20992 r 3 + 4096 r 3 ε 2 + 41984 r 2 − 8192 r 2 ε 2 − 48128 r + 14336
i.h 384`16 r 3 ε 4 − 8 r 3 ε 2 + r 3 + 96 r 2 ε 4 − 48 r 2 ε 2 + 6 r 2 + 192 r ε 4 − 96 r ε 2 + 12 r + 128 ε 4 − 64 ε 2 + 8´r 2 i where A(P(V1, N1, Q1, L3, N3, N2, V6)) = − h 2`−ε 2 √ 3 + 1/4 √ 3´2`−36 √ 3r 2 −216 √ 3r x+24 √ 3r 2 ε 2 −108 r 2 y−48 r 2 √ 3 y 2 + 72 r 2 √ 3x 2 +36 r 2 √ 3x+216 r y +432 y x−216 √ 3x 2 −72 √ 3 y 2 −36 √ 3+72 r 2 y x+144 √ 3x−48 y 3 +72 √ 3r +216 √ 3r x 2 + 72 √ 3y 2 r − 144 y + 36 x 4 r 2 √ 3 − 72 x 3 √ 3r + 36 x 3 r 3 √ 3 − 3 r 4 √ 3 y 4 + 54 r 4 x 2 y − 72 √ 3r 3 x 2 + 24 √ 3r 3 y 2 + 54 r 4 x 3 √ 3 − 432 y r x − 36 r 3 x 2 y + 24 y 3 r + 48 y 3 x − 36 r 2 y 3 − 36 x 4 √ 3 + 12 r 3 y 3 + 36 r 3 y − 18 r 4 y 2 √ 3x − 432 x 2 y − 72 √ 3y 2 r x − 12 r 3 √ 3x y 2 + 108 r 2 x 2 y − 108 r 2 x 3 √ 3 + 144 y 2 √ 3x − 9 r 4 √ 3y 2 − 54 r 4 y x + 36 r 3 √ 3x − 27 r 4 √ 3x 2 − 4 y 4 r 2 √ 3 − 18 r 4 y 3 + 36 r 2 y 2 √ 3x + 144 x 3 √ 3 − 72 y 2 √ 3x 2 + 48 ε 2 r 2 y − 27 r 4 √ 3x 4 + 24 r 2 y 3 x − 72 r 2 x 3 y − 4 y 4 √ 3 + 144 x 3 y + 216 x 2 r y + 18 r 4 √ 3 y 2 x 2 + 8 ε 2 √ 3r 2 y 2 + 24 ε 2 √ 3r 2 x 2 − 48 ε 2 √ 3r 2 x − 48 ε 2 r 2 y x´i .h 3 r 2`− √ 3y − 3 + 3 x´2`4 ε 2 − 1´2 i .
Case 14:
P (X2 ∈ N r P E (X1, ε) ∪ Γ r 1 (X1, ε), X1 ∈ Ts \ T (y, ε)) = ! A(P(V1, N1, Q1, L3, L4, Q2, N2, V6)) A(Tε) 2 dydx = − h −189 r 13 −1134 r 12 +297 r 11 +11718 r 10 +864 r 9 ε 2 +3672 r 9 −66096 r 8 +5184 r 8 ε 2 +2592 r 7 ε 2 −12932 r 7 −32832 r 6 ε 2 +248616 r 6 − 30448 r 5 −33408 r 5 ε 2 +76032 r 4 ε 2 −551584 r 4 +273152 r 3 +55296 r 3 ε 2 +595456 r 2 −73728 r 2 ε 2 −668160 r+197632
i.h 5184 r 2 r 3 + 6 r 2 + 12 r + 8´`16 ε 4 − 8 ε 2 + 1´i
where A(P(V1, N1, Q1, L3, L4, Q2, N2, V6)) = − h 8`−ε 2 √ 3 + 1/4 √ 3´2`−18 √ 3r 2 −90 √ 3r x+6 √ 3r 2 ε 2 −54 r 2 y−15 r 2 √ 3 y 2 + 27 r 2 √ 3x 2 + 18 r 2 √ 3x + 54 r y + 54 y x − 63 √ 3x 2 − 9 √ 3 y 2 − 18 √ 3 + 54 r 2 y x + 54 √ 3x − 24 y 3 + 36 √ 3r + 72 √ 3r x 2 − 18 y + 9 x 4 r 2 √ 3 − 18 x 3 √ 3r + 18 x 3 r 3 √ 3 − r 4 √ 3 y 4 + 18 r 4 x 2 y − 36 √ 3r 3 x 2 + 12 √ 3r 3 y 2 + 18 r 4 x 3 √ 3 − 72 y r x − 18 r 3 x 2 y − 6 y 3 r + 36 y 3 x − 9 x 4 √ 3 + 6 r 3 y 3 + 18 r 3 y − 6 r 4 y 2 √ 3x − 72 x 2 y − 6 √ 3r 2 y 2 x 2 + 6 √ 3 y 2 r x − 6 r 3 √ 3x y 2 − 36 r 2 x 3 √ 3 + 36 y 2 √ 3x − 3 r 4 √ 3 y 2 −18 r 4 y x+18 r 3 √ 3x−9 r 4 √ 3x 2 + y 4 r 2 √ 3−6 r 4 y 3 +12 r 2 y 2 √ 3x+36 x 3 √ 3−30 y 2 √ 3x 2 +12 ε 2 r 2 y −9 r 4 √ 3x 4 − 5 y 4 √ 3+36 x 3 y+18 x 2 r y+6 r 4 √ 3 y 2 x 2 +2 ε 2 √ 3r 2 y 2 +6 ε 2 √ 3r 2 x 2 −12 ε 2 √ 3r 2 x−12 ε 2 r 2 y x´i .h 3 r 2`− √ 3y − 3 + 3 x´2`4 ε 2 − 1´2 i .
Case 15: P (X2 ∈ N r P E (X1, ε) ∪ Γ r 1 (X1, ε), X1 ∈ Ts \ T (y, ε)) = Z s 11 s 13 Z r 2 (x)
r 8 (x) + Z s 12 s 11 Z r 2 (x) r 3 (x) + Z s 9 s 12 Z r 6 (x) r 3 (x)
! A(P(V1, N1, Q1, L3, MC , P2, N2, V6)) A(Tε) 2 dydx = h 4536 r 12 ε 2 −11753 r 12 −13824 r 11 ε 2 +69120 r 11 +23976 r 10 ε 2 −186683 r 10 −34560 r 9 ε 2 +305664 r 9 +35496 r 8 ε 2 −346171 r 8 − 27648 r 7 ε 2 +302592 r 7 +17208 r 6 ε 2 −220201 r 6 −6912 r 5 ε 2 +135936 r 5 +1152 r 4 ε 2 −69760 r 4 +28416 r 3 −8384 r 2 +1536 r−128
i. h 1944 r 6`r2 + 1´3`16 ε 4 − 8 ε 2 + 1´i
where A(P(V1, N1, Q1, L3, MC , P2, N2, V6)) = − h 4`−ε 2 √ 3 + 1/4 √ 3´2`−24 √ 3r 2 − 108 √ 3r x + 12 √ 3r 2 ε 2 − 66 r 2 y − 26 r 2 √ 3 y 2 + 30 r 2 √ 3x 2 + 30 r 2 √ 3x + 108 r y + 216 y x − 108 √ 3x 2 − 36 √ 3 y 2 − 18 √ 3 + 48 r 2 y x + 72 √ 3x − 24 y 3 + 36 √ 3r + 108
√ 3r x 2 + 36 √ 3y 2 r − 72 y + 18 x 4 r 2 √ 3 − 36 x 3 √ 3r + 36 x 3 r 3 √ 3 − 3 r 4 √ 3 y 4 + 54 r 4 x 2 y − 72 √ 3r 3 x 2 + 24 √ 3r 3 y 2 + 54 r 4 x 3 √ 3 − 216 y r x − 36 r 3 x 2 y + 12 y 3 r + 24 y 3 x − 18 r 2 y 3 − 18 x 4 √ 3 + 12 r 3 y 3 + 36 r 3 y − 18 r 4 y 2 √ 3x − 216 x 2 y − 36 √ 3y 2 r x − 12 r 3 √ 3x y 2 + 54 r 2 x 2 y − 54 r 2 x 3 √ 3 + 72 y 2 √ 3x − 9 r 4 √ 3y 2 − 54 r 4 y x + 36 r 3 √ 3x − 27 r 4 √ 3x 2 − 2 y 4 r 2 √ 3 − 18 r 4 y 3 + 18 r 2 y 2 √ 3x + 72 x 3 √ 3 − 36 y 2 √ 3x 2 + 24 ε 2 r 2 y − 27 r 4 √ 3x 4 + 12 r 2 y 3 x − 36 r 2 x 3 y − 2 y 4 √ 3 + 72 x 3 y + 108 x 2 r y + 18 r 4 √ 3 y 2 x 2 + 4 ε 2 √ 3r 2 y 2 + 12 ε 2 √ 3r 2 x 2 − 24 ε 2 √ 3r 2 x − 24 ε 2 r 2 y x´i .h 3 r 2`√ 3y + 3 − 3 x´2`4 ε 2 − 1´2 i .
Case 16: P (X2 ∈ N r P E (X1, ε)∪Γ r 1 (X1, ε), X1 ∈ Ts\T (y, ε)) = Z s 9 s 12 Z r 2 (x)
r 6 (x) + Z 1/2 s 9 Z r 2 (x) r 3 (x)
! A(P(V1, N1, Q1, L3, N3, N2, V6)) A(Tε) 2 dydx = h −147 r 12 +55296 r 5 ε 2 −12288 r+351872 r 4 −142080 r 7 +1024−73728 r 6 ε 2 −9216 r 4 ε 2 +576 r 8 ε 2 −1152 r 11 +7018 r 10 −20352 r 9 + 51188 r 8 + 64000 r 2 − 190464 r 3 − 414720 r 5 + 27648 r 7 ε 2 + 305920 r 6
i.h 15552 r 6`1 6 ε 4 − 8 ε 2 + 1´i
where A(P(V1, N1, Q1, L3, N3, N2, V6)) = − h 2`−ε 2 √ 3 + 1/4 √ 3´2`−36 √ 3r 2 −216 √ 3r x+24 √ 3r 2 ε 2 −108 r 2 y−48 r 2 √ 3 y 2 + 72 r 2 √ 3x 2 +36 r 2 √ 3x+216 r y +432 y x−216 √ 3x 2 −72 √ 3 y 2 −36 √ 3+72 r 2 y x+144 √ 3x−48 y 3 +72 √ 3r +216 √ 3r x 2 + 72 √ 3y 2 r − 144 y + 36 x 4 r 2 √ 3 − 72 x 3 √ 3r + 36 x 3 r 3 √ 3 − 3 r 4 √ 3 y 4 + 54 r 4 x 2 y − 72 √ 3r 3 x 2 + 24 √ 3r 3 y 2 + 54 r 4 x 3 √ 3 − 432 y r x − 36 r 3 x 2 y + 24 y 3 r + 48 y 3 x − 36 r 2 y 3 − 36 x 4 √ 3 + 12 r 3 y 3 + 36 r 3 y − 18 r 4 y 2 √ 3x − 432 x 2 y − 72 √ 3y 2 r x − 12 r 3 √ 3x y 2 + 108 r 2 x 2 y − 108 r 2 x 3 √ 3 + 144 y 2 √ 3x − 9 r 4 √ 3y 2 − 54 r 4 y x + 36 r 3 √ 3x − 27 r 4 √ 3x 2 − 4 y 4 r 2 √ 3 − 18 r 4 y 3 + 36 r 2 y 2 √ 3x + 144 x 3 √ 3 − 72 y 2 √ 3x 2 + 48 ε 2 r 2 y − 27 r 4 √ 3x 4 + 24 r 2 y 3 x − 72 r 2 x 3 y − 4 y 4 √ 3 + 144 x 3 y + 216 x 2 r y + 18 r 4 √ 3 y 2 x 2 + 8 ε 2 √ 3r 2 y 2 + 24 ε 2 √ 3r 2 x 2 − 48 ε 2 √ 3r 2 x − 48 ε 2 r 2 y x´i .h 3 r 2`√ 3y + 3 − 3 x´2`4 ε 2 − 1´2 i .
Adding up the P (X 2 ∈ N r P E (X 1 , ε) ∪ Γ r 1 (X 1 , ε), X 1 ∈ T s \ T (y 1 , ε)) values in the 16 possible cases above, and multiplying by 6, we get for r ∈ [1, 4/3), µ S or (r, ε) = (47 r 6 − 195 r 5 + 576 r 4 ε 4 + 860 r 4 − 288 r 4 ε 2 − 864 r 3 ε 2 − 846 r 3 + 1728 r 3 ε 4 − 108 r 2 + 1152 r 2 ε 4 − 576 r 2 ε 2 + 720 r − 256) / (108 r 2 (2 + r) (16 ε 4 − 8 ε 2 + 1) (r + 1)).
The µ S or (r, ε) values for the other intervals can be calculated similarly. For r = ∞, it is trivial to see that µ(r, ε) = 1; in fact, for fixed ε > 0, µ(r, ε) = 1 for r ≥ √ 3/(2 ε).
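As an added consistency check (not in the original text), the segregation mean above should reduce to the null-case mean of Appendix 4 when ε = 0. The sketch below verifies this symbolically for r ∈ [1, 4/3), using the two closed forms as transcribed here.

```python
import sympy as sp

r, eps = sp.symbols("r eps", positive=True)

# mu^S_or(r, eps) for r in [1, 4/3), as derived above
mu_S_or = ((47*r**6 - 195*r**5 + 576*r**4*eps**4 + 860*r**4 - 288*r**4*eps**2
            - 864*r**3*eps**2 - 846*r**3 + 1728*r**3*eps**4 - 108*r**2
            + 1152*r**2*eps**4 - 576*r**2*eps**2 + 720*r - 256)
           / (108*r**2*(2 + r)*(16*eps**4 - 8*eps**2 + 1)*(r + 1)))

# null-case mean mu_or(r) for r in [1, 4/3) from Appendix 4
mu_or_null = ((47*r**6 - 195*r**5 + 860*r**4 - 846*r**3 - 108*r**2 + 720*r - 256)
              / (108*r**2*(r + 2)*(r + 1)))

# setting eps = 0 in the segregation mean recovers the null mean
assert sp.simplify(mu_S_or.subs(eps, 0) - mu_or_null) == 0
```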
Remark 7.1. The derivations of µ A and (r, ε) and µ A or (r, ε) are similar to the segregation case.
Appendix 7: Proof of Corollary 6.1:
Recall that S and n (r) = ρ and I,n (r) is the relative edge density of the AND-underlying graph for the multiple triangle case. Then the expectation of S and n (r) is E[S and n (r)] = (2/(n (n − 1))) Σ_{i<j} E[h and ij (r)] = E[h and 12 (r)] = P (X 2 ∈ N r P E (X 1 ) ∩ Γ r 1 (X 1 )) = µ and (r).
But, by definition of N r P E (·) and Γ r 1 (·), if X 1 and X 2 are in different triangles, then P (X 2 ∈ N r P E (X 1 ) ∩ Γ r 1 (X 1 )) = 0. So, by the law of total probability,
µ and (r) := P (X 2 ∈ N r P E (X 1 ) ∩ Γ r 1 (X 1 ))
= Σ_{i=1}^{Jm} P (X 2 ∈ N r P E (X 1 ) ∩ Γ r 1 (X 1 ) | {X 1 , X 2 } ⊂ T i ) P ({X 1 , X 2 } ⊂ T i )
= Σ_{i=1}^{Jm} µ and (r) P ({X 1 , X 2 } ⊂ T i ) (since P (X 2 ∈ N r P E (X 1 ) ∩ Γ r 1 (X 1 ) | {X 1 , X 2 } ⊂ T i ) = µ and (r))
= µ and (r) Σ_{i=1}^{Jm} (A(T i ) / Σ_{j=1}^{Jm} A(T j )) 2 (since P ({X 1 , X 2 } ⊂ T i ) = (A(T i ) / Σ_{j=1}^{Jm} A(T j )) 2 )
= µ and (r) Σ_{i=1}^{Jm} w_i^2.
where µ and (r) is given by Equation (8).
Likewise, we get µ or (r) = µ or (r) Σ_{i=1}^{Jm} w_i^2 , where µ or (r) is given by Equation (9).
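The weight computation behind this scaling is straightforward; the sketch below (illustrative only, with a hypothetical pair of triangles as input) computes w i = A(T i )/Σ_{j} A(T j ) from vertex coordinates and applies the Σ_{i} w_i^2 factor to a single-triangle mean.

```python
def tri_area(p, q, s):
    """Area of a triangle from its three vertices."""
    return abs((q[0] - p[0]) * (s[1] - p[1]) - (s[0] - p[0]) * (q[1] - p[1])) / 2.0

def multiple_triangle_mean(mu_single, triangles):
    """Scale the single-triangle mean mu(r) by sum_i w_i^2 with w_i = A(T_i)/sum_j A(T_j)."""
    areas = [tri_area(*t) for t in triangles]
    total = sum(areas)
    return mu_single * sum((a / total) ** 2 for a in areas)

# Hypothetical example: two triangles of equal area give sum_i w_i^2 = 1/2,
# so the multiple-triangle mean is half of the single-triangle value.
tris = [((0, 0), (1, 0), (0, 1)), ((1, 0), (1, 1), (0, 1))]
print(multiple_triangle_mean(0.5, tris))  # -> 0.25
```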
Furthermore, the asymptotic variance is ν and (r) = E[h and 12 (r) h and 13 (r)] − E[h and 12 (r)] E[h and 13 (r)] = P ({X 2 , X 3 } ⊂ N r P E (X 1 ) ∩ Γ r 1 (X 1 )) − (µ and (r)) 2 .
Then for J m > 1, we have P ({X 2 , X 3 } ⊂ N r P E (X 1 ) ∩ Γ r 1 (X 1 )) =
Figure 23: The vertices for N r P E (x 1 , ε) ∩ Γ r 1 (x 1 , ε) regions for x 1 ∈ T s , in addition to the ones given in Figure 24, because of the restrictive nature of the alternatives.
Figure 24: An illustration of the vertices for possible types of N r P E (x 1 ) ∩ Γ r 1 (x 1 ) for x 1 ∈ T s .
Figure 25: Prototype regions R i for various types of N r P E (x 1 ) ∩ Γ r 1 (x 1 ) and the corresponding points whose x-coordinates are s k values.
Figure 26: Prototype regions R i for various types of N r P E (x 1 ) ∩ Γ r 1 (x 1 ) and the corresponding points whose x-coordinates are s k values.
Let A_ij be the event that {X_i X_j ∈ A} = {X_j ∈ N(X_i)}; then h^and_ij = I(A_ij) · I(A_ji) = I(A_ij ∩ A_ji). In particular, h^and_12 = I(A_12) · I(A_21) = I(A_12 ∩ A_21). Note that E[ρ_and(D)] only depends on F and N(·). Then

0 ≤ E[ρ_and(D)] = (2 / (n(n − 1))) Σ_{i<j} E[h^and_ij] = E[h^and_12] = µ_and(N).

Furthermore, E[(h^and_12)^2] = E[(I(A_12 ∩ A_21))^2] = E[I(A_12 ∩ A_21)] = µ_and(N), so Var[h^and_12] = µ_and(N) − (µ_and(N))^2. Similarly, E[h^and_13] = µ_and(N), and E[h^and_12 · h^and_13] = E[I(A_12 ∩ A_21) I(A_13 ∩ A_31)] = E[I(A_12 ∩ A_21 ∩ A_13 ∩ A_31)].
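The moment identities above hold for any proximity map N(·). As a sanity check, the sketch below (our own; it uses a toy proximity map whose radius depends on the first coordinate of the point, not the N^r_PE proximity region of the paper) estimates µ_and(N), Var[h^and_12], and E[h^and_12 h^and_13] by simulation on the unit square.

import random

def in_N(x, y):
    # Toy proximity map: a disk around x whose radius grows with x's first coordinate.
    rad = 0.2 * (1.0 + x[0])
    return (x[0] - y[0])**2 + (x[1] - y[1])**2 <= rad**2

def h_and(xi, xj):
    # h^and_ij = I(A_ij) * I(A_ji), with A_ij = {X_j in N(X_i)}.
    return int(in_N(xi, xj) and in_N(xj, xi))

random.seed(0)
trials = 100_000
s1 = s12 = 0
for _ in range(trials):
    x1 = (random.random(), random.random())
    x2 = (random.random(), random.random())
    x3 = (random.random(), random.random())
    h12, h13 = h_and(x1, x2), h_and(x1, x3)
    s1 += h12
    s12 += h12 * h13

mu = s1 / trials
print("mu_and(N)      ~", mu)
print("Var[h^and_12]  ~", mu - mu**2)     # since (h^and_12)^2 = h^and_12
print("E[h12 h13]     ~", s12 / trials)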
Figure 4: Depicted are ρ^and_n(2) approximately ∼ N(11/24, 58901/(362880 n)) for n = 10, 20, 100 (left to right). Histograms are based on 1000 Monte Carlo replicates. Solid lines are the corresponding normal densities. Notice that the vertical axes are differently scaled.
Figure 5: Depicted are ρ^or_n(2) approximately ∼ N(19/24, 13189/(120960 n)) for n = 10, 20, 100 (left to right). Histograms are based on 1000 Monte Carlo replicates. Solid lines are the corresponding normal densities. Notice that the vertical axes are differently scaled.
Figure 6: Depicted are the histograms for 10000 Monte Carlo replicates of ρ^and_10(1.05) (left) and ρ^and_10(5) (right), indicating severe small sample skewness for extreme values of r. Notice that the vertical axes are differently scaled.
Figure 7: Depicted are the histograms for 10000 Monte Carlo replicates of ρ^or_10(1) (left) and ρ^or_10(5) (right), indicating severe small sample skewness for extreme values of r. Notice that the vertical axes are differently scaled.
Figure 8: Pitman asymptotic efficiency against segregation (left) and association (right) as a function of r. Some values of note: PAE_S(r = 1) = 160/7, PAE^and_S(r = 1) = 4000/17, PAE^or_S(r = 1) = 160/9, lim_{r→∞} PAE_S(r) = lim_{r→∞} PAE^and_S(r) = lim_{r→∞} PAE^or_S(r) = ∞, and PAE^and_S(r) has a local supremum at ≈ 1.35. Also PAE_A(r = 1) = 0, PAE^and_A(r = 1) = PAE^or_A(r = 1) = ∞, lim_{r→∞} PAE_A(r) = lim_{r→∞} PAE^and_A(r) = lim_{r→∞} PAE^or_A(r) = 0, argsup_{r∈[1,∞]} PAE_A(r) ≈ 1.1, and PAE^and_A(r) has a local supremum at r = 1.5 and a local infimum at r ≈ 1.2.

Remark 4.2. Hodges-Lehmann Asymptotic Efficiency: Hodges-Lehmann asymptotic efficiency (HLAE) (Hodges and Lehmann (1956)) is given by HLAE(ρ^and_n(r), ε) := (µ_and(r, ε) − µ_and(r))^2 / ν_and(r, ε).
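The HLAE of Remark 4.2 is a simple ratio of a squared mean shift to an asymptotic variance; a short Python helper (ours, with argument names of our choosing and purely hypothetical example inputs) makes the dependence explicit.

def hlae(mu_alt, mu_null, nu_alt):
    # Hodges-Lehmann asymptotic efficiency: (mu(r, eps) - mu(r))**2 / nu(r, eps).
    return (mu_alt - mu_null) ** 2 / nu_alt

print(hlae(mu_alt=0.52, mu_null=0.46, nu_alt=0.03))   # hypothetical inputs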
presents a Monte Carlo investigation of empirical power based on Monte Carlo critical values against H as a function of r for n = 10 with 1000 replicates. The corresponding empirical power estimates are given in
Figure 9: Two Monte Carlo experiments against the segregation alternatives H^S_{√3/8}. Depicted are kernel density estimates of ρ^and_n(1.1) for n = 10 (top left) and n = 100 (top right) and ρ^or_n(1.1) for n = 10 (bottom left) and n = 100 (bottom right) under the null (solid) and alternative (dashed) cases.
Figure 10: Empirical power estimates based on Monte Carlo critical values as a function of r against segregation alternatives with the AND-underlying case (top two) and OR-underlying case (bottom two); in both cases, we have H^S_{√3/8} (left) and H^S_{√3/4} (right) for n = 10 and N_mc = 1000 Monte Carlo replicates.
Figure 11: The empirical size (circles joined with solid lines) and power estimates (triangles with dotted lines) based on the asymptotic critical value against segregation alternatives in the AND-underlying case (top two) and the OR-underlying case (bottom two); in both cases, H^S_{√3/8} (left) and H^S_{√3/4} (right) as a function of r, for n = 10 and N_mc = 10000.
Figure 12: Two Monte Carlo experiments against the association alternative H^A_{√3/12}. Depicted are kernel density estimates of ρ^and_n(1.1) for n = 10 (top left) and n = 100 (top right) and ρ^or_n(1.1) for n = 10 (bottom left) and n = 100 (bottom right) under the null (solid) and alternative (dashed).
Figure 13: Empirical power estimates based on Monte Carlo critical values against the association alternatives with the AND-underlying case (top two) and OR-underlying case (bottom two); in both cases, H^A_{√3/12} (left) and H^A_{5√3/24} (right) as a function of r, for n = 10 and N_mc = 1000.
Figure 14: The empirical size (circles joined with solid lines) and power estimates (triangles with dotted lines) based on the asymptotic critical value against association alternatives in the AND-underlying case (top two) and the OR-underlying case (bottom two); in both cases, H^A_{√3/12} (left) and H^A_{5√3/24} (right) as a function of r, for n = 10 and N_mc = 10000.
Figure 15: Realization of segregation (left), H_o (middle), and association (right) for |Y_m| = 10 and n = 100.
Figure 16: Realization of segregation (left), H_o (middle), and association (right) for |Y_m| = 10 and n = 1000.
Figure 17: The empirical size (circles joined with solid lines) and power estimates (triangles with dotted lines) based on the asymptotic critical value for the AND-underlying case (top) and the OR-underlying case (bottom) in the multiple triangle case; in both cases, H^S_{√3/8} (left) and H^A_{√3/12} (right) as a function of r, for n = 100.
Figure 18: The empirical size (circles joined with solid lines) and power estimates (triangles with dotted lines) based on the asymptotic critical value for the AND-underlying case (top) and the OR-underlying case (bottom) in the multiple triangle case; in both cases, H^S_{√3/8} (left) and H^A_{√3/12} (right) as a function of r, for n = 500.
Figure 19: Var[h^and_12(r)] (left) and Var[h^or_12(r)] (right) as a function of r for r ∈ [1, 5].

APPENDIX

Appendix 1: The Variance of Relative Edge Density for the AND-Underlying Graph Version:
… / (r^8 (r + 1)^2). See Figure 19. Note that Var_and(r = 1) = 0 and lim_{r→∞} Var_and(r) = 0 (at rate O(r^{−2})), and argsup_{r∈[1,∞)} Var_and(r) ≈ 2.1126 with sup Var_and(r) = .25. Moreover, ν_and(r) := Cov[h^and_12(r), h^and_13(r)] is piecewise in r over intervals I_1, …, I_11 = [2, ∞). See Figure 20. Note that Cov_and(r = 1) = 0 and lim_{r→∞} ν_and(r) = 0 (at rate O(r^{−2})), and argsup_{r∈[1,∞)} ν_and(r) ≈ 2.69 with sup ν_and(r) ≈ .0537.

Appendix 2: The Variance of Relative Edge Density for the OR-Underlying Graph Version:
Figure 20: ν_and(r) = Cov[h^and_12(r), h^and_13(r)] (left) and ν_or(r) = Cov[h^or_12(r), h^or_13(r)] (right) as a function of r for r ∈ [1, 5].
2 r 25 − 32 r 24 − 129 r 23 + 236 r 22 + 4157 r 21 − 15610 r 20 + 21289 r 19 + 67536 r 18 − 511355 r 17 + 1161830 r 16 − 634128 r 15 − 3001568 r 14 + 9512164 r 13 − 11014136 r 12 + 2344968 r 11 + 7126240 r 10 − 13850504 r 9 + 14466592 r 8 − 3823216 r 7 − 4018976 r 6 + 5155776 r 5 − 4633984 r 4 + 1959808 r 3 − 244480 r 2 − 3584 r − 1024) 2 − 1)(r + 2) 3 (r − 1)(r + 1) 2 r 10 (2 r 24 − 34 r 23 − 101 r 22 + 433 r 21 + 5400 r 20 − 26982 r 19 + 23049 r 18 + 166787 r 17 − 717366 r 16 + 1196092 r 15 + 89468 r 14 − 5130844 r 13 + 12748688 r 12 − 11274744 r 11 − 12243496 r 10 + 33980568 r 9 −14886656 r 8 −19910592 r 7 +20667776 r 6 −1262208 r 5 −5402752 r 4 +2217088 r 3 −235776 r 2 −2560 r−8 − 48 r 7 − 648 r 6 + 396 r 5 + 214 r 4 − 190 r 3 + 39 r 2 − 4 , I 11 = [2, ∞). SeeFigure 20. Note that Cov or (r = 1) = 1/3240 and lim r→∞ ν or (r) = 0 (at rate O(r −6 )), and argsup r∈[1,∞) ν or (r) ≈ 1.765 with sup ν or (r) ≈ .0318.Appendix 3: Derivation of µ and (r) and ν and (r) under the Null Case
Figure 21: The cases for relative position of ℓ_s(r, x) with various r values. These are the prototypes for various types of N^r_PE(x_1).
Z r 6 (x) 0 !
60A(P(G1, G2, Q1, P2, M3, G6)) A(T (Y3)) 2 dydx = 324 r 11 − 1620 r 10 − 618 r 9 + 4626 r 8 + 990 r 7 − 2454 r 6 + 2703 r 5 − 5571 r 4 − 3827 r 3 + 1455 r 2 + 3072 r + 1024 7776 (r + 1) 3 r 6
M1, P1, P2, M3, G6)) A(T (Y3)) 2 dydx = h 137472 r 18 − 952704 r 17 + 2792712 r 16 − 5116608 r 15 + 7057828 r 14 − 7725792 r 13 + 7022682 r 12 − 5484816 r 11 + 3631995 r 10 − 2213712 r 9 + 1213271 r 8 − 578976 r 7 + 292518 r 6 − 101952 r 5 + 36612 r 4 − 11664 r 3 + 3051 r 2 − 1296 r + 243i.
2 + r)`2369 r 11 − 11342 r 10 + 29934 r 9 − 50340 r 8 + 54056 r 7 − 51824 r 6 + 48320 r 5 − 20864 r 4 − 640 r 3 − 1280 r 2 + 512 r + 1024´i.h 15552 r 6
195456 r 6 + 324 r 11 − 76720 r 7 − 801792 r 2 + 217856 r + 946432 r 3 − 239904 r 5 − 275328 r 4 + 39408 r 8 − 11849 r 9 31104 r 3
)`1080 r 16 + 1080 r 15 − 17820 r 14 − 540 r 13 + 65394 r 12 − 46926 r 11 + 105435 r 10 − 261765 r 9 + 229286 r 8 − 180586 r 7 + 101638 r 6 + 40774 r 5 − 46112 r 4 + 24448 r 3 − 20224 r 2 + 10496 r − 6144´i .h 10368 r 3`2 r 2 + 1´3
Z
X1), X1 ∈ Ts) = ℓam(x) r 10 (x) ! A(P(L1, L2, Q1, Q2, L5, L6)) A(T (Y3)) 2 dydx = − h`1 35 r 11 + 675 r 10 − 1350 r 9 − 9450 r 8 + 702 r 7 + 39150 r 6 + 24272 r 5 − 47432 r 4 − 135040 r 3 + 57088 r 2 + 204800 r− 134144´(r − 1)
19 − 122472 r 18 + 139968 r 17 + 524880 r 16 − 553095 r 15 − 595971 r 14 + 368826 r 13 − 724758 r 12 − 543876 r 11 + 1416996 r 10 + 1646470 r 9 + 92870 r 8 + 523048 r 7 − 768368 r 6 − 1729902 r 5 − 1434990 r 4 + 122185 r 3 + 941941 r 2 + 573440 r + 114688
PZ r 6
6({X2, X3} ⊂ N r P E (X1) ∩ Γ r 1 (X1), X1 ∈ Ts) = (x) r 3 (x) ! A(P(G1, M1, L2, Q1, P2, M3, G6)) 2 A(T (Y3)) 3 dydx = − h 32768 − 409264128 r 7 + 1455989508 r 12 + 680709729 r 8 − 4423680 r 3 + 155509 r 2 + 22889801 r 4 + 202936917 r 6 + 6011901 r 20 + 1060982949 r 16 − 614739456 r 17 + 240330993 r 18 − 56097792 r 19 − 77783040 r 5 − 999857664 r 9 + 1299257316 r 10 − 1461851136 r 11 − 1407624192 r 13 + 1414729905 r 14 − 1352392704 r 15
P
17496 r + 5003898912 r 28 + 31646646384 r 26 + 110098944 r 30 − 1090803456 r 29 − 14630751360 r 27 + 66339 r 2 − 99072645696 r 23 + 79269457632 r 24 + 66073158 r 8 − 4870743552 r 13 − 168073488 r 9 + 535086 r 4 − 262440 r 3 − 1737936 r 5 − 18592416 r 7 − 107383563504 r 21 − 41219053272 r 17 + 58981892347 r 18 − 78265758888 r 19 + 95887286866 r 20 + 109053166552 r 22 + 5500548 r 6 + 466565130 r 10 − 1070573040 r 11 + 2380992104 r 12 + 9191633420 r 14 − 16312513248 r 15 + 26801184917 r 16 − 54759787776 r 25 i.h 1399680`r 2 + 1´5`2 r 2 + 1´5 r 10 i where A(P(G1, M1, P1, P2, M3, G6)) = − √ 3(−4 r 3 √ 3y−12 r 3 x+3 r 2 +6 r 4 √ 3y x+9 r 4 x 2 +3 r 4 y 2 +y 2 +2 √ 3y x+3 x 2 ) ({X2, X3} ⊂ N r P E (X1) ∩ Γ r 1 (X1), X1 ∈ Ts) G1, M1, L2, Q1, P2, M3, G6)) 2 A(T (Y3)) 3 dydx = h 4`162576 r 22 − 1083456 r 21 + 3368016 r 20 − 6969888 r 19 + 11578088 r 18 − 15664080 r 17 + 18796852 r 16 − 19984824 r 15 +
+ 811008 r 2 + 329205504 r 8 − 582626304 r 13 − 489563136 r 9 − 65536 r 4 − 168708096 r 7 − 57883680 r 17 + 18009258 r 18 − 3623400 r 19 + 352563 r 20 + 41502720 r 6 + 659111904 r 10 − 761846400 r 11 + 725173376 r 12 + 409477188 r 14 − 254829600 r 15 + 135968852 r 16 i.h 8398080 r 10
14 − 392112 r 13 + 680784 r 12 − 1040256 r 11 + 1385628 r 10 − 1337760 r 9 + 816224 r 8 − 253824 r 7 + 469088 r 6 − 1029888 r 5 + 820992 r 4 − 488448 r 3 + 190976 r 2 + 49152 r + 8192´`−12 r + 7 r 2 + 4´2 i.h 8398080 r 10 i where A(P(G1, M1, L2, Q1, N3, MC , M3, G6)
M1, L2, Q1, N3, L4, L5, M3, G6)) 2 A(T (Y3)) 3 dydx = h 4423680 − 4627454976 r 6 + 511684992 r 11 + 2163142656 r 7 − 660127744 r 2 − 31555584 r + 3534520320 r 3 + 7647989760 r 5 + 7785504 r 15 − 1313880 r 16 + 19683 r 18 − 7240624128 r 4 − 1511047552 r 8 + 1204122240 r 9 − 796453824 r 10 − 282583320 r 12 + 107804736 r 13 − 30362052 r 14 i.h 16796160 r 6 i where A(P(G1, M1, L2, Q1, N3, L4, L5, M3, G6)
G1, M1, L2, Q1, Q2, L5, M3, G6)) 2 A(T (Y3)) 3 dydx = − h (r − 1)`−1474560 + 8847360 r + 111456 r 26 + 111456 r 27 − 27738112 r 2 + 23311152 r 23 − 167184 r 24 − 808889416 r 8 − 2228253688 r 13 +366739256 r 9 −207619072 r 4 +98557952 r 3 +397199360 r 5 +802401664 r 7 −34733448 r 21 −624736557 r 17 + 400615470 r 18 − 134938386 r 19 + 39014136 r 20 − 18026064 r 22 − 640058432 r 6 + 407655352 r 10 − 1227078728 r 11 + 1996721576 r 12 + 2033409092 r 14 − 1681870468 r 15 + 1064030499 r 16 − 2842128 r 25´i .h 1866240`2 r 2 + 1´5 r 6 i where A(P(G1, M1, L2, Q1, Q2, L5, M3, G6)
r 3
3(x) ! A(P(A, M1, L2, L3, MC , M3)) A(T (Y3)) 2 dydx = − (r − 1)`1817 r 7 − 7807 r 6 + 14157 r 5 − 14067 r 4 + 7893 r 3 − 2475 r 2 + 405 r − 278 64 r 6 where A(P(A, M1, L2, L3, MC, M3))
P(A, N1, Q1, G3, M2, MC , P2, N2)) A(T (Y3)) 2 dydx = − 81 r 9 − 189 r 8 + 561 r 7 − 45 r 6 − 1894 r 5 − 18 r 4 + 1912 r 3 + 224 r 2 − 384 r − 128 1296 (r + 1) 3 r 4
N1, P1, L2, L3, L4, L5, P2, N2)) A(T (Y3)) 2 dydx = − h 355328 r 18 − 2204160 r 17 + 6591792 r 16 − 13254912 r 15 + 20639832 r 14 − 26417664 r 13 + 28578916 r 12 − 26760576 r 11 + 21960774 r 10 − 15877152 r 9 + 10180620 r 8 − 5753232 r 7 + 2856483 r 6 − 1222128 r 5 + 438777 r 4 − 128304 r 3 + 28107 r 2 − 3888 r + 243
P(A, N1, Q1, G3, M2, N3, N2)) A(T (Y3)) 2 dydx = − 1536 − 6528 r 2 + 133834 r 8 − 48240 r 9 + 95616 r 4 − 20736 r 3 − 158976 r 5 − 200064 r 7 + 196680 r 6 + 7107 r 10 15552 r 4
Z
X1), X1 ∈ Ts) = Z s 14 s 10 Z r 10 (x) r 12 (x) r 3 (x) ! A(P(A, N1, Q1, L3, N3, N2)) A(T (Y3)) 2 dydx = h 1024 − 12288 r + 295680 r 7 + 1053 r 12 − 197140 r 8 + 626688 r 3 − 100864 r 2 − 1294848 r 4 − 686528 r 6 + 1282560 r 5 + 114336 r 9 − 30930 r 10 i.h 31104 r 4
− 1)`1512 r 17 + 1512 r 16 − 16740 r 15 + 540 r 14 + 84078 r 13 − 83538 r 12 − 164835 r 11 + 401085 r 10 − 487872 r 9 + 535728 r 8 − 463124 r 7 + 335596 r 6 − 197440 r 5 + 64640 r 4 − 7936 r 3 − 1792 r 2 + 5632 r − 512´i
Figure 26): Case 1:
119155 r 11 − 845345 r 10 + 2724777 r 9 − 5206743 r 8 + 6475257 r 7 − 5454855 r 6 + 3155193 r 5 − 1249479 r 4 + 332181 r 3 − 56619 r 2 + 5589 r − 243´i
Z r 6 (x) 0 !
60X1), X1 ∈ Ts) = A(P(A, N1, Q1, G3, M2, MC, P2, N2)) 2 A(T (Y3)) 3 dydx = − h 19683 r 15 − 59049 r 14 + 83106 r 13 + 167670 r 12 − 211626 r 11 + 344466 r 10 − 142614 r 9 − 2573586 r 8 − 128853 r 7 + 3465675 r 6 + 1103824 r 5 − 1473304 r 4 − 730880 r 3 + 107776 r 2 + 158720 r + 31744i.h 1049760 (r + 1) 5 r 6
367416 r + 60475010560 r 28 + 437704472832 r 26 + 1444872192 r 30 − 13250101248 r 29 − 185909870592 r 27 + 4148739 r 2 − 2027754648576 r 23 + 1397612375040 r 24 + 20429177589 r 8 − 677278256112 r 13 − 49656902904 r 9 + 159963012 r 4 − 30005640 r 3 − 681714144 r 5 − 7515142416 r 7 − 3097406755584 r 21 − 2609245249920 r 17 + 3051035360256 r 18 − 3315184235136 r 19 + 3337272236928 r 20 + 2631941507968 r 22 + 2435971806 r 6 + 109069315047 r 10 − 218273842152 r 11 + 400534503738 r 12 + 1059615993384 r 14 − 1538314485120 r 15 + 2076627064432 r 16 − 845838600192 r 25
Z r 9
9X1), X1 ∈ Ts) = (x) r 2 (x) ! A(P(A, N1, Q1, L3, L4, L5, P2, N2)) 2 A(T (Y3)) 3 dydx = h 64`12 − 144 r + 924 r 2 − 683328 r 23 + 112976 r 24 + 757211 r 8 − 10554918 r 13 − 1513230 r 9 + 16242 r 4 − 4320 r 3 − 51372 r 5 − 344988 r 7 −4867848 r 21 −18583080 r 17 +16493828 r 18 −12883116 r 19 +8668124 r 20 +2177536 r 22 +141366 r 6 +2774371 r 10 − 4692510 r 11 +7331714 r 12 +14002613 r 14 −16948218 r
3538944 r + 8927944704 r 7 − 1883996112 r 12 − 9492593152 r 8 − 146866176 r 3 + 29196288 r 2 + 220250112 r 4 − 4486594560 r 6 + 213597 r 20 − 259250904 r 16 + 69124752 r 17 − 10683306 r 18 + 864387072 r 5 + 5220357120 r 9 − 1081136256 r 10 + 602097408 r 11 + 2223664128 r 13 − 1509638512 r 14 + 716568768 r 15 i.h 16796160 r 8
r − 1)`−16384 + 278528 r + 215136 r 28 + 40176 r 26 + 215136 r 29 − 3381264 r 27 − 2301952 r 2 − 99212040 r 23 − 25050384 r 24 − 312101312 r 8 − 7215869272 r 13 − 147586784 r 9 − 42770432 r 4 + 12591104 r 3 + 114049024 r 5 + 345810944 r 7 + 55914462 r 21 − 2082969096 r 17 + 43443459 r 18 + 826941555 r 19 − 641846754 r 20 + 209930616 r 22 − 232963072 r 6 + 1311322268 r 10 − 3191747236 r 11 + 5434516904 r 12 + 7756861008 r 14 − 6865898928 r 15 + 4727296416 r 16 + 26115696 r 25´i .h 466560`2 r 2 + 1´5 r 8
20 + 73953 r 19 + 213678 r 18 − 433512 r 17 − 2873232 r 16 + 627264 r 15 + 20218896 r 14 + 5675184 r 13 − 97577924 r 12 −
8 + 128 r 8 ε 4 + 384 r 7 ε 4 + 39 r 7 + 128 r 6 ε 4 − 90 r 6 − 444 r 5 − 384 r 5 ε 4 + 1344 r 4 − 256 r 4 ε 4 − 792 r 3 − 864 r 2 + 1104 r − 288
… are the realizations of 100 and 1000 observations, respectively, independent …

Table 3: Monte Carlo critical values, C^A_mc, empirical significance levels, α^A_mc(n), and empirical power estimates, β^A_mc, based on Monte Carlo critical values under H^A_{√3/12} and H^A_{5√3/24}, N_mc = 1000, and n = 10 at α = .05.

n = 10 and N_mc = 1000, AND-underlying case
r:                1      11/10  6/5    4/3    √2     3/2    2      3      5      10
C^A_mc:           0.0    0.0    0.02   0.06   0.08   0.1    0.24   0.46   0.68   0.82
α^A_mc(n):        0.000  0.000  0.005  0.030  0.027  0.037  0.038  0.043  0.048  0.041
β^A_mc(√3/12):    0.000  0.000  0.003  0.045  0.057  0.077  0.154  0.136  0.077  0.055
β^A_mc(5√3/24):   0.000  0.000  0.009  0.051  0.060  0.081  0.492  0.964  0.941  0.396

n = 10 and N_mc = 1000, OR-underlying case
r:                1      11/10  6/5    4/3    √2     3/2    2      3      5      10
C^A_mc:           0.26   0.26   0.28   0.31   0.3    0.35   0.6    0.84   0.95   1.00
α^A_mc(n):        0.000  0.000  0.040  0.045  0.049  0.042  0.049  0.044  0.022  0.019
β^A_mc(√3/12):    0.000  0.000  0.169  0.227  0.331  0.328  0.396  0.163  0.069  0.032
β^A_mc(5√3/24):   0.000  0.000  0.000  0.352  0.352  0.612  0.988  1.000  0.935  0.344

n = 10 and N_mc = 1000, AND-underlying case
r:                 1       11/10   6/5     4/3     √2      3/2     2       3       5       10
α^A(n):            0.7707  0.3343  0.1872  0.0859  0.0774  0.0671  0.0551  0.0593  0.0771  0.1182
β^A_n(r, √3/12):   0.7406  0.2829  0.1869  0.1156  0.1323  0.1506  0.2053  0.1599  0.1336  0.1618
β^A_n(r, 5√3/24):  0.7415  0.2923  0.1833  0.1220  0.1491  0.1891  0.5605  0.9664  0.9510  0.6241

n = 10 and N_mc = 1000, OR-underlying case
r:                 1       11/10   6/5     4/3     √2      3/2     2       3       5       10
α^A(n):            0.5194  0.3935  0.2302  0.0920  0.0834  0.0665  0.0759  0.0980  0.0708  0.0193
β^A_n(r, √3/12):   0.6293  0.6258  0.5661  0.4318  0.4247  0.4346  0.4343  0.2624  0.1421  0.0336
β^A_n(r, 5√3/24):  0.6315  0.6340  0.6259  0.6265  0.6279  0.7480  0.9900  1.0000  0.9649  0.3505
15 +540 r 11 −2025 r 10 −8100 r 9 +10152 r 8 +38448 r 7 −14878 r 6 −71704 r 5 −87608 r 4 +192128 r 3 +147712 r 2 −338944 r+ 134144 i.h 10368 r 2`r2 + 4 r + 4´`16 ε 4 − 8 ε 2 + 1´is 14
∫^{ℓ_am(x)}_{r_12(x)} + ∫^{1/2}_{s_15} ∫^{ℓ_am(x)}_{r_10(x)} A(P(L1, L2, Q1, Q2, L5, L6)) / A(T_ε)^2 dy dx = −[135 r^12 + ⋯]

= Σ_i ̟^or_i(r, ε) I(r ∈ I_i), where …

= Σ_i ς^or_i(r, ε) I(r ∈ I_i), where …
Acknowledgments

This work was partially sponsored by the Defense Advanced Research Projects Agency as administered by the Air Force Office of Scientific Research under contract DOD F49620-99-1-0213, by Office of Naval Research Grant N00014-95-1-0777, and by TUBITAK Kariyer Project Grant 107T647.

Appendix 8: Proof of Theorem 6.2:

Recall that ρ^and_{II,n}(r) is version II of the relative edge density of the AND-underlying graph for the multiple triangle case. Then the expectation of ρ^and_{II,n}(r) is … where µ_and(r) is given by Equation (8). Likewise, we get the OR-underlying analogue, where µ_or(r) is given by Equation (9). Next, ρ^and_{[k]}(r) and ρ^and_{[l]}(r) are independent for k ≠ l. Then by (2) …
|
[] |
[
"Markov Decision Process For Automatic Cyber Defense",
"Markov Decision Process For Automatic Cyber Defense",
"Markov Decision Process For Automatic Cyber Defense",
"Markov Decision Process For Automatic Cyber Defense"
] |
[
"Xiaofan Zhou \nThe University of Queensland\n4072St LuciaQLDAustralia\n",
"Simon Yusuf \nThe University of Queensland\n4072St LuciaQLDAustralia\n\nFederal University\nKashere, Gombe StateNigeria\n",
"Enoch ",
"] ",
"Dong Seong ",
"Xiaofan Zhou \nThe University of Queensland\n4072St LuciaQLDAustralia\n",
"Simon Yusuf \nThe University of Queensland\n4072St LuciaQLDAustralia\n\nFederal University\nKashere, Gombe StateNigeria\n",
"Enoch ",
"] ",
"Dong Seong "
] |
[
"The University of Queensland\n4072St LuciaQLDAustralia",
"The University of Queensland\n4072St LuciaQLDAustralia",
"Federal University\nKashere, Gombe StateNigeria",
"The University of Queensland\n4072St LuciaQLDAustralia",
"The University of Queensland\n4072St LuciaQLDAustralia",
"Federal University\nKashere, Gombe StateNigeria"
] |
[] |
It is challenging for a security analyst to detect or defend against cyber-attacks. Moreover, traditional defense deployment methods require the security analyst to manually enforce the defenses in the presence of uncertainties about the defense to deploy. As a result, it is essential to develop an automated and resilient defense deployment mechanism to thwart the new generation of attacks. In this paper, we propose a framework based on Markov Decision Process (MDP) and Q-learning to automatically generate optimal defense solutions for networked system states. The framework consists of four phases namely; the model initialization phase, model generation phase, Q-learning phase, and the conclusion phase. The proposed model collects real network information as inputs and then builds them into structural data. We implement a Q-learning process in the model to learn the quality of a defense action in a particular state. To investigate the feasibility of the proposed model, we perform simulation experiments and the result reveals that the model can reduce the risk of network systems from cyber attacks. Furthermore, the experiment shows that the model has shown a certain level of flexibility when different parameters are used for Q-learning.
|
10.48550/arxiv.2207.05436
|
[
"https://arxiv.org/pdf/2207.05436v2.pdf"
] | 250,451,027 |
2207.05436
|
5893a9b0f7e00737204408a8c64a7ffcea60e4bf
|
Markov Decision Process For Automatic Cyber Defense
Xiaofan Zhou
The University of Queensland
4072St LuciaQLDAustralia
Simon Yusuf
The University of Queensland
4072St LuciaQLDAustralia
Federal University
Kashere, Gombe StateNigeria
Enoch
]
Dong Seong
Markov Decision Process For Automatic Cyber Defense
Automation · Cyber-attacks · Defense · Deep Learning · Reinforce- ment Learning · Machine learning · Q-Learning
It is challenging for a security analyst to detect or defend against cyber-attacks. Moreover, traditional defense deployment methods require the security analyst to manually enforce the defenses in the presence of uncertainties about the defense to deploy. As a result, it is essential to develop an automated and resilient defense deployment mechanism to thwart the new generation of attacks. In this paper, we propose a framework based on Markov Decision Process (MDP) and Q-learning to automatically generate optimal defense solutions for networked system states. The framework consists of four phases namely; the model initialization phase, model generation phase, Q-learning phase, and the conclusion phase. The proposed model collects real network information as inputs and then builds them into structural data. We implement a Q-learning process in the model to learn the quality of a defense action in a particular state. To investigate the feasibility of the proposed model, we perform simulation experiments and the result reveals that the model can reduce the risk of network systems from cyber attacks. Furthermore, the experiment shows that the model has shown a certain level of flexibility when different parameters are used for Q-learning.
Introduction
Cyber-attacks have grown over the past few years to become more effective. In particular, cyber-criminals are now incorporating artificial intelligence (AI) to power cyberattacks (e.g., deep locker [13]) and to outsmart conventional defense mechanisms using various approaches [1,8,11]. For instance, a group of researchers at McAfee [9] in their 2020 threat prediction report have predicted the potential raise of less-skilled attackers to become more powerful to create and weaponize deepfake content. In addition, they have predicted that cyber-criminals will use AI to produce convincing real data capable of bypassing many user authentication mechanisms. Besides, the current stateof-the-art defense enforcement methods require the security expert to manually deploy cyber-defenses, thus faced with uncertainties about the best countermeasures to enforce in order to achieve optimal security.
To address these challenges, we propose a novel approach to automatically select and deploy cyber defense by formulating a Markov Decision Process (MDP) that reflects both attack and defense scenarios. Specifically, we propose an automatic MDP modeling-based approach to automate defense deployment and selection using a Q-learning model (a Q-learning is a reinforcement learning policy that finds the next best action, given a current state). Here, we use the Q-learning model with the MDP framework to learn the quality of a defense action in the states. The proposed framework is divided into four phases; model initialization, model generation phase, Q-learning phase, and the conclusion phase. The model initialization phase takes a real network situation as the input and converts it into structured data; the model generation phase generates all the possible states for the MDP model using a breadth-first search algorithm; the Q-learning phase implements a Q-learning iteration which trains the model to learn the space and update the quality for each state-action pair, and the conclusion phase searches for the optimal solutions using the Q-table trained after the previous phase. The focus of this paper is to use an AI technique to automate cyber defense and thwart attacks. The main contributions of this paper are as follows:
- To design and implement an automation framework based on MDP and a deep learning algorithm for the automatic cyber defense of networked systems.
- To collect real network data and generate an MDP structure model based on the real data.
- To develop a Q-learning model which can train itself and generate an optimal defense solution.
- To build a testbed and to demonstrate the usability and applicability of the proposed framework.
The rest of the paper is organized as follows. Section 2 provides the related work on defense automation based on different approaches. Our proposed MDP-based framework model is presented in Section 3. In Section 4, we provide the experimental setup and analysis of the obtained results. We conclude the paper in Section 5.
Related work
In this section, we briefly survey related work on defense automation for both the traditional defense and AI-based approaches.
Ray et al. [12] proposed a framework based on UML-based use cases, state-chart diagrams, and XML to show attacker, attack actions, and the possible defense method. This work is still theoretical. Applebaum et al. [2] developed a practical framework based on MITRE Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK) to test for weaknesses and train defenders. In their work, they used classical planning, Markov decision processes, and Monte Carlo simulations to plan attack scenarios and to proactively move through the entire target networked systems searching for weakness and training the defenders on possible defenses to deploy.
The authors in [7] presented a framework for automating threat response based on a machine learning approach. Also, Noor et al. [10] presented a framework for data breaches based on semantic analysis of attacker's attack patterns from a collection of threats. The focus of these papers is different from our work, as they have focused on automating threat responses from a given repository, while our proposed automation framework is based on simulation of real networks.
Zheng and Namin [14] presented a defense strategy against Distributed Denial-of-Service (DDoS) attacks in Software-Defined Networking (SDN) using a Markov Decision Process. The authors used three parameters to model the finite set of states of the MDP model: Flow Entry Size (F), Flow Queue Size (Q), and Transmitted Packets Count (T). The rewards function is related to these three parameters F, Q, and T, each of which is applied with a different weight factor because they have different impacts on the network. Their results show that the model can keep the flow traffic optimized and detect potential DDoS attacks at an early stage. This work also showed that the model can control how the system makes a transition by adjusting the rewards weight factor. Also, Booker and Musman [3] presented a theoretical model-based automated cyber response system, where they frame a cyber response problem as a Partially Observable Markov Decision Problem (POMDP). In later work, the authors extended this approach so that the POMDP is used to frame automated reasoning for defensive cyber-response that searches for a policy over system states and probabilistic beliefs.
The authors in [15] proposed a Markov Decision Process to model Moving Target Defense with the interaction between the defend and attack sides. the paper uses four states (Normal, Targeted, Exploited, Breached) with three possible defense strategies (wait, defend, reset) to describe the model. It also uses the Bellman equation and value iteration method to find out the optimal policy for each state. Their result demonstrated how much impact the cost will have on the optimal policy and how that will help the defender to make better defense strategies. Other authors such as [4,5] developed a blue team framework that can perform cyber defense generation, defense enforcement, and security evaluation using a defined workflow. However, the work did not use any AI technique to enhance system attack learning or to thwart cyber attacks.
The Proposed Approach
In this section, we describe the proposed framework for automatic cyber defense based on MDP. The workflow of the framework comprises of four phases; Initialization Phase, Generation Phase, Q-learning Phase, and Conclusion Phase. We explain them in detail as follows.
Model Initialization Phase
The first phase is the initialization phase. During this phase, the program takes some real network situations as the inputs. These inputs need to be recognized and transformed into programmed data and later implemented into the MDP model. Here, the more detailed the description of the network situation is, the more complex the model will become.
Model Generation Phase
The second phase is the model generation phase. During this phase, the program will generate all the possible states for the MDP using the input data collection from the previous phase. To guarantee all the states will be visited in a well-designed order, it is necessary to have a traversal method (and the Breadth-First Search (BFS) algorithm will be used in this phase). Algorithm 1 and Algorithm 2 are used for the model generation, including the generation of the next state and the defense states. There are two major assumptions during the model generation phase. Firstly, the attacker can only attack the host which is next to a compromised host or public internet. For example, if the attacker attempted to compromise one host in the network, this is only possible to happen when there is at least one neighbor host compromised, or the host is directly connected to the public internet. Secondly, there is no value to patch vulnerabilities on a host that has already been compromised. Once a host is marked as compromised in the model, it is assumed that the data on the host has already been fully breached or the host has already been controlled.
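As a small illustration of these two assumptions, the sketch below (ours; the data layout with uni-directional links follows the representation described later in the experimental setup) lists the hosts an attacker could target next. It is only an illustrative helper, not the authors' implementation.

def feasible_attack_targets(hosts, links, compromised, vulnerable, internet_facing):
    # A host can be attacked next only if it is adjacent to a compromised host (or
    # directly reachable from the public internet), still has a vulnerability, and
    # is not already compromised.
    targets = set()
    for host in hosts:
        if host in compromised or host not in vulnerable:
            continue
        reachable = host in internet_facing or any(
            src in compromised for src, dst in links if dst == host)
        if reachable:
            targets.add(host)
    return targets

# Toy example: host1 and host2 face the internet, and host1 is already compromised.
hosts = ["host1", "host2", "host3"]
links = [("host1", "host3"), ("host2", "host3")]   # uni-directional links
print(feasible_attack_targets(hosts, links,
                              compromised={"host1"},
                              vulnerable={"host2", "host3"},
                              internet_facing={"host1", "host2"}))   # {'host2', 'host3'}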
After all the states have been generated, a transition table will be constructed. The table has size s by s where s is the number of all states. Each cell contains transition information between the row state and the column state, or none represents no transition available between two states. The transition information includes data such as action, success rate, reward after success transition, and reward after the fail transition.
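To make the transition table concrete, here is a minimal sketch (ours; field names such as success_rate are illustrative rather than taken from the paper) of an s-by-s table whose cells hold the transition information described above, with None meaning that no transition is available between the two states.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Transition:
    action: str            # e.g. an attack step or a defense ID such as "D3"
    success_rate: float    # probability that the action succeeds
    reward_success: float  # reward applied after a successful transition
    reward_fail: float     # reward applied after a failed transition

def empty_transition_table(num_states: int) -> List[List[Optional[Transition]]]:
    # table[i][j] describes the transition from state i (row) to state j (column).
    return [[None] * num_states for _ in range(num_states)]

# Example: a 3-state toy model where defense "D3" in state 0 leads to state 1.
table = empty_transition_table(3)
table[0][1] = Transition(action="D3", success_rate=0.9, reward_success=-2.0, reward_fail=-4.0)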
Q-learning Phase
The third phase is the Q-learning phase. During this phase, the model will keep learning the space until the iteration is over. Before starting the learning process, a Q-value table will need to be initialized with rows and possible actions, and columns as all generated states. Here, each q-value represents the "quality" of a state and action pair. During this learning phase, the Q-table will keep updating until it has reached the maximum iteration.
Four parameters are needed for the Q-learning; learning rate, epsilon, epochs, and gamma (γ) or discounted factor which is ranging from 0 to 1. The γ parameter decides how important the future rewards will be. It is also used to approximate the noise in future rewards. The Q-learning phase is described by Algorithm 3. In this phase, if gamma is close to one, it means the agent mostly considers the future rewards while being willing to delay the immediate rewards. If gamma is close to zero, it means the agent will mostly only consider the immediate rewards. Equation (1) shows the detail calculation for the function QValueCalculation().
Q(s, a) = (1 − α) · Q(s, a) + α · (reward + γ · Q(s′, a′))    (1)
Here, the Q-learning process needs to make sure that every Q-value has been updated a sufficient number of times to reflect its actual quality. The agent can increase the number of iterations (epochs) to increase the overall number of updates, and it can adjust the epsilon to balance between exploration and exploitation.
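A minimal Python sketch of the update rule in Equation (1) together with an epsilon-greedy action choice is given below. It is our own illustration (the Q-table is a plain dictionary of dictionaries, and names such as q_update are ours), not the authors' implementation.

import random

def q_update(q_table, s, a, reward, s_next, alpha=0.1, gamma=0.9):
    # Equation (1): Q(s,a) <- (1 - alpha) * Q(s,a) + alpha * (reward + gamma * Q(s', a')),
    # where a' is the currently best-known action in s'.
    future = max(q_table[s_next].values()) if s_next is not None and q_table.get(s_next) else 0.0
    q_table[s][a] = (1 - alpha) * q_table[s][a] + alpha * (reward + gamma * future)

def epsilon_greedy(q_table, s, epsilon=0.7):
    # Explore with probability epsilon, otherwise exploit the best-known action.
    actions = list(q_table[s].keys())
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q_table[s][a])

# Toy usage: two states, each with two candidate defense actions.
q = {0: {"D1": 0.0, "D3": 0.0}, 1: {"D1": 0.0, "D3": 0.0}}
a = epsilon_greedy(q, 0)
q_update(q, 0, a, reward=-2.1, s_next=1)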
Conclusion Phase
After the Q-learning process has been completed and the Q-table has finished its updates, the process will enter the conclusion phase. The main task in this phase is to find the optimal solution(s) for the current network state.
Experimental Setup
In this section, we use a real network to illustrate the framework used for the attack and defense scenarios.

The network and attack model: The network structure is shown in Figure 1. The network consists of 8 hosts, named host 1 - host 8. The network has a router that controls access between the networked hosts. Hosts in the network have vulnerabilities that may or may not be patchable. Table 1 shows the vulnerabilities of each host. In the table, we use V_i to denote the vulnerability ID, CVSS Score for the Common Vulnerability Scoring System Base Score, and Patch cost for the cost of patching vulnerabilities. The CVSS score is based on the severity scores provided by the National Vulnerability Database [6], and we assume the patch cost value. We assume an attacker is located outside the network. The attacker is trying to compromise the hosts in the internal network. The attacker can directly connect to host 1 and host 2.
In our model, we represent the connections between hosts with links. For example, the host (h_i) information is recorded as a list such as [h_1, h_2, ..., h_n], and links are represented as [(h_1, h_2), (h_2, h_1), (h_1, h_3), ...]. In a real situation, the network connections between two hosts are not always bi-directional. It is possible for a host to stop receiving packets from another host while it is still able to send packets to that host. Therefore, all the links recorded in the program are uni-directional.

Defense model: Since it is infeasible to patch all vulnerabilities in real network environments, we assume only a few defense options can be selected for possible defense. We explain each of the defenses as follows and show the available defense strategies in Table 2.
Results and analysis
In this section, we use the network scenario described to illustrate the phases of the framework with their results.
Initialization phase: One of the features of an MDP-based model is that it assumes the environment is fully observable and known by the agent. In this phase, the hosts and vulnerabilities are collected and provided as input to the model; it is presumed that the data collected cover all the hosts and vulnerabilities in the space. For this experiment, the following network data were collected:
- Host Address: The IP address for the host. This data is treated as the identifier for each host in the model.
- CVSS Score: This value is collected from the NVD. The number ranges from 0 to 10; the higher the number, the more severe the vulnerability is when it is compromised by attackers. This number will be used as a negative offset in the model's rewards calculation for a state transition, particularly for an "attack" transition.
- Vulnerability ID: An identifier for each vulnerability. Hosts can have more than one vulnerability.
- Patch Cost: A number that represents the total cost of patching the vulnerability on the host. The number ranges from 0 to 10. For example, the cost of patching V1 is 8.0 and the cost of patching V3 is 6.5. The number will be used as a negative offset in the model's rewards calculation for a state transition, particularly for a "patch" transition.
Attack Path is an optional input for the model; it decides whether the model is trying to solve a more particular problem or a wide-ranged problem. If the attack path is given, the attacker will only attack the hosts which are on the path. If the attack path is not specified in the model, the model will assume the attacker will attack any host that is feasible for the attacker to attack. The attack path is an important element during the model generation phase. For this experiment, the attack path shown in Figure 2 is used.

Model Generation Phase: Figure 3 shows how the BFS algorithm starts exploring the space from its root node, which corresponds to the initial state in the model. It explores the next level of states using all possible attack and defense actions. There are four possible actions to perform in the initial state in the figure, and thus it expands its branches to those four states. The algorithm will finish exploring all of the neighbor states at the same level before moving to the next depth level.
Each node in the BFS exploration tree will be visited only once, however it is still possible for two nodes to have the same state. This is because doing different actions in a different sequence is possible to result in the same state. Therefore, it is necessary to check duplications before adding the node to the state set in the MDP model.
Here, the attack path is an important element during the exploration of the BFS algorithm. If the attack path is not specified, the algorithm assumes the attacker will attack any feasible host (i.e., the host being attacked is adjacent to a compromised host and the host has at least one vulnerability). Not specifying the attack path will add complexity and run time to the model generation phase.

Q-learning Phase: In this phase, the Q-Value Table shown in Figure 4 is initialized. Initially, the Q-values in the cells are all zero. Each Q-value represents the "quality" of a state and action pair.
The parameters for the Q-learning iteration are listed as follows:
γ (Discount Factor): 0.9
α (Learning Rate): 0.1
ε (Epsilon): 0.7
epochs: 5000
For this simulation experiment, 1492 possible states have been generated in total. After the model finishes its process, all the output data generated were written into a text file, including the optimal defense solutions and the full q-table after training. The q-table is recorded into n lines where n is the number of possible states. Each line represents the data for each state. One part of the line is used to describe the situation of the state, (1) "Compromised Hosts" gives a list of hosts that have been compromised by the attacker; (2) "Links" provide a list of existing connections between the hosts, blocked links will not be included; (3) "Vulnerabilities" give a list of existing vulnerabilities, patched vulnerabilities will not be included. Another part is the q-value for each action at this state. For example, the q-value for action 4 at this state is -2.1.
Compromised Hosts: [0, 2, 6] Links: [(0, 1), (0, 2), (1, 2), ..., (7,5), (7,6) If an attack path (Figure 2) is provided to the model, the optimal defense sequence for this network is D3 (Block port to 172.16.0.0 on 172.16.0.2). From the network structure perspective, we can see that if host2 blocks connection from host0, then according to the pre-defined attack path, the attacker will not be able to make any attack action. Therefore, the network is secured after performing only one defense action-D3.
From the q-table perspective (Table 3), D3 has the largest q-value at the State0 (Initial State), so D3 is added to the output sequence. For State5 (the state after performing D3 at State0), the q-value for attack action is 0 which is larger than any other defense action. Therefore, the model concludes that there is no need to perform any defense actions, and the search ends. If the attack path is not provided to the model, the optimal defense sequence for this network is D3-D1. From the network structure perspective, if host1 and host2 both block connection from host0, then the rest of the network is fully protected because host1 and host2 are the only passes where the attack can proceed its attack. From the q-table perspective (Table 4), D3 has the largest q-value at the State0 (Initial State), so D3 is added to the output sequence. D1 has the largest q-value at State5 (Initial State), so D1 is then added to the output sequence. For State29 (the state after performing D1 at State5), the q-value for attack action is 0 which is larger than any other defense action. The output result may significantly depend on the network situation, such as the cost of patching a vulnerability, the cost of blocking ports on a host, or the damage to a host after being attacked. For some network systems, blocking ports on host1 (D1) may result in further damage to the organization's service, because it not only blocks the attacker but also blocks all other normal users from accessing. In that case, the cost of D1 will be raised significantly, and as a result, the optimal defense sequence may not include D1. The model may choose other alternative defense strategies, such as patching vulnerabilities on host4 (D5), to minimize the damage.
Conclusion Phase
In this phase, all the q-values in the table are negative, since the implementation of cyber defense is a costly task. Either patching a vulnerability or having a host compromised has a negative effect on the whole system; it is impossible to profit and earn positive rewards.
Finding the optimal strategy for one certain state can be achieved by looking at the corresponding q-value in the q-table. The larger the q-value is, the less damage the action will cause. For example, in Figure 4, Action1 has the largest q-value in the State1 column, which means Action1 is theoretically the best action to take at State1. On the contrary, Action4 is the worst action to take. Finally, the model will look for a sequence of actions from the initial state. The model will keep searching through the q-table by using the following steps:

1. Find the best action in the initial state and add the action to the sequence;
2. Go to the consequent state of that action. For example, performing Action1 in State1 will result in State2;
3. Find the best action at that state and add the action to the sequence. Keep iterating steps 2 & 3 until there are no following states, or the q-value shows there is no need to do any defense actions. (When the q-value of the attack action is larger than that of any other defense action, it means that performing any defense action would be redundant and cause more damage to the system. That is when there is no need to do any defense actions.)
After performing the above steps, the model will output a sequence of actions such as Action1-Action3-Action2. This action sequence is the solution that optimized the rewards, therefore minimizing the overall cost and damage to the network system. The initial state can be replaced by any state for this searching mechanism, which allows the agent to find an optimal solution in any situation in the network.
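The following Python sketch (ours) mirrors steps 1-3 above: starting from a given state, it repeatedly takes the action with the largest Q-value and stops when the attack action has the largest Q-value (no further defense is worthwhile) or when there is no following state. The next_state mapping and the "attack" label are illustrative assumptions about how the trained model is stored, not details from the paper.

def optimal_defense_sequence(q_table, next_state, start_state, attack_action="attack"):
    # Greedy walk over the trained Q-table, as in steps 1-3 above.
    sequence, state = [], start_state
    while state is not None:
        best = max(q_table[state], key=lambda a: q_table[state][a])
        if best == attack_action:          # defending further would only add cost
            break
        sequence.append(best)
        state = next_state.get((state, best))   # None when there is no following state
    return sequence

# Toy example with two states: D3 is best in state 0, then no defense is needed in state 5.
q = {0: {"attack": -9.3, "D1": -6.2, "D3": -2.1}, 5: {"attack": 0.0, "D1": -1.5}}
transitions = {(0, "D3"): 5}
print(optimal_defense_sequence(q, transitions, 0))   # ['D3']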
Effect of Q-learning Parameters on Optimal Reward
Different parameters' values may have a significant effect on the output result of the model. In this section, Q-learning parameters such as the discount factor, epsilon, and the number of iterations are investigated. This section will assess their performance with different data, and a suitable combination of parameters should be concluded to maximize the overall performance of the model. The Discount Factor (γ) in Q-learning, γ ∈ (0, 1), indicates the importance of the future rewards compared to the immediate rewards. If γ is larger, it means the agent considers the future rewards more and is willing to delay the immediate rewards. As Figure 5 shows, the optimal solution reward grows almost linearly as the discount factor increases. As the model looks deeper into the state space, it becomes more certain that the optimal solution has a better effect on defending the network system. This explains why the damage becomes less when the discount factor increases.
Secondly, epsilon (ε) is a factor that balances exploration and exploitation. If ε is larger, the agent will have more possibility to explore the space (i.e., to choose an action randomly). If ε is smaller, the agent will be more likely to choose the action with the highest q-value. Changing ε has a minor effect on the overall optimal rewards value; therefore, in this experiment, the percentage improvement of the optimal solution's reward, compared to the reward of not defending, is used as an index to test the performance of the model. The percentage is calculated by Equation (2); since the rewards are all negative, the result needs to be negated. Figure 6 shows that the improvement stays roughly at the 40% to 50% level when ε is less than 0.75. However, after ε gets larger than 0.75, the improvement drops substantially and even decreases below 0% (the optimal reward is less than the attack reward) at 0.95. This shows that when ε gets larger than a certain point, the agent tends to explore more paths. As a result, the agent does not give much weight to the optimal solution, therefore reducing the difference between every action.
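Equation (2) is simple enough to compute with a one-line helper; the sketch below is ours and just negates the relative difference, because both rewards are negative.

def improvement_percentage(optimal_solution_reward, no_defend_reward):
    # Equation (2): -(OSR - NDRs) / NDRs, expressed here as a percentage.
    return -(optimal_solution_reward - no_defend_reward) / no_defend_reward * 100

print(improvement_percentage(-6.0, -10.0))   # 40.0 (% improvement over not defending)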
As figure 7 shows that the iteration times increase, the number of un-updated qvalue decreases (Note that the x-axis is in log-scale). The un-updated q-value means the q-value in the q-table that has not been updated once. Since the q-table for the experiment has a size of 13594 q-values, likely, some of them are not updated during the process. The pattern of the graph is similar to a logarithm equation. When epochs are relatively small, the un-updated q-values decrease substantially. When epochs are relatively large, the un-updated q-values only decrease a small amount. That is because when there is more and more q-value being updated, the probability for the agent to reach an un-updated q-value becomes less.
Although the parameters for Q-learning can vary between different tasks, an appropriate range of those parameters has been concluded for a network defense problem.
- Discount Factor (γ): The experiment reveals that the larger the γ is, the more rewards will be received from the optimal solution. However, it is also not proper to put too much weight on the future rewards, since the immediate rewards still need to be considered in some cases. In summary, the experiment suggests a range of 0.8 to 0.9 for the discount factor.
- Epsilon (ε): The experiment shows that if ε increases over 0.75, the overall difference between the optimal solutions and other solutions will be reduced. Therefore, it will become hard to distinguish between a "good" and a "bad" action. While if ε gets too small, the agent will be less likely to find alternative strategies that can further reduce the overall damage to the network system. As a result, the experiment suggests a range of 0.5 to 0.7 for ε.
- Iteration Times (epochs): The experiments prove that as epochs increase, the overall quality and completeness of the output result also increase. However, the efficiency of improving the result decreases, and the run time rises when epochs increase to a larger number. Therefore, the experiments suggest a range of 5000 to 10000 for the epochs.
Model Efficiency Experiments
In the area of cyber defense, algorithm efficiency is also a key element to determine whether the system can successfully defend against the attacker. If the attacker's efficiency is better than the defense side, the optimal solution output from the model may be no longer applicable to the environment. The experiment uses a network with different numbers of hosts to test the efficiency of the model. Normally, if there are more hosts in the network, the model will become more complex and there will be more possible states to generate. As the state number increases, the complexity of training the model also increased. Figure 8 shows when the number of hosts is less than 8, the time to generate and the time to train the model does not increase much as the number of hosts increases. The time for model generation stays under 1 minute and the time for training stays under 5 seconds. However, when the number of hosts is greater than 8, the time spent starts to grow exponentially. On the other hand, the time to train the model is relatively faster than the time to generate the model.
Conclusion and Future Work
In this paper, we have presented an MDP-based optimal solution model for cyber defense. The model is composed of four sequential phases. The model initialization phase takes some real network situation as the input and converts it into structured data; the model generation phase generates all the possible states for the MDP model using a breadth-first search algorithm; the Q-learning phase implements a Q-learning iteration which trains the model to learn the space and update the quality for each state-action pair; the conclusion phase searches for the optimal solutions using the qtable trained after the previous phase. Real network simulation experiments have been done to test the usability and functions of the model. The result demonstrates the model can reduce the attack impact on the network system from a cyber-attack, in either network structure prospective or q-table perspective. In the future, we plan to add more defense actions to the model. Another potential development for the model is to make it a POMDP (Partially observable Markov decision process). Besides, We plan to collect more usable and real data from a bigger network.
Cite this article as: Zhou, X., Enoch, S. Y., and Kim, D. S. (2022). Markov Decision Process For Automatic Cyber Defense. In Information Security Applications (WISA 2022). Lecture Notes in Computer Science, Springer, Cham.
Fig. 1. Real Network Structure
Fig. 2. The attack path
Fig. 3. State Generation Diagram
Fig. 4. Q-Value Table
Fig. 5. Impact of Discount Factor on Optimal Reward
Fig. 6. Improvement of Optimal Solution with Different Epsilon
Fig. 7. Improvement of Iteration Times on the Number of Un-Updated Q-Values
Fig. 8. Time Efficiency for Model Generation
Algorithm 1: Initialize States

queue.add(initialState);
while queue not empty do
    currentState ← queue.pop();
    states.add(currentState);
    GenerateNextState(currentState);
end
Algorithm 2: Generate Next State

/* Generate Attack States */
if attackPath is None then
    for host ← adjacentHost do
        if !host.compromised & host.hasVulnerabilities then
            state ← AttackAction(host);
            queue.add(state);
        end
    end
end
else
    host ← GetNextHostOnPath();
    if !host.compromised & host.hasVulnerabilities then
        state ← AttackAction(host);
        queue.add(state);
    end
end
/* Generate Defense States */
for action ← defenseActionsList do
    state ← DefenseAction(action);
    queue.add(state);
end
Algorithm 3: QLearningTrain

Input: gamma, lrnRate, epsilon, maxEpochs;
for i in range(maxEpochs) do
    currS ← 0;
    while True do
        /* Decide to explore or exploit */
        if random.uniform(0, 1) < epsilon then
            action ← GetRandomNextAction(currS);
        else
            action ← GetMaxAction(currS);
        nextS ← GetStateFromAction(action);
        /* Finish if no following state */
        if nextS is None then
            break;
        /* Whether the action is successful or fails */
        if random.uniform(0, 1) < trans[currS][nextS].rate then
            reward ← rewards[currS][nextS].success;
        else
            reward ← rewards[currS][nextS].fail;
            nextS ← currS;
        nextA ← GetMaxNextAction(nextS);
        futureQ ← QTable[nextS][nextA];
        /* Update Q Table */
        QTable[currS][action] ← QValueCalculation();
        currS ← nextS;
    end
end
Table 1. Hosts and Vulnerabilities Information

Host Address   Vulnerability ID   CVSS Score   Patch Cost
172.16.0.1     V1                 4.3          8.0
172.16.0.2     V2                 2.1          5.0
172.16.0.3     V3                 10.0         6.5
172.16.0.4     V4                 4.3          3.5
172.16.0.5     V5                 7.5          4.5
172.16.0.6     V6                 8.8          5.0
172.16.0.7     V7                 8.8          6.0
172.16.0.8     V8                 6.1          7.0
- BLOCK(target, sub-target): Block port action takes two parameters, target and sub-target. Target tells the model on which host to block a port, while sub-target indicates which host the connection should be blocked from. For example, the command BLOCK(172.16.0.2, 172.16.0.1) represents that host 172.16.0.1 should block the port from host 172.16.0.2.
- PATCH(target, vulnerability): Patch action takes two parameters, target and vulnerability. For example, the command PATCH(172.16.0.3, V3) represents patching vulnerability V3 on host 172.16.0.3.
Table 2. Available Defenses Options

Defense ID   Defense Detail
D1           Block port to Router on 172.16.0.1
D2           Patch V7 on 172.16.0.7
D3           Block port to Router on 172.16.0.2
D4           Block port to 172.16.0.7 on 172.16.0.3
D5           Patch V4 on 172.16.0.4
D6           Patch V6 on 172.16.0.6
Table 3. Q-Table for Real Network Example with Attack Path (Partial)

State                    Attack Action   D1      D2      D3     D4       D5      D6
State0 (Initial State)   -9.298          -6.19   -7.89   -2.1   -10.69   -5.39   -6.…
Table 4. Q-Table for Real Network Example without Attack Path (Partial)

State                    Attack Action   D1      D2       D3      D4        D5       D6
State0 (Initial State)   -9.298          -6.19   -9.296   -5.97   -12.363   -7.283   -8.…
Improvement Percentage = − (OSR − NDRs) / NDRs    (2)

where OSR is the Optimal Solution Reward and NDRs is the No Defend Rewards.
1. Alavizadeh, H., Jang-Jaccard, J., Enoch, S.Y., Al-Sahaf, H., Welch, I., Camtepe, S.A., Kim, D.D.: A survey on cyber situation awareness systems: Framework, techniques, and insights. ACM Computing Surveys (CSUR) (2022)
2. Applebaum, A., Miller, D., Strom, B., Korban, C., Wolf, R.: Intelligent, Automated Red Team Emulation. In: Proceedings of the 32nd Annual Conference on Computer Security Applications. pp. 363-373 (2016)
3. Booker, L.B., Musman, S.A.: A model-based, decision-theoretic perspective on automated cyber response. arXiv preprint arXiv:2002.08957 (2020)
4. Enoch, S.Y., Mendonça, J., Hong, J.B., Ge, M., Kim, D.S.: An integrated security hardening optimization for dynamic networks using security and availability modeling with multi-objective algorithm. Computer Networks 208, 108864 (2022)
5. Enoch, S.Y., Moon, C.Y., Lee, D., Ahn, M.K., Kim, D.S.: A practical framework for cyber defense generation, enforcement and evaluation. Computer Networks 208, 108878 (2022)
6. FIRST: CVSS v3.1: Specification Document. Forum of Incident Response and Security Teams (June 2019), https://www.first.org/cvss/v3.1/specification-document
7. Iqbal, Z., Anwar, Z.: SCERM-A Novel Framework for Automated Management of Cyber Threat Response Activities. Future Generation Computer Systems (2020)
8. Kaloudi, N., Li, J.: The AI-based Cyber Threat Landscape: A Survey. ACM Computing Surveys (CSUR) 53(1), 1-34 (2020)
9. McAfee: McAfee labs 2020 threats predictions report (2019), https://www.mcafee.com/blogs/other-blogs/mcafee-labs/mcafee-labs-2020-threats-predictions-report/
10. Noor, U., Anwar, Z., Malik, A.W., Khan, S., Saleem, S.: A Machine Learning Framework for Investigating Data Breaches based on Semantic Analysis of Adversary's Attack Patterns in Threat Intelligence Repositories. Future Generation Computer Systems 95, 467-487 (2019)
11. Park, M., Seo, J., Han, J., Oh, H., Lee, K.: Situational awareness framework for threat intelligence measurement of android malware. JoWUA 9(3), 25-38 (2018)
12. Ray, H.T., Vemuri, R., Kantubhukta, H.R.: Toward an automated attack model for red teams. IEEE Security & Privacy 3(4), 18-25 (2005)
13. Stoecklin, M.P.: Deeplocker: How AI can power a stealthy new breed of malware. Security Intelligence, August 8 (2018)
14. Zheng, J., Namin, A.S.: Defending SDN-based IoT networks against DDoS attacks using Markov decision process. In: 2018 IEEE International Conference on Big Data (Big Data). IEEE (2018)
15. Zheng, J., Namin, A.S.: Markov decision process to enforce moving target defence policies. arXiv preprint arXiv:1905.09222 (2019)
|
[] |
[
"Shearing box simulations in the Rayleigh unstable regime",
"Shearing box simulations in the Rayleigh unstable regime"
] |
[
"Farrukh Nauman \nDepartment of Physics and Astronomy\nUniversity of Rochester\n14627RochesterNYUSA\n\nNiels Bohr International Academy\nThe Niels Bohr Institute\nBlegdamsvej 17DK-2100Copenhagen ØDenmark\n",
"Eric G Blackman \nDepartment of Physics and Astronomy\nUniversity of Rochester\n14627RochesterNYUSA\n\nSchool of Natural Sciences\nInstitute for Advanced Study\n08540PrincetonNJUSA\n"
] |
[
"Department of Physics and Astronomy\nUniversity of Rochester\n14627RochesterNYUSA",
"Niels Bohr International Academy\nThe Niels Bohr Institute\nBlegdamsvej 17DK-2100Copenhagen ØDenmark",
"Department of Physics and Astronomy\nUniversity of Rochester\n14627RochesterNYUSA",
"School of Natural Sciences\nInstitute for Advanced Study\n08540PrincetonNJUSA"
] |
[] |
We study the stability properties of Rayleigh unstable flows both in the purely hydrodynamic and magnetohydrodynamic (MHD) regimes for two different values of the shear q = 2.1, 4.2 (q = −d ln Ω/d ln r) and compare it with the Keplerian case q = 1.5. We find that the q > 2 regime is unstable both in the hydrodynamic and in the MHD limit (with an initially weak magnetic field). In this regime, the velocity fluctuations dominate the magnetic fluctuations. In contrast, in the q < 2 (magnetorotational instability (MRI)) regime the magnetic fluctuations dominate. This highlights two different paths to MHD turbulence implied by the two regimes, suggesting that in the q > 2 regime the instability produces primarily velocity fluctuations that cause magnetic fluctuations, with the causality reversed for the q < 2 MRI unstable regime. We also find that the magnetic field correlation is increasingly localized as the shear is increased in the Rayleigh unstable regime. In calculating the time evolution of spatial averages of different terms in the MHD equations, we find that the q > 2 regime is dominated by terms which are nonlinear in the fluctuations, whereas for q < 2, the linear terms play a more significant role.
|
10.1093/mnras/stx209
|
[
"https://arxiv.org/pdf/1507.04711v2.pdf"
] | 119,115,998 |
1507.04711
|
b5bdc592b9b383eaa536a9a84b3ce54b13d6466d
|
Shearing box simulations in the Rayleigh unstable regime
23 Jan 2017 MNRAS 000, 1-10 (2015) Preprint 13 June 2021 13 June 2021
Farrukh Nauman
Department of Physics and Astronomy
University of Rochester
14627RochesterNYUSA
Niels Bohr International Academy
The Niels Bohr Institute
Blegdamsvej 17DK-2100Copenhagen ØDenmark
Eric G Blackman
Department of Physics and Astronomy
University of Rochester
14627RochesterNYUSA
School of Natural Sciences
Institute for Advanced Study
08540PrincetonNJUSA
Shearing box simulations in the Rayleigh unstable regime
23 Jan 2017 MNRAS 000, 1-10 (2015). Compiled using MNRAS LaTeX style file v3.0. Keywords: accretion, accretion discs - mhd - instabilities - turbulence
We study the stability properties of Rayleigh unstable flows both in the purely hydrodynamic and magnetohydrodynamic (MHD) regimes for two different values of the shear q = 2.1, 4.2 (q = −d ln Ω/d ln r) and compare it with the Keplerian case q = 1.5. We find that the q > 2 regime is unstable both in the hydrodynamic and in the MHD limit (with an initially weak magnetic field). In this regime, the velocity fluctuations dominate the magnetic fluctuations. In contrast, in the q < 2 (magnetorotational instability (MRI)) regime the magnetic fluctuations dominate. This highlights two different paths to MHD turbulence implied by the two regimes, suggesting that in the q > 2 regime the instability produces primarily velocity fluctuations that cause magnetic fluctuations, with the causality reversed for the q < 2 MRI unstable regime. We also find that the magnetic field correlation is increasingly localized as the shear is increased in the Rayleigh unstable regime. In calculating the time evolution of spatial averages of different terms in the MHD equations, we find that the q > 2 regime is dominated by terms which are nonlinear in the fluctuations, whereas for q < 2, the linear terms play a more significant role.
INTRODUCTION
Differentially rotating flows are ubiquitous in astrophysics and studying their stability has been a long-standing enterprise. Using the local shearing box approximation (Goldreich & Lynden-Bell (1965), Hawley et al. (1995)) with Keplerian shear (q = 1.5), numerical simulations have shown that the Magnetorotational Instability (MRI) leads to turbulent growth of stresses in the presence of a weak magnetic field (for example, Velikhov (1959), Chandrasekhar (1960), Balbus & Hawley (1991)). The Rayleigh criterion, based on a linear modal analysis of axisymmetric perturbations, suggests that Keplerian flow is stable in hydrodynamics. This, however, does not rule out the possibility of subcritical transition to turbulence (Balbus et al. (1996), Lesur & Longaretti (2005)).
The (Rayleigh stable) Keplerian flow has understandably received the most attention because of its direct application in accretion discs, but here we focus on the stability properties of hydrodynamic and magnetohydrodynamic (MHD) flow in the Rayleigh unstable regime q > 2. A study of the Rayleigh unstable regime is of interest because a comprehensive understanding of shear driven MHD turbulence requires knowing the differences between the q < 2 and q > 2 regimes. Additionally, certain astrophysical flows are actually thought to be Rayleigh unstable. These include counter rotating accretion discs (e.g., Dyda et al. (2015)), counter rotating galaxies (e.g., Corsini (2014)) and the plunging region close to a black hole (e.g., Abramowicz et al. (1978), Abramowicz et al. (1996), Gammie (2004), Balbus (2012), Penna et al. (2013)). While the standard shearing box in the Rayleigh unstable regime poses challenges that we discuss further in section 2.2, certain properties of the shear instabilities in both the hydrodynamic and magnetohydrodynamic (MHD) case can be studied numerically with an appropriate configuration and code. Toward this end, we have conducted numerical simulations for three different values of q (1.5, 2.1, 4.2) both in pure hydrodynamics and MHD. We first used the publicly available finite volume code athena (Gardiner & Stone (2005), Stone et al. (2008), Stone & Gardiner (2010)) and found that even though we started out with zero initial momenta, truncation errors introduced perturbations that led to the exponential growth of the mean momentum and the eventual crash of the simulation (the time step is inversely proportional to maximum velocity). We then chose the pseudospectral code snoopy (Lesur & Longaretti (2005), Lesur & Longaretti (2011)) to simulate q > 2, which conserves the k = 0 mode.
In section 2 we review the linear stability theory of hydrodynamic and magnetohydrodynamic shear flows and discuss it in the context of shearing box approximation. In section 3, we describe the numerical setup and simulation results. We conclude in section 4.
STABILITY OF SHEAR FLOWS
Linear analysis
Following the discussion in Shakura & Postnov (2015), the dispersion relation for local axisymmetric perturbations of the form $e^{i(\omega t - k_r r - k_z z)}$ (see also Balbus (2012) for the special case of $k_r = 0$) with the initial magnetic field $B_0$ pointing in the $z$ direction is (Velikhov (1959), Chandrasekhar (1960), Balbus & Hawley (1991), Kato et al. (1998), Shakura & Postnov (2015)):
$$\omega^4 - \omega^2\left[2k_z^2 v_A^2 + \frac{k_z^2}{k^2}\kappa^2\right] + k_z^2 v_A^2\left[k_z^2 v_A^2 + \frac{k_z^2}{k^2}\left(\kappa^2 - 4\Omega^2\right)\right] = 0, \quad (1)$$
where $k^2 = k_r^2 + k_z^2$, $\kappa^2 = 4\Omega^2 + r\,\mathrm{d}\Omega^2/\mathrm{d}r = 2\Omega^2(2-q)$, $v_A^2 = B_0^2/(4\pi\rho_0)$ and $\rho_0$ is the initial density. The solution is
$$\omega^2 = \frac{k_z^2}{k^2}\left[k^2 v_A^2 + \frac{\kappa^2}{2} \pm \sqrt{\frac{\kappa^4}{4} + 4\Omega^2 k^2 v_A^2}\,\right]. \quad (2)$$
For the classical Rayleigh criterion in hydrodynamics, $v_A = 0$ and the above relation gives $\omega^2 = (k_z/k)^2\kappa^2$. This implies that purely hydrodynamic perturbations are stable as long as $\kappa^2 > 0$, or equivalently q < 2. However, the addition of magnetic fields makes the q < 2 regime unstable and instead $\omega^2_{\rm MRI} \sim (k^2 v_A^2/\kappa^2)\,\mathrm{d}\Omega^2/\mathrm{d}\ln r = -[q/(2-q)]\,k^2 v_A^2$ in the limit $k^2 v_A^2 \ll 1$ (Balbus 2012).

We focus our attention on the q > 2 or $\kappa^2 < 0$ regime in this paper. It is convenient to define the two different branches of Eq. 2 in the limit of $k^2 v_A^2 \ll 1$ as:
$$\omega^2_{\rm R} = \frac{k_z^2}{k^2}\left[\kappa^2 + k^2 v_A^2\left(1 + \frac{4\Omega^2}{\kappa^2}\right)\right] \quad (3)$$
$$\omega^2_{\rm VC} = k_z^2 v_A^2\left(1 - \frac{4\Omega^2}{\kappa^2}\right) \quad (4)$$
where $\omega_{\rm R}$ is the Rayleigh mode and $\omega_{\rm VC}$ is the Velikhov-Chandrasekhar mode.
As explained by Shakura & Postnov (2015), these modes are so named because we recover the classical Rayleigh instability criterion from the Rayleigh mode in the absence of magnetic field ($v_A = 0$), and the VC mode vanishes in this limit. In the regime $\kappa^2 < 0$, it follows from above that the VC mode is stable for all wavenumbers and only the Rayleigh mode is unstable. This distinction between the Rayleigh and VC mode was not made in Balbus (2012).
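To illustrate the behaviour of the two branches, the short Python sketch below evaluates the two roots of eq. (2) for a weak vertical field; the parameter values are arbitrary illustrations and are not tied to the simulations in this paper.

```python
import numpy as np

def omega2_roots(q, k_z=1.0, k_r=0.0, Omega=1.0, v_A=0.025):
    """The two roots omega^2 of the dispersion relation, eq. (2)."""
    k2 = k_r**2 + k_z**2
    kappa2 = 2.0 * Omega**2 * (2.0 - q)
    disc = np.sqrt(kappa2**2 / 4.0 + 4.0 * Omega**2 * k2 * v_A**2)
    pre = k_z**2 / k2
    return pre * (k2 * v_A**2 + kappa2 / 2.0 + disc), pre * (k2 * v_A**2 + kappa2 / 2.0 - disc)

for q in (1.5, 2.1, 4.2):
    w2_plus, w2_minus = omega2_roots(q)
    # A negative root signals instability with growth rate sqrt(-omega^2);
    # for q < 2 this is the weak-field MRI, for q > 2 it is the Rayleigh mode.
    print(f"q={q}: omega^2 roots = {w2_plus:.4f}, {w2_minus:.4f}")
```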
Shearing box in the Rayleigh unstable regime
The shearing box approximation in the ideal compressible MHD limit is discussed in Nauman & Blackman (2015).
Here we revisit that discussion in the context of non-ideal incompressible MHD equations since snoopy solves this set of equations. The shearing box equations in the frame comoving with the background shear velocity $\mathbf{v}_{\rm sh} = -q\Omega x\,\mathbf{e}_y$ are:
$$\frac{\partial \mathbf{v}}{\partial t} + v_{\rm sh}\frac{\partial \mathbf{v}}{\partial y} + \nabla\cdot(\mathbf{v}\mathbf{v} + \mathsf{T}) = 2\Omega v_y\,\mathbf{e}_x + (2-q)\Omega v_x\,\mathbf{e}_y + \nu\nabla^2\mathbf{v}, \quad (5)$$
$$\frac{\partial \mathbf{b}}{\partial t} = \nabla\times(\mathbf{v}\times\mathbf{b}) + \eta\nabla^2\mathbf{b}, \quad (6)$$
$$\nabla\cdot\mathbf{v} = 0, \quad (7)$$
$$\nabla\cdot\mathbf{b} = 0, \quad (8)$$
where v and b are the velocity and magnetic field respectively. Here T is a stress tensor given by
$$\mathsf{T} = (p + b^2/2)\,\mathsf{I} - \mathbf{b}\mathbf{b}, \quad (9)$$
where $\mathsf{I}$ is the identity matrix and p is thermal pressure. Upon volume averaging the Navier-Stokes equation (eq. 5), we obtain two coupled equations for the volume averaged velocities $\langle v_x\rangle$ and $\langle v_y\rangle$:
$$\frac{\partial \langle v_x\rangle}{\partial t} = 2\Omega\langle v_y\rangle, \quad (10)$$
$$\frac{\partial \langle v_y\rangle}{\partial t} = (q-2)\,\Omega\langle v_x\rangle, \quad (11)$$
which yields the solution that both averaged velocities are proportional to $\exp(\pm i\kappa t)$ for q < 2, or $\sim \exp(\pm|\kappa| t)$ for q > 2, where $\kappa^2 = 2\Omega^2(2-q)$. The above analysis shows that the 'x' and 'y' mean velocities will grow exponentially, if perturbed, in the Rayleigh unstable regime q > 2. This growth is a physical effect for finite perturbations. However, if we set initial mean velocities to be zero the physical velocities should remain zero, but in simulations they can grow because of truncation errors. We verified this with the finite volume code athena. The truncation errors seeded the mean velocities and they grew exponentially, bringing the simulation to a halt in just a few shear times ($1/(q\Omega)$). We therefore chose to use the publicly available incompressible pseudospectral code snoopy, which has the important property that the box averaged mean velocities do not grow throughout the duration of the simulation. This is because the nonlinear terms in the code are of the form $(i\mathbf{k}\cdot\mathbf{v})\mathbf{v}$, and do not contribute when $\mathbf{k} = 0$. Linear terms can only contribute to the $\mathbf{k} = 0$ mode evolution if the initial value for the fields at $\mathbf{k} = 0$ is not set to zero, but we started all of our simulations without perturbations in this mode.
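The behaviour of the volume-averaged velocities described above can be checked directly by integrating eqs. (10)-(11). The following Python sketch (a crude forward-Euler illustration with an arbitrary seed perturbation, not part of the paper's pipeline) shows oscillation for q < 2 and exponential growth for q > 2.

```python
def evolve_means(q, vx0=1e-8, vy0=0.0, Omega=1.0, dt=1e-3, t_end=10.0):
    """Forward-Euler integration of d<vx>/dt = 2 Omega <vy>, d<vy>/dt = (q-2) Omega <vx>."""
    vx, vy = vx0, vy0
    for _ in range(int(t_end / dt)):
        vx, vy = vx + dt * 2.0 * Omega * vy, vy + dt * (q - 2.0) * Omega * vx
    return vx, vy

for q in (1.5, 2.1, 4.2):
    vx, vy = evolve_means(q)
    # q = 1.5 stays near the seed amplitude (oscillation at the epicyclic frequency);
    # q = 2.1 and 4.2 grow roughly as exp(sqrt(2 Omega^2 (q-2)) t).
    print(f"q={q}: <vx>={vx:.3e}, <vy>={vy:.3e} after t = 10/Omega (seed 1e-8)")
```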
NUMERICAL RESULTS
Setup
Using snoopy, we solve the incompressible hydrodynamic and MHD equations in the shearing box approximation. We solve the equations where the background shear has been subtracted out. snoopy utilizes the 2/3 antialiasing rule (Canuto et al. 2006). Shear periodic boundaries are remapped every $t_{\rm remap} = L_y/(q\Omega L_x)$ (Umurhan & Regev 2004). We define the Reynolds and magnetic Reynolds numbers ${\rm Re} = L_z^2 q\Omega/\nu$, ${\rm Rm} = L_z^2 q\Omega/\eta$, respectively, where $L_z = \Omega = 1$ in code units. We fixed Re = Rm = 1600 for most of our runs.
We use large scale noise as initial perturbations (with zero mean) and set the net initial vertical field $B_0 = 0.025$ in code units, which corresponds to an initial plasma beta $\beta = L_z^2\Omega^2/(B_0^2/2) = 3200$. The magnetic field is calculated in Alfven speed units. For all of our runs, we use the domain size $L_x = L_y = L_z = 1$ with a resolution of $64^3$. Table 1 provides a summary of our runs.
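The dimensionless numbers quoted above follow directly from their definitions in code units; the short sketch below just makes that arithmetic explicit (the value of nu is backed out from the quoted Re = 1600 for the q = 1.5 run, purely for illustration).

```python
# Code units: L_z = Omega = 1.
L_z, Omega, q, B0 = 1.0, 1.0, 1.5, 0.025

Re_target = 1600.0
nu = L_z**2 * q * Omega / Re_target      # from Re = L_z^2 q Omega / nu
eta = nu                                  # Pr_M = Rm/Re = 1 for most runs

Re = L_z**2 * q * Omega / nu
Rm = L_z**2 * q * Omega / eta
beta = L_z**2 * Omega**2 / (B0**2 / 2.0)  # initial plasma beta

print(Re, Rm, beta)                       # -> 1600.0 1600.0 3200.0
```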
Hydrodynamic shear flow stability
As discussed in the previous section, the q < 2 regime is stable in hydrodynamics (see also Tillmark & Alfredsson (1992), Bech & Andersson (1997), Brethouwer (2005) for earlier work). We checked this by simulating the Keplerian q = 1.5 regime as well as two different values of shear in the Rayleigh unstable regime q = 2.1, 4.2. We plot the time history of the kinetic energy and the Reynolds stresses in Fig. 1. As predicted by the standard modal analysis, the Keplerian flow is stable and its fluctuations exponentially decay to zero whereas the two Rayleigh unstable runs reach a saturated turbulent state in just a few shear times.
MHD shear flow stability
For MHD the regime 0 < q < 2 is unstable to the MRI. In Nauman & Blackman (2015), we focused on the dependence on q for q < 2 and found that the results were consistent with the linear calculations of Pessah et al. (2006) and the empirical results of Abramowicz et al. (1996). In contrast, the q > 2 case is stable to the MRI so a comparison of saturated states of the two regimes is instructive.
One common feature visible from Figs. 1, 2 and 3 is that the case of largest shear (blue line, q = 4.2) has the largest growth rate in both magnetic and kinetic energies. The trend of increased growth rate with shear is also a property of the q < 2 (κ > 0) MRI regime (Nauman & Blackman 2015). However, the important difference to note both in Fig. 2 and 3 is that the growth rate of the kinetic energy (Reynolds stress) is greater than that of magnetic energy (Maxwell stress) in the q > 2 regime. To further explore the difference between kinetic and magnetic energy in the q > 2 regime, we increased Re and Rm to 6400 and 12800 (at Pr M = Rm/Re = 1) for q = 4.2 and observed that the ratio of kinetic energy to magnetic energy in the saturated state decreased to nearly 2.7 for Re = Rm = 12800 compared to ∼ 5.0 for the Re = Rm = 1600 and 6400 cases. An extensive study of Re, Rm dependence is beyond the scope of the current paper. For Keplerian flow, the turbulent stresses also depend on dissipation coefficients (see for example, Riols et al. (2015)).
As reviewed in section 2.2 above, linear theory suggests that we can break the dispersion relation into two different types of modes (Shakura & Postnov 2015): Rayleigh and Velikhov-Chandrasekhar (VC). For q > 2, the VC mode is stable at all wave numbers. Our results show that for q < 2, the magnetic energy leads the kinetic energy while for q > 2 the kinetic energy leads the magnetic energy. This result is reminiscent of isotropically forced box simulations of MHD turbulence in the following sense. In such simulations, the turbulent driver is imposed by hand as a forcing function. Normally the forcing is in the Navier-Stokes equation, but it can also be imposed in the induction equation. When the forcing is imposed in the Navier-Stokes equation the saturated state reveals that the kinetic energy dominates the magnetic energy at the forcing scale and below. In contrast,
Table 1. The first three runs are MRI runs whereas the last two are the purely hydrodynamic runs. We do not list the Keplerian hydrodynamic run here as it did not become turbulent. All the quantities are time averaged from 1000(1/Ω) to 2000(1/Ω) (time averaging is denoted by an overline) for all of the runs and volume averaged (represented by angled brackets) over the whole box. The stresses $\langle v_x v_y\rangle$ and $\langle b_x b_y\rangle$ are normalized by $L_z^2\Omega^2$, which equals unity according to our definitions. The fifth column represents the ratio of the Reynolds stress to the square of the azimuthal velocity $\alpha_{\rm kin,y} \equiv \langle v_x v_y\rangle/\langle v_y^2\rangle$, while the last column shows this ratio corresponding to the magnetic field $\alpha_{\rm mag,y} \equiv -\langle b_x b_y\rangle/\langle b_y^2\rangle$. It appears that $\alpha_{\rm kin,y}$ is a sensitive function of the shear parameter while $\alpha_{\rm mag,y}$ is roughly constant.

Run | Shear | $\langle v_x v_y\rangle$ | $-\langle b_x b_y\rangle$ | $\alpha_{\rm kin,y}$ | $\alpha_{\rm mag,y}$
mhd15 | 1.5 | 0.4837 ± 0
when the forcing is in the induction equation, the magnetic energy dominates the kinetic energy at these scales (Park & Blackman 2012). These circumstances reflect the fact that the transfer of energy from the quantity that is driven (v or b) is not 100% efficient to the response quantity (b and v, respectively). Interpreted in this way, the results from our simulations suggest that the for the q > 2 regime, the Rayleigh mode acts more like an an effective "driving" in the Navier Stokes equation, whereas for the q < 2 regime, the VC mode perhaps leads to a kind of "effective" forcing in the induction equation. This physical distinction may be useful in the path toward constructing analytic theoretical approaches and is consistent with toy models in the MRI context that invoke forcing in the induction equation (e.g. Squire & Bhattacharjee (2015)). More work is needed to assess this rigorously.
Finally, we note that boxes that are sufficiently large in the direction normal to the shear (L y , L z ≫ L x ) can lead to qualitatively different regime of 'spatiotemporal chaos' (Pomeau (1986), Philip & Manneville (2011)). For q < 2 MHD shearing box simulations with L z ≫ L x , Shi et al. (2016) showed that coherent structures appear in the magnetic field while more recently Nauman & Pessah (2016) have shown that both velocity and magnetic fields develop coherent structures. The boxes used in the present study have L x = L y = L z = 1, so the extent to which a similar role of large boxes might also apply to the Rayleigh unstable regime should be investigated in future work.
Correlation in space (x-y plane)
Studying the physical effect of shear on the flow is aided by computing the autocorrelation function (ACF) of the velocity and magnetic fields in the x-y plane. This autocorrelation provides a dimensionless measure of the length or time scale over which the velocity (or magnetic field) maintains a value similar to itself and thus provides a measure of the locality of interactions in a turbulent flow. For random functions, the ACF decays exponentially. A plot of the spatial ACF in the x-y plane characterizes the spatial anisotropy of the velocity and magnetic field fluctuations.
$${\rm ACF}(\mathbf{b}(\delta\mathbf{x})) = \frac{\sum_i \int b_i(\mathbf{x}+\delta\mathbf{x}, t)\,b_i(\mathbf{x}, t)\,d^3x}{\int b^2(\mathbf{x}, t)\,d^3x}, \quad (12)$$
where $b^2 = b_x^2 + b_y^2 + b_z^2$.
Note that ACF(b) is normalized to its maximum value at zero displacement ($\delta x = \delta y = \delta z = 0$). Like Guan et al. (2009), we subtract off volume averaged mean quantities ($\mathbf{b} = \mathbf{b}_{\rm total} - \langle\mathbf{b}\rangle$). The overline represents the time averaging over $\sim 1000(1/\Omega)$ time units of the saturated state. We use the analogous definition for the autocorrelation of velocity fields ACF(v(δx)). Fig. 4 shows the ACF(v(δx)) and ACF(b(δx)) of the three shear values we study in this paper, q = 1.5, 2.1, 4.2, for both the hydrodynamic and the magnetohydrodynamic runs. In contrast to previous work on the MRI (e.g., Guan et al. (2009), Simon et al. (2012), Nauman & Blackman (2015)), the tilt angle observed in plots of ACF(b(δx)) with respect to the y-axis is not constant with respect to variations in q. In addition, the hydrodynamic velocity ACF in fig. 4 for both q = 2.1, 4.2 is more localized compared to the MHD counterparts at these same q. Comparing the MHD ACF plots, the q = 2.1 and 4.2 MHD runs show a very localized magnetic field compared to the q = 1.5 run.
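For concreteness, the following minimal sketch computes a normalized spatial autocorrelation of this type with FFTs on a periodic grid, subtracting the volume-averaged mean first; the random test field and the grid size are placeholders, not simulation output.

```python
import numpy as np

def spatial_acf(field_components):
    """Normalized spatial ACF of a vector field on a periodic grid, as in eq. (12)."""
    acf = np.zeros_like(field_components[0])
    norm = 0.0
    for b in field_components:
        db = b - b.mean()                                   # remove volume-averaged mean
        fb = np.fft.fftn(db)
        acf += np.real(np.fft.ifftn(fb * np.conj(fb)))      # periodic correlation
        norm += np.sum(db * db)
    return acf / norm

# Toy example: random 32^3 field (illustration only).
rng = np.random.default_rng(0)
b = [rng.standard_normal((32, 32, 32)) for _ in range(3)]
acf = spatial_acf(b)
print(acf[0, 0, 0])   # = 1 at zero displacement by construction
```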
The tilt angles for the q < 2 cases previously studied were successfully modeled using an analysis of shear on fluctuations which assumed linear terms dominated nonlinear terms in the Navier-Stokes equation. Given that the q > 2 cases studied here do not show the same simple monotonic dependencies, we are led to investigate how the ratio of nonlinear to linear terms in the MHD equations varies as a function of q. In the next section, we will show the non-linear terms in the Navier-Stokes equation do indeed dominate the linear terms for the q > 2 case when compared to the q < 2 MRI unstable cases of previous work. This is a step toward identifying the source of the more subtle dependence of tilt and localization on q in the q > 2 regime, even though the exact dependence cannot yet be predicted analytically.
Shear dependence of stress and energy:
nonlinearities are more influential for q > 2 than q < 2
Here we provide three lines of evidence consistent with nonlinear terms being more influential than linear terms when it comes to understanding the behavior of stress and energy in saturation as a function of q for the q > 2 regime compared to the q < 2 regime. This is why it is more difficult to explain the q trends of tilt angle and localization for the q > 2 regime than the q < 2 regime.
Navier Stokes equation: Explicit comparison of nonlinear vs. linear terms for different q regimes
To investigate the effect of shear on the turbulent properties of the flow, we study the time history of the energies and stresses at early times before the flow reaches nonlinear saturation. We focus on the 'x' and 'y' velocity equations here:
$$\partial_t v_x = 2\Omega v_y + B_0\partial_z b_x + \nu\nabla^2 v_x + \mathbf{b}\cdot\nabla b_x - \mathbf{v}\cdot\nabla v_x \quad (13)$$
$$\partial_t v_y = (q-2)\Omega v_x + B_0\partial_z b_y + \nu\nabla^2 v_y + \mathbf{b}\cdot\nabla b_y - \mathbf{v}\cdot\nabla v_y. \quad (14)$$
The last two terms represent non-linear terms in both equations. For q = 2, eq. 14 has no source term in the linear regime and is similar to the (non-rotating) plane Couette flow but with v x taking the role of shear velocity. In contrast, for q = 4 the source terms in eqs. 13 and 14 are both proportional to 2Ω. The q = 4 case results in apparent isotropy in the two components for the linear regime.
We plot the evolution of the different linear terms in the two equations and compare them with the rms value of the non-linear terms $\mathbf{v}\cdot\nabla\mathbf{v}$ and $\mathbf{b}\cdot\nabla\mathbf{b}$ at early times (the first $20\,\Omega^{-1}$) in figs. 5, 6 and 7.
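As a schematic of how such a comparison can be made from snapshot data, the sketch below estimates the rms of representative linear and nonlinear terms of eq. (13) on a grid; the random stand-in fields, the finite-difference gradient (non-periodic at the box edges) and the parameter values are all illustrative assumptions rather than the actual analysis pipeline.

```python
import numpy as np

def rms(a):
    return np.sqrt(np.mean(a**2))

def advective_term(vec, scalar):
    """(vec . grad) scalar on a grid with unit spacing, e.g. v.grad(v_x)."""
    grads = np.gradient(scalar)               # central differences along each axis
    return sum(c * g for c, g in zip(vec, grads))

# Stand-in snapshot fields (random, small amplitude, illustration only).
rng = np.random.default_rng(1)
shape = (32, 32, 32)
v = [1e-2 * rng.standard_normal(shape) for _ in range(3)]
b = [1e-2 * rng.standard_normal(shape) for _ in range(3)]
Omega, B0 = 1.0, 0.025

linear_rotation = rms(2.0 * Omega * v[1])                 # 2 Omega v_y
linear_tension  = rms(B0 * np.gradient(b[0], axis=2))     # B_0 d_z b_x
nonlinear_v     = rms(advective_term(v, v[0]))            # v . grad v_x
nonlinear_b     = rms(advective_term(b, b[0]))            # b . grad b_x
print(linear_rotation, linear_tension, nonlinear_v, nonlinear_b)
```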
For q = 1.5, the $2\Omega v_y$ term in eq. 13 is comparable to the non-linear terms (top left panel of fig. 5), suggesting that for q < 2 the linear effects are very influential even as the saturated state is approached. This is assessed visually by noting that the red dashed curve overshoots the magnetic curve at most in the last few time steps of this plot. The linear term due to magnetic tension, $B_0\partial_z b_x$ (see footnote 3), is nearly an order of magnitude weaker than the other terms in this plot. For q > 2, the top panels of figs. 6 and 7 show that the corresponding linear terms are nearly an order of magnitude weaker than the nonlinear terms in eq. 13. Note here that the red dashed curve dominates over a longer range of time compared to the q = 1.5 case. Since the non-linear effects are dominating the linear velocity and magnetic field terms in this regime, the flow in this regime is expected to be more random with a smaller correlation length, consistent with fig. 4. Note also that for q > 2 (particularly in the q = 4.2 plot) the non-linear magnetic terms $\mathbf{b}\cdot\nabla b_i$ (where i = x or y) are considerably weaker than the corresponding non-linear velocity term $\mathbf{v}\cdot\nabla v_i$ (red dashed), suggesting that magnetic effects are subdominant in both the linear and non-linear regimes for the q > 2 regime (eq. 13).
Analogously, comparing the linear vs nonlinear terms of eq. 14 for q = 1.5 vs q > 2, we find that in this case the nonlinear terms dominate the linear terms in both regimes, but that the red dashed curves of the bottom panels of figs. 6 and 7 are more dominant over a longer time range than in the bottom panel of fig. 5.
Induction equation: Explicit comparison of nonlinear vs. linear terms for different q regimes
The induction equation has the form:
$$\partial_t b_x = B_0\partial_z v_x + \eta\nabla^2 b_x + \mathbf{b}\cdot\nabla v_x - \mathbf{v}\cdot\nabla b_x \quad (15)$$
$$\partial_t b_y = -q\Omega b_x + B_0\partial_z v_y + \eta\nabla^2 b_y + \mathbf{b}\cdot\nabla v_y - \mathbf{v}\cdot\nabla b_y. \quad (16)$$
The first two terms in the b x equation (eq. 15) and the first three terms in the b y equation (eq. 16) are linear. The terms of the form v · ∇b and b · ∇v are nonlinear because the velocity fields depend on the magnetic fields through the Navier Stokes equation (eqs. 13 and 14). When the magnetic fields are weak b 2 ≪ v 2 , then these terms could be considered approximately linear. However, for all of the shear values considered in this paper, the magnetic and kinetic energy are comparable right from the beginning of the simulations so it appears that the last two terms in both eqs. 15 and 16 are nonlinear.
The bottom panels in figures 8, 9 and 10 show that the generation of the azimuthal field $b_y$ due to the shearing of the radial field $b_x$ is very significant in the first few rotation times $\Omega^{-1}$ but is nearly an order of magnitude weaker than the $\mathbf{v}\cdot\nabla b_y$ term in the saturated regime. The other nonlinear term $\mathbf{b}\cdot\nabla v_y$ is slightly larger in magnitude for q = 2.1 and q = 4.2 compared to the $q\Omega b_x$ term in the saturation regime, but the two terms are nearly equal for q = 1.5. This suggests that stretching is more important for field growth in the q > 2 regime than in the q < 2 regime.
Dependence of stresses and correlation time on q
To evaluate how $\alpha_{\rm kin,y}$ ($\equiv \langle v_x v_y\rangle/\langle v_y^2\rangle$) and $\alpha_{\rm mag,y}$ ($\equiv -\langle b_x b_y\rangle/\langle b_y^2\rangle$) vary with shear, we use the autocorrelation function in time to obtain the correlation time. Following our earlier work (Nauman & Blackman 2015):
$${\rm ACF}(v_y(\delta t)) = \frac{\langle\int v_y(\mathbf{x}, t+\delta t)\,v_y(\mathbf{x}, t)\,dt\rangle}{\langle\int v_y^2(\mathbf{x}, t)\,dt\rangle} \quad (17)$$
where the angle brackets represent volume averaging over all space. Time integration is done over several orbits in the turbulent saturated state. Similarly we can calculate the cross correlation in time of $b_x$ with $b_y$ and of $v_x$ with $v_y$. For example, for the velocities we have:
$${\rm CCF}(v_x v_y(\delta t)) = \frac{\langle\int v_x(\mathbf{x}, t+\delta t)\,v_y(\mathbf{x}, t)\,dt\rangle}{\sqrt{\langle\int v_x^2(\mathbf{x}, t)\,dt\rangle\,\langle\int v_y^2(\mathbf{x}, t)\,dt\rangle}}. \quad (18)$$
The correlation times, computed from exponential fits to the plots of the ACF or CCF as in Fig. 11, tell us the characteristic time scale over which turbulent quantities such as the velocity are correlated with themselves or with other quantities. From MRI simulations with q < 2, Nauman & Blackman (2015) found that the correlation time τ between x and y components of the field was roughly inversely proportional to the shear. There the stress ACF was calculated instead of the CCF, but we checked that the CCF exhibits a similar 1/q behavior in the q < 2 regime.
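A correlation time of this kind can be extracted with a one-parameter exponential fit; the sketch below does this on synthetic data (the functional form exp(-dt/tau) and the test numbers are assumptions for illustration).

```python
import numpy as np
from scipy.optimize import curve_fit

def correlation_time(lags, corr):
    """Correlation time tau from an exponential fit corr(dt) ~ exp(-dt/tau)."""
    def model(dt, tau):
        return np.exp(-dt / tau)
    (tau,), _ = curve_fit(model, lags, corr, p0=[1.0])
    return tau

# Synthetic correlation function with tau = 0.5 plus a little noise.
lags = np.linspace(0.0, 5.0, 100)
corr = np.exp(-lags / 0.5) + 0.01 * np.random.default_rng(2).standard_normal(lags.size)
print(correlation_time(lags, corr))   # ~0.5
```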
The importance of the correlation time is that when linear stretching in the induction equation can be used to estimate the amplification of azimuthal fluctuations from radial fluctuations, the azimuthal field is amplified by shear during a correlation time with dominant term
$$\alpha_{\rm mag,y} = -\langle b_x b_y\rangle/\langle b_y^2\rangle \sim |q\Omega|\,\tau. \quad (19)$$
If $\tau \sim 1/(q\Omega)$, $\alpha_{\rm mag,y}$ is roughly constant with shear. Indeed, for the q < 2 case this was confirmed by the simulations. Moreover, the correlation times for the three quantities $v_y$, $b_x$ and $b_x b_y$ were very similar (see figure 13 of Nauman & Blackman (2015)). Using a similar argument for velocity as in eq. 19, we would get
$$\alpha_{\rm kin,y} = \langle v_x v_y\rangle/\langle v_y^2\rangle \sim \left(|(q-2)\Omega|\,\tau\right)^{-1}. \quad (20)$$
We now assess whether the above two equations, which are rooted in linear analysis, are equally effective at explaining the trends found in the q > 2 cases. We focus on the CCF (which is more relevant than the ACF) for stresses. We find that $\alpha_{\rm mag,y}$ is nearly constant, just like in the q < 2 MRI regime (see table 1), owing to the 1/q dependence of the correlation time for CCF($b_x b_y(\delta t)$) (fig. 11). However, $\alpha_{\rm kin,y}$ decreases both for the HD and MHD runs, unlike the q < 2 cases (see footnote 4). Eq. (20) would require that for $\alpha_{\rm kin,y}$ to decrease, τ has to go down faster than $|q-2|^{-1}$. To check this, we plot the ACF of $v_y$ in fig. 12, which shows a slight increase with shear. Then τ would be predicted to decrease by a factor of more than 22 as q varies from q = 2.1 to q = 4.2 if Eq. (20) were the whole story. But the CCF of $v_x v_y$ in fig. 13 shows only a factor of 3 (for MHD) to 4 (for HD) decrease with shear (see fig. 14). The likely explanation for this discrepancy is that Eq. (20) does not capture the effect of nonlinear terms. Indeed the comparison of linear and non-linear terms in figs. 6, 7 shows that non-linear terms are generally more important than the linear terms for the q > 2 regime.

Footnote 4: Fig. 7 of Nauman & Blackman (2015) shows that $\langle v_x v_y\rangle/\langle v^2\rangle$ increases with shear. We did not plot the CCF($v_x v_y(\delta t)$) in that paper, but we checked that the correlation time for the Reynolds stress also varies as 1/q, which in a linear picture would explain the increase in the ratio of Reynolds stress to the kinetic energy as eq. 20 suggests with the assumption $\langle v^2\rangle \sim \langle v_y^2\rangle$.

Figure 11. The CCF($b_x b_y(\delta t)$) as defined in eq. 18 but only for MHD runs. The x-axis is in units of 1/Ω. The colour scheme is as follows: mhd15 (red), mhd21 (green), mhd42 (blue).

Figure 14. Correlation time calculated from an exponential fit to the MHD simulation plots in figs. 12, 13, 11. The y-axis is in units of 1/Ω.
Tilt angle dependence on q
The tilt angle in ACF(b(δx)) (fig. 4) has been directly connected to the ratio of Maxwell stress to magnetic energy, $-\langle b_x b_y\rangle/\langle b^2\rangle$, in previous work on the MRI (e.g. Nauman & Blackman (2015)). Here we modify the definition to compare the stress to just the y-component of the magnetic field squared, $\alpha_{\rm mag,y} = -\langle b_x b_y\rangle/\langle b_y^2\rangle = \tan\theta_{\rm tilt}$. For our Rayleigh unstable simulations, the tilt angle observed from the ACF(b(δx)) and the definition based on $\alpha_{\rm mag,y}$ (see footnote 5) disagree, in contrast to the MRI q < 2 cases. For example, for q = 2.1 we find $\alpha_{\rm mag,y} = 0.2334$, which is equivalent to $\theta_{\rm tilt} \sim 13.14^\circ$, whereas for q = 4.2 we find $\alpha_{\rm mag,y} = 0.1712$, which is equivalent to $\theta_{\rm tilt} \sim 9.71^\circ$ (fig. 4). A visual inspection of fig. 4 shows that the q = 4.2 tilt angle is nearly $45^\circ$.
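The quoted angles follow directly from the relation $\tan\theta_{\rm tilt} = \alpha_{\rm mag,y}$; a two-line check (illustration only):

```python
import math

for alpha in (0.2334, 0.1712):
    print(math.degrees(math.atan(alpha)))   # ~13.1 and ~9.7 degrees
```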
From our discussion in sections 3.5.1 and 3.5.2, we are led again to the conclusion that this is further evidence for the more dominant role of nonlinear terms in the Rayleigh unstable regime compared to the MRI unstable q < 2 regime. This demonstrates the inadequacy of linear arguments to explain the correlation between $b_x$ and $b_y$.
At present we do not have a non-linear model to explain the observed behavior in either the spatial correlation (fig. 4) or the temporal correlation (fig. 14), but the identification that the nonlinear terms are essential is a step toward such a model. The importance of these nonlinear terms presents a challenge for analytic explanations.
CONCLUSIONS
We have compared the turbulent saturation properties of Rayleigh unstable MHD shear flows with those of the more commonly studied MRI unstable but Rayleigh stable regime. Our results are summarized below:
(i) The Rayleigh unstable regime (q > 2) generates turbulent velocity flows with or without magnetic fields. In the presence of magnetic fields, the fluid turbulence drives dynamo amplification of the total magnetic energy.
(ii) In this q > 2 regime, we find that magnetic energy and Maxwell stresses saturate at lower values than the kinetic energy fluctuations and associated Reynolds stresses. In this regime therefore, the magnetic field is "slaved" to the flow turbulence. This contrasts the MRI unstable regime (q < 2) in which the magnetic fluctuations and magnetic stresses dominate the kinetic energy fluctuations and stresses.
(iii) The quantity $\alpha_{\rm mag,y}$ remains roughly constant in the q > 2 regime, which is the same as for the q < 2 regime (Nauman & Blackman (2015)). The tilt angle of ACF(b(δx)) with respect to the y-axis, on the other hand, is not constant as q changes. This contrasts the behavior in the MRI regime where the tilt angle is constant with changing q.
(iv) We found that the magnetic structures of the flow become more localized as we increase the shear from q = 1.5 to 4.2.
Our work on MHD turbulence in the Rayleigh unstable regime has shown qualitative differences in the way quantities scale with q compared to the more well studied MRI unstable regime. While the dependencies on q for the MRI regime seem to be captured by analytic explanations that invoke linear analysis, the same linear estimates do not work for the q > 2 cases. We have traced the source of these differences to the stronger influence of non-linear effects in the Rayleigh unstable regime. A physical and analytic understanding of these differences requires non-linear modeling of MHD shear turbulence in the two regimes, which is a good opportunity for work beyond the present scope.
Figure 1. Time history plot of kinetic energy (solid) and Reynolds stress (dotted) for hyd15 (q = 1.5, red), hyd21 (q = 2.1, green) and hyd42 (q = 4.2, blue). The y-axis is in log scale and the x-axis is time in units of 1/Ω.

Figure 2. Time history plot of kinetic (solid) and magnetic energies (dotted) for mhd15 (q = 1.5, red), mhd21 (q = 2.1, green) and mhd42 (q = 4.2, blue).

Figure 3. Same as Fig. 2 but for Reynolds (solid) and Maxwell stresses (dotted).
Following the convention used by Guan et al. (2009) and Simon et al. (2012), we define the spatial ACF of the magnetic field component 'i' (i = x, y, or z) as in eq. (12).
Figure 4. Contour plots of the autocorrelation of velocity and magnetic fields for different runs.

Figure 5. The comparison of different linear and non-linear terms in eq. 13 (top panel) and 14 (bottom panel) for the first 50 Ω−1 times, with q = 1.5.
Figure 6. The comparison of different linear and non-linear terms in eq. 13 (top panel) and 14 (bottom panel) for the first 50 Ω−1 times, with q = 2.1.
Figure 7. The comparison of different linear and non-linear terms in eq. 13 (top panel) and 14 (bottom panel) for the first 50 Ω−1 times, with q = 4.2.

Figure 8. The comparison of different linear and non-linear terms in eq. 15 (top panel) and 16 (bottom panel) for the first 50 Ω−1 times, with q = 1.5.

Figure 9. The comparison of different linear and non-linear terms in eq. 15 (top panel) and 16 (bottom panel) for the first 50 Ω−1 times, with q = 2.1.

Figure 10. The comparison of different linear and non-linear terms in eq. 15 (top panel) and 16 (bottom panel) for the first 50 Ω−1 times, with q = 4.

Figure 12. The ACF(v_y(δt)) as defined in eq. 17. The colour scheme is as follows: mhd15 (red), mhd21 (green), mhd42 (blue), hyd21 (magenta), hyd42 (black).

Figure 13. The CCF(v_x v_y(δt)) as defined in eq. 18. Colour scheme same as fig. 12.
1 athena: https://trac.princeton.edu/Athena/
2 snoopy: http://ipag.osug.fr/~lesurg/snoopy.html
Footnote 3: For an initially zero net flux case, such a term would be absent in the linear limit. We did carry out zero net flux simulations for $B_{z,\rm ini} = B_0\sin(k_x x)$ for all three shear values at Re = Rm = 1600 and found that only the q = 4.2 run shows growth and sustenance of kinetic and magnetic energy, while for the other two runs both kinetic and magnetic energy decay.
Footnote 5: We thank the referee for pointing this out.
ACKNOWLEDGMENTS
We thank G. Lesur for discussions about the snoopy code. FN acknowledges Horton Fellowship from the Laboratory for Laser Energetics at U. Rochester. We acknowledge support from NSF grant AST-1109285. EB acknowledges support from the Simons Foundation and the IBM-Einstein Fellowship fund while at IAS, and grants HST-AR-13916.002 and NSF-AST1515648. We acknowledge the Center for Integrated Research Computing at the University of Rochester for providing computational resources.
Abramowicz M., Jaroszynski M., Sikora M., 1978, A&A, 63, 221
Abramowicz M., Brandenburg A., Lasota J.-P., 1996, MNRAS, 281, L21
Balbus S. A., 2012, MNRAS, 423, L50
Balbus S. A., Hawley J. F., 1991, ApJ, 376, 214
Balbus S. A., Hawley J. F., Stone J. M., 1996, ApJ, 467, 76
Bech K. H., Andersson H. I., 1997, Journal of Fluid Mechanics, 347, 289
Brethouwer G., 2005, Journal of Fluid Mechanics, 542, 305
Canuto C., Hussaini M., Quarteroni A., Zang T., 2006, Spectral methods: Fundamentals in single domains, 1 edn. Springer-Verlag Berlin Heidelberg
Chandrasekhar S., 1960, Proceedings of the National Academy of Science, 46, 253
Corsini E. M., 2014, in Iodice E., Corsini E. M., eds, Astronomical Society of the Pacific Conference Series Vol. 486, Multi-Spin Galaxies. p. 51 (arXiv:1403.1263)
Dyda S., Lovelace R. V. E., Ustyugova G. V., Romanova M. M., Koldoba A. V., 2015, MNRAS, 446, 613
Gammie C. F., 2004, ApJ, 614, 309
Gardiner T. A., Stone J. M., 2005, Journal of Computational Physics, 205, 509
Goldreich P., Lynden-Bell D., 1965, MNRAS, 130, 97
Guan X., Gammie C. F., Simon J. B., Johnson B. M., 2009, ApJ, 694, 1010
Hawley J. F., Gammie C. F., Balbus S. A., 1995, ApJ, 440, 742
Kato S., Fukue J., Mineshige S., eds, 1998, Black-hole accretion disks. Kyoto University Press (Kyoto, Japan)
Lesur G., Longaretti P.-Y., 2005, A&A, 444, 25
Lesur G., Longaretti P.-Y., 2011, A&A, 528, A17
Nauman F., Blackman E. G., 2015, MNRAS, 446, 2102
Nauman F., Blackman E. G., 2016, MNRAS, 457, 902
Nauman F., Pessah M. E., 2016, preprint (arXiv:1609.08543)
Park K., Blackman E. G., 2012, MNRAS, 423, 2120
Penna R. F., Sadowski A., Kulkarni A. K., Narayan R., 2013, MNRAS, 428, 2255
Pessah M. E., Chan C.-K., Psaltis D., 2006, MNRAS, 372, 183
Philip J., Manneville P., 2011, Phys. Rev. E, 83, 036308
Pomeau Y., 1986, Physica D Nonlinear Phenomena, 23, 3
Riols A., Rincon F., Cossu C., Lesur G., Ogilvie G. I., Longaretti P.-Y., 2015, A&A, 575, A14
Shakura N., Postnov K., 2015, MNRAS, 448, 3697
Shi J.-M., Stone J. M., Huang C. X., 2016, MNRAS, 456, 2273
Simon J. B., Beckwith K., Armitage P. J., 2012, MNRAS, 422, 2685
Squire J., Bhattacharjee A., 2015, Physical Review Letters, 114, 085002
Stone J. M., Gardiner T. A., 2010, ApJS, 189, 142
Stone J. M., Gardiner T. A., Teuben P., Hawley J. F., Simon J. B., 2008, ApJS, 178, 137
Tillmark N., Alfredsson P. H., 1992, Journal of Fluid Mechanics, 235, 89
Umurhan O. M., Regev O., 2004, A&A, 427, 855
Velikhov E. P., 1959, JETP, 36, 995
|
[] |
[
"Progress Towards the Total Domination Game 3 4 -Conjecture",
"Progress Towards the Total Domination Game 3 4 -Conjecture"
] |
[
"Michael A Henning [email protected] \nDepartment of Pure and Applied Mathematics\nUniversity of Johannesburg Auckland Park\n2006South Africa\n\nDepartment of Mathematics\nFurman University Greenville\nSCUSA\n",
"Douglas F Rall "
] |
[
"Department of Pure and Applied Mathematics\nUniversity of Johannesburg Auckland Park\n2006South Africa",
"Department of Mathematics\nFurman University Greenville\nSCUSA"
] |
[] |
In this paper, we continue the study of the total domination game in graphs introduced in [Graphs Combin. 31(5) (2015), 1453-1462], where the players Dominator and Staller alternately select vertices of G. Each vertex chosen must strictly increase the number of vertices totally dominated, where a vertex totally dominates another vertex if they are neighbors. This process eventually produces a total dominating set S of G in which every vertex is totally dominated by a vertex in S. Dominator wishes to minimize the number of vertices chosen, while Staller wishes to maximize it. The game total domination number, γ tg (G), of G is the number of vertices chosen when Dominator starts the game and both players play optimally. Henning, Klavžar and Rall [Combinatorica, to appear] posted the 3 4 -Game Total Domination Conjecture that states that if G is a graph on n vertices in which every component contains at least three vertices, then γ tg (G) ≤ 3 4 n. In this paper, we prove this conjecture over the class of graphs G that satisfy both the condition that the degree sum of adjacent vertices in G is at least 4 and the condition that no two vertices of degree 1 are at distance 4 apart in G. In particular, we prove that by adopting a greedy strategy, Dominator can complete the total domination game played in a graph with minimum degree at least 2 in at most 3n/4 moves.
| null |
[
"https://arxiv.org/pdf/1512.02916v1.pdf"
] | 119,663,160 |
1512.02916
|
f6521322c614ae0788a149db411fbcfbfeae43ca
|
Progress Towards the Total Domination Game 3 4 -Conjecture
9 Dec 2015
Michael A Henning [email protected]
Department of Pure and Applied Mathematics
University of Johannesburg Auckland Park
2006South Africa
Department of Mathematics
Furman University Greenville
SCUSA
Douglas F Rall
Progress Towards the Total Domination Game 3 4 -Conjecture
9 Dec 2015. Keywords: Total domination game; Game total domination number; 3/4-Conjecture. AMS subject classification: 05C65, 05C69
In this paper, we continue the study of the total domination game in graphs introduced in [Graphs Combin. 31(5) (2015), 1453-1462], where the players Dominator and Staller alternately select vertices of G. Each vertex chosen must strictly increase the number of vertices totally dominated, where a vertex totally dominates another vertex if they are neighbors. This process eventually produces a total dominating set S of G in which every vertex is totally dominated by a vertex in S. Dominator wishes to minimize the number of vertices chosen, while Staller wishes to maximize it. The game total domination number, γ tg (G), of G is the number of vertices chosen when Dominator starts the game and both players play optimally. Henning, Klavžar and Rall [Combinatorica, to appear] posted the 3 4 -Game Total Domination Conjecture that states that if G is a graph on n vertices in which every component contains at least three vertices, then γ tg (G) ≤ 3 4 n. In this paper, we prove this conjecture over the class of graphs G that satisfy both the condition that the degree sum of adjacent vertices in G is at least 4 and the condition that no two vertices of degree 1 are at distance 4 apart in G. In particular, we prove that by adopting a greedy strategy, Dominator can complete the total domination game played in a graph with minimum degree at least 2 in at most 3n/4 moves.
Introduction
The domination game in graphs was first introduced by Brešar, Klavžar, and Rall [2] and extensively studied afterwards in [1,3,4,5,6,8,9,11,15,17,18] and elsewhere. A vertex dominates itself and its neighbors. A dominating set of G is a set S of vertices of G such that every vertex in G is dominated by a vertex in S. The domination game played on a graph G consists of two players, Dominator and Staller, who take turns choosing a vertex from G. Each vertex chosen must dominate at least one vertex not dominated by the vertices previously chosen. The game ends when the set of vertices chosen becomes a dominating set in G. Dominator wishes to minimize the number of vertices chosen, while Staller wishes to end the game with as many vertices chosen as possible. The game domination number, γ g (G), of G is the number of vertices chosen when Dominator starts the game and both players play optimally.
Much interest in the domination game arose from the 3/5-Game Domination Conjecture posted by Kinnersley, West, and Zamani in [17], which states that if G is an isolate-free forest on n vertices, then γ g (G) ≤ 3 5 n. This conjecture remains open, although to date it is shown to be true for graphs with minimum degree at least 2 (see, [12]), and for isolate-free forests in which no two leaves are at distance 4 apart (see, [6]).
Recently, the total version of the domination game was investigated in [13], where it was demonstrated that these two versions differ significantly. A vertex totally dominates another vertex if they are neighbors. A total dominating set of a graph G is a set S of vertices such that every vertex of G is totally dominated by a vertex in S. The total domination game consists of two players called Dominator and Staller, who take turns choosing a vertex from G. Each vertex chosen must totally dominate at least one vertex not totally dominated by the set of vertices previously chosen. Following the notation of [13], we call such a chosen vertex a legal move or a playable vertex in the total domination game. The game ends when the set of vertices chosen is a total dominating set in G. Dominator's objective is to minimize the number of vertices chosen, while Staller's is to end the game with as many vertices chosen as possible.
The game total domination number, γ_tg(G), of G is the number of vertices chosen when Dominator starts the game and both players employ a strategy that achieves their objective. If Staller starts the game, the resulting number of vertices chosen is the Staller-start game total domination number, γ'_tg(G), of G. A partially total dominated graph is a graph together with a declaration that some vertices are already totally dominated; that is, they need not be totally dominated in the rest of the game. In [13], the authors present a key lemma, named the Total Continuation Principle, which in particular implies that when the game is played on a partially total dominated graph G, the numbers γ_tg(G) and γ'_tg(G) can differ by at most 1. Determining the exact value of γ_tg(G) and γ'_tg(G) is a challenging problem, and is currently known only for paths and cycles [10]. Much attention has therefore focused on obtaining upper bounds on the game total domination number in terms of the order of the graph. The best general upper bound to date on the game total domination number for general graphs is established in [14].

Theorem 1 ([14]) If G is a graph on n vertices in which every component contains at least three vertices, then γ_tg(G) ≤ (4/5)n.
Our focus in the present paper is the following conjecture posted by Henning, Klavžar and Rall [14]: if G is a graph on n vertices in which every component contains at least three vertices, then γ_tg(G) ≤ (3/4)n (the 3/4-Game Total Domination Conjecture). Bujtás, Henning, and Tuza [7] recently proved the 3/4-Conjecture over the class of graphs with minimum degree at least 2. To do this, they raise the problem to a higher level by introducing a transversal game in hypergraphs, and establish a tight upper bound on the game transversal number of a hypergraph with all edges of size at least 2 in terms of its order and size. As an application of this result, they prove that if G is a graph on n vertices with minimum degree at least 2, then γ_tg(G) ≤ (8/11)n, which validates the 3/4-Game Total Domination Conjecture on graphs with minimum degree at least 2.
For notation and graph theory terminology not defined herein, we in general follow [16]. We denote the degree of a vertex v in a graph G by d G (v), or simply by d(v) if the graph G is clear from the context. The minimum degree among the vertices of G is denoted by δ(G). A vertex of degree 1 is called a leaf and its neighbor a support vertex. If X and Y are subsets of vertices in a graph G, then the set X totally dominates the set Y in G if every vertex of Y is adjacent to at least one vertex of X. In particular, if X totally dominates the vertex set of G, then X is a total dominating set in G. For more information on total domination in graphs see the recent book [16]. Since an isolated vertex in a graph cannot be totally dominated by definition, all graphs considered will be without isolated vertices. We also use the standard notation [k] = {1, . . . , k}.
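As a concrete illustration of the central definition, the following small Python sketch tests whether a set of vertices is a total dominating set of a graph given as an adjacency map; the representation and names are illustrative only.

```python
def is_total_dominating_set(adj, S):
    """True if every vertex of the graph has at least one neighbour in S.

    `adj` maps each vertex to the set of its neighbours; `S` is a set of vertices.
    """
    return all(adj[v] & S for v in adj)

# Example: the path P4 on vertices 1-2-3-4.
P4 = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(is_total_dominating_set(P4, {2, 3}))   # True: every vertex has a neighbour in {2, 3}
print(is_total_dominating_set(P4, {1, 4}))   # False: vertex 1 has no neighbour in {1, 4}
```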
Main Result
In this paper we prove the following result. Its proof is given in Section 3.

Theorem 2 The 3/4-Game Total Domination Conjecture is true over the class of graphs G that satisfy both conditions (a) and (b) below:

(a) The degree sum of adjacent vertices in G is at least 4.
(b) No two leaves are at distance 4 apart in G.
As a special case of Theorem 2, the 3/4-Game Total Domination Conjecture is valid on graphs with minimum degree at least 2.

Corollary 1 ([7]) The 3/4-Game Total Domination Conjecture is true over the class of graphs with minimum degree at least 2.
Proof of Main Result
In this section, we give a proof of our main theorem, namely Theorem 2. For this purpose, we adopt the approach of the authors in [14] and color the vertices of a graph with four colors that reflect four different types of vertices. More precisely, at any stage of the game, if D denotes the set of vertices played to date where initially D = ∅, we define as in [14] a colored-graph with respect to the played vertices in the set D as a graph in which every vertex is colored with one of four colors, namely white, green, blue, or red, according to the following rules.
• A vertex is colored white if it is not totally dominated by D and does not belong to D.
• A vertex is colored green if it is not totally dominated by D but belongs to D.
• A vertex is colored blue if it is totally dominated by D but has a neighbor not totally dominated by D. • A vertex is colored red if it and all its neighbors are totally dominated by D.
As remarked in [14], in a partially total dominated graph the only playable vertices are those that have a white or green neighbor since a played vertex must totally dominate at least one new vertex. In particular, no red or green vertex is playable. Further, as observed in [14], once a vertex is colored red it plays no role in the remainder of the game, and edges joining two blue vertices play no role in the game. Therefore, we may assume a partially total dominated graph contains no red vertices and has no edge joining two blue vertices. The resulting graph is called a residual graph. We note that the degree of a white or green vertex in the residual graph remains unchanged from its degree in the original graph.
Where our approach in the current paper differs from that in [14] is twofold. First, we define two new colors in the colored-graph that may possibly be introduced as the game is played. Second, our assignment of weights to vertices of each color differs from the assignment in [14]. Here, we associate a weight with every vertex in the residual graph as follows:
Color of vertex | Weight of vertex
white | 3
green | 2
blue | 1
red | 0

We denote the weight of a vertex v in the residual graph G by w(v). For a subset S ⊆ V(G) of vertices of G, the weight of S is the sum of the weights of the vertices in S, denoted w(S). The weight of G, denoted w(G), is the sum of the weights of the vertices in G; that is, w(G) = w(V(G)). We define the value of a playable vertex as the decrease in weight resulting from playing that vertex.
We say that Dominator can achieve his 4-target if he can play a sequence of moves guaranteeing that on average the weight decrease resulting from each played vertex in the game is at least 4. In order to achieve his 4-target, Dominator must guarantee that a sequence of moves m 1 , . . . , m k are played, starting with his first move m 1 , and with moves alternating between Dominator and Staller such that if w i denotes the decrease in weight after move m i is played, then
$$\sum_{i=1}^{k} w_i \ge 4k, \quad (1)$$
and the game is completed after move m k . In the discussion that follows, we analyse how Dominator can achieve his 4-target. For this purpose, we describe a move that we call a greedy move.
• A greedy move is a move that decreases the weight by as much as possible. We say that Dominator follows a greedy strategy if he plays a greedy move on each turn.
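To make the coloring, weights and greedy strategy concrete, here is a small Python sketch that recolors vertices from the set D of played vertices, computes the value (weight decrease) of a candidate move, and picks a greedy move; it is an illustrative toy, not the machinery used in the proof, and Staller's replies are left unspecified.

```python
WEIGHT = {"white": 3, "green": 2, "blue": 1, "red": 0}

def colors(adj, D):
    """Color each vertex from the set D of played vertices, per the rules above."""
    dominated = {v: bool(adj[v] & D) for v in adj}
    col = {}
    for v in adj:
        if not dominated[v]:
            col[v] = "green" if v in D else "white"
        elif all(dominated[u] for u in adj[v]):
            col[v] = "red"
        else:
            col[v] = "blue"
    return col

def total_weight(adj, D):
    return sum(WEIGHT[c] for c in colors(adj, D).values())

def legal_moves(adj, D):
    """A move is legal if it totally dominates at least one new vertex."""
    return [v for v in adj if any(not (adj[u] & D) for u in adj[v])]

def greedy_move(adj, D):
    """Dominator's greedy move: a legal move of maximum value (weight decrease)."""
    before = total_weight(adj, D)
    return max(legal_moves(adj, D), key=lambda v: before - total_weight(adj, D | {v}))

# Toy example: the 4-cycle C4 (minimum degree 2), no moves played yet.
C4 = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
D = frozenset()
print(greedy_move(C4, D))
# Any vertex works on C4 by symmetry; its value is 1 + 2 + 2 = 5
# (the played vertex goes white -> green, its two neighbours white -> blue),
# consistent with Claim 2.1's lower bound of 3.
```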
We are now in a position to prove our main result, namely Theorem 2. Recall its statement.
Theorem 2. The 3/4-Game Total Domination Conjecture is true over the class of graphs G that satisfy both conditions (a) and (b) below:
(a) The degree sum of adjacent vertices in G is at least 4.
(b) No two leaves are at distance 4 apart in G.
Proof. Let G be a graph that satisfies both conditions (a) and (b) in the statement of the theorem. Coloring the vertices of G with the color white we produce a colored-graph in which every vertex is colored white. In particular, we note that G has n white vertices and has weight w(G) = 3n. Before any move of Dominator, the game is in one of the following two phases.
• Phase 1, if there exists a legal move of value at least 5.
• Phase 2, if every legal move has value at most 4.
We proceed with the following claims.
Claim 2.1 Every legal move in a residual graph decreases the total weight by at least 3.
Proof. Every legal move in a colored-graph is a white vertex with at least one white neighbor or a blue vertex with at least one white or green neighbor. Let v be a legal move in a residual graph. Suppose that v is a white vertex, and so v has at least one white neighbor. When v is played, the vertex v is recolored green while each white neighbor of v is recolored blue, implying that the weight decrease resulting from playing v is at least 3. Suppose that v is a blue vertex, and so each neighbor of v is colored white or green. Playing the vertex v recolors each white neighbor of v blue or red and recolors each green neighbor of v red. The weight of each neighbor of the blue vertex v is therefore decreased by at least 2 when v is played, while the vertex v itself is recolored red and its weight decreases by 1. Hence, the total weight decrease resulting from playing v is at least 3. (✷)
Claim 2.2 Let R be the residual graph. If the game is in Phase 2 and if C is an arbitrary component of R, then one of the following holds.
(a) C ∼ = P 4 , with both leaves colored blue and both internal vertices colored white.
(b) C ∼ = P 3 , with both leaves colored blue and with the central vertex colored green.
(c) C ∼ = P 2 , with one leaf colored blue and the other colored green.
(d) C ∼ = P 2 , with one leaf colored blue and the other colored white.
Proof. Suppose the game is in Phase 2. We show first that every white vertex has at most one white neighbor in the residual graph R. Suppose, to the contrary, that a white vertex v has at least two white neighbors. When v is played the weight decreases by at least 1 + 2 · 2 = 5, since the vertex v is recolored green while each white neighbor of v is recolored blue. This contradicts the fact that every legal move decreases the weight by at most 4.
We show next that every blue vertex has degree 1 in the residual graph R. Suppose, to the contrary, that a blue vertex v has degree at least 2 in R. Playing the vertex v recolors each white neighbor of v blue or red and recolors each green neighbor of v red. Thus, playing the vertex v decreases the weight of each of its neighbors by at least 2. In addition, the vertex v is recolored red, and so its weight decreases by 1. Hence, the weight decrease resulting from playing v is at least 5, a contradiction.
Suppose that R contains a green vertex, v. Each neighbor of v is colored blue, and, by our earlier observations, is therefore a blue leaf. If v is a leaf, then the component containing v is a path isomorphic to P 2 with one leaf colored blue and the other colored green, and therefore satisfies condition (c) in the statement of the claim. Hence, we may assume that the (green) vertex v has at least two neighbors in R. If v has at least three neighbors in R, then since every neighbor of v is a blue leaf, the weight decrease resulting from playing an arbitrary neighbor of v is at least 5, noting that such a move recolors v and all its neighbors red. This produces a contradiction. Therefore, v has exactly two neighbors in R, implying that the component containing v is a path isomorphic to P 3 with both leaves colored blue and with the central vertex, namely v, colored green, and therefore satisfies condition (b) in the statement of the claim. Hence, we may assume that there is no green vertex, for otherwise the desired result holds.
Suppose that there is a white vertex, u, in the residual graph R. Suppose that u has no white neighbor. By our earlier observations, every neighbor of u is a blue leaf. Playing a neighbor of u therefore recolors all the neighbors of u from blue to red. Since the degree of a white vertex in the residual graph remains unchanged from its degree in the original graph, we note in particular that d G (u) = d R (u). If u is not a leaf in G, then playing a neighbor of u decreases the weight by at least 3 + d R (u) ≥ 5, a contradiction. Hence, u is a leaf, and the component containing u is a path isomorphic to P 2 with one leaf colored blue and the other colored white. We may therefore assume that the vertex u has exactly one white neighbor, for otherwise the component containing u satisfies condition (d) in the statement of the claim. Let x be the white neighbor of u. Every neighbor of u different from x is a blue leaf, and every neighbor of x different from u is a blue leaf. Suppose that u or x, say u, is a leaf. Since the degree sum of adjacent vertices in G is at least 4, and d G (u) = d R (u), the vertex x has degree at least 3. Playing the vertex u recolors u from white to green, recolors x from white to blue, and recolors all neighbors of x different from u from blue to red. Hence, playing u decreases the total weight by at least 5, a contradiction. Therefore, neither u nor x is a leaf, implying that both u and x have at least one blue leaf neighbor. Suppose that u or x, say u, has degree at least 3. Playing the vertex x recolors x from white to green, recolors u from white to blue, and recolors each neighbor of u different from x from blue to red, implying that the total weight decrease resulting from playing x is at least 5, a contradiction. Therefore, both u and x have degree 2. Thus, the component containing u and x is a path isomorphic to P 4 with both leaves colored blue and both internal vertices colored white, and therefore satisfies condition (a) in the statement of the claim. This completes the proof of Claim 2.2. (✷)
We note that by Claim 2.2, once the game enters Phase 2 the residual graph is determined and each component satisfies one of the conditions (a)-(d) in the statement of the claim.
Claim 2.3 If the minimum degree in G is at least 2, then Dominator can achieve his 4-target by following a greedy strategy.
Proof. Suppose that δ(G) ≥ 2 and Dominator follows a greedy strategy. Thus, at each stage of the game, Dominator plays a (greedy) move that decreases the weight by as much as possible. By Claim 2.1, every move of Staller's decreases the weight by at least 3. Hence, whenever Dominator plays a vertex that decreases the weight by at least 5, his move, together with Staller's response, decreases the weight by at least 8. Therefore, we may assume that at some stage the game enters Phase 2, for otherwise Inequality (1) is satisfied upon completion of the game and Dominator can achieve his 4-target.
Suppose that the first ℓ moves of Dominator each decrease the weight by at least 5, and that his (ℓ + 1)st move decreases the weight by at most 4. Thus, w(m 2i−1 ) + w(m 2i ) ≥ 8 for i ∈ [ℓ], and w(m 2ℓ+1 ) ≤ 4. Let R denote the residual graph immediately after Staller plays her ℓth move, namely the move m 2ℓ . Thus,
∑_{i=1}^{2ℓ} w_i = ∑_{i=1}^{ℓ} (w(m_{2i−1}) + w(m_{2i})) ≥ 8ℓ = 4 · (2ℓ).
Since δ(G) ≥ 2, we note that R contains no green or white leaf. Hence, by Claim 2.2, every component C of R satisfies C ∼ = P 4 , with both leaves colored blue and both internal vertices colored white, or C ∼ = P 3 , with both leaves colored blue and with the central vertex colored green. If C ∼ = P 4 , then w(V (C)) = 8 and exactly two additional moves are required to totally dominate the vertices V (C), while if C ∼ = P 3 , then w(V (C)) = 4 and exactly one move is played in C to totally dominate the vertices V (C). Suppose that R has t components isomorphic to P 4 and s components isomorphic to P 3 . Thus, 2t + s additional moves are needed to complete the game once it enters Phase 2. Further, these remaining 2t + s moves satisfy ∑_{i=2ℓ+1}^{2ℓ+2t+s} w_i = 4 · (2t + s).
Hence,
∑_{i=1}^{2ℓ+2t+s} w_i = ∑_{i=1}^{2ℓ} w_i + ∑_{i=2ℓ+1}^{2ℓ+2t+s} w_i ≥ 4 · (2ℓ + 2t + s),
and so Inequality (1) is satisfied upon completion of the game. Thus, Dominator can achieve his 4-target simply by following a greedy strategy. This completes the proof of Claim 2.3.
(✷)
We now return to the proof of Theorem 2. By Claim 2.3, we may assume that G contains at least one leaf, for otherwise Dominator can achieve his 4-target (and he can do so by following a greedy strategy).
As the game is played, we introduce a new color, namely purple, which we use to recolor certain white support vertices. A purple vertex will have the same properties of a white vertex, except that the weight of a purple vertex is 4. The idea behind the re-coloring is that the additional weight of 1 assigned to a purple vertex will represent a surplus weight that we can "bank" and withdraw later. To formally define the recoloring procedure, we introduce additional terminology.
Consider a residual graph R that arises during the course of the game. Suppose that uvwx is an induced path in R, where v, w and x are all white vertices, and where w is a support vertex and x a leaf in R. We note that the vertex u is colored white or blue. Such a vertex u turns out to be problematic for Dominator, and we call such a vertex a problematic vertex. Further, we call the path uvwx a problematic path associated with u, and we call w a support vertex associated with u.
Suppose that Staller plays a problematic vertex, u. Suppose that there are exactly k support vertices, say w 1 , . . . , w k , associated with u. We note that k ≥ 1. For i ∈ [k], let uv i w i x i be a problematic path associated with u that contains w i . Thus, v i , w i and x i are all white vertices, w i is a support vertex, and x i a leaf in R. Since no two leaves are at distance 4 apart in G, we note that if k ≥ 2, then v i ≠ v j for 1 ≤ i, j ≤ k and i ≠ j.
Suppose first that k ≥ 2. In this case, playing the problematic vertex, u, decreases the total weight by at least 2k + 1, since u is recolored from blue to red or from white to green, while each neighbor v i , i ∈ [k], of u is recolored from white to blue. Thus, the current value of u is at least 2k + 1. We now discharge the value of u as follows. We discharge a weight of k from the value of u and add a weight of 1 to every support vertex w i , i ∈ [k]. Thus, by playing u the resulting decrease in total weight is the value of u in R minus k, which is at least k + 1 ≥ 3. Further, the weight of each (white) support vertex w i , i ∈ [k], increases from 3 to 4. We now re-color each support vertex w i , i ∈ [k], from white to purple.
Suppose secondly that k = 1 and the value of u is at least 4. In this case, we proceed exactly as before: We discharge a weight of k = 1 from the value of u, add a weight of 1 to the support vertex w 1 , and re-color w 1 from white to purple. Thus, by playing u the resulting decrease in total weight is at least 3.
In both cases, we note that the new weight of w i , i ∈ [k], is 4. Thus, every newly created purple vertex is a support vertex and has weight 4. We define a purple vertex to have the identical properties of a white vertex, except that its weight is 4. Thus, a purple vertex is not totally dominated by the vertices played to date and has not yet been played.
We note that if Staller plays a problematic vertex, u, whose current value is at least 4, then the above discharging argument recolors every support vertex associated with u from white to purple. Further, by playing u the resulting decrease in total weight is at least 3, and the weight of each newly created purple vertex is 4. We state this formally as follows.
Claim 2.4 If Staller plays a problematic vertex whose current value is at least 4, then the resulting decrease in total weight is at least 3.
We note further that if Staller plays a problematic vertex, u, whose current value is exactly 3, the two internal vertices of the problematic path associated with u are unique. In particular, the support vertex associated with u is unique.
We now introduce an additional new color, namely indigo, which we use to recolor certain purple vertices. An indigo vertex will have the same properties of a blue vertex, except that the weight of an indigo vertex is 2 (while the weight of a blue vertex is 1). The idea behind the re-coloring is that the additional weight of 1 assigned to an indigo vertex will represent a surplus weight that, as before, we can "bank" and withdraw later.
More formally, suppose that a white leaf, say z, adjacent to a purple vertex, say x, is played. When the leaf z is played, it changes color from white to green and its support neighbor, x, changes color from purple to blue (noting that a purple vertex has the same properties as a white vertex). Thus, when the leaf z is played, the weight of z decreases by 1 and the weight of x decreases by 3, implying that the value of z is at least 4. However, when z is played we discharge a weight of 1 from the value of z and add a weight of 1 to the vertex x, thereby increasing its weight to 2. Thus, by playing z the resulting decrease in total weight is one less than the value of z, and is therefore at least 4 − 1 = 3. Further, the weight of the resulting support vertex x increases from 1 to 2. We now re-color the vertex x from blue to indigo. We note the following.
Claim 2.5 If Staller plays a white leaf adjacent to a purple support vertex, then the resulting decrease in total weight is at least 3.
An identical proof of Claim 2.2 proves the following result.
Claim 2.6 Let R be the residual graph. If the game is in Phase 2 and if C is an arbitrary component of R, then one of the following holds.
(a) C ∼ = P 4 , with both leaves colored blue and both internal vertices colored white.
(b) C ∼ = P 3 , with both leaves colored blue and with the central vertex colored green.
(c) C ∼ = P 2 , with one leaf colored blue and the other colored green.
(d) C ∼ = P 2 , with one leaf colored blue and the other colored white.
(e) C ∼ = P 2 , with one leaf colored indigo and the other colored green.
A white support vertex with a white leaf neighbor in R we call a targeted support vertex in R. Since no two leaves are at distance 4 apart in G, every pair of targeted support vertices in R are either adjacent or at distance at least 3 apart in R.
Dominator henceforth applies the following rules.
Dominator's strategy:
(R1) Whenever Staller plays a white leaf adjacent to a targeted support vertex, Dominator immediately responds by playing on the resulting (blue) support vertex.
(R2) Whenever Staller plays a problematic vertex, u, whose current value is exactly 3, Dominator immediately responds by playing the unique (targeted) support vertex associated with u.
(R3) If Dominator cannot play according to (R1) and (R2), he plays a targeted support vertex of maximum value.
(R4) If Dominator cannot play according to (R1), (R2) and (R3), he plays a greedy move.
It remains for us to show that Dominator's strategy which applies rules (R1), (R2), (R3) and (R4) above, does indeed guarantee that on average the weight decrease resulting from each played vertex in the game is at least 4. We note that Dominator's strategy when playing according to (R1), (R2) and (R3) is to play a targeted support vertex or a blue support vertex with a green leaf neighbor. However, the order in which he plays such support vertices is important. Recall that by our earlier assumptions, G contains at least one leaf. The following claim will prove to be useful.
Claim 2.7 While Dominator plays according to rule (R1), (R2) and (R3), the following three statements hold.
(a) After each move of Dominator, every targeted support vertex has at least three white neighbors.
(b) After each move of Dominator, there is no green leaf adjacent to a blue vertex.
(c) Each move that Dominator plays has value at least 5.
Proof. We proceed by induction on the number, m ≥ 1, of moves played by Dominator whenever he plays according to rule (R1), (R2) and (R3). We note that every targeted support vertex has degree at least 3 in the residual graph. Further, we recall that the degree sum of adjacent vertices in G is at least 4 and the degree of a white vertex in the residual graph remains unchanged from its degree in the original graph. Since no two targeted support vertices are at distance 2 apart in R, when Dominator plays a targeted support vertex, the white neighbors of every remaining targeted support vertex retain their color.
On Dominator's first move of the game, he plays a targeted support vertex of maximum value according to rule (R3). Such a (white) support vertex has degree at least 3 and all its neighbors are white, and therefore playing his first move decreases the weight by at least 7 and no green leaf is created. This establishes the base case when m = 1. Suppose that m ≥ 2 and that Dominator plays according to rule (R1), (R2) and (R3), and assume that after the first m − 1 moves, every targeted support vertex has at least three white neighbors, there is no green leaf adjacent to a blue vertex, and each of his first m − 1 moves has value at least 5. We show that after Dominator's mth move, the three properties (a), (b) and (c) hold.
Suppose that Staller's (m − 1)st move plays a white leaf x adjacent to a targeted support vertex y. Her move recolors x from white to green, and recolors y from white to blue. By the inductive hypothesis, before Staller played her move, the vertex y had at least three white neighbors. According to rule (R1), Dominator immediately responds to Staller's (m − 1)st move by playing on the resulting (blue) support vertex, y. Since the support vertex y has at least two white neighbors after Staller played her (m − 1)st move, his move decreases the weight by at least 7. Further, since the white neighbors of every remaining targeted support vertex retain their color, after Dominator's mth move the induction hypothesis implies that every targeted support vertex has at least three white neighbors and there is no green leaf adjacent to a blue vertex.
Suppose that Staller's (m − 1)st move plays neither a white leaf adjacent to a targeted support vertex nor a problematic vertex. In this case, the white neighbors of every remaining targeted support vertex retain their color after Staller's move. If there remains a targeted support vertex, then, according to rule (R3), Dominator's mth move plays a targeted support vertex. By induction, such a support vertex has at least three white neighbors, and therefore has value at least 7. Thus, as before, the desired properties (a), (b) and (c) follow by induction after Dominator's mth move.
Suppose that Staller's (m − 1)st move plays a problematic vertex, u, whose current value is at least 4. Applying our discharging arguments, every targeted support vertex associated with u is recolored from white to purple. The only targeted support vertices affected by Staller's move, in the sense that it or at least one of its white neighbors changes color, are targeted support vertices associated with u or adjacent to u. Thus, as before, the desired properties (a), (b) and (c) follow by induction after Dominator's mth move.
Suppose, finally, that Staller's (m − 1)st move plays a problematic vertex, u, whose current value is 3. In this case, either u is a blue leaf with a white neighbor or u is a white vertex with exactly one white neighbor. Further, the two internal vertices of a problematic path associated with u are unique. Let uvwx be such a problematic path associated with u, and so v and w are unique. In fact, v is the only white neighbor of u. By the inductive hypothesis, immediately before Staller played her (m − 1)st move, the targeted support vertex w has at least three white neighbors. Since the vertex v is the only such white neighbor of w that is adjacent to u, after Staller plays u, the (white) support vertex w has at least two white neighbors, including the white leaf neighbor x. According to rule (R2), Dominator immediately responds by playing as his mth move this unique targeted support vertex, w, associated with u. Since w has at least two white neighbors, it has value at least 5. As observed earlier, the white neighbors of every remaining targeted support vertex retain their color after Dominator's move. Therefore, after Dominator's mth move, the desired properties (a), (b) and (c) hold. (✷) By Claim 2.7, while Dominator plays according to rule (R1), (R2) and (R3), each move he plays has value at least 5. By Claim 2.1, Claim 2.4 and Claim 2.5, each move of Staller's decreases the weight by at least 3. Hence, each move Dominator plays during this stage of the game, together with Staller's response, decreases the weight by at least 8. Therefore, we may assume that at some stage the game, Dominator cannot play according to (R1), (R2) and (R3), for otherwise Inequality (1) is satisfied upon completion of the game and Dominator can achieve his 4-target. We note that at this stage of the game, there no longer exists a targeted support vertex. Further, there is no green leaf adjacent to a blue vertex. This implies that no green leaf adjacent to a blue vertex can be created in the remainder of the game.
According to rule (R4), Dominator now plays a greedy move and he continues to do so until the game is complete. We may assume that at some stage the game enters Phase 2, for otherwise once again Inequality (1) is satisfied upon completion of the game and Dominator can achieve his 4-target. By Claim 2.6 and our observation that there is no green leaf adjacent to a blue vertex when the game is in Phase 2, if C is an arbitrary component of the residual graph R at this stage of the game when Dominator cannot play according to (R1), (R2) and (R3), then C ≇ P 2 with one blue and one green vertex. That is, C satisfies one of (a), (b), (d) or (e) in the statement of Claim 2.6.
If C ∼ = P 4 , then C satisfies statement (a) of Claim 2.6, implying that w(V (C)) = 8 and exactly two additional moves are required to totally dominate the vertices V (C). If C ∼ = P 3 or if C ∼ = P 2 , then C satisfies statement (b), (d) or (e) of Claim 2.6, implying that w(V (C)) = 4 and exactly one move is played in C to totally dominate the vertices V (C). Analogously as in the proof of Claim 2.3, this implies that Dominator can achieve his 4-target. Thus, since G has n white vertices and has weight w(G) = 3n, Dominator can make sure that the average decrease in the weight of the residual graph resulting from each played vertex in the game is at least 4. Thus, in the colored-graph G, γ tg (G) ≤ w(G)/4 = 3n/4. ✷ As an immediate consequence of the proof of Theorem 2 (see Claim 2.3), we have the following result.
Corollary 2 If G is a colored-graph with δ(G) ≥ 2 and Dominator follows a greedy strategy, then he can achieve his 4-target.
Corollary 2 in turn implies Corollary 1. Recall its statement.
Corollary 1 ([7]). The 3/4-Game Total Domination Conjecture is true over the class of graphs with minimum degree at least 2.
Proof. Let G be a graph with δ(G) ≥ 2. Coloring the vertices of G with the color white we produce a colored-graph in which every vertex is colored white. In particular, we note that G has n white vertices and has weight w(G) = 3n. By Corollary 2, Dominator can achieve his 4-target by following a greedy strategy. Thus, Dominator can make sure that the average decrease in the weight of the residual graph resulting from each played vertex in the game is at least 4. Thus, in the colored-graph G, γ tg (G) ≤ w(G)/4 = 3n/4. ✷
Summary
Recall the conjecture under consideration.
3/4-Game Total Domination Conjecture ([14]). If G is a graph on n vertices in which every component contains at least three vertices, then γ tg (G) ≤ 3n/4.
As remarked earlier, the authors in [7] prove a stronger result than Corollary 1 by showing, using game transversals in hypergraphs, that if G is a graph on n vertices with minimum degree at least 2, then γ tg (G) ≤ 8n/11. However, our result, namely Corollary 2, is surprising in that Dominator can complete the total domination game played in a graph with minimum degree at least 2 in at most 3n/4 moves by simply following a greedy strategy in the associated colored-graph in which every vertex is initially colored white. Our main result, namely Theorem 2, shows that the 3/4-Game Total Domination Conjecture holds in a general graph G (with no isolated vertex) if we remove the minimum degree at least 2 condition, but impose the weaker condition that the degree sum of adjacent vertices in G is at least 4 and the requirement that no two leaves are at distance 4 apart in G.
AcknowledgementsResearch of both authors was supported by a grant from the Simons Foundation (#209654 to Douglas Rall). The first author is supported in part by the South African National Research Foundation and the University of Johannesburg.
References
[1] B. Brešar, P. Dorbec, S. Klavžar, and G. Košmrlj, Domination game: effect of edge- and vertex-removal. Discrete Math. 330 (2014), 1-10.
[2] B. Brešar, S. Klavžar, and D. F. Rall, Domination game and an imagination strategy. SIAM J. Discrete Math. 24 (2010), 979-991.
[3] B. Brešar, S. Klavžar, and D. F. Rall, Domination game played on trees and spanning subgraphs. Discrete Math. 313 (2013), 915-923.
[4] B. Brešar, S. Klavžar, G. Košmrlj, and D. F. Rall, Domination game: extremal families of graphs for the 3/5-conjectures. Discrete Appl. Math. 161 (2013), 1308-1316.
[5] B. Brešar, S. Klavžar, and D. Rall, Domination game played on trees and spanning subgraphs. Discrete Math. 313 (2013), 915-923.
[6] Cs. Bujtás, Domination game on trees without leaves at distance four. In: A. Frank, A. Recski, G. Wiener (eds.), Proceedings of the 8th Japanese-Hungarian Symposium on Discrete Mathematics and Its Applications, June 4-7, 2013, Veszprém, Hungary, 73-78.
[7] Cs. Bujtás, M. A. Henning, and Z. Tuza, Total domination game: A proof of the 3/4-Conjecture for graphs with minimum degree at least two, manuscript.
[8] Cs. Bujtás and Zs. Tuza, The Disjoint Domination Game. Discrete Math., to appear.
[9] Cs. Bujtás, S. Klavžar, and G. Košmrlj, Domination game critical graphs. Discuss. Math. Graph Theory, to appear.
[10] P. Dorbec and M. A. Henning, Game total domination for cycles and paths, manuscript.
[11] P. Dorbec, G. Košmrlj, and G. Renault, The domination game played on unions of graphs. Discrete Math. 338 (2015), 71-79.
[12] M. A. Henning and W. B. Kinnersley, Domination game: A proof of the 3/5-Conjecture for graphs with minimum degree at least two. SIAM J. Discrete Math., to appear.
[13] M. A. Henning, S. Klavžar, and D. F. Rall, Total version of the domination game. Graphs Combin. 31(5) (2015), 1453-1462.
[14] M. A. Henning, S. Klavžar, and D. F. Rall, The 4/5 upper bound on the game total domination number. Combinatorica, to appear.
[15] M. A. Henning and C. Löwenstein, Domination game: Extremal families for the 3/5-conjecture for forests, manuscript.
[16] M. A. Henning and A. Yeo, Total Domination in Graphs. Springer Monographs in Mathematics, 2013. ISBN: 978-1-4614-6524-9 (Print), 978-1-4614-6525-6 (Online).
[17] W. B. Kinnersley, D. B. West, and R. Zamani, Extremal problems for game domination number. SIAM J. Discrete Math. 27 (2013), 2090-2107.
[18] G. Košmrlj, Realizations of the game domination number. J. Combin. Opt. 28 (2014), 447-461.
Balanced reconstruction codes for single edits

Rongsheng Wu and Xiande Zhang

2 Jul 2022 (arXiv:2207.00832)

Keywords: binary balanced codes, sequence reconstruction, error metric, read coverage, Varshamov-Tenengolts codes

Abstract. Motivated by the sequence reconstruction problem initiated by Levenshtein, reconstruction codes were introduced by Cai et al. to combat errors when a fixed number of noisy channels are available. The central problem on this topic is to design codes with sizes as large as possible, such that every codeword can be uniquely reconstructed from any N distinct noisy reads, where N is fixed. In this paper, we study binary reconstruction codes with the constraint that every codeword is balanced, which is a common requirement in the technique of DNA-based storage. For all possible channels with a single edit error and their variants, we design asymptotically optimal balanced reconstruction codes for all N, and show that the number of their redundant symbols decreases from (3/2) log_2 n + O(1) to (1/2) log_2 n + log_2 log_2 n + O(1), and finally to (1/2) log_2 n + O(1) but with different speeds, where n is the length of the code. Compared with the unbalanced case, our results imply that the balanced property does not reduce the rate of the reconstruction code in the corresponding codebook.
1 Introduction
The sequence reconstruction problem has been extensively studied in the literature by many researchers since 2001 due to Levenshtein [26,27]. The original motivation was to combat errors by repeatedly transmitting a message without coding in situations when no other method is feasible. One of the central problems in this area is to determine the necessary number of transmissions for an arbitrary message, or equivalently, the maximum intersection size between the error-balls of two different words in a codebook. Each transmission is referred to as an independent noisy channel. Levenshtein [26,27] addressed this problem for combinatorial channels with several types of errors of most interest in the field of coding theory, namely, substitutions, insertions and deletions. Later, much work has been done concerning the sequence reconstruction problems for different error models, such as signed permutations distorted by reversal errors [13,14], and general error graphs [29].
Note that for all the works mentioned above, the transmitted sequences are selected from the entire space without coding. Recently due to applications in DNA-based storage, the sequence reconstruction problem was studied under the setting where the transmitted sequences are chosen from a given code with a certain error-correcting property. For example in [40], permutation codes with prescribed minimum Kendall's τ distances 1, 2 and 2r were considered. Gabrys and Yaakobi [18,19] studied the channels causing t deletions where the transmitted sequences belong to a binary single-deletion-correcting code. In 2017, Sala et al. [32] studied the insertion channels where the transmitted sequences have pairwise edit distance at least 2l, for any l ≥ 0, which generalizes the results of Levenshtein in [26,27].
Considering a fixed number of erroneous channels during the sequencing process of a DNA strand, Cai et al. [10] (see also [11]) proposed the dual problem of the sequence reconstruction as follows. In most sequencing platforms, multiple copies of the same DNA strand are created after undergoing polymerase chain reaction (PCR). The sequencer reads all copies and provides many possibly inaccurate reads to the user, who then needs to further reconstruct the original DNA strand from these noisy reads. When a fixed number of distinct noisy reads are provided, the main task is to design a codebook such that every codeword can be uniquely reconstructible from these distinct noisy reads. This problem has a quite different flavor from the original reconstruction problem, but can be viewed as an extension of the classical error-correcting codes. Leveraging on these multiple channels (or reads), one can increase the information capacity, or equivalently, reduce the number of redundant bits for these next-generation devices. In [10], Cai et al. almost completely determined the asymptotic optimal redundancy of the code when the channels are affected by a single edit. Chrisnata et al. [7,8] extended the case for t deletions, and provided an explicit code that is uniquely reconstructible with certain parameters in the two-deletion channel.
In this paper, we follow the framework initiated by Cai et al. [10], to study the so-called reconstruction codes with additional constraints required in DNA-based storage technique. The first interesting constraint is the balanced property of sequences. It has been shown that binary balanced error-correcting codes play a significant role in constructing GC-balanced error-correcting codes [35], which are widely used in the DNA coding theory [28,36] since they are more stable than unbalanced DNA strands and have better coverage during sequencing. Further, it is well known that balanced codes are DC-free [12] and have attractive applications in the encoding of unchangeable data on a laser disk [22,25]. Much efforts have been devoted to constructing binary balanced error-correcting codes in the literature [1,15,34].
By the above considerations, this paper focuses on the study of binary reconstruction codes able to uniquely recover a balanced sequence from a fixed number of erroneous channels affected by a single edit (a substitution, deletion, or insertion) and its variants. For all related errors, we determine the optimal redundancy and construct asymptotically optimal codes. In particular, these results show that the balanced property does not reduce the ratio of the reconstruction code to the corresponding codebook compared to the unbalanced one.
The rest of this paper is organized as follows. Section 2 introduces the main notation and provides the necessary background needed in the subsequent sections. In addition, some sufficient and necessary conditions for the intersection size of error-balls are provided. Then Sections 3-4 are devoted to characterizing the asymptotic optimal redundancy of a balanced (n, N ; B 2 )-reconstruction code with B 2 ∈ {B D , B I , B DI } and B 2 ∈ {B SD , B SI , B edit }, respectively. Finally, we conclude this paper in Section 5.
2 Preliminaries
Let F_2 denote the binary alphabet {0, 1}, and let F_2^n denote the set of all binary sequences of length n. The Hamming weight of x ∈ F_2^n, denoted by wt_H(x), is the number of indices i where x_i ≠ 0, and the Hamming distance d_H(x, y) between two words x, y ∈ F_2^n is defined to be the number of coordinates in which x and y differ. Assume that n is even throughout this paper for convenience. A word in F_2^n is balanced if it has exactly n/2 ones. Let U_n be the set of all balanced words in F_2^n. A balanced code is a subset of U_n. We introduce the concept of reconstruction codes as in [10]. First, we define the following seven error-ball functions for x ∈ F_2^n. Let B_S(x), B_D(x) and B_I(x) denote the set of all words obtained from x via at most one substitution, one deletion, and one insertion, respectively. Combining these functions, we define further that
B_DI(x) := B_D(x) ∪ B_I(x), B_SD(x) := B_S(x) ∪ B_D(x), B_SI(x) := B_S(x) ∪ B_I(x), B_edit(x) := B_S(x) ∪ B_D(x) ∪ B_I(x).
Let B_2 be the noisy channel corresponding to any one of the above functions, that is, B_2 ∈ {B_S, B_D, B_I, B_DI, B_SD, B_SI, B_edit}.
For example, let x = 1010 ∈ U_4. Then B_S(x) = {1010, 0010, 1110, 1000, 1011} ⊆ F_2^4, B_D(x) = {010, 110, 100, 101} ⊆ F_2^3, B_I(x) = {01010, 11010, 10010, 10110, 10100, 10101} ⊆ F_2^5, and the remaining error-balls for x are the corresponding union between B_S(x), B_D(x) and B_I(x) above.
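The error-balls above are small enough to enumerate directly. The following Python sketch (the function names are ours) builds B_S, B_D and B_I for a binary word given as a string and reproduces the example x = 1010.

def ball_S(x):
    """All words obtained from x by at most one substitution."""
    out = {x}
    for i in range(len(x)):
        out.add(x[:i] + ("1" if x[i] == "0" else "0") + x[i + 1:])
    return out

def ball_D(x):
    """All words obtained from x by one deletion."""
    return {x[:i] + x[i + 1:] for i in range(len(x))}

def ball_I(x):
    """All words obtained from x by one insertion."""
    return {x[:i] + b + x[i:] for i in range(len(x) + 1) for b in "01"}

x = "1010"
assert ball_S(x) == {"1010", "0010", "1110", "1000", "1011"}
assert ball_D(x) == {"010", "110", "100", "101"}
assert ball_I(x) == {"01010", "11010", "10010", "10110", "10100", "10101"}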
For any C ⊆ F_2^n, the read coverage of C for channel B_2, denoted by ν(C; B_2), is defined to be the maximum intersection size between error-balls of any two different codewords in C. More specifically, ν(C; B_2) = max{|B_2(x) ∩ B_2(y)| : x, y ∈ C and x ≠ y}.
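For small parameters the read coverage can be computed by brute force over all pairs of codewords. A minimal, self-contained sketch (our own helper names), using the one-deletion ball for concreteness:

from itertools import combinations

def deletion_ball(x):
    return {x[:i] + x[i + 1:] for i in range(len(x))}

def read_coverage(code, ball=deletion_ball):
    """nu(C; B) = max |B(x) ∩ B(y)| over distinct codewords x, y of C."""
    return max((len(ball(x) & ball(y)) for x, y in combinations(code, 2)), default=0)

def balanced_words(n):
    return ["".join("1" if i in ones else "0" for i in range(n))
            for ones in combinations(range(n), n // 2)]

# For U_6 and one deletion the read coverage is 2, so any nu + 1 = 3 distinct
# noisy reads determine the transmitted balanced word.
print(read_coverage(balanced_words(6)))  # expected: 2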
The quantity ν(C; B 2 ) was introduced by Levenshtein [26], who showed that the number of channels required to reconstruct a codeword from C is at least ν(C; B 2 ) + 1. The problem of determining ν(C; B 2 ) is referred to as the sequence reconstruction problem.
For a fixed constant N , if a code C ⊆ F n 2 satisfies ν(C; B 2 ) < N , then we call C an (n, N ; B 2 )-reconstruction code. As mentioned in Introduction, we focus on balanced reconstruction codes in this paper. The fundamental problem on this topic is to estimate the minimum number of redundant bits for such a code. Define the redundancy of a code C ⊆ F n 2 to be the value n − log 2 |C|. Then we are interested in studying the following quantity,
ρ b (n, N ; B 2 ) = min{n − log 2 |C| : C ⊆ U n and ν(C; B 2 ) < N }.
Note that the case N = 1 is the classical model which has been studied for years in the design of balanced error-correcting codes [1,15,34].
2.1 An easy result
We determine the optimal redundancy for the channel causing a single substitution error in this subsection, that is ρ b (n, N ; B S ). We will apply the following useful estimation of binomials (see e.g., [20, Proposition 3.6.2]) frequently.
Lemma 2.1. [20] For all even n ≥ 2, we have
\frac{2^n}{\sqrt{2n}} \le \binom{n}{n/2} \le \frac{2^n}{\sqrt{n}}.
The case N = 1 is briefly explained in the next example, which is equivalent to the design of classical binary balanced error-correcting codes.
Example 2.2. Let C be a balanced (n, 1; B S )-reconstruction code with maximum size, then C is a binary code of length n with minimum Hamming distance 4 and constant weight n/2.
By the lower bound on the size of constant weight codes discovered by Graham
|C| \le \binom{n}{n/2-1} \Big/ \binom{n/2}{n/2-1} \le \frac{2^{n+1}}{n^{3/2}},
where the last inequality follows from Lemma 2.1 again.
A simple application of [26, Corollary 1] gives the following lemma.
Lemma 2.3. Let x and y be different words in U_n. Then |B_S(x) ∩ B_S(y)| = 2 if d_H(x, y) = 2, and |B_S(x) ∩ B_S(y)| = 0 if d_H(x, y) ≥ 4.
Then it is ready to determine the optimal redundancy for the error-ball B S as follows.
Theorem 2.4. For the error-ball B_S, we have ρ_b(n, N; B_S) = (3/2) log_2 n + Θ(1) for N ∈ {1, 2}, and ρ_b(n, N; B_S) = ∆ for N ≥ 3,
where ∆ := n − log_2 \binom{n}{n/2} = (1/2) log_2 n + Θ(1) is the redundancy of U_n.
Proof. The value ρ b (n, 1; B S ) follows immediately from Example 2.2. For the case N ≥ 2, we clearly have ρ b (n, 1; B S ) = ρ b (n, 2; B S ) from Lemma 2.3 and the fact that ν(U n ; B S ) = ∆.
2.2 The intersection size of various error-balls
This subsection deals with the intersection size between error-balls of any two different words in U n . We need the following notion of confusability which was introduced in [10] for general q-ary words. Here, we restrict the definition to balanced words.
Definition 2.5. Suppose x = ucv and y = uc ′ v are two distinct words in U n for some subwords u, v, c and c ′ . We say that x and y are
1. Type-A-confusable with m if {c, c ′ } is of the form {(10) m , (01) m } for m ≥ 1; and 2. Type-B-confusable with m if {c, c ′ } is either the form {01 m , 1 m 0} or {10 m , 0 m 1} for m ≥ 2.
Example 2.6. Let x = 11101000, y = 11010100 ∈ U_8. Then x and y are Type-A-confusable with m = 2, u = 11 and v = 00. Similarly, x′ = 111000 and y′ = 101100 are Type-B-confusable with m = 2, u = 1 and v = 0.
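Definition 2.5 quantifies over all decompositions x = ucv and y = uc′v, so the most transparent check is a brute force over the possible prefix and suffix lengths. The sketch below is our own helper (words given as equal-length strings); it tests Type-A-confusability, and Type-B-confusability with the patterns exactly as written in the definition.

def _middle_pairs(x, y):
    """All pairs (c, c') arising from decompositions x = u c v, y = u c' v."""
    n = len(x)
    for p in range(n + 1):            # length of the common prefix u
        if x[:p] != y[:p]:
            break
        for s in range(n - p + 1):    # length of the common suffix v
            if s and x[n - s:] != y[n - s:]:
                break
            yield x[p:n - s], y[p:n - s]

def type_A_confusable(x, y):
    if x == y or len(x) != len(y):
        return False
    for c, cp in _middle_pairs(x, y):
        m, r = divmod(len(c), 2)
        if m >= 1 and r == 0 and {c, cp} == {"10" * m, "01" * m}:
            return True
    return False

def type_B_confusable(x, y):
    if x == y or len(x) != len(y):
        return False
    for c, cp in _middle_pairs(x, y):
        m = len(c) - 1
        if m >= 1 and {c, cp} in ({"0" + "1" * m, "1" * m + "0"},
                                  {"1" + "0" * m, "0" * m + "1"}):
            return True
    return False

assert type_A_confusable("11101000", "11010100")   # Example 2.6, m = 2
assert type_B_confusable("111000", "101100")       # Example 2.6, m = 2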
Let x and y be two distinct words in U n . For B 2 ∈ {B D , B I }, we know that B 2 (x) ∩ B 2 (y) ≤ 2 by observations in [26] or [27], and therefore we have ρ b (n, N ; B 2 ) = ∆ for N ≥ 3. Next, we characterize the intersection sizes of error-balls for different channels as in [10].
Proposition 2.7. Let B 2 ∈ {B D , B I },
and let x, y be two distinct words in U n .
(i) If d H (x, y) = 2, then |B 2 (x) ∩ B 2 (y)| = 1 if and only if x and y are Type-B-confusable. (ii) |B 2 (x) ∩ B 2 (y)| = 2 if and only if x and y are Type-A-confusable. (iii) |B D (x) ∩ B D (y)| = |B I (x) ∩ B I (y)|.
Proof. Since the words x and y belong to U n with the same Hamming weight n 2 , parts (i) and (ii) are true according to [10,Propositions 9 and 12]. The rest case (iii) then follows from part (ii) and the fact that
|B D (x) ∩ B D (y)| = 0 if and only if |B I (x) ∩ B I (y)| = 0.
A corollary of the above result is immediate.
Corollary 2.8. Let x and y be two distinct words in U_n. Then |B_DI(x) ∩ B_DI(y)| ∈ {0, 2, 4}. In particular, |B_DI(x) ∩ B_DI(y)| = 4 if and only if x and y are Type-A-confusable. Moreover, we have ρ_b(n, N; B_DI) = ∆ for N ≥ 5.
Combining Lemma 2.3, Proposition 2.7 and Corollary 2.8, we have the following two propositions for the intersection size of the error-balls which involve substitutions. The proof is straightforward and thus omitted. Proposition 2.9. Let B 2 ∈ {B SD , B SI }, and let x, y be two distinct words in U n .
(i) If d H (x, y) = 2, then |B 2 (x) ∩ B 2 (y)| ∈ {2, 3, 4}. In particular, |B 2 (x) ∩ B 2 (y)| = 4
if and only if x and y are Type-A-confusable with m = 1; and |B 2 (x) ∩ B 2 (y)| = 3 if and only if x and y are Type-B-confusable.
(ii) If d_H(x, y) ≥ 4, then |B_2(x) ∩ B_2(y)| ≤ 2. In particular, |B_2(x) ∩ B_2(y)| = 2 if and only if x and y are Type-A-confusable with m ≥ 2.
(iii) |B_SD(x) ∩ B_SD(y)| = |B_SI(x) ∩ B_SI(y)|.
Moreover, we have ρ_b(n, N; B_2) = ∆ for N ≥ 5.
Proposition 2.10. Let x and y be two distinct words in U_n. Then we have |B_edit(x) ∩ B_edit(y)| ∈ {0, 2, 4, 6} and the following hold.
(i) If d_H(x, y) = 2, then |B_edit(x) ∩ B_edit(y)| ∈ {2, 4, 6}. In particular, |B_edit(x) ∩ B_edit(y)| = 6 if and only if x and y are Type-A-confusable with m = 1; and |B_edit(x) ∩ B_edit(y)| = 4 if and only if x and y are Type-B-confusable.
(ii) If d_H(x, y) ≥ 4, then |B_edit(x) ∩ B_edit(y)| ∈ {0, 2, 4}. In particular, |B_edit(x) ∩ B_edit(y)| = 4
if and only if x and y are Type-A-confusable with m ≥ 2.
Moreover, we have ρ_b(n, N; B_edit) = ∆ for N ≥ 7.
The following result is an analogy to [10,Theorem 23], which presents the lower bounds for the redundancy of the code under certain conditions with respect to the notion of confusability. The proof is similar to the non-restricted case [11] and thus omitted.
Proposition 2.11. Let C ⊆ U n . Then the following hold.
(i) If every pair of distinct words in C are not Type-A-confusable, then the redundancy of C is at least 1 2 log 2 n + log 2 log 2 n − O(1).
(ii) If every pair of distinct words in C are not Type-B-confusable, then the redundancy of C is at least 1 2 log 2 n + log 2 log 2 n − O(1).
(iii) If every pair of distinct words in C are not Type-B-confusable with m = 1, then the redundancy of C is at least ∆ + 1 − o(1).
3 Reconstruction codes with error-balls B_D, B_I and B_DI
In this section, we determine the optimal redundancy of balanced reconstruction codes for the error-balls B_D, B_I and B_DI. We first consider the case N = 1.
3.1 The case N = 1
It is known that any code can correct s deletions if and only if it can correct s insertions [24]. Thus, we only consider one of the error-balls B D or B I in the rest of this subsection. Let A D (n) denote the maximum size of a binary balanced single-deletion correcting code of length n. Then it suffices to estimate the value of A D (n).
Since the Varshamov-Tenengolts (VT) codes are the best known binary codes that can correct a single deletion [31], we define a balanced Varshamov-Tenengolts (BVT for short) code for our purpose.
Definition 3.1 (BVT code). For any 0 ≤ a ≤ n, the balanced Varshamov-Tenengolts code BVT_a(n) is defined as follows:
BVT_a(n) = {(x_1, x_2, . . . , x_n) ∈ U_n : ∑_{i=1}^{n} i x_i ≡ a (mod n + 1)}.
Obviously, the set BVT_a(n) is a binary balanced single-deletion correcting code for any 0 ≤ a ≤ n. Hence, A_D(n) ≥ \binom{n}{n/2}/(n + 1) trivially. It should be noted that several modifications of the VT-code have previously been proposed for different purposes, see e.g. [5, 33].
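A direct enumeration illustrates both the definition of BVT_a(n) and the pigeonhole bound \binom{n}{n/2}/(n + 1) for the best residue a. The helper names below are ours.

from itertools import combinations
from math import comb

def balanced_words(n):
    for ones in combinations(range(n), n // 2):
        yield tuple(1 if i in ones else 0 for i in range(n))

def bvt_syndrome(x):
    # VT-type checksum sum of i * x_i over i = 1..n, taken modulo n + 1.
    n = len(x)
    return sum((i + 1) * xi for i, xi in enumerate(x)) % (n + 1)

def bvt_code(n, a):
    return [x for x in balanced_words(n) if bvt_syndrome(x) == a]

n = 8
sizes = [len(bvt_code(n, a)) for a in range(n + 1)]
assert sum(sizes) == comb(n, n // 2)
assert max(sizes) >= comb(n, n // 2) / (n + 1)   # pigeonhole lower bound
print(max(sizes), comb(n, n // 2) / (n + 1))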
Next, we seek an upper bound on the value of A_D(n). We need a few preliminary results on hypergraphs, which are mainly from [3] and [23]. Let X be a finite set. A hypergraph H = (X, H) on X is a family H of nonempty subsets of X, where elements of X are called vertices, and elements of H are called hyperedges. A matching of a hypergraph is a collection of pairwise disjoint hyperedges, and the matching number of H, denoted by ν(H), is the largest number of edges in a matching of H. A transversal of a hypergraph H = (X, H) is a subset T ⊂ X that intersects every hyperedge in H, and the transversal number of H, denoted by τ(H), is the smallest size of a transversal. Suppose that H has n vertices and m edges, and let A_{n×m} be the incidence matrix of H. Kulkarni et al. [23] proved that the matching number and transversal number of a hypergraph H are solutions of the following integer linear programming problems:
ν(H) = \max\Big\{ \sum_{i=1}^{m} z_i : Az \le \mathbf{1}_n \Big\} \quad \text{and} \quad \tau(H) = \min\Big\{ \sum_{i=1}^{n} w_i : A^T w \ge \mathbf{1}_m \Big\},
where z = (z 1 , z 2 , . . . , z m ) T ∈ Z m ≥0 , w = (w 1 , w 2 , . . . , w n ) T ∈ Z n ≥0 , 1 n is the all one column vector, and the inequality means the components-wise inequality. In particular, ν(H) ≤ τ (H). If we relax the choice of z i and w i to any nonnegative reals in the above programming problems, we obtain the definitions of fractional matching number and fractional transversal number of H, denoted by ν * (H) and τ * (H), respectively.
We will apply the following lemma to give an upper bound of A_D(n).
Lemma 3.2. [23] For any hypergraph H, we have ν(H) ≤ ν*(H) = τ*(H) ≤ τ(H).
Let V_{n−1} be the subset of F_2^{n−1} consisting of all words with Hamming weights n/2 or n/2 − 1. Then |V_{n−1}| = \binom{n−1}{n/2} + \binom{n−1}{n/2−1} = \binom{n}{n/2}. Consider the following hypergraph:
H_D^n = (V_{n−1}, {B_D(x) : x ∈ U_n}).
In H_D^n, the vertices are the words in V_{n−1}, and the hyperedges are the single-deletion balls of the words in U_n. Then the value of A_D(n) is equal to the matching number ν(H_D^n) of H_D^n. By Lemma 3.2, we have ν(H_D^n) ≤ τ*(H_D^n), where τ*(H_D^n) is the fractional transversal number of H_D^n. By definition,
τ*(H_D^n) = \min\Big\{ \sum_{x \in V_{n-1}} w(x) : \sum_{x \in B_D(y)} w(x) \ge 1 \text{ for all } y \in U_n, \text{ and } w(x) \ge 0 \text{ for all } x \in V_{n-1} \Big\}.
Next, we will give an upper bound of τ * (H D n ) by computing x∈V n−1 w(x) for the special function w(x) = 1 r(x) , where r(x) is the number of runs in x. Consequently, this will give an upper bound for A D (n). First, we need the following counting lemma.
Lemma 3.3. The number of words in U_n with exactly i (2 ≤ i ≤ n) runs is 2 \cdot \binom{n/2-1}{\lceil i/2 \rceil - 1} \binom{n/2-1}{\lfloor i/2 \rfloor - 1}.
Further, the number of words in V n−1 with exactly i (2 ≤ i ≤ n − 1) runs is
2 · n/2 − 1 ⌈i/2⌉ − 1 n/2 − 2 ⌊i/2⌋ − 1 + n/2 − 1 ⌊i/2⌋ − 1 n/2 − 2 ⌈i/2⌉ − 1 .
Proof. We only prove the case for U n , and a similar argument works for V n−1 . Let T = T 0 ∪T 1 be the subset in U n with exactly i runs, where T i consists of words in T with the first coordinate being i. Then it is easy to see that T 0 ∩ T 1 = ∅ and |T 0 | = |T 1 |. Hence, we only need to calculate the value |T 1 |. Let x be a word in T 1 , and then it is of the form:
x = (1010 . . . a i ),
where each boldface symbol represents a run of length at least one. Moreover, a = 1 if i is odd, otherwise a = 0. Then the size of T 1 equals the number of ways to distribute n 2 1s into ⌈ i 2 ⌉ blocks, and distribute n 2 0s into ⌊ i 2 ⌋ blocks, respectively, such that each block is nonempty.
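Lemma 3.3 is easy to confirm numerically for small n. The following sketch (a sanity check only, not part of the proof) counts the balanced words of length n with exactly i runs and compares the count with the closed formula.

from itertools import combinations, groupby
from math import comb, ceil

def runs(x):
    return sum(1 for _ in groupby(x))

def balanced_words(n):
    for ones in combinations(range(n), n // 2):
        yield tuple(1 if i in ones else 0 for i in range(n))

def check_lemma_3_3(n):
    counts = {}
    for x in balanced_words(n):
        counts[runs(x)] = counts.get(runs(x), 0) + 1
    for i in range(2, n + 1):
        formula = 2 * comb(n // 2 - 1, ceil(i / 2) - 1) * comb(n // 2 - 1, i // 2 - 1)
        assert counts.get(i, 0) == formula, (i, counts.get(i, 0), formula)
    return True

print(check_lemma_3_3(8), check_lemma_3_3(10))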
Theorem 3.4. Let n ≥ 2 be even. Then the maximum size of a balanced (n, 1; B D )reconstruction code,
A_D(n) \le \frac{2\left(\binom{n}{n/2} - 2\right)}{n - 2}.
Consequently, the optimal redundancy ρ b (n, N ; B D ) is at least 3 2 log 2 n − O(1).
Proof. As indicated in Lemma 3.2, we have
|A D (n)| = ν(H D n ) ≤ τ * (H D n ). Let w(x) = 1 r(x)
, where x ∈ V n−1 , and r(x) is the number of runs in x. Then for any y ∈ U n ,
x∈B D (y) w(x) (a) ≥ |B D (y)| r(y) = 1,
where the inequality (a) follows from r(x) ≤ r(y) (see [23,Lemma 3.2]). By Lemma 3.3, the quantity x∈V n−1 w(x) equals
2 · n−1 i=1 n/2 − 1 ⌈i/2⌉ − 1 n/2 − 2 ⌊i/2⌋ − 1 + n/2 − 1 ⌊i/2⌋ − 1 n/2 − 2 ⌈i/2⌉ − 1 1 i = 4 n − 2 · n−1 i=1 n/2 − 1 ⌈i/2⌉ − 1 n/2 − 1 ⌊i/2⌋ − 1 n − i i .
Then the proof of the theorem needs the following combinatorial inequality.
Lemma 3.5. With the notation above, we have
n−1 i=1 n/2 − 1 ⌈i/2⌉ − 1 n/2 − 1 ⌊i/2⌋ − 1 n − i i ≤ n−1 i=1 n/2 − 1 ⌈i/2⌉ − 1 n/2 − 1 ⌊i/2⌋ − 1 .
Proof. To prove the above inequality, it is suffices to show that
n/2 − 1 ⌈i/2⌉ − 1 n/2 − 1 ⌊i/2⌋ − 1 n − i i + n/2 − 1 ⌈ n−i 2 ⌉ − 1 n/2 − 1 ⌊ n−i 2 ⌋ − 1 i n − i (a) ≤ n/2 − 1 ⌈i/2⌉ − 1 n/2 − 1 ⌊i/2⌋ − 1 + n/2 − 1 ⌈ n−i 2 ⌉ − 1 n/2 − 1 ⌊ n−i 2 ⌋ − 1 , for all 1 ≤ i ≤ n/2 − 1.
First assume that i is even. Then the inequality (a) is equivalent to
\binom{n/2-1}{i/2-1}^2 \frac{n-i}{i} + \binom{n/2-1}{i/2}^2 \frac{i}{n-i} \overset{(b)}{\le} \binom{n/2-1}{i/2-1}^2 + \binom{n/2-1}{i/2}^2.
Since M > 0, the inequality (c) holds by letting x = (n − i)/i in x^2 − 2x + 1 ≥ 0. A similar argument works in the case when i is odd. By Lemma 3.5 we have
\sum_{x \in V_{n-1}} w(x) \le \frac{4}{n-2} \cdot \sum_{i=1}^{n-1} \binom{n/2-1}{\lceil i/2 \rceil - 1} \binom{n/2-1}{\lfloor i/2 \rfloor - 1} = \frac{2\left(\binom{n}{n/2} - 2\right)}{n - 2},
which is an upper bound of A D (n). By Lemma 2.1, the redundancy ρ b (n, N ; B D ) ≥ n − log 2 ( n n/2 − 2) + log 2 (n − 2) − 1 = 3 2 log 2 n − O(1).
Remark 3.6. By the definition of BVT code, we have A D (n) ≥ n n/2 /(n + 1), and lim n→∞ A D (n) n n/2 /n ≥ 1.
Additionally, Theorem 3.4 says
A D (n) ≤ 2 n n/2 − 2 n − 2 , and lim n→∞ A D (n) 2 n n/2 /n ≤ 1.
Hence, n n/2 /n A D (n) 2 n n/2 /n.
In fact, following Levenshtein's method in [24] (or [31]), we can get a similar but implicit upper bound on A_D(n). However, the constant factor in the estimation is, as yet, unknown. More precisely, determining the constant 1 ≤ t ≤ 2 such that lim_{n→∞} A_D(n) / (\binom{n}{n/2}/n) = t remains open.
Combining Theorem 3.4 and Remark 3.6, we can determine the value of ρ_b(n, N; B_2) for N = 1 as follows.
Theorem 3.7. Consider the error-ball B_2 ∈ {B_D, B_I, B_DI}. Then ρ_b(n, 1; B_2) = (3/2) log_2 n + Θ(1).
3.2 The case N ≥ 2
Chee et al. [9] defined a special class of binary codes in terms of the period of codewords, which can be used to correct deletions and sticky insertions when the two heads (in racetrack memory) are well separated. We will use this idea to construct balanced reconstruction codes. The following definition is necessary.
Definition 3.8. Let ℓ and m be two positive integers with ℓ < m. Let u = (u_1, u_2, . . . , u_m) ∈ F_2^m. We say that the word u has period ℓ if ℓ is the smallest integer such that u_i = u_{i+ℓ} for all 1 ≤ i ≤ m − ℓ.
Let R_2^b(n, ℓ, m) denote the set of all binary words c in U_n such that the length of any ℓ′-periodic (ℓ′ ≤ ℓ) subword of c is at most m. We are now ready to characterize the size of the set R_2^b(n, ℓ, m) in some particular cases.
Lemma 3.9. For all n, m and ℓ = 1, we have |R_2^b(n, 1, m)| ≥ \binom{n}{n/2} (1 − n (1/2)^m). In particular, if m = ⌈log_2 n⌉ + 1, we have |R_2^b(n, 1, ⌈log_2 n⌉ + 1)| ≥ (1/2)\binom{n}{n/2}.
Proof. Let R b 2 (n, 1, m) denote the complementary set U n \R b 2 (n, 1, m), then |R b 2 (n, 1, m)| = |U n | − |R b 2 (n, 1, m)|. By definition, a word c ∈ U n belongs to R b 2 (n, 1, m) if and only if it contains a run with length m + 1, which implies the following upper bound on the size of R b 2 (n, 1, m)
|R b 2 (n, 1, m)| ≤ 2(n − m) n − m − 1 n/2 = (n − 2m) n − m n/2 .
Then it follows that
|R b 2 (n, 1, m)| ≥ n n/2 − (n − 2m) n − m n/2 (a) ≥ n n/2 1 − (n − 2m) 1 2 m ≥ n n/2 1 − n 1 2 m ,
where the inequality (a) follows from the fact that \binom{n-m}{n/2} \le \binom{n}{n/2} (1/2)^m.
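The membership condition defining R_2^b(n, ℓ, m) can be tested directly from Definition 3.8: a word fails the condition exactly when some window of length m + 1 has minimal period at most ℓ. A small sketch of this test, with names of our choosing:

def minimal_period(u):
    """Smallest p >= 1 with u[i] == u[i + p] for all valid i (Definition 3.8)."""
    for p in range(1, len(u) + 1):
        if all(u[i] == u[i + p] for i in range(len(u) - p)):
            return p
    return len(u)

def in_R(c, ell, m):
    """True iff every ell'-periodic subword of c with ell' <= ell has length <= m.

    It suffices to inspect windows of length m + 1: a longer offending subword
    contains one, and the condition u[i] == u[i + p] is inherited by prefixes.
    """
    for start in range(len(c) - m):
        if minimal_period(c[start:start + m + 1]) <= ell:
            return False
    return True

# 0101010011 contains the 2-periodic subword 0101010 of length 7, so it lies
# in R(10, 2, m) only for m >= 7.
assert not in_R("0101010011", 2, 6)
assert in_R("0101010011", 2, 7)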
Lemma 3.10. For all n, m and ℓ = 2, we have
|R_2^b(n, 2, m)| \ge \binom{n}{n/2} \left(1 - n \left(\tfrac{1}{2}\right)^{\lceil (m-1)/2 \rceil}\right).
In particular, if n ≥ 12 and m = 2⌈log 2 n⌉ + 3, we have |R b 2 (n, 2, 2⌈log 2 n⌉ + 3)| ≥ ( n n/2 ) 2 .
Proof. Let B 2 (n, 2, m) denote the set of all binary words c in U n such that the length of any 2-periodic subword of c is at most m. Then R b 2 (n, 2, m) = B 2 (n, 2, m) ∩ R b 2 (n, 1, m), thus
|R b 2 (n, 2, m)| ≥ |R b 2 (n, 1, m)| − |B 2 (n, 2, m)|,
where B 2 (n, 2, m) denotes the complementary set U n \B 2 (n, 2, m). Note that a word u ∈ U n belongs to B 2 (n, 2, m) if and only if it contains a subword of length m + 1 with period 2. Hence,
|B 2 (n, 2, m)| ≤ 2(n − m) n − m − 1 n 2 − ⌈ m+1 2 ⌉ ≤ n n − m n 2 − ⌈ m+1 2 ⌉ .
By Lemma 3.9,
|R b 2 (n, 2, m)| ≥ n n/2 1 − n 1 2 m − n n − m n 2 − ⌈ m+1 2 ⌉ (a) ≥ n n/2 1 − n 1 2 m + 1 2 ⌈ m+1 2 ⌉ ≥ n n/2 1 − n 1 2 ⌈ m−1 2 ⌉ ,
where the inequality (a) follows from the fact that
n − m n 2 − ⌈ m+1 2 ⌉ ≤ n n/2 1 2 ⌈ m+1 2 ⌉ n 2 n − ⌈ m+1 2 ⌉ ⌊ m−1 2 ⌋ ≤ n n/2 1 2 ⌈ m+1 2 ⌉ .
If m = 2⌈log 2 n⌉ + 3, we have
|R b 2 (n, 2, 2⌈log 2 n⌉ + 3)| ≥ n n/2 1 − n 1 2 log 2 n+1 = n n/2 2 ,
and the condition m ≤ n holds as long as n ≥ 12.
For any x = (x 1 , x 2 , . . . , x n ) ∈ F n 2 , define the inversion number
Inv(x) = |{(i, j) : 1 ≤ i < j ≤ n, x i > x j }|.
For example, Inv(x) = 7 for x = (1010110) ∈ F_2^7. Based on Lemma 3.10, we give the following estimate of the optimal redundancy of an (n, N; B_2)-reconstruction code, with B_2 ∈ {B_D, B_I} and N ≥ 2.
Theorem 3.11. Consider the error-ball B_2 ∈ {B_D, B_I}. Then ρ_b(n, N; B_2) = (1/2) log_2 n + log_2 log_2 n + Θ(1) for N = 2, and ρ_b(n, N; B_2) = ∆ for N ≥ 3.
Proof. The case N ≥ 3 is trivial. Let
C_2(n, t, P) = {c ∈ R_2^b(n, 2, P) : Inv(c) ≡ t (mod 1 + P/2)},
where t ∈ Z_{1+P/2} and P is even. It follows from Proposition 2.7(ii) and [10, Theorem 17] that |B_2(x) ∩ B_2(y)| < 2, for x ≠ y ∈ C_2(n, t, P).
Assume that P = 2⌈log 2 n⌉+3. Then by Lemma 3.10, there exists an (n, 2; B 2 )-reconstruction code with suitable t such that |C 2 (n, t, 2⌈log 2 n⌉ + 3)| ≥ n n/2 /(2 + P ).
This implies by Lemma 2.1, that C 2 (n, t, 2⌈log 2 n⌉ + 3) has redundancy at most 1 2 log 2 n + log 2 (2 + P ) + 1 2 = 1 2 log 2 n + log 2 log 2 n + O(1). On the other hand, let C be any (n, 2; B 2 )-reconstruction code. By Proposition 2.7(ii), every pair of different words in C are not Type-A-confusable (see Definition 2.5). Then the desired result follows immediately from Theorem 2.11(i).
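Both ingredients of the construction, the inversion number and the restriction to R_2^b(n, 2, P), are simple to compute, so the codes C_2(n, t, P) can be generated explicitly for small n. A rough sketch (P is taken even, as in the proof; the helper names are ours):

from itertools import combinations

def inv(x):
    """Inversion number: pairs i < j with x_i > x_j."""
    return sum(1 for i in range(len(x)) for j in range(i + 1, len(x)) if x[i] > x[j])

def minimal_period(u):
    for p in range(1, len(u) + 1):
        if all(u[i] == u[i + p] for i in range(len(u) - p)):
            return p
    return len(u)

def in_R(c, ell, m):
    return all(minimal_period(c[s:s + m + 1]) > ell for s in range(len(c) - m))

def balanced_words(n):
    for ones in combinations(range(n), n // 2):
        yield "".join("1" if i in ones else "0" for i in range(n))

def C2(n, t, P):
    return [c for c in balanced_words(n)
            if in_R(c, 2, P) and inv(c) % (1 + P // 2) == t]

assert inv("1010110") == 7          # the example above
n, P = 10, 8
sizes = [len(C2(n, t, P)) for t in range(1 + P // 2)]
print(sizes, sum(sizes))            # some residue class attains the pigeonhole bound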
A further extension of Theorem 3.11 is given by the following theorem.
Theorem 3.12. Consider the error-ball B_DI. Then ρ_b(n, N; B_DI) = (1/2) log_2 n + log_2 log_2 n + Θ(1) for N ∈ {3, 4}, and ρ_b(n, N; B_DI) = ∆ for N ≥ 5.
Proof. The case N ≥ 5 is trivial. Since |B_DI(x) ∩ B_DI(y)| ∈ {0, 2, 4} for distinct words x, y ∈ U_n by Corollary 2.8, we have ρ_b(n, 2; B_DI) = ρ_b(n, 1; B_DI) and ρ_b(n, 4; B_DI) = ρ_b(n, 3; B_DI) directly. Thus, the value ρ_b(n, 2; B_DI) follows from Theorem 3.7. In addition, the proof of Theorem 3.11 shows that the code C_2(n, t, 2⌈log_2 n⌉ + 3) is in fact an (n, 4; B_DI)-reconstruction code, and the proof is complete.
4 Reconstruction codes with error-balls B_SD, B_SI and B_edit
In connection with the preceding discussion (see, for instance, Theorems 2.4 and 3.7), we mention without proof the following result for the optimal redundancy of an (n, N; B_2)-reconstruction code, where B_2 ∈ {B_SD, B_SI}.
We are now in a position to evaluate the value ρ_b(n, N; B_2) for B_2 ∈ {B_SD, B_SI} when N ≥ 3.
Proof. Assume that
D 2 (n, t, P ) = {c ∈ R b 2 (n, 1, P ) : Inv(c) ≡ t (mod 1 + P )},
where t ∈ Z_{1+P}. It follows then from Proposition 2.9(i) that |B_2(x) ∩ B_2(y)| < 3, for x ≠ y ∈ D_2(n, t, P).
If P = ⌈log 2 n⌉ + 1. Then we have |D 2 (n, t, ⌈log 2 n⌉ + 1)| ≥ n n/2 /(2 + 2P ).
This implies by Theorem 2.1 and Lemma 3.9, that D 2 (n, t, ⌈log 2 n⌉ + 1) has redundancy at most 1 2 log 2 n + log 2 log 2 n + O(1). On the other hand, let C be an (n, 3; B 2 )-reconstruction code. It is easy to check that every pair of different words in C are not Type-B-confusable (see Definition 2.5). Combining this and Theorem 2.11(ii), we give the result for N = 3.
In the case of N = 4. Define the set
C_a = {(x_1, x_2, . . . , x_n) ∈ U_n : ∑_{i=1}^{n/2} x_{2i} ≡ a (mod 2)},
where a ∈ Z_2. Then the pigeonhole principle implies that there is a choice of a ∈ Z_2 such that the set C_a has size at least half of U_n. By Proposition 2.9(i),
|B_2(x) ∩ B_2(y)| < 4, for x ≠ y ∈ C_a.
Thus, C a is an (n, 4; B 2 )-reconstruction code with redundancy at most n−log 2 ( n n/2 ) 2 = ∆+1. On the other hand, every distinct pair of words in an (n, 4; B 2 )-reconstruction code are not Type-B-confusable with m = 1; otherwise the two words are Type-A-confusable with m = 1, and Proposition 2.9 indicates that the size of their error-balls equals 4, a contradiction. Then Theorem 2.11(iii) implies the desired result.
Next, we consider the error-ball B edit . First, we define the balanced version of the Levenshtein code proposed in [24] as follows, which is a generalization of Definition 3.1,
BLT_a(n) = {(x_1, x_2, . . . , x_n) ∈ U_n : ∑_{i=1}^{n} i x_i ≡ a (mod 2n)}.
In [24], Levenshtein showed that the code BLT_a(n) is capable of correcting one deletion, one insertion or one substitution. So there is a choice of a ∈ Z_{2n} such that |BLT_a(n)| ≥ \binom{n}{n/2}/(2n). This leads to half of the following theorem.
Proof. Recall from Proposition 2.10 that ρ_b(n, 1; B_edit) = ρ_b(n, 2; B_edit). Additionally, by the code BLT_a(n) defined above, it suffices to show that the value ρ_b(n, N; B_edit) is lower bounded by (3/2) log_2 n + Θ(1). Notice that an (n, 1; B_edit)-reconstruction code is also an (n, 1; B_2)-reconstruction code with B_2 ∈ {B_S, B_D, B_I, B_SD, B_SI}, and then the theorem follows.
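For completeness, the balanced Levenshtein code BLT_a(n) can be enumerated in the same way as the BVT code; by pigeonhole some residue class a ∈ Z_{2n} has size at least \binom{n}{n/2}/(2n). A short sketch with our own helper names:

from itertools import combinations
from math import comb

def balanced_words(n):
    for ones in combinations(range(n), n // 2):
        yield tuple(1 if i in ones else 0 for i in range(n))

def blt_syndrome(x):
    # Levenshtein checksum sum of i * x_i, taken modulo 2n for the balanced variant.
    return sum((i + 1) * xi for i, xi in enumerate(x)) % (2 * len(x))

def blt_code(n, a):
    return [x for x in balanced_words(n) if blt_syndrome(x) == a]

n = 8
sizes = [len(blt_code(n, a)) for a in range(2 * n)]
assert sum(sizes) == comb(n, n // 2)
assert max(sizes) >= comb(n, n // 2) / (2 * n)
print(max(sizes))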
We now complete the evaluation of ρ_b(n, N; B_edit) with N ≥ 3. For N ∈ {3, 4}, suppose that E_2(n, t, P) = {c ∈ R_2^b(n, 2, P) : Inv(c) ≡ t (mod 1 + P)},
where t ∈ Z_{1+P}. Clearly, we have |B_edit(x) ∩ B_edit(y)| < 4 (in fact ≤ 2), for x ≠ y ∈ E_2(n, t, P).
Assume that P = 2⌈log 2 n⌉ + 3. Then we have |E 2 (n, t, ⌈2 log 2 n⌉ + 3)| ≥ n n/2 /(2 + 2P ). This implies by Lemmas 2.1 and 3.10, that E 2 (n, t, 2⌈log 2 n⌉ + 3) has redundancy at most 1 2 log 2 n + log 2 log 2 n + O(1). On the other hand, let C be an (n, N ; B edit )-reconstruction code. It is easy to check that every pair of different words in C are not Type-B-confusable. Consequently, we have ρ b (n, N ; B edit ) = 1 2 log 2 n + log 2 log 2 n + Θ(1). In the case of N ∈ {5, 6}, the code C a defined in the proof of Theorem 4.2 is exactly an (n, N ; B edit )-reconstruction code with redundancy at most n−log 2 ( n n/2 ) 2 = ∆+1. Combining this and Theorem 2.11(iii), we obtain the value ρ b (n, N ; B edit ) immediately.
5 Conclusion
In this paper, we completely determine the asymptotic optimal redundancy for balanced binary reconstruction codes which are affected by single edits, i.e., one substitution, one deletion, one insertion and their combinations. It is interesting to notice that for all possible single edits, the redundancy of an asymptotically optimal balanced reconstruction code gradually decreases from 3 2 log 2 n + O(1) to 1 2 log 2 n + log 2 log 2 n + O(1), and finally to 1 2 log 2 n + O(1) but with different speeds. Because of the balanced property, the optimal redundancy is not surprisingly bigger than the unbalanced one studied in [10] with the same noisy channel. However, if we define ρ ′ b (n, N ; B 2 ) = min{(n − ∆) − log 2 |C| : C ⊆ U n and ν(C; B 2 ) < N },
where ∆ is the redundancy of U n , then the asymptotical redundancy here is consistent with [10] in binary case. This in turn implies that the balanced constraint does not reduce the proportion of the (n, N ; B 2 )-reconstruction code in the corresponding codebook.
Moreover, it would be interesting to investigate the sequence reconstruction problem constrained in the balanced quaternary sequences, for instance, see [35,41] for a description of this family of sequences. The case of noisy channel with t-deletion (insertion) error-balls will be considered as another potential path for further work.
Data availibility Not applicable.
Code Availability Not applicable.
Declarations
Conflict of interest The authors have no conflicts of interest to declare that are relevant to the content of this paper.
For example, let x = 1010 ∈ U 4 . Then B S (x) = {1010, 0010, 1110, 1000, 1011} ⊆ F 4 2 , B D (x) = {010, 110, 100, 101} ⊆ F 3 2 , B I (x) = {01010, 11010, 10010, 10110, 10100, 10101} ⊆
F 5 2
2, and the remaining error-balls for x are the corresponding union between B S (x), B D (x) and B I (x) above.
Proposition 2 . 10 .
210Let x and y be two distinct words in U n . Then we have |B edit (x) ∩ B edit (y)| ∈ {0, 2, 4, 6} and the following hold.(i) If d H (x, y) = 2, then |B edit (x) ∩ B edit (y)| ∈ {2, 4, 6}. In particular, |B edit (x) ∩ B edit (y)| = 6 if and only if x and y are Type-A-confusable with m = 1; and |B edit (x)∩ B edit (y)| = 4 if and only if x and y are Type-B-confusable.
Definition 3.1. (BVT code) For any 0 ≤ a ≤ n, the balanced Varshamov-Tenengolts code BV T a (n) is defined as follows:BV T a (n) = (x 1 , x 2 , . . . , x n ) ∈ U n : n i=1 ix i ≡ a (mod n + 1) .
A hypergraph H = (X, H) on X is a family H of nonempty subsets of X, where elements of X are called vertices, and elements of H are called hyperedges. A matching of a hypergraph is a collection of pairwise disjoint hyperedges, and the matching number of H, denoted by ν(H), is the largest number of edges in a matching of H. A transversal of a hypergraph H = (X, H) is a subset T ⊂ X that intersects every hyperedge in H, and the transversal number of H, denoted by τ (H), is the smallest size of a transversal. Suppose that H has n vertices and m edges, let A n×m be the incidence matrix of H. Kulkarni et al. [23] proved that the matching number and transversal number of a hypergraph H are solutions of the following integer linear programming problems:
Lemma 3.2. [23] For any hypergraph H, we have ν(H) ≤ ν * (H) = τ * (H) ≤ τ (H). Let V n−1 be the subset of F n−1 2
H D n , the vertices are words in V n−1 , and the hyperedges are single-deletion balls of words in U n . Then the value of A D (n) is equal to the matching number ν(H D n ) of H D n . By Lemma 3.2, we have ν(H D n ) ≤ τ * (H D n ), where τ * (H D n ) is the fractional transversal number of H D n . By definition, τ * (H D n ) = min x∈V n−1 w(x) : x∈B D (y)
Lemma 3. 3 .
3The number of words in U n with exactly
.
Since M > 0, the inequality (c) holds by letting x = n−i i in x 2 − 2x + 1 ≥ 0. A similar argument works in the case when i is odd. By Lemma 3.5 we have x∈V n−1
3.6, we can determine the value of ρ b (n, N ; B 2 ) for N = 1 as follows.
Theorem 3 . 7 .
37Consider the error-ball B 2 ∈ {B D , B I , B DI }. Then ρ b (n, 1; B 2 ) = 3 2 log 2 n + Θ(1).
Let R b 2
2(n, ℓ, m) denote the set of all binary words c in U n such that the length of any ℓ ′ -periodic (ℓ ′ ≤ ℓ) subword of c is at most m. For example,
Lemma 3. 9 .
9For all n, m and ℓ = 1, we have |R b 2 (n, 1, m)| ≥
Theorem 3 . 11 .
311Consider the error-ball B 2 ∈ {B D , B I }. Then ρ b (n, N ; B 2 2 n + log 2 log 2 n + Θ(1), N = 2, ∆, N ≥ 3.Proof. The case N ≥ 3 is trivial. Let
C 2
2(n, t, P ) = {c ∈ R b 2 (n, 2, P ) : Inv(c) ≡ t (mod 1 + P/2)},
Theorem 3 . 12 .
312Consider the error-ball B DI . Then ρ b (n, N ; B DI ) = 2 n + log 2 log 2 n + Θ(1), N ∈ {3, 4}, ∆, N ≥ 5. Proof. The case N ≥ 5 is trivial. Since |B DI (x) ∩ B DI (y)| ∈ {0, 2, 4} for distinct words x, y ∈ U n by Corollary 2.8, we have ρ b (n, 2; B DI ) = ρ b (n, 1; B DI ) and ρ b (n, 4; B DI ) = ρ b (n, 3; B DI ) directly. Thus, the value ρ b (n, 2; B DI ) follows from Theorem 3.7. In addition, the proof of Theorem 3.11 shows that the code C 2 (n, t, 2⌈log 2 n⌉ + 3) is in fact an (n, 4; B DI )-reconstruction code, and the proof is complete. 4 Reconstruction codes with error-balls B SD , B SI and B editIn connection of the preceding discussion, for instance, see Theorems 2.4 and 3.7, we mention without proof the following result for the optimal redundancy of an (n, N ; B 2 )reconstruction code, where B 2 ∈ {B SD , B SI }.
Corollary 4. 1 .
1Consider the error-ball B 2 ∈ {B SD , B SI }. Then ρ b (n, N ; B 2 ) = 3 2 log 2 n + Θ(1) for N ∈ {1, 2}.
Theorem 4 . 2 .
42Consider the error-ball B 2 ∈ {B SD , B SI }. Then ρ b (n, N ; B 2
Theorem 4 . 3 .
43Consider the error-ball B edit . We have that ρ b (n, N ; B edit ) = 3 2 log 2 n + Θ(1) for N ∈ {1, 2}.
Theorem 4. 4 .
4Consider the error-ball B edit . Then ρ b (n, N ; B edit ) = 2 n + log 2 log 2 n + Θ(1), N ∈ {3, The case N ≥ 7 is trivial. By Proposition 2.10, ρ b (n, 4; B edit ) = ρ b (n, 3; B edit ) and ρ b (n, 6; B edit ) = ρ b (n, 5; B edit ).
Design of efficient error-correcting balanced codes. S Al-Bassam, B Bose, IEEE Trans. Comput. 4210S. Al-Bassam, B. Bose, Design of efficient error-correcting balanced codes, IEEE Trans. Comput., 1993, 42(10):1261-1266.
Upper bounds for constant-weight codes. E Agrell, A Vardy, K , Zeger , IEEE Trans. Infor. Theory. 467E. Agrell, A. Vardy, K, Zeger, Upper bounds for constant-weight codes, IEEE Trans. Infor. Theory, 2000, 46(7):2373-2395.
C Berge, Hypergraphs, ser. Combinatorics of Finite Sets. Amsterdam, The NetherlandsNorth Holland1st ed.C. Berge, Hypergraphs, ser. Combinatorics of Finite Sets, 1st ed. Amsterdam, The Netherlands:North Holland, 1989.
Etzion, Constructions for optimal constant-weight cyclically permutable codes and difference families. S Bitan, T , IEEE Trans. Infor. Theory. 411S. Bitan, T. Etzion, Constructions for optimal constant-weight cyclically permutable codes and difference families, IEEE Trans. Infor. Theory, 1995, 41(1):77-87.
Explicit formulas for the weight enumerators of some classes of deletion correcting codes. K Bibak, O Milenkovic, IEEE Trans. Infor. Theory. 673K. Bibak, O. Milenkovic, Explicit formulas for the weight enumerators of some classes of deletion correcting codes, IEEE Trans. Infor. Theory, 2019, 67(3):1809-1816.
S Blake-Wilson, K T Phelps, Constant weight codes and group divisible designs. 16S. Blake-Wilson, K. T. Phelps, Constant weight codes and group divisible designs, Des. Codes Cryptogr., 1999, 16(1):11-27.
Correcting two deletions with more reads. J Chrisnata, H M Kiahy, Proc. IEEE Int. Symp. Inf. Theory. IEEE Int. Symp. Inf. TheoryMelbourne, AustraliaJ. Chrisnata, H.M. Kiahy, Correcting two deletions with more reads, in Proc. IEEE Int. Symp. Inf. Theory, Melbourne, Australia, 2021, 2666-2671.
Correcting deletions with multiple reads. J Chrisnata, H M Kiahy, E Yaakobi, 10.1109/TIT.2022.3184868IEEE Trans. Infor. Theory. J. Chrisnata, H.M. Kiahy, E. Yaakobi, Correcting deletions with multiple reads, IEEE Trans. Infor. Theory, 2022, DOI: 10.1109/TIT.2022.3184868.
Coding for racetrack memories. Y M Chee, H M Kiah, A Vardy, V K Vu, E Yaakobi, IEEE Trans. Infor. Theory. 6411Y.M. Chee, H.M. Kiah, A. Vardy, V.K. Vu, E. Yaakobi, Coding for racetrack memories, IEEE Trans. Infor. Theory, 2018, 64(11):7094-7112.
Coding for sequence reconstruction for single edits. K Cai, H M Kiah, T T Nguyen, E Yaakobi, IEEE Trans. Infor. Theory. 681K. Cai, H.M. Kiah, T.T. Nguyen, E. Yaakobi, Coding for sequence reconstruction for single edits, IEEE Trans. Infor. Theory, 2021, 68(1):66-79.
Optimal reconstruction codes for deletion channels. J Chrisnata, H M Kiah, E Yaakobi, Proc. IEEE Int. Symp. Inf. Theory. IEEE Int. Symp. Inf. TheoryKapolei, USAJ. Chrisnata, H.M. Kiah, E. Yaakobi, Optimal reconstruction codes for deletion chan- nels, in Proc. IEEE Int. Symp. Inf. Theory, Kapolei, USA, 2020, 279-283.
DC-free coset codes. R H Deng, M A Herro, IEEE Trans. Infor. Theory. 344R.H. Deng, M.A. Herro, DC-free coset codes, IEEE Trans. Infor. Theory, 1988, 34(4):786-792.
Reconstruction of signed permutations from their distorted patterns. E Konstantinova, Proc. IEEE Int. Symp. Inf. Theory. IEEE Int. Symp. Inf. TheoryAdelaide, AustraliaE. Konstantinova, Reconstruction of signed permutations from their distorted patterns, in Proc. IEEE Int. Symp. Inf. Theory, Adelaide, Australia, 2005, 474-477.
On reconstruction of signed permutations distorted by reversal errors. E Konstantinova, Discrete Math. 3085-6E. Konstantinova, On reconstruction of signed permutations distorted by reversal er- rors, Discrete Math., 2008, 308(5-6):974-984.
Self-complementary balanced codes and quasi-symmetric designs. F W Fu, K W Wei, Des. Codes Cryptogr. 273F.W. Fu, K.W. Wei, Self-complementary balanced codes and quasi-symmetric designs, Des. Codes Cryptogr., 2002, 27(3):271-279.
Constructions of binary constant-weight cyclic codes and cyclically permutable codes. N Q A L Györfi, J L Massey, IEEE Trans. Infor. Theory. 383N.Q.A.L. Györfi, J.L. Massey, Constructions of binary constant-weight cyclic codes and cyclically permutable codes, IEEE Trans. Infor. Theory, 1992, 38(3):940-949.
Lower bounds for constant weight codes. R L Graham, N J A Sloane, IEEE Trans. Infor. Theory. 261R.L. Graham, N.J.A. Sloane, Lower bounds for constant weight codes, IEEE Trans. Infor. Theory, 1980, 26(1):37-43.
Sequence reconstruction over the deletion channel. R Gabrys, E Yaakobi, Proc. IEEE Int. Symp. Inf. Theory. IEEE Int. Symp. Inf. TheoryBarcelona, SpainR. Gabrys and E. Yaakobi, Sequence reconstruction over the deletion channel, in Proc. IEEE Int. Symp. Inf. Theory, Barcelona, Spain, 2016, 1596-1600.
Sequence reconstruction over deletion channel. R Gabrys, E Yaakobi, IEEE Trans. Infor. Theory. 644R. Gabrys, E. Yaakobi, Sequence reconstruction over deletion channel, IEEE Trans. Infor. Theory, 2018, 64(4):2924-2931.
J Matoušek, J Nešetřil, Invitation to Discrete Mathematics. Oxford University Press2nd edJ. Matoušek, J. Nešetřil, Invitation to Discrete Mathematics, 2nd ed. Oxford University Press, 2009.
Asymptotic improvement of the Gilbert-Varshamov bound on the size of binary codes. T Jiang, A Vardy, IEEE Trans. Infor. Theory. 508T. Jiang, A. Vardy, Asymptotic improvement of the Gilbert-Varshamov bound on the size of binary codes, IEEE Trans. Infor. Theory, 2004, 50(8):1655-1664.
Efficient balanced codes. D Knuth, IEEE Trans. Infor. Theory. 321D. Knuth, Efficient balanced codes, IEEE Trans. Infor. Theory, 1986, 32(1):51-53.
Nonasymptotic upper bounds for deletion correcting codes. A A Kulkarni, N Kiyavash, IEEE Trans. Infor. Theory. 598A.A. Kulkarni, N. Kiyavash, Nonasymptotic upper bounds for deletion correcting codes, IEEE Trans. Infor. Theory, 2013, 59(8):5115-5130.
English translation in Sov. V L Levenshtein, Binary codes capable of correcting deletions, insertions and reversals. 163Phys. Dokl.V.L. Levenshtein, Binary codes capable of correcting deletions, insertions and reversals, Dokl. Akad. Nauk SSSR, 1965, 163(4):845-848, 1965. English translation in Sov. Phys. Dokl., 1966, 10(8):707-710.
Data integrity in digital optical disks. E L Leiss, IEEE Trans. Cornput. 9E.L. Leiss, Data integrity in digital optical disks, IEEE Trans. Cornput., 1984, c- 33(9):818-827.
Efficient reconstruction of sequences. V L Levenshtein, IEEE Trans. Infor. Theory. 471V.L. Levenshtein, Efficient reconstruction of sequences, IEEE Trans. Infor. Theory, 2001, 47(1):2-22.
Efficient reconstruction of sequences from their subsequences or supersequences. V L Levenshtein, J. Combinat. Theory, A. 932V.L. Levenshtein, Efficient reconstruction of sequences from their subsequences or su- persequences, J. Combinat. Theory, A, 2001, 93(2):310-332.
Reconstruction of a graph from 2-vicinities of its vertices. V I Levenshtein, E Konstantinova, E Konstantinov, S Molodtsov, Discrete Appl. Math. 1569V.I. Levenshtein, E. Konstantinova, E. Konstantinov, S. Molodtsov, Reconstruction of a graph from 2-vicinities of its vertices, Discrete Appl. Math., 2008, 156(9):1399-1406.
Error graphs and the reconstruction of elements in groups. V I Levenshtein, J Siemons, J. Combinat. Theory, A. 1164V.I. Levenshtein, J. Siemons, Error graphs and the reconstruction of elements in groups, J. Combinat. Theory, A, 2009, 116(4):795-815.
Coding over sets for DNA storage. A Lenz, P H Siegel, A Wachter-Zeh, E Yaakobi, IEEE Trans. Infor. Theory. 664A. Lenz, P.H. Siegel, A. Wachter-Zeh, E. Yaakobi, Coding over sets for DNA storage, IEEE Trans. Infor. Theory, 2020, 66(4):2331-2351.
On single-deletion-correcting codes. N J A Sloane, Codes and Designs: Proc. Conf. Honoring Professor D.K. Ray-Chaudhuri on the Occasion of his 65th Birthday. N.J.A. Sloane, On single-deletion-correcting codes, in Codes and Designs: Proc. Conf. Honoring Professor D.K. Ray-Chaudhuri on the Occasion of his 65th Birthday, 2000.
Exact reconstruction from insertions in synchronization codes. F Sala, R Gabrys, C Schoeny, L Dolecek, IEEE Trans. Infor. Theory. 634F. Sala, R. Gabrys, C. Schoeny, L. Dolecek, Exact reconstruction from insertions in synchronization codes, IEEE Trans. Infor. Theory, 2017, 63(4):2428-2445.
Codes correcting a burst of deletions or insertions. C Schoeny, A Wachter-Zeh, R Gabrys, E Yaakobi, IEEE Trans. Infor. Theory. 634C. Schoeny, A. Wachter-Zeh, R. Gabrys, E. Yaakobi, Codes correcting a burst of deletions or insertions, IEEE Trans. Infor. Theory, 2017, 63(4):1971-1985.
On error-correcting balanced codes. H V Tilborg, M Blaum, IEEE Trans. Infor. Theory. 355H.V. Tilborg, M. Blaum, On error-correcting balanced codes, IEEE Trans. Infor. The- ory, 1989, 35(5):1091-1095.
New classes of balanced quaternary and almost balanced binary sequences with optimal autocorrelation value. X Tang, C Ding, IEEE Trans. Infor. Theory. 5612X. Tang, C. Ding, New classes of balanced quaternary and almost balanced bi- nary sequences with optimal autocorrelation value, IEEE Trans. Infor. Theory, 2010, 56(12):6398-6405.
Sequemce reconstruction for limited-magnitude errors. H J Wei, M Schwartz, 10.1109/TIT.2022.3159736IEEE Trans. Infor. Theory. 2022H.J. Wei, M. Schwartz, Sequemce reconstruction for limited-magnitude errors, IEEE Trans. Infor. Theory, 2022, doi:10.1109/TIT.2022.3159736.
On the undetected error probability of nonlinear binary constant weight codes. X M Wang, Y X Yang, IEEE Trans. Commun. 427X.M. Wang, Y.X. Yang, On the undetected error probability of nonlinear binary con- stant weight codes, IEEE Trans. Commun., 1994, 42(7):2390-2394.
Mutually uncorrelated primers for DNA-based data storage. S M H T Yazdi, H M Kiah, R Gabrys, O Milenkovic, IEEE Trans. Infor. Theory. 649S.M.H.T. Yazdi, H.M. Kiah, R. Gabrys, O. Milenkovic, Mutually uncorrelated primers for DNA-based data storage, IEEE Trans. Infor. Theory, 2018, 64(9):6283-6296.
DNAbased storage: Trends and methods. S M H T Yazdi, H M Kiah, E Garcia-Ruiz, J Ma, H Zhao, O Milenkovic, IEEE Trans. Mol., Biol. Multi-Scale Commun. 13S.M.H.T. Yazdi, H.M. Kiah, E. Garcia-Ruiz, J. Ma, H. Zhao, O. Milenkovic, DNA- based storage: Trends and methods, IEEE Trans. Mol., Biol. Multi-Scale Commun., 2015, 1(3):230-248.
Sequence reconstruction for Grassmann graphs and permutations. E Yaakobi, M Schwartz, M Langberg, J Bruck, Proc. IEEE Int. Symp. Inf. Theory. IEEE Int. Symp. Inf. TheoryIstanbul, TurkeyE. Yaakobi, M. Schwartz, M. Langberg, J. Bruck, Sequence reconstruction for Grass- mann graphs and permutations, in Proc. IEEE Int. Symp. Inf. Theory, Istanbul, Turkey, 2013, 874-878.
Balanced quaternary sequences pairs of odd period with (almost) optimal autocorrelation and cross-correlation. Y Yang, X H Tang, IEEE Commun. Lett. 188Y. Yang, X.H. Tang, Balanced quaternary sequences pairs of odd period with (almost) optimal autocorrelation and cross-correlation, IEEE Commun. Lett., 2014, 18(8):1327- 1330.
|
[] |
[
"Radiative seesaw in left-right symmetric model",
"Radiative seesaw in left-right symmetric model"
] |
[
"Pei-Hong Gu \nThe Abdus Salam International Centre for Theoretical Physics\nStrada Costiera 1134014TriesteItaly\n",
"Utpal Sarkar \nPhysical Research Laboratory\n380009AhmedabadIndia\n"
] |
[
"The Abdus Salam International Centre for Theoretical Physics\nStrada Costiera 1134014TriesteItaly",
"Physical Research Laboratory\n380009AhmedabadIndia"
] |
[] |
There are some radiative origins for the neutrino masses in the conventional left-right symmetric models with the usual bi-doublet and triplet Higgs scalars. These radiative contributions could dominate over the tree-level seesaw and could explain the observed neutrino masses.Introduction: Strong evidence from the neutrino oscillation experiments has confirmed the tiny but nonzero neutrino masses. This phenomenon is elegantly explained by the seesaw mechanism [1] in some extensions of the standard model (SM). The seesaw scenario can be naturally embedded into the left-right symmetric models[2]and also the grand unified theories (GUTs).In this paper, we discuss the neutrino mass generation in a general class of left-right symmetric models with the Higgs sector including one bi-doublet, one left-handed triplet and one right-handed triplet Higgs fields. Our analysis shows that the neutrino masses can originate from some loop diagrams in addition to the tree-level seesaw. We also demonstrate that the radiative neutrino masses could explain the experimental results for some choice of parameters.The left-right symmetric model: We consider the leftright symmetric extension of the SM with the gauge group SU (2) L × SU (2) R × U (1) B−L and the following Higgs content:
|
10.1103/physrevd.78.073012
|
[
"https://arxiv.org/pdf/0807.0270v2.pdf"
] | 53,618,181 |
0807.0270
|
22f2686c865db958c7087c94f9f27381efd56968
|
Radiative seesaw in left-right symmetric model
9 Oct 2008
Pei-Hong Gu
The Abdus Salam International Centre for Theoretical Physics
Strada Costiera 1134014TriesteItaly
Utpal Sarkar
Physical Research Laboratory
380009AhmedabadIndia
Radiative seesaw in left-right symmetric model
9 Oct 2008numbers: 1460Pq1260Cn1260Fr
There are some radiative origins for the neutrino masses in the conventional left-right symmetric models with the usual bi-doublet and triplet Higgs scalars. These radiative contributions could dominate over the tree-level seesaw and could explain the observed neutrino masses.Introduction: Strong evidence from the neutrino oscillation experiments has confirmed the tiny but nonzero neutrino masses. This phenomenon is elegantly explained by the seesaw mechanism [1] in some extensions of the standard model (SM). The seesaw scenario can be naturally embedded into the left-right symmetric models[2]and also the grand unified theories (GUTs).In this paper, we discuss the neutrino mass generation in a general class of left-right symmetric models with the Higgs sector including one bi-doublet, one left-handed triplet and one right-handed triplet Higgs fields. Our analysis shows that the neutrino masses can originate from some loop diagrams in addition to the tree-level seesaw. We also demonstrate that the radiative neutrino masses could explain the experimental results for some choice of parameters.The left-right symmetric model: We consider the leftright symmetric extension of the SM with the gauge group SU (2) L × SU (2) R × U (1) B−L and the following Higgs content:
There are some radiative origins for the neutrino masses in the conventional left-right symmetric models with the usual bi-doublet and triplet Higgs scalars. These radiative contributions could dominate over the tree-level seesaw and could explain the observed neutrino masses.
PACS numbers: 14.60. Pq,12.60.Cn,12.60.Fr Introduction: Strong evidence from the neutrino oscillation experiments has confirmed the tiny but nonzero neutrino masses. This phenomenon is elegantly explained by the seesaw mechanism [1] in some extensions of the standard model (SM). The seesaw scenario can be naturally embedded into the left-right symmetric models [2] and also the grand unified theories (GUTs).
In this paper, we discuss the neutrino mass generation in a general class of left-right symmetric models with the Higgs sector including one bi-doublet, one left-handed triplet and one right-handed triplet Higgs fields. Our analysis shows that the neutrino masses can originate from some loop diagrams in addition to the tree-level seesaw. We also demonstrate that the radiative neutrino masses could explain the experimental results for some choice of parameters.
The left-right symmetric model: We consider the leftright symmetric extension of the SM with the gauge group SU (2) L × SU (2) R × U (1) B−L and the following Higgs content:
φ(2, 2 * , 0) , ∆ L (3, 1, −2) , ∆ R (1, 3, −2) .(1)
A convenient representation of these fields is given by the 2 × 2 matrices:
φ = φ 0 1 φ + 2 φ − 1 φ 0 2 , ∆ L,R = δ + / √ 2 δ ++ δ 0 −δ + / √ 2 L,R .(2)
As for the fermion sector, it includes the left-and righthanded quarks:
q L (2, 1, 1 3 ) = u d L , q R (1, 2, 1 3 ) = u d R(3)
and the left-and right-handed leptons:
ψ L (2, 1, −1) = ν ℓ L , ψ R (1, 2, −1) = ν ℓ R .(4)
Under the left-right (parity) symmetry, we have φ ↔ φ † , ∆ L ↔ ∆ R , q L ↔ q R and ψ L ↔ ψ R . The parity * Electronic address: [email protected] † Electronic address: [email protected] invariant Yukawa couplings are then given by
L ⊃ −ỹ q ij q L iφ q R j − y q ij q L i φq R j −ỹ ψ ij ψ L iφ ψ R j −y ψ ij ψ L i φψ R j − 1 2 f ij ψ c L i iτ 2 ∆ L ψ L j +ψ c R i iτ 2 ∆ R ψ R j + h.c. ,(5)whereφ = τ 2 φ * τ 2 , y q = y † q ,ỹ q =ỹ † q , y ψ = y † ψ ,ỹ ψ = y † ψ and f = f T .
For simplicity, we do not present the most general renormalizable and parity invariant scalar potential which can be found in many early works [3,4].
The seesaw mechanism: We now review the seesaw mechanism of the neutrino masses in the left-right symmetric model with the choice of Higgs scalars we considered. In this case, the left-right symmetry is broken down to the SM SU (2) L × U (1) Y symmetry after the right-handed triplet Higgs scalar develops its vacuum expectation value (vev) v R ≡ ∆ R . From Eq. (5), we thus have the following Yukawa couplings and Majorana mass term:
L ⊃ −y d ij q L iφ d R j − y u ij q L i ϕu R j − y ℓ ij ψ L iφ ℓ R j −y ν ij ψ L i ϕν R j − 1 2 f ij v R ν c R i ν R j − 1 2 f ij ψ c L i iτ 2 ∆ L ψ L j + h.c. .(6)
Here we have defined
φ 1 = φ 0 1 φ − 1 , φ 2 = φ 0 * 2 −φ + * 2 ,(7)
and then
ϕ = φ 1 cos β + φ 2 sin β = ϕ 0 ϕ − ,(8)y d = −y q sin β −ỹ q cos β ,(9)
y u = y q cos β +ỹ q sin β , (10) y ℓ = −y ψ sin β −ỹ ψ cos β , (11) y ν = y ψ cos β +ỹ ψ sin β ,
where β = arctan v 2 v 1 with v 1 ≡ φ 1 and v 2 ≡ φ 2 .(12)
Obviously, the doublet scalar ϕ is the SM Higgs. Note that y u = −y d and y ν = −y ℓ for β = π 4 from v 1 = v 2 . So we will not consider the case of v 1 = v 2 due to the mass differences between the up and down type quarks.
The first line in Eq. (6) will give the masses to the charged fermions after the electroweak symmetry is broken by v ≡ ϕ ≃ 174 GeV. As for the second line, the first and the second terms generate the Dirac masses of the neutrinos and the Majorana masses of the righthanded neutrinos, respectively:
m D = y ν v ,(13)M R = f v R .(14)
For M R ≫ m D , the left-handed neutrinos can naturally acquire the small Majorana masses,
m I tree = −m * D 1 M † R m † D ∼ O y 2 ν f v 2 v R ,(15)
i.e. the type-I seesaw formula. The third line will also give the left-handed neutrinos a Majorana mass term,
m II tree = f v L with v L ≡ ∆ L .(16)
Here v L can be determined by minimizing the complete scalar potential [3],
v L ≃ − µ v 2 M 2 δ 0 L ∼ − µv 2 v 2 R ,(17)
where µ v R is a product of v R and some combination of couplings entering in the parity invariant scalar potential. So, we have
m II tree ∼ −f µv 2 v 2 R ,(18)
which can be comparable to the type-I seesaw contribution. The generation of the small v L (17) and then the tiny neutrino masses (18) is named as the type-II seesaw.
The radiative generation of neutrino masses: The bidoublet Higgs scalar contains two iso-doublet scalars: the SM Higgs ϕ (8) and
η = −φ 1 sin β + φ 2 cos β = η 0 η − .(19)
η couples to the fermions, but it cannot contribute to any fermion mass at the tree level since it has no vev. We shall show that η gives new radiative seesaw contribution to the neutrino masses through some loop diagrams.
For the purpose of demonstration, we deduce the Yukawa couplings of η to the leptons from Eq. (5),
L ⊃ −h ν ij ψ L i ην R j − h ℓ ij ψ L iη ℓ R j + h.c. ,(20)
where
h ν = −y ψ sin β +ỹ ψ cos β = − y ℓ sec 2β + 1 2 y ν tan 2β for β = π 4 ,(21)h ℓ = −y ψ cos β +ỹ ψ sin β = − y ν sec 2β + 1 2 y ℓ tan 2β for β = π 4 .(22)
As shown in Fig. 1, the quartic coupling between ϕ and η,
L ⊃ −λ(ϕ † η) 2 + h.c. ,(23)
where λ O (1) is some combination of couplings entering in the parity invariant scalar potential, will generate the radiative neutrino masses associated with the first term in Eq. (20) and the Majorana masses of the right-handed neutrinos. We choose the basis in which the Majorana mass matrix (14) of the right-handed neutrinos are real and diagonal and then explicitly calculate the radiative neutrino masses, which have the same forms with those in the two Higgs doublet model [5],
m I 1−loop ij = 3 k=1 h * ν ik h * ν jk 16π 2 M R k × M 2 η 0 R M 2 η 0 R − M 2 R k ln M 2 η 0 R M 2 R k − M 2 η 0 I M 2 η 0 I − M 2 R k ln M 2 η 0 I M 2 R k .(24)
Here η 0 R and η 0 I are defined by η 0 = 1 √ 2 η 0 R + iη 0 I . Note the quartic coupling (23) guarantees the mass difference between η 0 R and η 0 I ,
M 2 η 0 R − M 2 η 0 I = 4λv 2 ,(25)
where λ has been chosen to be real by the proper phase rotation, so that the radiative neutrino masses (24) will not vanish. The mass splitting between the two doublet scalars φ and η are of the order of v R , since they belong to the same representation of SU (2) R . Therefore, M η 0 R,I are of the order of v R for the mass of φ is much below v R . In general, M R k is smaller than v R , so we can take
M 2 R k ≪ M 2 η 0 R,I
and then simplify the mass formula (24) as
m I 1 -loop ij ≃ 3 k=1 h * ν ik h * ν jk 16π 2 M R k ln M 2 η 0 R M 2 η 0 I ≃ 3 k=1 h * ν ik h * ν jk 16π 2 M R k ln 1 + O λ v 2 v 2 R ∼ 1 16π 2 O λh 2 ν f v 2 v R . (26) ψ L ν R ν c R ψ c L η η ϕ ϕ FIG. 1:
The one-loop diagram mediated by the right-handed neutrinos for generating the radiative neutrino masses.
ψ L ℓ R ψ L ψ c L η ∆ L ϕ ϕ FIG. 2:
The one-loop diagram mediated by the left-handed triplet Higgs for generating the radiative neutrino masses.
Now we discuss the contribution from the left-handed triplet Higgs to the radiative neutrino masses. There is a cubic coupling among ∆ L , ϕ and η,
L ⊃ −µ ′ η T iτ 2 ∆ L ϕ + h.c. ,(27)
where µ ′ v R is a product of v R and some combination of couplings entering in the parity invariant scalar potential. Therefore, associated with the third term in Eq. (6) and the second term in Eq. (20), the cubic coupling (27) can generate the radiative neutrino masses as shown in Fig. 2. For illustration, we write down the mass matrix of δ + L and η − ,
L ⊃ − δ + * L , η − M 2 δ + L − 1 √ 2 µ ′ v − 1 √ 2 µ ′ v M 2 η − δ + L η − * ,(28)
Here µ ′ has been chosen to be real by the proper phase rotation. There are two mass eigenstates S 1 and S 2 ,
S 1 S 2 = cos ϑ − sin ϑ sin ϑ cos ϑ δ + L η − * ,(29)
with the mixing angle
tan 2ϑ = √ 2µ ′ v M 2 δ + L − M 2 η − ,(30)
and the masses
M 2 S1,2 = 1 2 M 2 δ + L + M 2 η − ± M 2 δ + L − M 2 η − 2 + 2µ ′2 v 2 . (31) For M 2 δ + L ∼ M 2 η − ∼ M 2 δ + L − M 2 η − = O(v 2 R ) and µ ′ v ≪ v 2 R , we have ϑ = O( µ ′ v v 2 R ) ,(32)M 2 S 1,2 ∼ M 2 S 2 − M 2 S 1 = O(v 2 R ) .(33)
We then calculate the formula of the radiative neutrino masses induced by Fig. 2,
m II 1−loop ij = 1 16π 2 sin 2ϑ √ 2 k=e,µ,τ f ik h † ℓ kj m k × M 2 S 1 M 2 S 1 − m 2 k ln M 2 S 1 m 2 k − M 2 S 2 M 2 S 2 − m 2 k ln M 2 S 2 m 2 k .(34)
Here we have chosen the basis in which the Yukawa couplings (11) of the charged leptons are real and diagonal and have referred m k to the masses of the charged leptons. Note the above mass formula is different from that of the Zee model [6] becaue f is symmetric. For M 2 S 1,2 ≫ m 2 k , we simplify Eq. (34) as
m II 1−loop ij ≃ 1 16π 2 sin 2ϑ √ 2 k=e,µ τ f ik h † ℓ kj m k ln M 2 S 2 M 2 S 1 ≃ 1 16π 2 O µ ′ v v 2 R ln [1 + O (1)] k=e,µ,τ f ik h † ℓ kj m k ∼ 1 16π 2 µ ′ v 2 v 2 R O (y ℓ h ℓ f )(35)
by using Eqs. (32) and (33). In addition to the two one-loop diagrams, i.e. Figs. 1 and 2, there is a two-loop diagram as shown in Fig. 3 contributing to the radiative neutrino masses due to the following cubic coupling,
L ⊃ −µ ′′ η T iτ 2 ∆ L η + h.c. ,(36)
where µ ′′ v R is a product of v R and some combination of couplings entering in the parity invariant scalar potential. For simplicity, we choose µ ′′ to be real by the proper phase rotation. Similar to the Zee-Babu model [7], we calculate the two-loop induced neutrino masses as below,
ψ L ℓ R ψ L ψ c L ℓ c R ψ c L η η ∆ L ϕ ϕm 2−loop ij = 1 64π 4 k,n=e,µ,τ h * ℓ ik f kn h † ℓ nj µ ′′ m k m n M 2 δ ++ L × sin 4 ϑ ln 1 + M 2 δ ++ L M 2 S 1 2 + 1 2 sin 2 2ϑ ln 1 + M 2 δ ++ L M 2 S 1 ln 1 + M 2 δ ++ L M 2 S 2 + cos 4 ϑ ln 1 + M 2 δ ++ L M 2 S 2 2 ≃ 1 64π 4 k,n=e,µ,τ h * ℓ ik f kn h † ℓ nj µ ′′ m k m n M 2 δ ++ L O (1) ∼ 1 64π 4 µ ′′ v 2 v 2 R O(y 2 ℓ h 2 ℓ f ) .(37)
Here we have taken M 2
δ ++ L ∼ M 2 S 1,2 ∼ v 2 R ≫ m 2
e,µ,τ and ϑ ≪ 1 into account. Unlike the Zee-Babu model [7], we needn't constrain the Yukawa couplings h ℓ to be asymmetric.
The radiative neutrino masses versus the tree-level neutrino masses: Now the complete neutrino masses should include five parts,
m ν = m I tree + m II tree + m I 1−loop + m II 1−loop + m 2−loop ∼ v 2 v R O y 2 ν f + µ v R O (f ) + 1 16π 2 O λh 2 ν f + 1 16π 2 µ ′ v R O (y ℓ h ℓ f ) + 1 64π 4 µ ′′ v R O(y 2 ℓ h 2 ℓ f ) ,(38)
where v ≃ 174 GeV, v R > O(TeV) and y ℓ O(10 −2 ) have been known. In the following, we shall show that the five parts can have different weight depending on the choice of the parameters. In particular, for some choice, the radiative contributions could dominate over the tree-level seesaw and could explain the observed neutrino masses.
We now demonstrate that the loop-induced neutrino masses could dominate over the tree-level seesaw for some choice of the parameters. For naturalness, we further assume that there is no cancelation in Eqs. (9) and (10) to generate a quark mass hierarchy so that v 1 and v 2 should not be at the same order since the top quark is much heavier than the bottom quark. For example, we will take v 2 v 1 = O(10 −2 ) and hence h ν ∼ y ℓ + 10 −2 y ν and h ℓ ∼ y ν + 10 −2 y ℓ in the quantitative estimation. We then find: (a) m I In the last case, we need the fine-tuned cancelation of m I tree and m II tree to ensure the complete neutrino masses below the experimental limit.
Summary: We find the new radiative seesaw mechanism for the neutrino masses in the conventional left-right symmetric model with one bi-doublet, one left-handed triplet and one right-handed triplet Higgs scalars. Specifically the neutrino masses can be generated not only by the tree-level seesaw but also by two one-loop diagrams and one two-loop diagram. For some choice of the parameters, the observed neutrino masses can be explained by these loop contributions.
FIG. 3 :
3The two-loop diagram for generating the radiative neutrino masses.
tree for f = O(0.1), λ O(1) and y ν O(10 −4 ); (b) m II 1−loopm I tree for f = O(0.1), µ ′ v R and y ν O(10 −5 ); (c) m 2−loop m I tree for f = O(0.1), µ ′′ v R and y ν O(10 −9). Obviously, we can have m I,II 1−loop m II tree and m 2−loop m II tree for the proper µ and other parameters. We then choose some values of the unknown parameters to show that the loop contributions can match the observed neutrino masses:(i) m I 1−loop ∼ O(10 −2 eV) ≫ m I tree ∼ m II tree ∼ m II 1−loop ≫ m 2−loop for v R = O(10 8 GeV), µ = O(GeV), µ ′ v R , µ ′′ v R , λ = O(1), f = O(0.1) and y ν = O(10 −5 ); (ii) m I tree ∼ m II tree ∼ m I 1−loop ∼ m II 1−loop ∼ O(10 −2 eV) ≫ m 2−loop for v R = O(10 4 GeV), µ = O(10 −6 GeV), µ ′ = O(10 2 GeV), µ ′′ v R , λ = O(10 −4 ), f = O(0.1) and y ν = O(10 −6 ); (iii) m I 1−loop ∼ m II 1−loop ∼ m 2−loop ∼ O(10 −2 eV) ≪ m I tree ∼ m II tree for v R = O(10 4 GeV), µ ∼ v R , µ ′ = O(0.1 GeV), µ ′′ ∼ v R , λ = O(10 −4 ), f = O(0.1) and y ν = O(0.1).
Acknowledgments:We thank Goran Senjanović for helpful discussions.
. P Minkowski, Phys. Lett. B. 67421P. Minkowski, Phys. Lett. B 67, 421 (1977);
T Yanagida, Proc. of the Workshop on Unified Theory and the Baryon Number of the Universe. O. Sawada and A. Sugamoto (KEKof the Workshop on Unified Theory and the Baryon Number of the UniverseTsukuba95T. Yanagida, in Proc. of the Workshop on Unified Theory and the Baryon Number of the Universe, ed. O. Sawada and A. Sugamoto (KEK, Tsukuba, 1979), p. 95;
M Gell-Mann, P Ramond, R L Slansky ; S, Glashow, Quarks and Leptons. M. Lévy et al.New YorkPlenum707M. Gell-Mann, P. Ramond, and R. Slansky, in Supergravity, ed. F. van Nieuwenhuizen and D. Freedman (North Holland, Ams- terdam, 1979), p. 315; S.L. Glashow, in Quarks and Lep- tons, ed. M. Lévy et al. (Plenum, New York, 1980), p. 707;
. R N Mohapatra, G Senjanović, Phys. Rev. Lett. 44912R.N. Mohapatra and G. Senjanović, Phys. Rev. Lett. 44, 912 (1980);
. J Schechter, J W F Valle, Phys. Rev. D. 222227J. Schechter and J.W.F. Valle, Phys. Rev. D 22, 2227 (1980).
. J C Pati, A Salam, Phys. Rev. D. 10275J.C. Pati and A. Salam, Phys. Rev. D 10, 275 (1974);
. R N Mohapatra, J C Pati, Phys. Rev. D. 11566R.N. Mohapatra and J.C. Pati, Phys. Rev. D 11, 566 (1975);
. R N Mohapatra, J C Pati, Phys. Rev. D. 112558R.N. Mohapatra and J.C. Pati, Phys. Rev. D 11, 2558 (1975);
. R N Mohapatra, G Senjanović, Phys. Rev. D. 121502R.N. Mohapatra and G. Senjanović, Phys. Rev. D 12, 1502 (1975).
. R N Mohapatra, G Senjanović, Phys. Rev. D. 23165R.N. Mohapatra and G. Senjanović, Phys. Rev. D 23, 165 (1981).
. N G Deshpande, J F Gunion, B Kayser, F Olness, Phys. Rev. D. 44837and references thereinN.G. Deshpande, J.F. Gunion, B. Kayser, and F. Olness, Phys. Rev. D 44, 837 (1991); and references therein.
. E Ma, Phys. Rev. D. 7377301E. Ma, Phys. Rev. D 73, 077301 (2006).
. A Zee, Phys. Lett. B. 93389A. Zee, Phys. Lett. B 93, 389 (1980).
. A Zee, Phys. Lett. B. 161141A. Zee, Phys. Lett. B 161, 141 (1985);
. K S Babu, Phys. Lett. B. 203132K.S. Babu, Phys. Lett. B 203, 132 (1988).
|
[] |
[
"On the Stability of a N -class Aloha Network",
"On the Stability of a N -class Aloha Network"
] |
[
"Plínio S Dester ",
"Paulo Cardieri ",
"José M C Brito "
] |
[] |
[] |
Necessary and sufficient conditions are established for the stability of a high-mobility N -class Aloha network, where the position of the sources follows a Poisson point process, each source has an infinity capacity buffer, packets arrive according to a Bernoulli distribution and the link distance between source and destination follows a Rayleigh distribution. It is also derived simple formulas for the stationary packet success probability and mean delay.
| null |
[
"https://arxiv.org/pdf/1711.07116v1.pdf"
] | 6,263,542 |
1711.07116
|
7f4f5ad8fa585aacbfa79be122c382ca9ed01efb
|
On the Stability of a N -class Aloha Network
20 Nov 2017
Plínio S Dester
Paulo Cardieri
José M C Brito
On the Stability of a N -class Aloha Network
20 Nov 20171
Necessary and sufficient conditions are established for the stability of a high-mobility N -class Aloha network, where the position of the sources follows a Poisson point process, each source has an infinity capacity buffer, packets arrive according to a Bernoulli distribution and the link distance between source and destination follows a Rayleigh distribution. It is also derived simple formulas for the stationary packet success probability and mean delay.
I. INTRODUCTION
Since the first deployments of large-scale wireless communications systems based on cellular technology in the mid-1970s, there has been an increasing demand for wireless communication services, which has led to the permanent search for more efficient use of radio resources. This situation is now more exacerbated, with applications that require higher data rates, such as those based on video streaming services, or scenarios with a larger number of terminals, as in the situations envisaged by the Internet of Things. In this sense, next-generation system developers and service providers are facing perhaps unthinkable challenges in the 1970s. Challenges, such as data rates of up to tens of Gb/s, latency in the order of milliseconds, and reduced energy consumption to 10% of current consumption, are set as targets for the fifth-generation cellular system (5G System) [1].
In one scenario envisioned for the 5G systems, a number of subnetworks will co-exist in the same geographic area, sharing radio resources. Each of these subnetworks will be dedicated to serve a particular type of application and/or scenario, with its own requirements, such as coverage, transmission rates and maximum acceptable latency [2]. In fact, it seems to be a consensus in the academic and industrial communities that the goals imposed on 5G systems will only be achieved through the use of heterogeneous networks. Thus, the evaluation of the performance of such systems and the design of techniques that efficiently exploit radio resources require a better understanding of the mechanisms involved in the transmission of a message through the wireless medium.
In this work, we are interested in studying the performance of a heterogeneous network in which N classes of users share the same radio resources, i.e., radio spectrum and transmission power. Each of these user classes has its own characteristics, such as terminal density, transmit power and traffic intensity, and quality of service requirements, namely, communication link quality and maximum tolerable delay. Scenarios like this one are expected to be found in 5G systems, involving, for example, applications of Internet of Things, in which thousands of wireless terminals connected to sensors access the wireless network to transmit their messages.
These wireless connections may involve an access point, in a cellular mode, or, alternatively, terminals may communicate directly with each other, in the so-called device-to-device mode (D2D). Terminals may be associated with different applications, with different quality of service requirements, such as maximum acceptable delay and minimum transmission rate.
The scenario studied in this paper has been investigated in several studies found in the literature. A particular interest has been observed in the situation where packets arrive at the terminals randomly, and packets waiting for transmission are stored in queues. In such a situation, mutual interference among terminals makes the queues of the terminals coupled, since the transmission success probability of a terminal (i.e., the service rate of the queue associated with that terminal) depends on the state of the queues of other terminals (if their queues are either empty or non-empty). The analysis of networks with coupled queues is known to be difficult, especially when the capture model 1 is adopted. To overcome this difficulty, several authors have used the concept of stochastic dominance (see, for instance, [3], [4]), which allows to determine the conditions for queue stability. Stamatiou and Haenggi [5] combined the use of the stochastic dominance technique with stochastic geometry results to study the stability of random networks, where terminals are located according to Poisson point processes. Conditions for queue stability were determined in [5] for a network with one and two classes of users.
The present work extends the results shown in [5], expanding the formulation that describes the behavior of users in a random network with N classes. We derive expressions in closed and simple forms for the necessary and sufficient conditions for the stability of the queues at the terminals of each class. More specifically, we have established the necessary and sufficient 1 According to the capture model, a packet is successfully received if the corresponding signal to interference plus noise ratio (SINR) at the receiver is above a certain threshold. In contrast, the collision model states that a packet transmission is successful only if there are no concurrent transmissions. conditions relating user densities, transmission power levels and traffic intensities, that ensure the terminal queues of all classes will be stable. In addition, we show that, in the case of stable networks, the portion of the radio resource allocated to each class is well defined by a simple expression relating its average delay, intensity of traffic, density of terminals, and the minimum acceptable signal-to-interference ratio (i.e., link quality).
The network model adopted in the analysis presented here is based on a model used in [5]- [8], but with a key difference: while in these papers the separation distance between TX and RX terminals is assumed fixed, in our work we assume that this distance follows a Rayleigh distribution. The Rayleigh distribution assumption for the link distance was also used in other works, e.g., [9]. This assumption allowed us to obtain simple mathematical expressions relating traffic intensity, average delay, density of terminals and the required link quality of each class, when the network is stable. While the existence of an interplay among these parameters in a scenario where terminals share radio resources is intuitive, the formulation proposed here unveils this relationship, showing it in a simple way, allowing for insights into the trade-offs amongst key network parameters.
Based on the formulation proposed here, we numerically evaluate the performance of a heterogeneous network with N = 2 classes of terminals that share the same channel: cellular terminals, which access a base station or an access point, and D2D terminals, which communicate directly with each other. In particular, we consider the scenario where D2D terminals can access the channel used by cellular transmissions, but without causing excessive degradation to the performance of the cellular terminals. Using the formulation proposed here, we determine the maximum acceptable traffic intensity of D2D users that guarantees the average delay of cellular users does not exceed a given threshold.
The rest of the article is organized as follows: Section II describes the model used throughout the paper; in Section III we derive stability conditions and the mean delay for a simplified network, where all but one traffic class transmit dummy packets; Section IV presents the main results of the paper, i.e., necessary and sufficient conditions for stability when we have N interacting traffic classes, it also shows a simple expression for the stationary mean delay and the packet success probability; Section V applies the obtained results in two simplified scenarios: one scenario optimizes the transmission power of different traffic classes of D2D with different delay requirements sharing the same channel and the other analyses the performance of a D2D class sharing a channel with a cellular class (uplink), where we set some delay requirements. 4 The notations used in the paper are summarized in Table I. of the source is reallocated following the high-mobility random walk model presented in [10]. The i-th source of traffic class n communicates with a destination located at Y i,n (t). Thus, the distance between the i-th source of class n and its destination is given by R i,n (t) = ||X i,n (t)−Y i,n (t)||. The random variables {Y i,n (t)} t are defined such that {R i,n (t)} t are i.i.d. and distributed as Rayleigh, with mean transmission distance represented by R n . We have chosen this distribution, because it leads to simple results and it has a physical interpretation 2 . The occupation of the buffer at each source is represented by its queue length {Q i,n (t)} of infinite capacity. The probability of a packet arrival at each queue is denoted by a n and the medium access probability by p n .
II. SYSTEM MODEL
Within each slot, the first event to take place for each source with a non-empty queue is the medium access decision with probability p n . If it is granted access and the SIR 3 is greater than a threshold θ n > 0, a packet is successfully transmitted and leaves the queue. Then, we have the arrival of the next packet with probability a n . The last event to take place is the displacement of the sources and destinations. For more details about the order in which these events occur, see [5]. The main difference between this model and the one presented in [5] is that R follows a Rayleigh distribution, instead of being constant.
The queue lengths of the source i, traffic class n are Markov Chains represented by
Q i,n (t + 1) = (Q i,n (t) − D i,n (t)) + + A i,n (t), t ∈ N,(1)
where (·) + = max{·, 0}, A i,n (t) are i.i.d. Bernoulli random variables of parameter a n and
D i,n (t) = e i,n (t) 1 SIR i,n >θn ,
where e i,n (t) are i.i.d. Bernoulli random variables of parameter p n , θ n represents the SIR threshold for successful communication and the SIR of user i, traffic class n is given by
SIR i,n (t) = P n h i,n,i,n (t) R i,n (t) −α (j,k) =(i,n) P k h j,k,i,n (t) ||X j,k (t) − Y i,n (t)|| −α ,(2)
where h j,k,i,n (t) are i.i.d. exponential distributed random variables of parameter 1 and represent the Rayleigh fading, α > 2 is the path loss fading parameter.
III. SINGLE USER CLASS NETWORK ANALYSIS
In this section, we analyze the behavior of one traffic class, given that all the other traffic classes transmit dummy packets, i.e., their users always have packets to transmit. We are considering the buffer of only one traffic class. Without loss of generality let us study the first traffic class.
From now on, for this section, whenever the subscript regarding the traffic class is omitted, we are referring to the first traffic class, i.e., n = 1. This section is a stepping stone for the next and main section of the paper. It also compares the results of the modified model with the results of the original model [5], where R is constant. 2 Let Π ⊂ R 2 be a PPP of density κ and R be the euclidean distance between the origin and the closest point of Π. Then,
the p.d.f. of R is fR(r) = 2κπre −κπr 2 , which is the Rayleigh density function. Furthermore, E[R] = 1/ √ 4κ.
A. Stability Conditions and Stationary Analysis
Sufficient and necessary conditions for stability of the buffers are shown in the following proposition.
Proposition 1. The queueing system {Q i (t)} is stable in the sense defined by [11] if and only
if a < p 1 + φ (λ p + ζ) ,(3)
where
φ 4 Γ(1 + 2/α) Γ(1 − 2/α) R 2 θ 2/α and ζ N n=2 (P n /P 1 ) 2/α λ n p n .
Then, the closure of arrival rates is given by
a ≤ 1 1 + φ (λ + ζ) .(4)
Proof. Using the same arguments as in the proof of [5, Proposition 1], we have stability if and
only if E[A i (t)] < E[D i (t)]
for the case where the first traffic class also transmits dummy packets (dominant network). Then, the effective PPP density of active sources from the n-th traffic class is p n λ n and the result showed in [12,Eq. (9)] gives that
P(SIR i (t) > θ | R i (t) = r) = exp −πΓ(1 + 2/α)Γ(1 − 2/α) θ 2/α r 2 (λ p + ζ) .(5)
The p.d.f. of R i (t) is given by f R (r) = 2κπ r e −κπ r 2 , where κ = 1/4R 2 . Then, it is easy to calculate E[D i (t)] by deconditioning Eq. (5), which results in the right-hand side of (3). The left-hand side and the closure of (3) is immediate.
In the case where the system is stable, we can calculate the stationary probabilities, as showed in the following proposition.
Proposition 2. When the system is stable, the stationary packet success probability is given by
p s = 1 − φ λ a 1 + φ ζ .
Proof. At steady state, the load at each queue is given by ρ = a/(p p s ) and the effective PPP density of active sources is given by λ ρ p. Following the same steps as in the proof of Proposition 1, we have that the stationary packet success probability is given by
p s = P(SIR i > θ) = ∞ 0 P(SIR i > θ | R i = r) f R (r)dr.
Solving the above integral, we find that
p s = 1 1 + φ (λ ρ p + ζ) = 1 1 + φ λ a ps + ζ .
Solving the above equation for p s ends the proof.
Proposition 3. The stationary mean packet delay is given by
D = (1 − a)(1 + φ ζ) p − (1 + φ (λ p + ζ)) a ,
which attains a minimum for a medium access probability p = 1.
Proof. From the proof of [5, Proposition 3] we know that the stationary mean delay is given by
D = (1 − a)/(p p s − a)
. Then, the result follows directly from Proposition 2. For comparison, D is plotted as a function of a in Fig. 1 for the following parameters ζ = 0 (only interference among users of the same class), φ λ = 0.5, 1, 2, the medium access probability is chosen such that the delay is minimized (p = 1) and we also plot the corresponding curves (dashed) for the model where R is constant [5]. All the physical parameters λ, α, θ, R are set to be the same for each couple of curves. It is interesting to notice that, for low values of φ λ the variance from R helps in the performance of the system, this is due to the fact that in an overcrowded system the possibility of R having a probability to be small guarantees some successful transmissions, while in the other model, where R is constant, this does not happen.
We also performed simulations to ascertain our results (crossed points). The average of links in the simulation were set to be 400.
IV. MULTIPLE-CLASS NETWORK
From now on, we consider a network with N classes of users, all with buffer and we assume that the medium access probability for all traffic classes is equal to 1, to simplify the analysis.
The motivation for this assumption, as can be seen in Section III, is that it minimizes the delay and maximizes the stability region 4 for that case.
The following proposition presents the stationary success probability and delay, when transmitting a packet in a stable network. The results that guarantee stability are presented later in the paper.
Proposition 4.
If the network is stable, then the stationary success probability and mean delay for each traffic class n ∈ N are given by
p s,n = 1 + φ n P δ n j P δ j λ j a j 1 − j φ j λ j a j −1 , D n =
1 − a n p s,n − a n ,
where the sums are taken over the set N , φ n Γ(1 + δ) Γ(1 − δ) 4 R 2 n θ 2/α n and δ 2/α.
Proof. The delay follows directly from the proof of [5,Proposition 3]. At steady state, we have that the effective PPP density of active sources for each traffic class is λ n a n /p s,n . Then, using the result from [12,Eq. (9)], one can show that p s,n = P(SIR i,n > θ n )
= ∞ 0 P(SIR i,n > θ n | R i,n (t) = r) f R (r) dr = 1 + φ n P δ n j P δ j λ j a j p s,j −1 ,(6)
which can be rearranged as
Note that the right-hand side does not depend on n. Then, for all j, we can write
P δ j φ j 1 − p s,j p s,j = P δ n φ n 1 − p s,n p s,n .(8)
For each j, we can solve the above equation for p s,j and plug into the sum of Eq. (6). Then, we can solve it for p s,n , which ends the proof.
Lemma 1. If the network is stable, then the following identity holds (at steady state), n∈N φ n λ n D n D n − 1 a n 1 − a n = 1.
Furthermore,
φ j P δ j D j D j − 1 1 1 − a j − 1 = φ k P δ k D k D k − 1 1 1 − a k − 1 ∀ j, k ∈ N .
Proof. We start with the terms of the sum, φ n λ n D n D n − 1 a n 1 − a n (i) = φ n λ n a n 1 − p s,n = P δ n λ n a n p s,n φ n P δ n p s,n 1 − p s,n 4 λ n R 2 n θ 2/α n D n D n − 1 a n 1 − a n = sin(2π/α) 2π/α ,
where we used Euler's reflection formula. Note that an 1−an and sin(2π/α) 2π/α are monotonic increasing functions and Dn Dn−1 is a monotonic decreasing function. The right hand-side of Eq. (9) can be seen as a resource available to the users of the channel. The larger the path loss exponent α, the larger (smaller) the terms λ n , R n , θ n , a n (D n ) can be. A possible modification is to make a direct exchange between decreasing the delay D n and decreasing the arrival rate of packets a n (by controlling the ratio of transmit power levels as it is showed in Section V-B), such that the term Dn Dn−1 an 1−an remains constant; or else increase the arrival rate of packets and decrease the number of users, such that the term λ n an 1−an remains constant; we can also exchange quantities among the terms of traffic class k and ℓ, such that the sum λ k R
2 k θ 2/α k D k D k −1 a k 1−a k + λ ℓ R 2 n θ 2/α n D ℓ D ℓ −1 a ℓ 1−a ℓ
remains constant; and so on.
Lemma 2.
A necessary and sufficient condition for the network stability is that a ∈ ν∈P S ν , where P is the space of all bijective functions from N to N and
S ν = a ∈ [0, 1] N φ ν(n) P δ ν(n) a ν(n) 1 − a ν(n) < 1 − n−1 k=1 φ ν(k) λ ν(k) a ν(k) n−1 k=1 P δ ν(k) λ ν(k) a ν(k) + N k=n P δ ν(k) λ ν(k) ∀n ∈ N ,
with the convention 0 k=1 · = 0.
Proof. See Appendix A.
The following theorem presents a simple form of stating Lemma 2 and relates (in the proof) the stability condition with the stationary mean delay in Lemma 1.
Theorem 1. The system network is stable if and only if a ∈ R, where R a ∈ [0, 1] N φ n P δ n a n 1 − a n < 1 − k φ k λ k a k k P δ k λ k a k ∀n ∈ N = a ∈ [0, 1] N φ n P δ n a n 1 − a n < 1 − k =n φ k λ k a k P δ n λ n + k =n P δ k λ k a k ∀n ∈ N .
Proof. See Appendix B. The proof simply shows that R = ν∈P S ν .
From the proof of Theorem 1, it is clear that for an arbitrary choice of the stationary mean delays D ∈ (1, ∞) N , it is possible to determine a vector of arrival rates a ∈ R, such that the specified mean delays are achieved.
The following corollary establishes a simple stability result, which will be useful in the Section V, where we deal with an optimization problem regarding the transmit powers.
Corollary 1. There exist P 1 , P 2 , . . . , P N ∈ R + such that the network is stable if and only if n∈N φ n λ n a n 1 − a n < 1.
Proof. If for some P 1 , P 2 , . . . , P N the system is stable, then from Theorem 1, a ∈ R and from Lemma 1 we have that 1 = n∈N φ n λ n D n D n − 1 a n 1 − a n > n∈N φ n λ n a n 1 − a n .
On the other hand, if we know that n∈N φ n λ n an 1−an < 1, then we can choose D 1 , D 2 , . . . , D N ∈ (1, ∞) such that n∈N φ n λ n Dn Dn−1 an 1−an = 1. It is easy to see that we can find P 1 , P 2 , . . . , P N such that a ∈ R in (12) and, again from Theorem 1, the system is stable.
V. INTERPRETATION AND APPLICATION OF THE RESULTS
In this section, we present some numerical results using the proposed formulation, applied to scenarios of different classes of D2D terminals and cellular terminals sharing a radio channel.
A. Optimization problem
Let us consider the scenario with N classes of D2D terminals sharing a channel. Each class may represent a particular user application, with each application having a different delay requirement in the network. For instance, applications such as Tactile Internet [13] or V2V have a more restrictive delay requirement than video streams. Let us suppose we are interested in adjusting the transmit power of each traffic class, such that the weighted average delay among all classes is minimized. This problem may be addressed as follows. For fixed arrival rates a that satisfies Corollary 1, let us minimize the delays D by controlling the ratio between the transmission powers P . Since each traffic class may require different response times, let us weight the optimization problem with the vector (c 1 , c 2 , . . . , c N ) ∈ R N + . The larger the coefficient of a class, the smaller the mean delay to deliver packets for that class. Then, we have
min P ∈R N + n∈N c n D n ,(10)
where D n is given by Proposition 4. Note that as thermal noise is not considered in our model,
we have a degree of freedom for the solution P * .
P * n δ = φ n a n 1 − a n 1 − k∈N φ k λ k a k 1 − a k + c n φ n λ n a n (1 − a n ) k∈N c k φ k λ k a k 1 − a k .
Proof. This can be proved by using Karush-Kuhn-Tucker conditions [14, Section 3.3.1].
It is interesting to note that if c n = φ n λ n an 1−an , then the optimum is attained when
D 1 = D 2 = · · · = D N = 1 − k φ k λ k a k 1−a k −1 .
As an example, let us consider a two-class D2D network, where Class 1 has a more restrictive delay requirement than Class 2, such that we choose c 1 ≥ c 2 . Figure 2 show the delays D n , n = 1, 2, for 1 10 ≤ c 2 ≤ 1, c 1 = 1 and a 1 = a 2 = 0.7. As expected, due to the stricter delay requirement of Class 1 and the symmetry between both classes, the optimization resulted in D 1 < D 2 and P * 1 > P * 2 . Another interesting example is to consider two classes with the same λ φ parameter, but with different arrival rates. Figures 3(a) and 3(b) show the delays and the transmit power ratio as a function of the arrival rate of the second class, respectively. As expected, in Fig. 3(a), when a 2 increases, both delays increase and when c 2 ≥ c 1 , D 2 (full curve) tends to remain below D 1 (dashed curve). It is worth noting that the curve of the transmit powers in Fig. 3(b) has a maximum. A possible explanation of this interesting behavior is that when a 2 is small, the second class rarely causes interference in the first class, then P 1 /P 2 ≈ 0 is the best choice to minimize the delays. As a 2 increases, it is necessary to increase the relative transmit power of Class 1, since the interference of Class 2 in Class 1 increases. However, when a 2 is large enough, the packet success probability of Class 2 becomes a concern, therefore it is necessary to decrease the interference from Class 1, thus decreasing the ratio P 1 /P 2 .
B. Cellular and D2D
Let the first and second traffic classes represent the D2D and the cellular, respectively. For the cellular, we consider the uplink transmission, which is closer to the proposed model, since the base stations do not move, only the users move and the model uses a high-mobility PPP.
Furthermore, we must disregard temporal correlation in order to conform to the model assumptions.
The maximum arrival rate for the D2D user, when we are able to control the transmission power, is given by Proposition 6, and a numerical example is shown in Fig. 4(a), where the quantity $\Psi_n \triangleq \phi_n \lambda_n \frac{D_n}{D_n-1}\frac{a_n}{1-a_n} \ge 0$ measures the use of the channel by the n-th traffic class in the sense presented by Lemma 1, where we have that $\sum_{n\in\mathcal{N}} \Psi_n = 1$. Then, it is natural to think that the n-th traffic class uses a percentage $\Psi_n$ of the channel. Proposition 6. Given the arrival rate $a_2$ and the constraints $D_1 \in (1, D_1^*]$ and $D_2 \in (1, D_2^*]$, the possible arrival rates for the first traffic class, over all $P_1, P_2 \in \mathbb{R}_+$, such that the system is stable, are given by
$$a_1 \le \left(1 + \frac{\phi_1 \lambda_1 \frac{D_1^*}{D_1^*-1}}{1 - \Psi_2^*}\right)^{-1} \quad \text{when } \Psi_2^* < 1, \qquad \text{where } \Psi_2^* \triangleq \phi_2 \lambda_2 \frac{D_2^*}{D_2^*-1}\frac{a_2}{1-a_2}.$$
We can achieve equality with
$$\frac{\phi_1}{\phi_2}\,\frac{P_2^{\delta}}{P_1^{\delta}} = \frac{\dfrac{\Psi_2^*}{\phi_2\lambda_2} + \dfrac{1}{D_2^*-1}}{\dfrac{1-\Psi_2^*}{\phi_1\lambda_1} + \dfrac{1}{D_1^*-1}}.$$
Proof. Follows from Lemma 1 and Theorem 1. As expected, Fig. 4(a) shows that as the use of the channel by the cellular increases, or as the maximum delay constraint of the D2D decreases, the maximum arrival rate for the D2D decreases. Furthermore, the larger the D2D delay constraint, the smaller the impact of this change on the maximum arrival rate permitted. This analysis agrees with the simple equation deduced in Lemma 1 that at steady state Ψ_1 + Ψ_2 = 1, i.e., we may assign a percentage Ψ_1 of the use of the channel to the D2D and the remaining percentage Ψ_2 to the cellular. Then, for the Ψ available, we may choose the performance parameters a and D such that we preserve the identity $\phi \lambda \frac{D}{D-1}\frac{a}{1-a} = \Psi$. The quantity φ is a constant related to the mean link distance of transmission and the SIR threshold for successful communication. Therefore, the relation between a and D is determined by the term φλ, which is proportional to the quantity λ R² θ^{2/α}.
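The bound and the power ratio of Proposition 6 are easy to evaluate; the lines below do so for one hypothetical operating point (the numbers are not the ones used for Fig. 4).

```python
# Hypothetical operating point for a D2D (class 1) / cellular (class 2) pair.
phi1_lam1 = 1.0               # phi_1 * lambda_1
phi2_lam2 = 1.0               # phi_2 * lambda_2
D1_star, D2_star = 5.0, 3.0   # delay budgets [slots]
a2 = 0.2                      # cellular arrival rate

Psi2 = phi2_lam2 * D2_star / (D2_star - 1.0) * a2 / (1.0 - a2)
if Psi2 < 1.0:
    a1_max = 1.0 / (1.0 + phi1_lam1 * (D1_star / (D1_star - 1.0)) / (1.0 - Psi2))
    # (phi_1/phi_2) * P_2^delta / P_1^delta that attains the maximum:
    ratio = ((Psi2 / phi2_lam2 + 1.0 / (D2_star - 1.0))
             / ((1.0 - Psi2) / phi1_lam1 + 1.0 / (D1_star - 1.0)))
    print("max a1 =", a1_max, " power-ratio term =", ratio)
else:
    print("the cellular class alone already saturates the channel")
```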
In order to attain the maximum arrival rate for the D2D, it is necessary to use the transmission power ratios presented in Proposition 6. In Fig. 4(b), this ratio is shown as a function of the maximum delay for the D2D for some values of Ψ_2, which is the percentage of the channel used by the cellular. As expected, as we increase the maximum delay for the D2D or as we decrease the use of the channel by the D2D, the smaller the relative transmission power required. It is remarkable that, again, as the maximum delay constraint increases, the impact on the transmission power ratio becomes smaller. Differently from the quantities a and D, the transmission power ratio is not simply determined by the product λ R² θ^{2/α}; we need to know the values of R² θ^{2/α} and λ separately.
VI. CONCLUSIONS
In this paper, we proposed a modified model to study the stability and delay of slotted Aloha in Poisson networks. The main modification with respect to other models in the literature is to consider an i.i.d. Rayleigh distribution for the source-destination link distance. This provides tractability to the model:
we derived necessary and sufficient conditions for stability in a network with N user-classes;
we also provided simple closed-form expressions for the packet success probability and mean delay. As shown by the results in the paper, the advantage of using this model as a base to model other network effects is its analytical tractability. For example, we were able to derive simple conditions to verify the stability of a network with undetermined transmit powers (see Corollary 1). We also solved (analytically) an optimization problem regarding the minimization of the delays in a network (see Proposition 5); this result was applied to a numerical example involving a D2D network and it showed interesting insights about the optimum transmit power of the user-classes.
APPENDIX A
Let us start with D = N, i.e., all users transmit dummy packets. For each step of the verification, we remove the stable traffic class from the set D. This procedure repeats until the set D becomes empty. In order to attain stability of the dominant network we must have an incoming packet probability smaller than the success probability [16]. A sufficient condition for the stability of the first traffic class is, for any queue i of this class (by symmetry),
$$a_1 < \mathbb{P}(\widetilde{\mathrm{SIR}}_{i,1} > \theta_1) = \left(1 + \frac{\phi_1}{P_1^{\delta}} \sum_{k=1}^{N} P_k^{\delta} \lambda_k\right)^{-1},$$
where $\widetilde{\mathrm{SIR}}$ represents the signal-to-interference ratio in the dominant network. This guarantees stability for the first traffic class. Let us remove it from the set D. Then, we calculate the stationary success probability of the first traffic class $p_{s,1}^{(1)}$ for this dominant network. At steady state, we have
$$p_{s,1}^{(1)} = \left(1 + \frac{\phi_1}{P_1^{\delta}} \Big( P_1^{\delta} \lambda_1 \frac{a_1}{p_{s,1}^{(1)}} + \sum_{k=2}^{N} P_k^{\delta} \lambda_k \Big)\right)^{-1},$$
which can be solved for p (1) s,1 ,
$$p_{s,1}^{(1)} = \frac{1 - \phi_1 \lambda_1 a_1}{1 + \dfrac{\phi_1}{P_1^{\delta}} \sum_{k=2}^{N} P_k^{\delta} \lambda_k}.$$
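For reference, this closed form is a one-liner to evaluate; the values below are purely illustrative.

```python
# Illustrative three-class example with equal unit powers (P_k^delta = 1).
phi, lam, P_delta, a1 = [0.3, 0.3, 0.3], [0.5, 0.4, 0.3], [1.0, 1.0, 1.0], 0.4

p_s_1 = ((1.0 - phi[0] * lam[0] * a1)
         / (1.0 + phi[0] / P_delta[0] * sum(P_delta[k] * lam[k] for k in (1, 2))))
print(p_s_1)
```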
The next step is to verify the conditions of stability for the second traffic class, when the first traffic class is at steady state. After that, we remove the second traffic class from the set D and calculate the stationary success probability of the two stable traffic classes in the dominant network. We repeat these steps until we remove all traffic classes, i.e., D = {}. We show this by induction: we suppose stability of the traffic classes 1, 2, . . . , j − 1 and let D = {j, j + 1, . . . , N}; the j-th traffic class is stable, given that all the traffic classes in N \ D are stable, when
$$a_j < \mathbb{P}(\widetilde{\mathrm{SIR}}_{i,j} > \theta_j) = \left(1 + \frac{\phi_j}{P_j^{\delta}} \Big( \sum_{k=1}^{j-1} P_k^{\delta} \lambda_k \frac{a_k}{p_{s,k}^{(j)}} + \sum_{k=j}^{N} P_k^{\delta} \lambda_k \Big)\right)^{-1}, \qquad (11)$$
where $p_{s,k}^{(j)}$ is the k-th traffic class success probability (1 ≤ k < j) at steady state in the dominant network at the j-th step. To calculate this probability, we must solve the following system of equations: for $k \in \{1, 2, \ldots, j-1\}$,
$$p_{s,k}^{(j)} = \left(1 + \frac{\phi_k}{P_k^{\delta}} \Big( \sum_{\ell=1}^{j-1} P_\ell^{\delta} \lambda_\ell \frac{a_\ell}{p_{s,\ell}^{(j)}} + \sum_{\ell=j}^{N} P_\ell^{\delta} \lambda_\ell \Big)\right)^{-1}.$$
Using an analogous approach as the one presented in the proof of Proposition 4, we have that
$$p_{s,k}^{(j)} = \left(1 + \frac{\phi_k}{P_k^{\delta}} \, \frac{\sum_{\ell=1}^{j-1} P_\ell^{\delta} \lambda_\ell a_\ell + \sum_{\ell=j}^{N} P_\ell^{\delta} \lambda_\ell}{1 - \sum_{\ell=1}^{j-1} \phi_\ell \lambda_\ell a_\ell}\right)^{-1}, \qquad k \in \{1, 2, \ldots, j-1\}.$$
Comparing the last two equations, it is easy to see that
$$\sum_{\ell=1}^{j-1} P_\ell^{\delta} \lambda_\ell \frac{a_\ell}{p_{s,\ell}^{(j)}} + \sum_{\ell=j}^{N} P_\ell^{\delta} \lambda_\ell = \frac{\sum_{\ell=1}^{j-1} P_\ell^{\delta} \lambda_\ell a_\ell + \sum_{\ell=j}^{N} P_\ell^{\delta} \lambda_\ell}{1 - \sum_{\ell=1}^{j-1} \phi_\ell \lambda_\ell a_\ell}.$$
Finally, we can use this result to rewrite Eq. (11) as
$$\frac{\phi_j}{P_j^{\delta}} \frac{a_j}{1-a_j} < \frac{1 - \sum_{k=1}^{j-1} \phi_k \lambda_k a_k}{\sum_{k=1}^{j-1} P_k^{\delta} \lambda_k a_k + \sum_{k=j}^{N} P_k^{\delta} \lambda_k}, \qquad j \in \mathcal{N}.$$
This concludes the proof, since the extension for the other partitions of N is analogous.
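The chain of conditions established above is straightforward to check numerically for a given ordering of the traffic classes; the sketch below does exactly that (parameter values are hypothetical).

```python
def ordering_is_stable(a, phi, lam, P_delta):
    """Check the sufficient condition derived above for one ordering:
    (phi_j/P_j^delta) a_j/(1-a_j) <
      (1 - sum_{k<j} phi_k lam_k a_k) /
      (sum_{k<j} P_k^delta lam_k a_k + sum_{k>=j} P_k^delta lam_k)."""
    N = len(a)
    for j in range(N):
        num = 1.0 - sum(phi[k] * lam[k] * a[k] for k in range(j))
        den = (sum(P_delta[k] * lam[k] * a[k] for k in range(j))
               + sum(P_delta[k] * lam[k] for k in range(j, N)))
        if phi[j] / P_delta[j] * a[j] / (1.0 - a[j]) >= num / den:
            return False
    return True

# Hypothetical two-class check.
print(ordering_is_stable(a=[0.3, 0.2], phi=[0.3, 0.3],
                         lam=[0.5, 0.5], P_delta=[1.0, 1.5]))
```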
APPENDIX B
PROOF OF THEOREM 1
Proof. First, let us show that the set related to Lemma 1 (by taking all the possible delays)
corresponds to the region R defined in the statement of the theorem, i.e., let us show that
$\mathcal{R} = \mathcal{R}'$, where
$$\mathcal{R}' \triangleq \bigcup_{D \in (1,\infty)^N} \Big\{ a \in [0,1]^N \;\Big|\; \sum_{n\in\mathcal{N}} \phi_n \lambda_n \frac{D_n}{D_n-1}\frac{a_n}{1-a_n} = 1, \;\; \frac{\phi_j}{P_j^{\delta}}\Big(\frac{D_j}{D_j-1}\frac{1}{1-a_j} - 1\Big) = \frac{\phi_k}{P_k^{\delta}}\Big(\frac{D_k}{D_k-1}\frac{1}{1-a_k} - 1\Big) \;\; \forall j,k \in \mathcal{N} \Big\}. \qquad (12)$$
This can be seen by manipulating the equations that define the set in (12). Let us start with
$$\sum_k \phi_k \lambda_k \frac{D_k}{D_k-1}\frac{a_k}{1-a_k} = 1,$$
which can be rewritten as
$$\sum_k P_k^{\delta} \lambda_k a_k \, \frac{\phi_k}{P_k^{\delta}}\Big(\frac{D_k}{D_k-1}\frac{1}{1-a_k} - 1\Big) = 1 - \sum_k \phi_k \lambda_k a_k.$$
Then, using the other equations in (12), we have that
$$\frac{\phi_n}{P_n^{\delta}}\Big(\frac{D_n}{D_n-1}\frac{1}{1-a_n} - 1\Big) \sum_k P_k^{\delta} \lambda_k a_k = 1 - \sum_k \phi_k \lambda_k a_k.$$
Since $\frac{D_n}{D_n-1} \in (1,\infty)$, for each $n \in \mathcal{N}$ we must have
$$\frac{\phi_n}{P_n^{\delta}}\frac{a_n}{1-a_n} < \frac{\phi_n}{P_n^{\delta}}\Big(\frac{D_n}{D_n-1}\frac{1}{1-a_n} - 1\Big) = \frac{1 - \sum_k \phi_k \lambda_k a_k}{\sum_k P_k^{\delta} \lambda_k a_k}. \qquad (13)$$
Therefore, $\mathcal{R}' \subset \mathcal{R}$. On the other hand, when $D_n$ varies continuously from 1 to ∞, $a_n$ varies continuously from 0 to the maximum value respecting Eq. (13), which means that $\mathcal{R} \subset \mathcal{R}'$. Then,
R ′ = R.
Now, let us prove that $\mathcal{R}' \subset \bigcup_{\nu \in \mathcal{P}} \mathcal{S}_\nu$. Note that the stability region of Lemma 2 demands that at least one $a_n$ ($n \in \mathcal{N}$) satisfies
$$\frac{\phi_n}{P_n^{\delta}}\frac{a_n}{1-a_n} < \frac{1}{\sum_{k=1}^{N} P_k^{\delta} \lambda_k}. \qquad (14)$$
Let us show that $\mathcal{R}'$ requires the same restriction by contradiction. Suppose that there exists $a \in \mathcal{R}'$ such that
$$\frac{\phi_n}{P_n^{\delta}}\frac{a_n}{1-a_n} > \frac{1}{\sum_{k=1}^{N} P_k^{\delta} \lambda_k} \quad \forall n \in \mathcal{N}.$$
If $a \in \mathcal{R}'$, then using (12) and the above inequality, we have that
$$1 = \sum_{n=1}^{N} P_n^{\delta} \lambda_n \frac{D_n}{D_n-1} \, \frac{\phi_n}{P_n^{\delta}}\frac{a_n}{1-a_n} > \sum_{n=1}^{N} P_n^{\delta} \lambda_n \frac{D_n}{D_n-1} \, \frac{1}{\sum_{k=1}^{N} P_k^{\delta} \lambda_k} > 1.$$
The last inequality comes from the fact that $D_n/(D_n-1) > 1$ if $D_n \in (1,\infty)$. Clearly we have a contradiction, since $\mathcal{R}'$ is a non-empty set. Therefore, if $a \in \mathcal{R}'$ we must have at least one $a_n$ that satisfies Eq. (14). For simplicity of exposition, let us suppose that the $a_n$ that satisfies this restriction is from the first traffic class (n = 1). The next step is to show that, as in the set $\bigcup_{\nu\in\mathcal{P}}\mathcal{S}_\nu$, the set $\mathcal{R}'$ also requires that we have at least one $a_n$, aside from $a_1$, that satisfies
$$\frac{\phi_n}{P_n^{\delta}}\frac{a_n}{1-a_n} < \frac{1 - \phi_1 \lambda_1 a_1}{P_1^{\delta} \lambda_1 a_1 + \sum_{k=2}^{N} P_k^{\delta} \lambda_k}.$$
We can also prove this by contradiction and then, for simplicity, suppose that $a_2$ is the one that satisfies this restriction. We repeat this procedure until we reach all the N traffic classes. Let us show the j-th step for completeness. Suppose that for all $n \in \{j, j+1, \ldots, N\}$,
$$\frac{\phi_n}{P_n^{\delta}}\frac{a_n}{1-a_n} > \frac{1 - \sum_{k=1}^{j-1} \phi_k \lambda_k a_k}{\sum_{k=1}^{j-1} P_k^{\delta} \lambda_k a_k + \sum_{k=j}^{N} P_k^{\delta} \lambda_k}.$$
If $a \in \mathcal{R}'$ and $\ell \in \mathcal{N}$, then by (12) and the above inequality we have that
$$\frac{\phi_\ell}{P_\ell^{\delta}}\Big(\frac{D_\ell}{D_\ell-1}\frac{1}{1-a_\ell} - 1\Big) = \frac{\phi_n}{P_n^{\delta}}\Big(\frac{D_n}{D_n-1}\frac{1}{1-a_n} - 1\Big) > \frac{\phi_n}{P_n^{\delta}}\frac{a_n}{1-a_n} > \frac{1 - \sum_{k=1}^{j-1} \phi_k \lambda_k a_k}{\sum_{k=1}^{j-1} P_k^{\delta} \lambda_k a_k + \sum_{k=j}^{N} P_k^{\delta} \lambda_k}.$$
Again, we use (12) and the above inequalities to write that
$$1 = \sum_{\ell=1}^{j-1} P_\ell^{\delta} \lambda_\ell a_\ell \, \frac{\phi_\ell}{P_\ell^{\delta}}\Big(\frac{D_\ell}{D_\ell-1}\frac{1}{1-a_\ell} - 1\Big) + \sum_{\ell=1}^{j-1} \phi_\ell \lambda_\ell a_\ell + \sum_{n=j}^{N} P_n^{\delta} \lambda_n \frac{D_n}{D_n-1} \, \frac{\phi_n}{P_n^{\delta}}\frac{a_n}{1-a_n}$$
$$> \frac{1 - \sum_{k=1}^{j-1} \phi_k \lambda_k a_k}{\sum_{k=1}^{j-1} P_k^{\delta} \lambda_k a_k + \sum_{k=j}^{N} P_k^{\delta} \lambda_k} \Big( \sum_{\ell=1}^{j-1} P_\ell^{\delta} \lambda_\ell a_\ell + \sum_{n=j}^{N} P_n^{\delta} \lambda_n \Big) + \sum_{\ell=1}^{j-1} \phi_\ell \lambda_\ell a_\ell = 1.$$
As expected, we have a contradiction. Then, we must have at least one $a_n$, $n \in \{j, j+1, \ldots, N\}$, such that
$$\frac{\phi_n}{P_n^{\delta}}\frac{a_n}{1-a_n} < \frac{1 - \sum_{k=1}^{j-1} \phi_k \lambda_k a_k}{\sum_{k=1}^{j-1} P_k^{\delta} \lambda_k a_k + \sum_{k=j}^{N} P_k^{\delta} \lambda_k}.$$
We choose a j to satisfy the restriction. It is possible to do this for a 1 , a 2 , . . . , a N . For simplicity of exposition, we showed the procedure in the order a 1 , a 2 , . . . , a N , however it is easy to see that it can be done for all possible permutations. Therefore, R ′ ⊂ ν∈P S ν . However, since we included all possible delays in Eq. (12), we must also have that ν∈P S ν ⊂ R ′ . Therefore,
$\mathcal{R} = \mathcal{R}' = \bigcup_{\nu\in\mathcal{P}} \mathcal{S}_\nu$.
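Since the proof shows that the region R is characterized by inequality (13) holding for every class, membership is simple to test numerically; the helper below does so (the parameter values are illustrative only).

```python
def in_region_R(a, phi, lam, P_delta):
    """Test whether the arrival-rate vector a satisfies (13) for every class."""
    total_phi = sum(p * l * x for p, l, x in zip(phi, lam, a))
    total_P = sum(Pd * l * x for Pd, l, x in zip(P_delta, lam, a))
    if total_phi >= 1.0:
        return False
    if total_P == 0.0:            # all arrival rates are zero
        return True
    bound = (1.0 - total_phi) / total_P
    return all(p / Pd * x / (1.0 - x) < bound
               for p, Pd, x in zip(phi, P_delta, a))

print(in_region_R(a=[0.3, 0.2], phi=[0.3, 0.3], lam=[0.5, 0.5], P_delta=[1.0, 1.0]))
```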
time slot t ∈ N and each traffic class n ∈ N = {1, 2, . . . , N}, we have a homogeneous Poisson point process (PPP) denoted by Φ n (t) ⊂ R 2 of density λ n , which represent the position of the sources. These PPP are independent from each other and from the past. Each source of traffic class n transmits with power P n . The position of the sources are given by
{X i,n (t)} i , i ∈ N, i.e., Φ n (t) = {X i,n (t)} i .More precisely, for each time slot the position X i,n (t)
Figure 1. Delay D as a function of the arrival rate of packets a per time slot, at the optimum medium access probability (p = 1) and ζ = 0. Simulation results are shown in crosses. Dashed curves correspond to the model presented in [5], where R is constant.
a j p s,j .
i) comes from Proposition 4 and (ii) comes from Eq. (7). Summing over N ends the proof of the first identity. For the second relation, we use Proposition 4 once again to find that Comparing this expression with Eq. (8) ends the proof. Lemma 1 is an elegant form to see that a channel is a limited resource regarding traffic density and delay. Let us rewrite the identity in terms of physical parameters, N n=1
Figure 2. Delays of the two-class D2D network, for a_1 = a_2 = 0.7, φ_1λ_1 = φ_2λ_2 = 0.15, and c_1 = 1.
Figure 3. Optimization of a two-class D2D network with the following parameters: a_1 = 0.7, φ_1λ_1 = φ_2λ_2 = 0.15, and c_1 = 1. In the left figure, the dashed curve is D_1 and the full curve is D_2.
Figure 4. Left: maximum arrival rate achievable for the first traffic class (D2D) such that the constraints of Proposition 6 are satisfied; Ψ_2 represents the use of the channel by the second traffic class (cellular); we used φ_1λ_1 = 1. Right: transmit power ratio needed to achieve the maximum a_1; we used φ_1λ_1 = φ_2λ_2 = 1 and D_2* = 3 [slots].
Table I. Notations and symbols used in the paper.

Symbol | Definition/explanation
α ∈ (2, ∞) | path loss exponent
δ ∈ (0, 1) | δ = 2/α
N | number of traffic classes
N | the set {1, 2, . . . , N}
n ∈ N | refers to the n-th traffic class
p_n | medium access probability
a_n ∈ (0, 1) | packet arrival rate per time slot
p_{s,n} | packet success probability
θ_n | SIR threshold for successful communication
D_n ∈ (1, ∞) | average packet transmission delay
R_n | mean transmission distance
P_n | transmission power
Φ_n | Poisson point process for the sources
λ_n | density of Φ_n
φ_n | 4 Γ(1 + 2/α) Γ(1 − 2/α) R
We assume that thermal noise is negligible.
Stability Region represents all the possible arrival rates, for which the system is stable.
REFERENCES
[1] F. Boccardi, R. W. Heath, A. Lozano, T. L. Marzetta, and P. Popovski, "Five disruptive technology directions for 5G," IEEE Communications Magazine, vol. 52, no. 2, pp. 74-80, February 2014.
[2] A. Ghosh, N. Mangalvedhe, R. Ratasuk, B. Mondal, M. Cudak, E. Visotsky, T. A. Thomas, J. G. Andrews, P. Xia, H. S. Jo, H. S. Dhillon, and T. D. Novlan, "Heterogeneous cellular networks: From theory to practice," IEEE Communications Magazine, vol. 50, no. 6, pp. 54-64, June 2012.
[3] R. R. Rao and A. Ephremides, "On the stability of interacting queues in a multiple-access system," IEEE Transactions on Information Theory, vol. 34, no. 5, pp. 918-930, Sep. 1988.
[4] V. Naware, G. Mergen, and L. Tong, "Stability and delay of finite-user slotted aloha with multipacket reception," IEEE Transactions on Information Theory, vol. 51, no. 7, pp. 2636-2656, July 2005.
[5] K. Stamatiou and M. Haenggi, "Random-access Poisson networks: stability and delay," IEEE Communications Letters, vol. 14, no. 11, pp. 1035-1037, 2010.
[6] A. Munari, F. Rossetto, P. Mähönen, and M. Petrova, "On the stability of a full-duplex aloha network," IEEE Communications Letters, vol. 20, no. 12, pp. 2398-2401, 2016.
[7] P. H. Nardelli, M. Kountouris, P. Cardieri, and M. Latva-Aho, "Throughput optimization in wireless networks under stability and packet loss constraints," IEEE Transactions on Mobile Computing, vol. 13, no. 8, pp. 1883-1895, 2014.
[8] Y. Zhou and W. Zhuang, "Performance analysis of cooperative communication in decentralized wireless networks with unsaturated traffic," IEEE Transactions on Wireless Communications, vol. 15, no. 5, pp. 3518-3530, 2016.
[9] X. Lin, J. G. Andrews, and A. Ghosh, "Spectrum sharing for device-to-device communication in cellular networks," IEEE Transactions on Wireless Communications, vol. 13, no. 12, pp. 6727-6740, 2014.
[10] F. Baccelli and B. Błaszczyszyn, "Stochastic geometry and wireless networks: Volume II applications," Foundations and Trends in Networking, vol. 4, no. 1-2, pp. 1-312, 2010.
[11] W. Szpankowski, "Stability conditions for some distributed systems: Buffered random access systems," Advances in Applied Probability, vol. 26, no. 2, pp. 498-515, 1994.
[12] M. Haenggi et al., "Stochastic geometry and random graphs for the analysis and design of wireless networks," IEEE Journal on Selected Areas in Communications, vol. 27, no. 7, 2009.
[13] G. P. Fettweis, "The tactile internet: Applications and challenges," IEEE Vehicular Technology Magazine, vol. 9, no. 1, pp. 64-70, 2014.
[14] D. P. Bertsekas, Nonlinear Programming. Athena Scientific, Belmont, 1999.
[15] S. Kompella and A. Ephremides, "Stable throughput regions in wireless networks," Foundations and Trends in Networking, vol. 7, no. 4, pp. 235-338, 2014.
[16] R. M. Loynes, "The stability of a queue with non-independent inter-arrival and service times," Mathematical Proceedings of the Cambridge Philosophical Society, vol. 58, no. 3, pp. 497-520, 1962.
Scheduling Rigid Demands on Continuous-Time Linear Shift- Invariant Systems
Farhad Farokhi
Michael Cantoni
Iman Shames
We consider load scheduling on constrained continuous-time linear dynamical systems, such as automated irrigation and other distribution networks. The requested loads are rigid, i.e., the shapes cannot be changed. Hence, it is only possible to shift the order back-and-forth in time to arrive at a feasible schedule. We present a numerical algorithm based on using log-barrier functions to include the state constraints in the social cost function (i.e., an appropriate function of the scheduling delays). This algorithm requires a feasible initialization. Further, in another algorithm, we treat the state constraints as soft constraints and heavily penalize the constraint violations. This algorithm can even be initialized at an infeasible point. The applicability of both these numerical algorithms is demonstrated on an automated irrigation network with two pools and six farms.
I. INTRODUCTION
Scheduling problems arise in a variety of contexts. A peculiar scheduling problem is studied in this paper. It involves a constrained dynamical system and the processing of request to apply load, with a fixed but shiftable profile, on this system across time. The goal is to optimize a social measure of sensitivity to scheduling delay while satisfying hard constraints. This problem is motivated by an aspect of demand management in automated irrigation networks [1], [2], [3], and may arise in other areas. The main challenge associated with this problem relates to the rigidity of the load request, whereby the construction of a feasible schedule can only involve shifting requests back-and-forth in time. Relaxation of the rigidity requirement can lead to a formulation as a (large) linear program [1].
The formulation of the rigid load scheduling problem here distinguishes itself in the following ways. By contrast with [2], [3], a dynamics relationship between the load and the constrained system states are modelled. In [2], [3], only static capacity constraints are considered. The load scheduling problem considered in [1] does include dynamics, however this is modelled in discrete time. By contrast a continuous-time setting is employed in this paper. The discrete time formulation in [1] leads to a mixed-integer program, which is difficult to solve [4]. The continuoustime formulation here, on the other hand, gives rise to two gradient based numerical algorithms. The first involve logbarrier functions and thus a feasible initial point. The other uses a soft encoding of the state constraints, with heavy penalty on constraint violation, which does not require a feasible initial point. Both algorithms lead to only locally This optimal solutions due to the non-convexity of the scheduling problem.
The rest of the paper is organized as follows. We first formulate the problem in Section II. The numerical algorithms are presented in Sections V and Section IV. In Section V, the applicability of the developed algorithms is numerically studied on an automated irrigation network with two pools and six farms. Finally, Section VI concludes the paper.
II. PROBLEM FORMULATION
Consider the continuous-time linear time-invariant dynamical systeṁ
x(t) = Ax(t) + Bu(t) + m i=1 E i w i (t), x(0) = x 0 , (1)
where x(t) ∈ R nx is the state of the system, u(t) ∈ R nu is the control input (e.g., the water-level references in automated irrigation networks), and w i (t) ∈ R nw,i , 1 ≤ i ≤ m, is a profile-constrained (e.g. on-off) input signal representing the scheduled load on the system corresponding to the supply of resources to customer i (e.g., the flow of the supplied water to each farmer in an irrigation network). Throughout this paper, we assume that the control signal u(t) over the planning horizon [0, T ] with T ∈ R >0 is a discrete-time signal passed through a zero-order hold, that is, u i (t) = α i,k for all 1 ≤ i ≤ n u and all k∆ ≤ t < (k + 1)∆ with a given sampling time 0 < ∆ ≤ T . Although slightly conservative, this assumption allows us to work with finite-dimensional optimization problems instead of more complicated optimal control problems. For all integers 0 ≤ k ≤ K := T /∆ − 1
and 1 ≤ i ≤ n u , we define ξ i,k (t) = e i [step(t − k∆) − step(t − (k + 1)∆)], ∀t ∈ [0, T ],
where the mapping step : R → {0, 1} denotes the Heaviside step function, i.e., step(t) = 1 if t ≥ 0 and step(t) = 0 otherwise. Moreover, e i ∈ R nu is the column-vector with all entries equal to zero except the i-th entry which is equal to one. Therefore, we get
u(t) = u 0 + K k=1 nu i=1 α i,k ξ i,k (t), ∀t ∈ [0, T ], where u 0 ∈ R nu is the steady-state control input.
The customers submit demands (v i (t)) t∈R , 1 ≤ i ≤ m. These demands are rigid (i.e., their shape cannot be changed). Hence, our decision variables are the delays that correspond to shifting the requested demand across the planning horizon, i.e., we select τ i > 0 so that w i (t) = v i (t − τ i ) for each 1 ≤ i ≤ m. In doing so, the goal is to ensure that that the state of the network x(t) stays inside the feasible set X = {x ∈ R nx | Cx ≤ d}. We can write this scheduling problem as
min (τi) m i=1 ,((αi) nu i=1 ) K k=1 m i=1 h i (τ i ), (2a) s.t. τ i ≤ τ i ≤ τ i , ∀i ∈ {1, . . . , m} (2b) x(t) = Ax(t) + Bu(t) + m i=1 E i v i (t − τ i ), x(0) = x 0 , (2c) Cx(t) ≤ d, ∀t ∈ [0, T ],(2d)u(t)= K k=1 nu i=1 α i,k ξ i,k (t), ∀t ∈ [0, T ], (2e) u 0 +u≤u(t)≤u 0 +u, ∀t ∈ [0, T ], (2f)
where τ i and τ i are the bounds on the scheduling delay for demand i, u and u are the bounds on the control signal deviations u(t) − u 0 , and the continuously differentiable mapping h i : R → R captures the sensitivity of customer i to the delay for scheduling its demand. Throughout the next section, we implicitly assume that T is long enough so that the optimization problem in (2) becomes feasible with a constant nominal control input (i.e., if the demands are separated from each other "to some degree", the state of the system stays feasible without any effort). This assumption is made to make sure that we can always find a feasible initial condition for the numerical algorithm, proposed in the next section, by simply separating the demands from each other. Towards the end of this paper, we present another approach for solving our scheduling problem that avoids requiring a feasible initial condition by treating the constraints on the state as soft constraints.
III. NUMERICAL ALGORITHM In this section, we present a numerical algorithm for solving (2) by adding the state constraints in (2d) to the cost function using log-barrier functions. Let us definē
x 0 (t) = exp(At)x 0 + t 0 exp(A(t − β))Bu 0 dβ, x u i,k (t) = t 0 exp(A(t − β))Bξ i,k (β)dβ, ∀i ∈ {1, . . . , n u }, ∀k ∈ {1, . . . , K}, x v i (t) = t 0 exp(A(t − β))E i v i (β)dβ, ∀i ∈ {1, . . . , m}.
Since the underlying system in (1) is linear and time invariant, the solution of the ordinary differential equation (1) can be written explicitly as
x(t) =x 0 (t) + K k=1 nu i=1 α i,kx u i,k (t) + m i=1x v i (t − τ i ). Now, we can rewrite the optimization problem in (2) as min (τi) m i=1 ,((α i,k ) nu i=1 ) K k=1 m i=1 h i (τ i ), (3a) s.t. x(t) =x 0 (t) + K k=1 nu i=1 α i,kx u i,k (t) + m i=1x v i (t − τ i ),(3b)Cx(t) ≤ d, ∀t ∈ [0, T ], (3c) τ i ≤ τ i ≤ τ i , ∀i ∈ {1, . . . , m}, (3d) u i ≤ α i,k ≤ u i , ∀i ∈ {1, . . . , n u }, ∀k ∈ {1, . . . , K}. (3e)
This optimization problem is still difficult to solve as we have to check infinitely many constraints; see (3c). Let us use the notation C j , 1 ≤ j ≤ p, to denote the rows of the matrix C ∈ R p×nx . We add the state constraints in (3c) to the cost function using log-barrier functions. This transforms the optimization problem in (3) to
min (τi) m i=1 ,((α i,k ) nu i=1 ) K k=1 J((τ i ) m i=1 , ((α i,k ) nu i=1 ) K k=1 ) (4a) s.t. τ i ≤ τ i ≤ τ i , ∀i ∈ {1, . . . , m}, (4b) u i ≤ α i,k ≤ u i , ∀i ∈ {1, . . . , n u }, ∀k ∈ {1, . . . , K}, (4c) where J((τ i ) m i=1 , ((α i,k ) nu i=1 ) K k=1 ) = m i=1 h i (τ i ) − p z=1 T 0 log − C z x 0 (t) + K k=1 nu i=1 α i,kx u i,k (t) + m i=1x v i (t − τ i ) + d z dt
in which ∈ R >0 is an appropriately selected parameter. Remark 1: With increasing , the optimal solution is pushed further from the boundary of the feasible set. Therefore, to recover the optimal scheduling, we need to sequentially reduce and employ the solution of each step as the initialization of the next step. This would result in a more numerically stable algorithm; see the log barrier methods in [5].
Lemma 1: J((τ i ) m i=1 , ((α i,k ) nu i=1 ) K k=1 ) is a continuously differentiable function. Moreover, ∂ ∂τ J((τ i ) m i=1 , ((α i,k ) nu i=1 ) K k=1 ) = d dτ h (τ ) + p z=1 T 0 −C z (Ax v (t − τ ) + E v (t − τ )) −C z x(t; (τ i ) m i=1 , ((α i,k ) nu i=1 ) K k=1 ) + d z dt ∂ ∂α j, J((τ i ) m i=1 , ((α i,k ) nu i=1 ) K k=1 ) = p z=1 T 0 C zx u j, (t) −C z x(t; (τ i ) m i=1 , ((α i,k ) nu i=1 ) K k=1 ) + d z dt where x(t; (τ i ) m i=1 , ((α i,k ) nu i=1 ) K k=1 ) =x 0 (t) + K k=1 nu i=1 α i,kx u i,k (t) + m i=1x v i (t − τ i ).
Proof: First note that
∂ ∂τ T 0 log(−C z x(t; (τ i ) m i=1 , ((α i,k ) nu i=1 ) K k=1 ) + d z )dt = T 0 −C z ∂x(t; (τ i ) m i=1 , ((α i,k ) nu i=1 ) K k=1 )/∂τ −C z x(t; (τ i ) m i=1 , ((α i,k ) nu i=1 ) K k=1 ) + d z dt where ∂ ∂τ x(t; (τ i ) m i=1 ,((α i,k ) nu i=1 ) K k=1 ) = ∂ ∂τ x v (t − τ ) = −ẋ v (t − τ ) = −(Ax v (t − τ ) + E v (t − τ )).
Similarly, we have
∂ ∂α j, T 0 log(−C z x(t; (τ i ) m i=1 , ((α i,k ) nu i=1 ) K k=1 ) + d z )dt = T 0 −C z ∂x(t; (τ i ) m i=1 , ((α i,k ) nu i=1 ) K k=1 )/∂α j, −C z x(t; (τ i ) m i=1 , ((α i,k ) nu i=1 ) K k=1 ) + d z dt where ∂ ∂α j, x(t; (τ i ) m i=1 , ((α i,k ) nu i=1 ) K k=1 ) =x u j, (t).
The rest of the proof follows from simple algebraic manipulations. Now, we can use Algorithm 1 (overleaf) to recover a local solution of (4). We can select the step sizes µ τi l and µ α j, l using backtracking line search algorithm [5, p. 464] and terminate the algorithm whenever the improvements in the cost function becomes negligible. Unfortunately, this algorithm requires a feasible starting point (because the argument of the logarithmic functions cannot become negative). We remove this assumption in the next section by proposing a numerical procedure that treats the state constraints as soft constraints.
IV. SOFT CONSTRAINTS ON STATES
In the previous section, we were required to find a feasible initialization to be able to run Algorithm 1. Here, we take a different approach by solving the optimization problem
min (τi) m i=1 ,((αi) nu i=1 ) K k=1 m i=1 h i (τ i ) + p z=1 T 0 e ϑ(Czx(t)−dz) dt, (5a) s.t. τ i ≤ τ i ≤ τ i , ∀i ∈ {1, . . . , m} (5b) x(t) = Ax(t) + Bu(t) + m i=1 E i v i (t − τ i ), (5c) x(0) = x 0 ,(5d)u(t)= K k=1 nu i=1 α i,k ξ i,k (t), ∀t ∈ [0, T ], (5e) u ≤ u(t) ≤ u, ∀t ∈ [0, T ],(5f)
where ϑ ∈ R >0 is an appropriately selected constant. In this problem, we may violate the constraints Cx(t) − d ≤ 0, however, the term p z=1
T 0 e ϑ(Czx(t)−dz) dt heavily penalizes such violations. For small values of ϑ, this term also penalizes the states being close to the boundary (of the feasible set), however, as we increase ϑ, this term approaches zero inside the feasible set and infinity outside of the feasible set.
Note that similar to the previous section, we can transform (5) into
min (τi) m i=1 ,((α i,k ) nu i=1 ) K k=1 m i=1 h i (τ i ) + p z=1 T 0 e ϑ(Czx(t)−dz) dt,(6a)s.t. x(t) =x 0 (t) + K k=1 nu i=1 α i,kx u i,k (t) + m i=1x v i (t − τ i ),(6b)u i ≤ α i,k ≤ u i , ∀i ∈ {1, . . . , n u }, ∀k ∈ {1, . . . , K}. (6c)
Let us define
J ((τ i ) m i=1 , ((α i,k ) nu i=1 ) K k=1 ) = m i=1 h i (τ i ) + p z=1 T 0 exp ϑ C z x 0 (t) + K k=1 nu i=1 α i,kx u i,k (t) + m i=1x v i (t − τ i ) − d z dt.
Hence, we may rewrite (6) as
min (τi) m i=1 ,((α i,k ) nu i=1 ) K k=1 J ((τ i ) m i=1 , ((α i,k ) nu i=1 ) K k=1 ),(7a)s.t. τ i ≤ τ i ≤ τ i , ∀i ∈ {1, . . . , m}, (7b) u i ≤ α i,k ≤ u i , ∀i ∈ {1, . . . , n u }, ∀k ∈ {1, . . . , K}. (7c)
Similarly, we can prove the following result regarding the augmented cost function. Update
Lemma 2: J ((τ i ) m i=1 , ((α i,k ) nu i=1 ) K k=1 ) is a continuously differentiable function. Moreover, ∂ ∂τ J ((τ i ) m i=1 , ((α i,k ) nu i=1 ) K k=1 ) = d dτ h (τ ) − p z=1 T 0 ϑ exp(ϑ(C z x(t; (τ i ) m i=1 , ((α i,k ) nu i=1 ) K k=1 ) − d z )) × C z (Ax v (t − τ ) + E v (t − τ ))dt, ∂ ∂α j, J ((τ i ) m i=1 , ((α i,k ) nu i=1 ) K k=1 ) = p z=1 T 0 ϑ exp(ϑ(C z x(t; (τ i ) m i=1 , ((α i,k ) nu i=1 ) K k=1 ) − d z )) × C zx u j, (t)dt, where x(t; (τ i ) m i=1 , ((α i,k ) nu i=1 ) K k=1 ) is defined as in Lemma 1. Proof: First, note that ∂ ∂τ T 0 exp(ϑ(C z x(t; (τ i ) m i=1 , ((α i,k ) nu i=1 ) K k=1 ) − d z ))dt = T 0 ϑ exp(ϑ(C z x(t; (τ i ) m i=1 , ((α i,k ) nu i=1 ) K k=1 ) − d z )) × C z ∂ ∂τ x(t; (τ i ) m i=1 , ((α i,k ) nu i=1 ) K k=1 ) dt.τ [l] = P τ τ τ [l − 1] − µ τi l ∂J((τ i ) m i=1 , ((α i,k ) nu i=1 ) K k=1 )) ∂τ i (τi) m i=1 = (τi[l − 1]) m i=1 ((α i,k ) nu i=1 ) K k=1 = ((α i,k [l − 1]) nu i=1 ) K k=1 , ∀ ∈ {1, . . . , m}, and α j, [l] =P uj u j α j, [l − 1] − µ α j, l ∂J((τ i ) m i=1 , ((α i,k ) nu i=1 ) K k=1 )) ∂α j, (τi) m i=1 = (τi[l − 1]) m i=1 ((α i,k ) nu i=1 ) K k=1 = ((α i,k [l − 1]) nu i=1 ) K k=1 , ∀j ∈ {1, . . . , n u }, ∀ ∈ {1, . . . , K},
where, for constants β < γ, P γ Similarly, we have
β [x] = β if x < β, P γ β [x] = x if β ≤ x ≤ γ, and P γ β [x] = γ if x > γ. 3: end forc in,i c out,i t d,i κ i φ i ρ i i = 1∂ ∂α j, T 0 exp(ϑ(C z x(t; (τ i ) m i=1 , ((α i,k ) nu i=1 ) K k=1 ) − d z ))dt = T 0 ϑ exp(ϑ(C z x(t; (τ i ) m i=1 , ((α i,k ) nu i=1 ) K k=1 ) − d z )) × C z ∂ ∂α j, x(t; (τ i ) m i=1 , ((α i,k ) nu i=1 ) K k=1 ) dt.
The rest of the proof follows from simple algebraic manipulations. Algorithm 1 may be used with the gradients in Lemma 2 to find a local solution of the optimization problem in (7). Moreover if, after finding the optimal solution for a given ϑ, the state constraints were violated at an intolerable level, we may sequentially increase ϑ and solve the problem until we get acceptable performance. The shifted demands at the initialization of the algorithm. The dotted curve demonstrates the requests and the solid curve demonstrates their shifted counterpart.
V. NUMERICAL EXAMPLE In this section, we illustrate the applicability of the algorithms on a water channel with two pools. The numerical example is borrowed from [1]. Each pool is modelled as
y i (s) = c in,i s e −t d,i s q i (s) − c out,i s q i+1 (s) − c out,i s ζ i (s),
where c in,i and c out,i are discharge rates determined by the physical characteristics of the gates used to set the flow between neighbouring pools, and t d,i is the delay associated with the transport of water along the pool. Here, ζ i (s) denotes the overall off-take flow load on pool i, that is, all the water supplied to the farms connected to this pool. Moreover, q i (s) is the flow of water from pool i − 1 to pool i and y i (s) denotes the water level in pool i. For the purpose of this example, we replace the delays with their first-order Padé approximation 1 . Each pool is controlled, locally, by where κ i , φ i , and ρ i are appropriately selected control parameters. Furthermore, u i (s) denotes the water-level reference signal of pool i. Table I shows the parameters used in this example. The state constraints are as follows 9.4 ≤ y 1 (t) ≤ 9.7 and 9.5 ≤ y 2 (t) ≤ 9.7. Finally, throughout this example, we fix u 0 = [9.50, 9.55] . Figure 1 illustrates the requested demands of the farms. Here, (v i (t)) 3 i=1 and (v i (t)) 6 i=4 , respectively, denote demands for pool 1 and 2. Let us select linear penalty functions h i (τ i ) = τ i for all i. Moreover, assume that the reference signal should belong to a bounded region captured by
q i (s) = κ i (φ i s + 1) s(ρ i s + 1) (u i (s) − y i (s)),u 0 − 0.05 0.05 ≤ u(t) ≤ u 0 + 0.05 0.05 .
Note that without these control input constraints, one can schedule all the loads without any delay but with large control input deviations. In this example, we select τ i = 0, ∀i, which means that we can only shift demands forward. Let us also select τ i = 300 min for all i. First, we use Algorithm 1 to extract a reasonable schedule by shifting these demands. This algorithm requires a feasible starting point, which can be constructed by shifting the The shifted demands for the local solution recovered by the proposed algorithm in Section IV with ϑ = 100. The dotted curve demonstrates the shifted demands at the initialization and the solid curve demonstrates the shifted demands at the suboptimal solution. demands (to be somewhat distant from each other). Figure 2 illustrates the shifted demands at this initialization (solid curve) as well as the original requests (dotted curves) for comparison. Let us fix = 0.1. Figure 3 shows the shifted demands for the local solution recovered by the proposed algorithm. As we expect, the delays are significantly smaller in comparison to the initialization. Figure 4 portrays the reference signals for the solution and Figure 5 illustrates the outputs. Evidently, the output stays in the desired region. Although powerful, Algorithm 1 requires a feasible initial condition that may not be easy to find. Therefore, in the rest of this section, we study the method presented in Section IV.
Let us select ϑ = 100. Figure 7 illustrates the shifted demands for the locally optimal solution recovered by the proposed algorithm. At the starting point (fed to the algorithm), all the decision variables are selected to be equal to zero for which the state of the system does not stay in the feasible region; see Figure 6. Figures 8 and 9 show the output and the control for the local solution recovered by the proposed algorithm in Section IV. Interestingly, all the constraints are satisfied, which is because of the large value of ϑ. If we reduce ϑ to be equal to 10, the outputs violate the constraints on the state as shown in Figure 10. Evidently, by The outputs for the locally optimal solution recovered by the proposed algorithm in Section IV with ϑ = 10. The red lines show the boundary of the feasible region.
comparing Figures 6, 8, and 10, we can see that by increasing ϑ, the constraint violations are becoming more infrequent (until they do not occur at all).
VI. CONCLUSIONS
In this paper, we presented numerical algorithms for scheduling demands on continuous-time linear time-invariant systems. The rigidity of the demands dictated that we can only shift them back-and-forth in time (and cannot change their shapes). The first algorithm used log-barrier functions to include the state constraints in the cost function. The second algorithm considered the state constraints as soft constraints and added a penalty function for the constraint violations to the cost function. Future research can focus on constructing a market mechanism for achieving the optimal schedule based on the customers preferences. We can also compute an optimality gap via finding a lower-bound for the solution of the dual of the problem corresponding to (2).
Fig. 1 .
1The demand by the farms as requested (without the scheduling delays).
Fig. 2. The shifted demands at the initialization of the algorithm. The dotted curve demonstrates the requests and the solid curve demonstrates their shifted counterpart.
.Fig. 4 .
4The dotted curve demonstrates the shifted demands at the initialization and the solid curve demonstrates the shifted demands at the locally optimal solution. The reference signal for the local solution recovered by Algorithm 1 with = 0.1. The red lines show the boundary of the feasible region.
Fig. 5 .Fig. 6 .
56The outputs for the local solution recovered by Algorithm 1 with = 0.1. The red lines show the boundary of the feasible region. The output of the system when all the decision variables (the control inputs and scheduling delays) are set equal to zero.
Fig. 8 .Fig. 9 .
89The outputs for the locally optimal solution recovered by the proposed algorithm in Section IV with ϑ = 100. The red lines show the boundary of the feasible region. The reference signal for the locally optimal solution recovered by the proposed algorithm in Section IV with ϑ = 100. The red lines show the boundary of the feasible region.
Fig. 10. The outputs for the locally optimal solution recovered by the proposed algorithm in Section IV with ϑ = 10. The red lines show the boundary of the feasible region.
work was supported by the Australian Research Council (LP130100605), Rubicon Water Pty Ltd, and a McKenzie Fellowship. The authors are with the Department of Electrical and Electronic Engineering, The University of Melbourne, Parkville, Victoria 3010, Australia. Emails:{ffarokhi,cantoni,ishames}@unimelb.edu.au
TABLE I NUMERICAL
IPARAMETERS USED IN THE SIMULATION.
Note that the choice of a first-order Padé approximation is justifiable as the pool delays are all parts of closed-loops (with local controllers), with loop-gain cross-overs that are sufficiently small to make the overall closedloop behaviour insensitive to the approximation error[6].
A {0, 1} linear program for fixedprofile load scheduling and demand management in automated irrigation channels. J Alende, Y Li, M Cantoni, Proceedings of the 48th IEEE Conference on Decision and Control held jointly with the 28th Chinese Control Conference. the 48th IEEE Conference on Decision and Control held jointly with the 28th Chinese Control ConferenceJ. Alende, Y. Li, and M. Cantoni, "A {0, 1} linear program for fixed- profile load scheduling and demand management in automated irrigation channels," in Proceedings of the 48th IEEE Conference on Decision and Control held jointly with the 28th Chinese Control Conference, 2009, pp. 597-602.
Optimization of irrigation scheduling for complex water distribution using mixed integer quadratic programming (MIQP). S Hong, P.-O Malaterre, G Belaud, C Dejean, Proceedings of the 10th International Conference on Hydroinformatics (HIC 2012). the 10th International Conference on Hydroinformatics (HIC 2012)S. Hong, P.-O. Malaterre, G. Belaud, and C. Dejean, "Optimization of irrigation scheduling for complex water distribution using mixed integer quadratic programming (MIQP)," in Proceedings of the 10th International Conference on Hydroinformatics (HIC 2012), 2012.
Optimal scheduling of irrigation for lateral canals. J M Reddy, B Wilamowski, F Cassel-Sharmasarkar, ICID Journal on Irrigation and Drainage. 483J. M. Reddy, B. Wilamowski, and F. Cassel-Sharmasarkar, "Optimal scheduling of irrigation for lateral canals," ICID Journal on Irrigation and Drainage, vol. 48, no. 3, pp. 1-12, 1999.
Computers and Intractability: A Guide to the Theory of NP-Completeness, ser. Series of Books in the Mathematical Sciences. M R Garey, D S Johnson, W. H. FreemanM. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, ser. Series of Books in the Mathematical Sciences. W. H. Freeman, 1979.
S P Boyd, L Vandenberghe, Convex Optimization. Cambridge University PressS. P. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, 2004.
Control of large-scale irrigation networks. M Cantoni, E Weyer, Y Li, S K Ooi, I Mareels, M Ryan, Proceedings of the IEEE. 951M. Cantoni, E. Weyer, Y. Li, S. K. Ooi, I. Mareels, and M. Ryan, "Control of large-scale irrigation networks," Proceedings of the IEEE, vol. 95, no. 1, pp. 75-91, 2007.
An exceptionally bright flare from SGR 1806−20 and the origins of short-duration γ-ray bursts
K Hurley
UC Berkeley Space Sciences Laboratory
94720-7450BerkeleyCaliforniaUSA
S E Boggs
UC Berkeley Space Sciences Laboratory
94720-7450BerkeleyCaliforniaUSA
Department of Physics
University of California
94720BerkeleyCaliforniaUSA
D M Smith
Physics Department and Santa Cruz Institute for Particle Physics
University of California
95064Santa Cruz, Santa CruzCaliforniaUSA
R C Duncan
Department of Astronomy
University of Texas
78712AustinTexasUSA
R Lin
UC Berkeley Space Sciences Laboratory
94720-7450BerkeleyCaliforniaUSA
A Zoglauer
UC Berkeley Space Sciences Laboratory
94720-7450BerkeleyCaliforniaUSA
S Krucker
UC Berkeley Space Sciences Laboratory
94720-7450BerkeleyCaliforniaUSA
G Hurford
UC Berkeley Space Sciences Laboratory
94720-7450BerkeleyCaliforniaUSA
H Hudson
UC Berkeley Space Sciences Laboratory
94720-7450BerkeleyCaliforniaUSA
C Wigger
Paul Scherrer Institute
5232Villigen PSISwitzerland
W Hajdas
Paul Scherrer Institute
5232Villigen PSISwitzerland
C Thompson
Canadian Institute of Theoretical Astrophysics
60 St George StreetM5S 3H8TorontoOntarioCanada
I Mitrofanov
Space Research Institute (IKI)
GSP7, 117997MoscowRussia
A Sanin
Space Research Institute (IKI)
GSP7, 117997MoscowRussia
W Boynton
Department of Planetary Sciences
University of Arizona
85721TucsonArizonaUSA
C Fellows
Department of Planetary Sciences
University of Arizona
85721TucsonArizonaUSA
A Von Kienlin
Max-Planck-Institut für extraterrestrische Physik
1312, 85748 (85741)Giessenbachstrasse, GarchingPostfachGermany
G Lichti
Max-Planck-Institut für extraterrestrische Physik
1312, 85748 (85741)Giessenbachstrasse, GarchingPostfachGermany
A Rau
Max-Planck-Institut für extraterrestrische Physik
1312, 85748 (85741)Giessenbachstrasse, GarchingPostfachGermany
T Cline
NASA Goddard Space Flight Center
Code 66120771GreenbeltMarylandUSA
An exceptionally bright flare from SGR 1806−20 and the origins of short-duration γ-ray bursts
Soft-γ-ray repeaters (SGRs) are galactic X-ray stars that emit numerous shortduration (about 0.1 s) bursts of hard X-rays during sporadic active periods. They are thought to be magnetars: strongly magnetized neutron stars with emissions powered by the dissipation of magnetic energy. Here we report the detection of a long (380 s) giant flare from SGR 1806−20, which was much more luminous than any previous transient event observed in our Galaxy. (In the first 0.2 s, the flare released as much energy as the Sun radiates in a quarter of a million years.) Its power can be explained by a catastrophic instability involving global crust failure and magnetic reconnection on a magnetar, with possible large-scale untwisting of magnetic field lines outside the star. From a great distance this event would appear to be a short-duration, hard-spectrum cosmic γ-ray burst. At least a significant fraction of the mysterious short-duration γ-ray bursts therefore may come from extragalactic magnetars.In the magnetar model, SGRs are isolated neutron stars with teragauss exterior magnetic fields 1-4 and even stronger fields within 5,6 , making them the most strongly-Page 2 of 21 magnetized objects in the Universe. Four SGRs are known. Three of them have now emitted giant flares 7,8 . These exceptionally energetic outbursts begin with a brief (~ 0.2 s) spike of γ-rays with energies up to several MeV, containing most of the flare energy. The spikes are followed by tails lasting minutes, during which hard-X-ray emissions gradually fade while oscillating at the rotation period of the neutron star.The first-known giant flare, observed on 5 March 1979, came from SGR 0525−66 in the Large Magellanic Cloud. Its fluence implied an energy >~6×10 44 erg (ref. 9). The second-known giant flare came from an SGR in our Galaxy, SGR 1900+14, on 27 August 1998. Its energy, in hard X-rays and γ−rays, was ~2×10 44 erg (refs 8, 10). Here we describe a third giant flare, which came from the galactic SGR 1806−20 on 27 December 2004. Particle and γ-ray detectors onboard the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI), and particle detectors aboard the Wind spacecraft, indicate that this event was ~100 times more energetic than the 27 August flare. Its initial γ-ray spike had a quasi-blackbody spectrum, characteristic of a relativistic pair/photon outflow with an energetically small contamination of baryons. This is consistent with the catastrophic release of (nearly) pure magnetic energy from a magnetar 3 . The tremendous luminosity of the initial spike means that similar events could be detected from distant galaxies. This could account for some, and perhaps all, of the mysterious short-duration, hard-spectrum cosmic γ-ray bursts (GRBs).
A ~1-s-long precursor was observed 142 s before the flare, with a roughly flattopped profile (Fig. 1 inset). Its spectrum can be fitted with an optically thin thermal bremsstrahlung function with kT≈15 keV. The precursor's >3-keV fluence was 1.8×10 −4 erg cm −2 , implying an energy of 4.8×10 42 owing to the apparent association of the SGR with a compact (~10 arcsec) stellar cluster 16,17 . The large energy and unusual light curve of the precursor distinguish it from most common SGR bursts. This and its proximity in time to the giant flare suggest that it is causally related.
The initial spike of the giant flare lasted for ~0.2 s. Its rise and fall times were τ rise ≤1 ms and τ decay ≈65 ms, similar to those of the other giant flares 8,18 . The spike's intensity drove all X-and γ-ray detectors into saturation, but particle detectors aboard RHESSI and
Wind made reliable measurements. (The Supplementary Information describes our extensive Monte Carlo simulations of these particle detectors and has a full discussion of systematic uncertainties.) The RHESSI particle detector data imply a spike fluence in photons >30 keV of (1.36±0.35) erg cm −2 , making this the most intense cosmic or solar transient ever observed (in terms of photon energy flux at Earth). The time-resolved energy spectrum, as measured by the Wind particle detectors, is consistent with a cooling blackbody ( Fig. 2) with average temperature T spike =(175±25) keV. The spike energy is thus E spike =(3.7±0.9)×10 46 d 15 2 erg, assuming isotropic emission. The peak flux in the first 0.125 s was L spike =2×10 47 d 15 2 erg s −1 . Evidently, this event briefly outshone all the stars in the Galaxy put together by a factor of ~10 3 .
The spike was followed by a hard-X-ray tail modulated with a period of 7.56 s, detected by the RHESSI γ-ray detectors, which were by this time unsaturated, for 380 s. This period agrees with the neutron star rotation period as inferred from cyclic modulations of its quiescent soft-X-ray counterpart 2 . The fluence in 3-100-keV photons during the tail phase is 4.6×10 −3 erg cm −2 or E tail ≈1.2×10 44 d 15 2 erg.
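The quoted energies follow from the measured fluences simply by multiplying by the area of a sphere of radius d = 15 kpc; the arithmetic is reproduced below.

```python
import math

KPC_CM = 3.086e21                             # centimetres per kiloparsec
area = 4.0 * math.pi * (15.0 * KPC_CM) ** 2   # sphere of radius 15 kpc

E_spike = area * 1.36      # spike fluence, erg cm^-2 (>30 keV)
E_tail = area * 4.6e-3     # tail fluence, erg cm^-2 (3-100 keV)
print(f"E_spike ~ {E_spike:.1e} erg, E_tail ~ {E_tail:.1e} erg")
# gives ~3.7e46 erg and ~1.2e44 erg, matching the values quoted above
```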
Physical interpretation
This event can be understood as a result of a catastrophic instability in a magnetar. Strong shearing of the neutron star's magnetic field, combined with growing thermal pressure, appears to have forced an opening of the field outward, launching a hot fireball. The release of energy above a rate of ~10 42 erg s −1 (less than one part in 10 4 of the peak flare luminosity) into the magnetosphere leads to the formation of a hot, thermal pair plasma (kT≈0.1-1 MeV) 5 . The fast initial rise τ rise ≤1 ms is consistent with a magnetospheric instability with characteristic time τ mag ≈(R/0.1V A )≈0.3 ms, where R≈10 km and V A ≈c is the Alfven velocity in the magnetosphere, and c is the speed of light 3 . This process must have occurred repeatedly, given that the hard initial spike persisted for a duration ~10 3 τ mag . Indeed, there is evidence for spike variability in this and other giant flares 8,19,20 . The resulting outflow emitted a quasi-blackbody spectrum as it became optically thin, with spectral temperature comparable to the temperature at its base, because declining temperature in the outflow is compensated by the relativistic blueshift 21 . For luminosity L spike =10 47 L 47 erg s −1 emerging from a zone with radius R≈10 km, the expected spectral temperature is T spike =(L spike /4πacR 2 ) 0.25 =200 L 47 0.25 keV, neglecting complications of magnetospheric stresses and intermittency. Almost all the pairs annihilated, and the outflow was only weakly polluted by baryons, as is clear from the extended, weak radio afterglow that followed the flare 22 . Note that we do not expect significant beaming of such powerful emissions from such a slowly rotating star.
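The quoted spectral temperature is easy to reproduce from the expression above. The following Python sketch is purely illustrative; the radiation constant, speed of light and Boltzmann constant are standard values assumed here rather than taken from the text.

    # Check T_spike = (L_spike / (4*pi*a*c*R^2))^(1/4) for L ~ 1e47 erg/s, R ~ 10 km.
    import math

    a = 7.566e-15      # radiation constant, erg cm^-3 K^-4 (standard value, assumed)
    c = 3.0e10         # speed of light, cm s^-1
    k_B = 8.617e-8     # Boltzmann constant, keV K^-1
    R = 1.0e6          # emitting radius, cm (10 km)
    L = 1.0e47         # erg s^-1

    T = (L / (4.0 * math.pi * a * c * R**2))**0.25   # Kelvin
    print(f"kT ~ {k_B * T:.0f} keV  (text quotes ~200 L_47^0.25 keV)")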
When the outflow ceased, a trapped fireball was evidently left behind: an optically thick photon-pair plasma confined by closed field lines near the star 3,23. The luminosity and lifetime of the tail (see the fitted curve in Fig. 3) are consistent with a fireball cooling rate that is limited by the transparency of the surface layers, where the temperature is ~10 keV and the plasma is dominated by ions and electrons 3,23,24. The condition that the magnetic field must be strong enough to confine energy E_tail within a distance ∆R ≈ 10 km of the star yields a rough bound on the dipole field, B_dipole > 2×10^14 (∆R/10 km)^(−3/2) [(1 + ∆R/R)/2]^3 G, similar to bounds implied by the previous giant flares 3,8.
A clue to the nature of the instability comes from the spike's ~0.2-s duration, which is similar to the durations of other giant flare spikes 7,8,18 and of most other SGR bursts 25. In the magnetar model, SGR activity results from the unwinding of a strong, toroidal magnetic field inside the star, and the transfer of magnetic helicity across the surface 23,26. Such a twist propagates along the poloidal magnetic field B_p = 10^15 B_p15 G with a speed V_A ≈ B_p/(4πρ)^(1/2) that is weakly dependent on the twist amplitude. The time to cross the neutron star interior (density ρ ≈ 10^15 g cm^−3) is ∆t ≈ 2R/V_A ≈ 0.2 B_p15^−1 s.
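The interior crossing-time estimate above can be verified numerically. The Python sketch below assumes B_p = 10^15 G and ρ = 10^15 g cm^−3, the fiducial values used in the text.

    # Alfven crossing time Delta_t ~ 2R / V_A with V_A = B_p / sqrt(4*pi*rho).
    import math

    B_p = 1.0e15        # G
    rho = 1.0e15        # g cm^-3
    R = 1.0e6           # cm

    V_A = B_p / math.sqrt(4.0 * math.pi * rho)   # cm s^-1
    dt = 2.0 * R / V_A
    print(f"V_A ~ {V_A:.1e} cm/s, crossing time ~ {dt:.2f} s (text: ~0.2 B_p15^-1 s)")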
Thus the 27 December event could have been a crustal instability that drove helicity from the star 23,26. The unwinding of a toroidal magnetic field embedded in the crust is strongly impeded by the stable stratification and near-incompressibility of the crust 23. Because of the energetic cost of forming isolated dislocation surfaces that cross the magnetic flux surfaces, the crust must undergo smooth and vertically differential torsional motion when it fails, which requires a fundamental breakdown of its solid structure. The maximum field energy that can be released is estimated by balancing elastic and magnetic stresses in the crust, and it scales with the crustal yield strain. Supplying the energy of the 27 December flare thus requires a relatively large yield strain, as well as a large twist of the crust.

Since March 2004, SGR 1806−20 has been very burst-active 27, while its quiescent X-ray brightness has increased by a factor of 2 to 3, and its spectrum has hardened dramatically 28. Evidently, crust failure has enhanced the twist in the external magnetic field, with growing magnetospheric currents 26. The free energy of such an exterior magnetic twist can reach a modest fraction (~10^−1) of the untwisted exterior dipole field energy, E_twist ≈ 10^−2 B_dip^2 R^3 ≈ 10^46 B_p15^2 erg, with more energy in the non-potential components of higher multipoles. Some of this energy could be catastrophically released via reconnective simplification of the magnetosphere 26,29. An extreme possibility, consistent with the flare energy, is a global magnetospheric untwisting. This would predict a dramatic post-flare drop in the stellar spin-down rate, as well as greatly diminished, softened and less strongly pulsed X-ray emissions. However, a pure magnetospheric instability would proceed much faster than ~0.2 s. Note also that the detection of accelerated spin-down 30 several months after previous active periods of SGRs 1806−20 and 1900+14 betrays a net increase in the magnetospheric twist during the X-ray bursts, and in the 27 August 1998 giant flare. Observations of SGR 1806-20's spin-down over the coming year will provide important constraints on the location of the non-potential magnetic field that was dissipated during the flare.
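As a quick order-of-magnitude check of the twist energy quoted above, the Python lines below evaluate 10^−2 B_dip^2 R^3 for the fiducial B_dip = 10^15 G and R = 10 km (an illustrative evaluation only, not the paper's calculation).

    # External twist energy, E_twist ~ 1e-2 * B_dip^2 * R^3, in Gaussian units.
    B_dip = 1.0e15   # G
    R = 1.0e6        # cm
    E_twist = 1.0e-2 * B_dip**2 * R**3
    print(f"E_twist ~ {E_twist:.0e} erg (text: ~1e46 B_p15^2 erg)")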
Short-duration GRBs and magnetars
If observed from a great distance, only the brief, initial hard spike of the 27 December flare would be evident. Thus distant extragalactic magnetar flares (MFs) would resemble the mysterious short-duration GRBs 31,32 . These hard-spectrum events have long been recognized as a separate class of GRBs 33-37 but have never been identified with any counterparts 38 .
The Burst and Transient Source Experiment (BATSE) on the Compton Gamma-Ray Observatory was a landmark experiment of the 1990s that produced a catalogue 39 of more than 2,000 GRBs. How many of these bursts were MFs? Taking the 27 December event as our prototype and adopting the 50% trigger-efficiency flux 40 of 0.5 photons cm^−2 s^−1 for the 256-ms timescale yields a BATSE sampling depth of D_BATSE = 30 d_15 Mpc. If such events generally happen once every τ = 30 yr in galaxies like the Milky Way (as has now occurred in the Milky Way itself), then the BATSE detection rate of MFs is Ṅ_BATSE = 19 d_15^3 (τ/30 yr)^−1 yr^−1. Here we have estimated the effective number of galaxies like the Milky Way within D_BATSE of Earth by multiplying the local blue luminosity density 41, j_b = 5.8×10^41 h_70 erg s^−1 Mpc^−3, by the sampling volume (4π/3)D_BATSE^3 and dividing by the blue luminosity of the Milky Way as estimated in the Supplementary Information. We use blue emissions as a benchmark because SGRs are Population I objects, the post-supernova remnants of massive, short-lived, blue stars. Thus, over 9.5 yr of operation with half-sky coverage, BATSE probably detected ~180 d_15^3 (τ/30 yr)^−1 MFs, representing ~0.4 d_15^3 (τ/30 yr)^−1 of all BATSE short-duration bursts. There is evidence of 100-s-long soft tails in the co-added time histories of many short-duration BATSE GRBs 42,43, but not in any single event. For the brightest observed BATSE short-duration, hard-spectrum GRB (trigger number 6293), we find that the ratio of tail-to-peak fluence is <0.5%, compared with our measured ratio of 0.34% for the 27 December event. Thus BATSE was not sensitive enough to have detected MF tails in single bursts.

The GRB observatory Swift 44 was designed, in part, to unravel the short-duration GRB mystery. How many MFs will Swift spot? The Swift Burst Alert Telescope has a photon flux sensitivity (50-300 keV) that is ~5 times better than BATSE's 45, corresponding to a trigger threshold of ~0.10 photons cm^−2 s^−1. Thus, for our prototype MF, D_Swift = 70 d_15 Mpc. The expected rate of MF detections, given Swift's sky coverage of 1.4 steradians, is then Ṅ_Swift = 53 d_15^3 (τ/30 yr)^−1 yr^−1, or about one MF per week. Of course, the galactic rate of MFs, Γ = τ^−1, is very uncertain. Given that one MF with peak luminosity in the range of 10^47 erg s^−1 has occurred in our Galaxy during t_0 = 30 yr of observations, the Bayesian probability distribution for the underlying galactic rate Γ of such bright MFs is dP/dΓ = t_0 exp(−Γt_0), with expected value ⟨Γ⟩ = t_0^−1. This implies that the probability that Swift will detect one or more MFs per month is 80% for d_15 = 1. The probabilities of detecting one or more events per {3, 6, 12, 24} months are {93, 96, 98, 99}%, respectively. Even if d = 10 kpc, the probabilities would be {78, 88, 94, 97}%. The prospects for observing MFs during Swift's 24-month prime mission are excellent.

Of course, all of the above estimates idealize MFs as 'standard candles' defined by the 27 December prototype. The actual luminosity function of MFs is unknown. It is possible that some MFs are significantly brighter than the 27 December event. For example, a magnetic instability on a rare magnetar with B_dipole ≈ 10^16 G could release 10^48 erg, and be detected by Swift out to ~1 Gpc. Nevertheless, we suspect that MFs constitute only a substantial subset of BATSE Class II GRBs, not all of them. The 175-keV blackbody spectrum would probably result in a significantly higher hardness ratio than that of the average short-duration burst 37. The fact that Class II GRBs have ⟨V/V_max⟩ < 0.5 does not seem consistent with all these events being local 23. Moreover, no galaxies at D < 100 Mpc were found for the Interplanetary Network positions of four short-duration GRBs 38.
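One way to read the Bayesian prescription above is that the number of detections in an observing time T is Poisson with mean proportional to Γ, so that marginalizing over the prior dP/dΓ = t_0 exp(−Γt_0) gives P(≥1) = 1 − 1/(1 + R_0 T), where R_0 = 53 yr^−1 is the expected Swift rate when Γ = 1/t_0. The Python sketch below adopts this reading (our assumption, not spelled out in the text) and approximately reproduces the quoted percentages.

    # Marginal probability of one or more Swift detections in an observing time T (years).
    def p_detect(T_yr, R0=53.0):
        # 1 - integral of t0*exp(-G*t0)*exp(-R0*t0*G*T) dG over G, which is 1 - 1/(1 + R0*T)
        return 1.0 - 1.0 / (1.0 + R0 * T_yr)

    for months in (1, 3, 6, 12, 24):
        T = months / 12.0
        print(f"{months:>2} month(s): d_15 = 1 -> {100*p_detect(T):.0f}%,"
              f"  d = 10 kpc -> {100*p_detect(T, 53.0*(10/15)**3):.0f}%")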
Studying extragalactic magnetars
Swift can identify MFs via their positional correlations with galaxies, allowing the source distances from Earth to be inferred. A spiral galaxy of size ~30 kpc at distance D_Swift spans ~3.4 arcmin, comparable to the Swift BAT location accuracy of ∆θ_BAT ≈ 1-4 arcmin. This localization can be greatly improved, to an accuracy of <~10 arcsec, if the oscillating tail of the flare is detected by Swift's X-ray Telescope (XRT) when it slews to observe the burst site within about 1 min. Our measurements of soft X-ray emissions in the giant flare tail (Fig. 4) make it possible to assess the prospects of XRT acquisition for the first time. Extrapolating our X-ray spectral fits down to 0.3 keV, we find that the 27 December pulsating tail produced a 0.3-10-keV incident fluence of (0.18-1.6)×10^−3 erg cm^−2. The threshold fluence for XRT detection 44 is 2×10^−10 erg cm^−2, so that the 27 December flare tail could be marginally detected to a distance of D_tail = 10-40 d_15 Mpc. Thus only the nearest fraction (D_tail/D_Swift)^3 ≈ 0.2 of all MFs spotted by Swift will have detectable tails. We have verified that the soft X-rays are strongly pulsed (Fig. 5). For events within about 8 Mpc, simulations indicate that the magnetar's rotation period can be reliably determined. For more distant sources, the spectrum and the rapid flux decay will distinguish magnetar tail emissions from cosmic GRB afterglows.
The prospects of detecting extragalactic MFs with the Swift Ultra-Violet and Optical Telescope (UVOT) or ground-based optical telescopes are not wholly bleak. The trapped fireball is too tiny to emit detectably in this waveband. However, we can scale directly from the observed radio afterglow 22 in the optically thin regime. Extrapolating to 10^14.5 Hz gives L_opt ≈ 4×10^37 t_3^−1.5 erg s^−1 at a time of 10^3 t_3 s post-flare. Such a source would have a brightness of 22nd magnitude at 1 Mpc for t_3 ≈ 1.

Prospects are even better for the detection of X-ray afterglows 32. SGR 1900+14 emitted strong nonthermal X-rays in the aftermath of the 27 August 1998 event 46, thought to be due to a heated magnetar crust 47. If afterglow energy scales linearly with flare energy, as found in less energetic events 48, then a MF like the 27 December event would glow brighter by a factor of f ≈ 100, suggesting L_X ≈ 2×10^39 (f/100)(t/1 h)^−0.7 erg s^−1.
This could be detected by the Chandra X-ray Telescope within D Chandra ≈30(f/100) 0.5 (∆t obs /10 4 s) 0.5 (t/10 h) −0.35 Mpc in an observation of duration ∆t obs <<t in seconds.
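The exponents in the D_Chandra scaling above follow from fluence-limited detection of a source with L_X ∝ f t^−0.7: the collected fluence is L_X ∆t_obs/(4πD^2), so the limiting distance scales as f^(1/2) ∆t_obs^(1/2) t^−0.35. The sympy sketch below is illustrative only; the unspecified instrument threshold is absorbed into an overall constant.

    # Consistency check of D ∝ f^0.5 * dt_obs^0.5 * t^-0.35 for fluence-limited detection.
    import sympy as sp

    f, t, dt_obs = sp.symbols('f t dt_obs', positive=True)
    L_X = f * t**sp.Rational(-7, 10)     # afterglow luminosity scaling quoted above
    D = sp.sqrt(L_X * dt_obs)            # from L_X*dt_obs/(4*pi*D^2) = constant threshold
    print(D)                             # the power of t is -7/20, i.e. -0.35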
New horizons and speculations
The detection of extragalactic magnetars, if achieved by Swift, will open up a new field of astronomy. A catalogue of giant flare spikes, once assembled, will contain a wealth of information about magnetic instabilities in neutron stars. Information about the luminosity function of MFs, their range of durations, and possible spectral diversity (suggested by measurements of the 27 August event 8,49; note that less compact flows than that of the 27 December event could show nonthermal spectra) will constrain magnetar physics and population diversity. Unusually bright flares may be detected from very young magnetars with rapid rotation periods and stronger fields than are observed in galactic SGRs. (The birthrate of SGRs is evidently so low that no stars younger than ~10^3-10^4 yr are observed in our Galaxy.) MFs from very young magnetars may be disproportionately common in extragalactic studies because of their greater brightness and higher flare rate. More frequent cataclysms are expected in younger magnetars because magnetic diffusion slows down as stars age and cool 6.
We emphasize that most SGR activity is ultimately powered by the strong toroidal interior field of a magnetar, B_φ, which is the remnant of the rapid differential rotation which the neutron star experienced at birth 1,5. The energy of this field is E_φ ≈ (1/6) B_φ^2 R^3 ≈ 2×10^49 B_φ,16^2 erg.
SUPPLEMENTARY INFORMATION
RHESSI and Wind particle detector data analysis
During the intense initial spike, all X-and γ-ray detectors experienced some degree of saturation, making reliable reconstruction of the time history and energy spectrum difficult or impossible. Many small, thin silicon particle detectors, on the other hand, had very low cross-sections for X-and γ-ray interactions, and therefore did not saturate, even though they did respond strongly to the peak. We have therefore analysed the observations of the Wind 3D plasma and energetic particle experiments 52 and of the RHESSI particle monitor detector 53 with the GEANT3 and GEANT4 simulation codes to obtain information about the initial spike (specifically, the first kT in Fig. 1b, and the spectrum, time history and kT in Fig. 2). The RHESSI particle detector has an area of 25 mm 2 and is 960 µm thick. Wind has six double-ended solid-state telescopes (SSTs), five with two back-to-back 1.5-cm 2 , 300-µm-thick silicon detectors (called O and F, with nine and seven PHA channels, respectively), and one SST with a third, 15-cm 2 , 500-µm-thick detector (T) in between. The multi-channel analysers covered the 20 keV to 11 MeV range with various time resolutions between 12 and 96 s, while the RHESSI detector had two discriminators with 50-and 620-keV thresholds that were read out with 0.125-s resolution. In each case, the simulations included the matter surrounding the detectors, and attempted to reproduce the observed count rates with incoming power-law, thermal bremsstrahlung, and blackbody energy spectra. In all cases, the power law and bremsstrahlung spectra were strongly rejected by the Wind data (χ 2 =42
and 69 for 10 degrees of freedom), and only the blackbody provided an acceptable fit (χ 2 =10 for 10 degrees of freedom). These fits were performed for the Wind detectors with the highest statistics (F and O), because they gave the strongest restriction on the error bars for the blackbody temperature (175±25 keV). A systematic error of ±10% was assumed for the Wind simulations. This is a typical conservative estimate for simulations of this type; it includes uncertainties in the masses and compositions in the structure surrounding the detector, as well as uncertainties in the detector size, volume, and calibration. Fits including all the Wind detectors are also consistent with these results. An additional systematic uncertainty of ±15% was included for the RHESSI data, to include the effects of absorption in the spacecraft structure and interception of photons scattered off the Earth's atmosphere. Both these effects were modelled in GEANT3, with the prediction that 25% of the incoming photons are removed by the spacecraft-absorption process, and an approximately equal number are added by the photon-interception process, but at lower energies, tending to soften the overall spectrum. The observed RHESSI response is consistent with the blackbody fit.
For the peak, the sum of the count rates from the 12 Wind detectors reached 1,900 counts per second per detector. The Wind particle detectors have a 600-ns shaping time, which is fast enough that pulse pile-up in the detectors is negligible.
However, the overall throughput of the system is determined by the sampling rate of the multiplexed analogue-to-digital converters, which is not well quantified at these data rates. Therefore the overall livetime of the detectors is uncertain, and the responses cannot be used to measure the fluence, even though they are well within the count rate range of measuring the spectral shape correctly. Thus Wind was used to measure the spectral shape during the spike, and the RHESSI particle detector was used to derive the normalization.
The RHESSI particle detector counted 3,008 counts in the peak 125 ms of the flare. Its saturation level is approximately 10 5 counts in 125 ms. Thus pulse pile-up is negligible. Because this detector has only two channels, it cannot strongly constrain the spectral shape, although it can confirm or reject the spectral shapes found by the Wind detectors, and it can determine the normalization of the Wind spectra accurately. These data were used to produce the time history and kT in the inset to Fig. 2.
RHESSI γ-ray detector data analysis
The RHESSI γ-ray detectors are segmented Ge detectors which record the time and energy of each photon interaction >3 keV. They were unsaturated after the initial spike. However, there are two structures that can attenuate the incoming photons in the observations of the oscillatory phase described here. The first is a shutter that was automatically put into place over the front segments as a response to the high count rates, and remained there for the first 272 s in Fig. 1. The second is the imaging grid structure above the detectors, which affects both the front and rear segments. However, as the spacecraft rotates, a direct (unattenuated) path exists to some of the detectors for brief intervals twice per rotation period. We call these intervals 'snapshots'. To minimize the effects of attenuation in Fig. 1a, the inset to Fig. 1a and the black curve in Fig. 5a, we have used counts >20 keV. To eliminate these effects in Fig. 4 and Fig. 5a, we have used the snapshot data. We have also used these snapshots to obtain the spectral temperatures in Fig. 1b. We have used the on-axis (0°) RHESSI response matrices for this analysis, which should reproduce reasonable flux numbers and spectral distributions. With the current matrices we are unable to distinguish strongly between thermal bremsstrahlung and blackbody spectral fits for the tail, so we have included both in this paper. We anticipate that further spectral analysis including response matrices for this source location (under construction) should discriminate between these models.
Detectability of magnetar flares by BATSE and Swift
We estimated the BATSE sampling depth for MFs using our peak incident flux from this flare in the standard BATSE 50-300-keV energy range (determined from our best-fit RHESSI particle detector fluence and Wind spectral fit), over the BATSE trigger timescales of 64, 256 and 1,024 ms. We find the optimal BATSE trigger timescale to be 256 ms (BATSE's P256). Given the 50%-efficiency trigger flux for P256 (ref. 13) of 0.50 photons cm^−2 s^−1, we determine that this flare would have been detected by BATSE to a distance of 31 Mpc. As a check, we analysed the 50-300-keV fluence of all the BATSE short-duration, hard-spectrum GRBs with durations T_90 = 0.1-0.2 s, and found a threshold fluence of ~5×10^−8 erg cm^−2, corresponding to a comparable detection distance. This is lower than the distance originally quoted in GCN 2936 (ref. 39) as a result of our spectral fits: the blackbody fit is much harder than typical GRB spectra, resulting in lower photon fluxes in the 50-300-keV range than a typical short-duration, hard-spectrum GRB spectrum with comparable energy flux. To estimate the Swift BAT sensitivity, we used a P256 (50-300 keV) photon flux sensitivity 5 times better than BATSE's (see figure 9 in ref. 45), corresponding to ~0.10 photons cm^−2 s^−1, for a limiting detection distance for BAT of 70 Mpc. As a check, the advertised energy flux sensitivity of ~10^−8 erg cm^−2 s^−1 yields an even larger limiting distance.
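The two sampling depths above are tied together by inverse-square scaling of the trigger threshold. The Python sketch below checks this, and also inverts the quoted fluence threshold to show the implied in-band (50-300 keV) fluence at Earth; both steps use only numbers quoted in this section.

    # Detection-distance scaling with trigger sensitivity, and the implied in-band fluence.
    import math

    D_batse = 31.0                                # Mpc, from the BATSE P256 threshold above
    D_swift = D_batse * math.sqrt(0.50 / 0.10)    # BAT assumed ~5x more sensitive
    print(f"D_Swift ~ {D_swift:.0f} Mpc (text: ~70 Mpc)")

    # Inverting D = 15 kpc * sqrt(S / 5e-8 erg cm^-2) = 31 Mpc for the 50-300-keV fluence S:
    S_50_300 = 5e-8 * (31.0e3 / 15.0)**2          # kpc units cancel
    print(f"implied 50-300-keV fluence ~ {S_50_300:.2f} erg cm^-2 "
          "(a fraction of the 1.36 erg cm^-2 bolometric spike fluence)")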
To estimate the BATSE sensitivity to pulsating tails, we examined the strongest short-duration, hard-spectrum GRB seen by BATSE, trigger number 6293. This GRB had a duration T 90 =0.192 s, and a total fluence of 4.30×10 −5 erg cm −2 , dominated by photons of >300 keV. Given the background count rate in the 400-s period after this burst, we estimate a 5σ upper limit on a 20-100-keV tail fluence of 2×10 −7 erg cm −2 , setting the BATSE upper limit on the ratio of tail-to-peak fluence of 0.5%.
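The two fluence ratios quoted above and in the main text follow directly from the numbers given; the Python lines below simply recompute them.

    # Tail-to-peak fluence ratios, from the values quoted in this section and the main text.
    ratio_6293 = 2e-7 / 4.30e-5      # BATSE trigger 6293: 5-sigma tail limit / burst fluence
    ratio_dec27 = 4.6e-3 / 1.36      # 27 December: tail fluence / spike fluence
    print(f"BATSE 6293 upper limit: {100*ratio_6293:.2f}%  (quoted: <0.5%)")
    print(f"27 December event:      {100*ratio_dec27:.2f}%  (quoted: 0.34%)")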
To estimate the Swift XRT sensitivity to the pulsating tails, we used the XRT response available in the HEASARC WebPIMMS package. We developed a model of the pulsating X-ray tail from our time-dependent thermal bremsstrahlung fits over the course of the 380-s tail, assuming the average 3-10-keV pulse shape. Folding the timedependent model through the XRT response, and assuming an optimistic 20-s slew time, we estimate a marginal 0.3-10-keV detection of the soft tail at 10-40 Mpc for blackbody and bremsstrahlung spectra. As a check, the 27 December tail produced an incident 0.3-10-keV fluence of 0.18−1.6×10 −3 erg cm −2 . The quoted threshold flux for XRT detection is 2×10 −14 erg cm −2 s −1 for a 10 4 -s observation, corresponding to a fluence threshold of 2×10 −10 erg cm −2 . Comparing this with our measured X-ray fluence yields a comparable detection distance. We also determined that the magnetar rotation period can be picked out of the XRT data by Fast Fourier Transforms out to distances of ~2-8.5 Mpc (it is clearly visible by eye out to ~1-4 Mpc).
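The 10-40-Mpc range quoted above is simply the fluence-limited distance for the two ends of the observed 0.3-10-keV fluence; the Python sketch below reproduces it.

    # Fluence-limited XRT detection distance for the pulsating tail.
    import math

    d_kpc = 15.0
    thresh = 2e-10                                  # erg cm^-2, XRT threshold quoted above
    for S in (0.18e-3, 1.6e-3):                     # observed fluence range, erg cm^-2
        D_mpc = d_kpc * math.sqrt(S / thresh) / 1e3
        print(f"fluence {S:.2e} erg/cm^2 -> detectable to ~{D_mpc:.0f} Mpc")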
Rate of magnetar flares
To estimate the rate of extragalactic magnetar flares, we needed to estimate the blue luminosity of the Milky Way, L_b,MW. The synthetic Galactic model of ref. 54, based upon Hipparcos data and recent large-scale surveys in the optical and infrared, implies a Galactic stellar thin disk mass M_MW = 2×10^10 M_⊙, where M_⊙ is the mass of the Sun. We divided this by M/L_b = 1.4 M_⊙/L_⊙, which was found from the average of 30 Milky-Way-like galaxies of types Sb to Sc with luminosities of 5×10^9 L_⊙ to 5×10^10 L_⊙ within the Nearby Field Galaxy Survey (S. Kannappan, personal communication).
Figure legends

Figure 1 Profiles of the 27 December 2004 giant flare. a, 20-100-keV time history plotted with 0.5-s resolution, from the RHESSI γ-ray detectors. Zero seconds corresponds to 77,400 s Universal Time (UT). In this plot, the flare began with the spike at 26.64 s and saturated the detectors within 1 ms. The detectors emerged from saturation on the falling edge 200 ms later and remained unsaturated after that. Photons with energies >~20 keV are unattenuated; thus the amplitude variations in the oscillatory phase are real, and are not caused by any known instrumental effect (Supplementary Information). Inset, time history of the precursor with 8-ms resolution. Zero corresponds to 77,280 s UT. b, Spectral temperature versus time in the oscillatory phase. The temperature of the spike was determined by the RHESSI and Wind particle detectors; the temperatures of the oscillatory phase were measured by the RHESSI γ-ray detectors. Although RHESSI measured time- and energy-tagged photons >3 keV continuously, unattenuated spectra were measured for short 'snapshot' intervals only twice in each 4.06-s spacecraft spin period during the oscillatory phase (Supplementary Information). Preliminary spectral analysis (3-100 keV), using the RHESSI on-axis response matrices, is generally consistent with a single-temperature blackbody or optically thin thermal bremsstrahlung model; the blackbody temperatures have been plotted. The formal uncertainties in the oscillatory phase are smaller than the data points and are not shown.

Figure 2 Spectrum and time history of the initial spike, from the RHESSI and Wind particle detectors. The crosses show the spectrum measured by the Wind 3D O detector 52 with coarse time resolution that averages over the peak. The error bars are 1σ, plus 10% systematic errors. The line is the best-fitting blackbody convolved with the detector response function; its temperature is 175±25 keV (Supplementary Information). Inset, the time history of the peak (histogram, left-hand scale) and of the blackbody temperature (error bars, right-hand scale) with 0.125-s resolution, from the RHESSI particle detector (ref. 35 and Supplementary Information). The error bars are 1σ, plus 25% systematic errors.

Figure 3 Time-averaged counts in the tail phase of the giant flare, compared with the 'trapped fireball' model. Zero corresponds to 77,280 s UT. The step plot shows the RHESSI γ-ray detector data averaged over the 7.56-s rotation period of the neutron star. It is fitted by a simple model (smooth curve) that describes the emission from the cool surface of a magnetically confined plasma as it contracts and evaporates in a finite time: L_x(t) = L_0 [1 − (t/t_evap)]^(a/(1−a)) (ref. 49). We find t_evap = 382±3 s, with a best-fitting index near the value a = 2/3 expected for a homogeneous, spherical trapped fireball 23,49. Inset, RHESSI γ-ray detector light curve for the first ten cycles of the flare tail. The energy range is 20-100 keV. The first peak of the trapped fireball emission is evident on the falling edge of the hard spike at t = 30 s. A changing two-peaked pulse-interpulse structure is present.

Figure 4 45-100-keV phase-averaged energy spectrum of the pulsed tail, from the RHESSI γ-ray detectors. The crosses show the measured spectrum with 1σ statistical error bars; the solid line represents a fit to a blackbody function E^2 (exp(E/kT) − 1)^−1, where E is the energy and kT = 5.1±1.0 keV. This spectrum is averaged over various phases between 272 and 400 s in Fig. 1, corresponding to intervals where the photons could reach the detectors passing through a minimum amount of intervening materials (Supplementary Information). An optically thin thermal bremsstrahlung function with kT ≈ 22 keV also provides a reasonable fit. The spectra show evidence of deviations from both models, probably due to the use of an approximate response matrix 24.

Figure 5 Detailed profiles of the oscillations, from the RHESSI γ-ray detectors. a, RHESSI light curve for the oscillatory portion of the giant flare, folded modulo the 7.56-s neutron star rotation period (20-100 keV, fine resolution curve, and 3-10 keV, coarse resolution curve). b, The blackbody spectral temperature kT. The radius of the emitting surface varies between ~18 and 40 km at 15 kpc.
Acknowledgements We are grateful to J. Scalo, E. Vishniac and S. Kannappan for discussions and expert help. In the US, this work was supported by NASA. The INTEGRAL mission is supported by the German government via the DLR agency.

Supplementary Information accompanies the paper on www.nature.com/nature.

Competing interests statement The authors declare that they have no competing financial interests.

Correspondence and requests for materials should be addressed to K.H. ([email protected]).
References

1. Duncan, R. & Thompson, C. Formation of very strongly magnetized neutron stars: implications for gamma-ray bursts. Astrophys. J. 392, L9-L13 (1992).
2. Kouveliotou, C. et al. An X-ray pulsar with a superstrong magnetic field in the soft gamma-ray repeater SGR1806-20. Nature 393, 235-237 (1998).
3. Thompson, C. & Duncan, R. The soft gamma repeaters as very strongly magnetized neutron stars. I. Radiative mechanism for outbursts. Mon. Not. R. Astron. Soc. 275, 255-300 (1995).
4. Kouveliotou, C. et al. Discovery of a magnetar associated with the soft gamma repeater SGR1900+14. Astrophys. J. 510, L115-L118 (1999).
5. Thompson, C. & Duncan, R. Neutron star dynamos and the origins of pulsar magnetism. Astrophys. J. 408, 194-224 (1993).
6. Thompson, C. & Duncan, R. C. The soft gamma repeaters as very strongly magnetized neutron stars II. Quiescent neutrino, X-ray and Alfven wave emission. Astrophys. J. 473, 322-342 (1996).
7. Mazets, E. et al. A flaring X-ray pulsar in Dorado. Nature 282, 587-589 (1979).
8. Hurley, K. et al. A giant periodic flare from the soft γ-ray repeater SGR1900+14. Nature 397, 41-43 (1999).
9. Evans, W. et al. Location of the gamma-ray transient event of 1979 March 5. Astrophys. J. 237, L7-L9 (1980).
10. Vrba, F. et al. The discovery of an embedded cluster of high-mass stars near SGR1900+14. Astrophys. J. 533, L17-L20 (2000).
11. Borkowski, J. et al. Giant flare from SGR1806-20 detected by INTEGRAL. GCN Circ. 2920 (2004).
12. Hurley, K. et al. IPN localization of the giant flare from SGR1806-20. GCN Circ. 2921 (2004).
13. Boggs, S. et al. SGR1806-20, RHESSI observations of the 041227 giant flare. GCN Circ. 2936 (2004).
14. Mazets, E. et al. The giant outburst from SGR1806-20. GCN Circ. 2922 (2004).
15. Palmer, D. et al. A giant gamma-ray flare from the magnetar SGR1806-20. Nature 434, 1107 (2005).
16. Corbel, S. & Eikenberry, S. The connection between W31, SGR 1806-20, and LBV 1806-20: Distance, extinction, and structure. Astron. Astrophys. 419, 191-201 (2004).
17. Figer, D. F., Najarro, F. & Kudritzki, R. P. The double-lined spectrum of LBV 1806-20. Astrophys. J. 610, L109-L113 (2004).
18. Cline, T. et al. Detection of a fast, intense and unusual gamma ray transient. Astrophys. J. 237, L1-L5 (1980).
19. Terasawa, T. et al. GEOTAIL observation of the SGR1806-20 giant flare: the first 600 ms. Nature 434, 1110 (2005).
20. Barat, C. et al. Fine time structure in the 1979 March 5 gamma ray burst. Astron. Astrophys. 126, 400-402 (1983).
21. Paczynski, B. Gamma-ray bursters at cosmological distances. Astrophys. J. 308, L43-L46 (1986).
22. Gaensler, B. M. et al. An expanding radio nebula from the giant flare from the magnetar SGR 1806-20. Nature 434, 1104 (2005).
23. Thompson, C. & Duncan, R. The giant flare of 1998 August 27 from SGR1900+14. Radiative mechanism and physical constraints on the source. Astrophys. J. 561, 980-1005 (2001).
24. Boggs, S. et al. The giant flare of December 27, 2004 from SGR 1806-20. Astrophys. J. (submitted).
25. Gogus, E. et al. Temporal and spectral characteristics of short bursts from the soft gamma repeaters 1806-20 and 1900+14. Astrophys. J. 558, 228-236 (2001).
26. Thompson, C., Lyutikov, M. & Kulkarni, S. Electrodynamics of magnetars: implications for the persistent X-ray emission and spin-down of the soft gamma repeaters and anomalous X-ray pulsars. Astrophys. J. 574, 332-355 (2002).
27. Golenetskii, S. et al. Bright bursts from SGR1806-20. GCN Circ. 2823 (2004).
28. Woods, P. et al. Gradual brightening of SGR1806-20. Astronomer's Telegram 313 (2004).
29. Lyutikov, M. Explosive reconnection in magnetars. Mon. Not. R. Astron. Soc. 346, 540-554 (1998).
30. Woods, P. M. et al. Large torque variations in two soft gamma repeaters. Astrophys. J. 576, 381-390 (2002).
31. Duncan, R. Gamma-ray bursts from extragalactic magnetar flares. AIP Conf. Proc. 586, 495-501 (eds Martel, H. & Wheeler, J. C.) (AIP, New York, 2001).
32. Eichler, D. Waiting for the big one: a new class of soft gamma repeater outbursts. Mon. Not. R. Astron. Soc. 576, 381-392 (2002).
33. Mazets, E., Golenetskii, S., Guryan, Yu. & Ilyinskii, V. The 5 March 1979 event and the distinct class of short gamma bursts: are they of the same origin? Astrophys. Space Sci. 84, 173-189 (1982).
34. Dezalay, J.-P. et al. The hardness-duration diagram of gamma-ray bursts. Astrophys. J. 471, L27-L30 (1996).
35. Norris, J., Cline, T., Desai, U. & Teegarden, B. Frequency of fast, narrow gamma ray bursts. Nature 308, 434-435 (1984).
36. Hurley, K. Gamma-ray burst observations: past and future, in gamma-ray bursts. AIP Conf. Proc. 265, 3-12 (eds Paciesas, W. & Fishman, G.) (AIP, New York, 1992).
37. Kouveliotou, C. et al. Identification of two classes of gamma-ray bursts. Astrophys. J. 413, L101-L104 (1993).
38. Hurley, K. et al. Afterglow upper limits for four short-duration, hard spectrum gamma-ray bursts. Astrophys. J. 567, 447-453 (2002).
39. Paciesas, W. et al. The Fourth BATSE Gamma-Ray Burst Catalog (Revised). Astrophys. J. Suppl. 122, 465-495 (1999).
40. Fishman, G. et al. The first BATSE gamma-ray burst catalog. Astrophys. J. Suppl. 92, 229-283 (1994).
41. Cross, N. & Driver, S. P. The bivariate brightness function of galaxies. Mon. Not. R. Astron. Soc. 329, 579-598 (2002).
42. Lazzati, D., Ramirez-Ruiz, E. & Ghisellini, G. Possible detection of hard X-ray afterglows of short gamma-ray bursts. Astron. Astrophys. 379, L39-L43 (2001).
43. Connaughton, V. BATSE observations of gamma-ray burst tails. Astrophys. J. 567, 1028-1036 (2002).
44. Gehrels, N. et al. The Swift gamma-ray burst mission. Astrophys. J. 611, 1005-1020 (2004).
45. Fenimore, E. et al. Swift's ability to detect gamma-ray bursts. Preprint at http://arXiv.org/astro-ph/0408513 (2004).
46. Woods, P. et al. Evidence for a sudden magnetic field reconfiguration in soft gamma repeater SGR1900+14. Astrophys. J. 552, 748-755 (2001).
47. Lyubarsky, Y., Eichler, D. & Thompson, C. Diagnosing magnetars with transient cooling. Astrophys. J. 580, L69-L72 (2002).
48. Lenters, G. T. et al. An extended burst tail from SGR 1900+14 with a thermal X-ray spectrum. Astrophys. J. 587, 761-778 (2003).
49. Feroci, M., Hurley, K., Duncan, R. & Thompson, C. The giant flare of 1998 August 27 from SGR1900+14. I. An interpretive study of BeppoSAX and Ulysses observations. Astrophys. J. 549, 1021-1038 (2001).
50. Kulkarni, S. et al. The quiescent counterpart of the soft gamma repeater SGR 0526-66. Astrophys. J. 585, 948-954 (2003).
51. Golenetskii, S., Ilyinskii, V. & Mazets, E. Recurrent bursts in GBS0526-66, the source of the 5 March 1979 γ-ray burst. Nature 307, 41-43 (1984).
52. Lin, R. et al. A three dimensional plasma and energetic particle experiment for the Wind spacecraft. Space Sci. Rev. 71, 125-153 (1995).
53. Smith, D. M. et al. The RHESSI spectrometer. Solar Phys. 210, 33-60 (2002).
54. Robin, A. C., Reyle, C., Derriere, S. & Picaud, S. A synthetic view on the structure and evolution of the Milky Way. Astron. Astrophys. 409, 523-540 (2003).
Low energy physics of interacting bosons with a moat spectrum, and the implications for condensed matter and cold nuclear matter

A. M. Tsvelik, Division of Condensed Matter Physics and Material Science, Brookhaven National Laboratory, Upton, NY 11973-5000, USA
R. D. Pisarski, Department of Physics, Brookhaven National Laboratory, Upton, NY 11973-5000, USA

arXiv:2103.15835v2 [nucl-th] (Dated: April 9, 2021)
We discuss bosonic models with a moat spectrum, where in momentum space the minimum of the dispersion relation is on a sphere of nonzero radius. For spinless bosons with O(N ) symmetry, we emphasize the essential difference between N = 2 and N > 2. When N = 2, there are two phase transitions: at zero temperature, a transition to a state with Bose condensation, and at nonzero temperature, a transition to a spatially inhomogeneous state. When N > 2, previous analysis [1,2] suggests that a mass gap is generated dynamically at any temperature. In condensed matter, a moat spectrum is important for spin-orbit-coupled bosons. For cold nuclear or quarkyonic matter, we suggest that the transport properties, such as neutrino emission, are dominated by the phonons related to a moat spectrum; also, that at least in the quarkyonic phase the nucleons may be a non-Fermi liquid.
Several recent papers [3-5] discuss bosonic systems with a "moat" spectrum, where the energy ǫ(p) depends upon the spatial momentum p as
ǫ(p)^2 = v^2 (p^2 − Q^2)^2 + r,   (1)
where v 2 , r, and especially Q 2 are all nonzero. The minimum of the energy is at the bottom of the moat, when p 2 = Q 2 , and has a local maximum at zero momentum [6]. Refs.
[3] and [4] suggest that such systems display certain analogies to Fermi liquids, where the gapless surface survives down to the lowest energies. In this paper we argue that this is unlikely, at least for the models considered in Refs. [3][4][5]. Following our previous work in Refs.
[1] and [2], we suggest an alternate picture from that of Refs.
[3] and [4]. Our conclusions agree with those of Ref.
[5], as we provide a more detailed analysis. To illustrate the physics, we consider two models: a single species of bosons with an O(2) symmetry, like that of Refs. [3][4][5], and an O(N ) symmetric nonlinear sigma model with N > 2 [1, 2, 7].
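As a minimal numerical illustration of the moat dispersion in equation (1), the Python sketch below (with arbitrary, purely illustrative parameter values) confirms that the minimum of ǫ(p) lies on |p| = Q while p = 0 is a local maximum.

    # Moat dispersion eps(p)^2 = v^2*(p^2 - Q^2)^2 + r: minimum at |p| = Q, not at p = 0.
    import numpy as np

    v, Q, r = 1.0, 1.0, 0.1              # illustrative values only
    p = np.linspace(0.0, 2.0, 2001)
    eps = np.sqrt(v**2 * (p**2 - Q**2)**2 + r)
    print(f"minimum of eps at |p| ~ {p[np.argmin(eps)]:.3f} (expected: Q = {Q})")
    print(f"eps(0) = {eps[0]:.3f} is a local maximum; eps(Q) = {np.sqrt(r):.3f}")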
In two and three spatial dimensions, we argue that a system with O(2) symmetry undergoes two phase transitions: at zero temperature, a transition to a state with Bose condensation, and at nonzero temperature, a transition to a spatially inhomogeneous state. At nonzero temperature the rotational symmetry in space is spontaneously broken by singling out a particular wave vector Q on the moat, while at zero temperature, a Bose condensate develops at Q.
Even when r = 0, when the symmetry is non-Abelian, such as O(N) with N > 2, there is no condensate either at nonzero [1] nor zero [2] temperature. Instead a dynamically generated gap opens over the entire bottom of the moat, p^2 = Q^2. In Refs. [1] and [2] this was shown using an O(N) model at large N, but we suggest that it occurs for all N > 2.

Besides the question of principle, such models are of interest in both condensed matter and nuclear physics. For example, spin-orbit-coupled bosons [8] have a moat spectrum. For Quantum ChromoDynamics (QCD) [9], in nuclear matter it arises for pion [10-23] and kaon [24-27] condensates, and in the quarkyonic regime [1, 2, 28-30], for chiral spirals [31-68]. As we discuss, the moat spectrum has important implications for both, and especially for the transport properties of nuclear matter.
Spinless bosons with a moat spectrum. The first model we consider is a model of d-dimensional bosons with a moat spectrum,
L = ∫ d^d x [ b^+ ∂_τ b − µ b^+ b + (1/2) g (b^+ b)^2 + (1/(2mQ^2)) b^+ (−∇^2 − Q^2)^2 b ].   (2)
For a real b-field, this model is similar to the Landau-Brazovskii model of weak crystallization [69]. We consider a complex b-field, where it is possible to have inhomogeneous phases which exhibit the spontaneous breaking of translational symmetry [70]. Free b bosons do not condense, but interacting bosons can. We begin by integrating out fluctuations in the density. When the average density is large, fluctuations in the density are massive and can be integrated out by a change in variables, b = √ρ e^{iφ}, so that

L = ∫ d^D x [ iρ ∂_τ φ + (g/2)(ρ − µ/g)^2 + (1/(2mQ^2)) ρ^{1/2} e^{−iφ} (∇^2 + Q^2)^2 e^{iφ} ρ^{1/2} ].   (3)
We assume that the interaction is weak, replacing ρ by its average value, ρ → ρ_0. The conventional Hartree approximation yields µ = gρ_0, and sets an upper bound on the interaction strength, gρ_0 ≪ Q^2/m. A lower bound follows by comparing µ, evaluated in the Hartree approximation for bosons, with an alternative "fermionized" state in two dimensions specific to the moat spectrum [8], gρ_0 ≫ ρ_0^2/(mQ^2). These two constraints are compatible at low density, ρ_0 ≪ Q^2.
We now show that φ = Q·r + α is a stable ansatz, where |Q| = Q and the direction of Q is arbitrary. This choice breaks the rotational symmetry, where the order parameter is the current J = i b^+ ∇b. Notice that introduction of several wave vectors decreases the interaction energy. There is a second order parameter, namely b itself. We demonstrate that this order forms only at zero temperature in D ≤ 3 spatial dimensions.

We show: at nonzero temperature there is a transition at which the rotational invariance is spontaneously broken; for D > 1 fluctuations of α are infrared finite at zero temperature, so that the bosons condense; and lastly, at nonzero temperature in D = 2, 3 there is a finite infrared scale beyond which the Bose condensate disappears.
The Lagrangian density for α is
L = (∂_τ α)^2/(2g) + [ρ_0/(2mQ^2)] { [2(Q·∇α) + (∇α)^2]^2 + (∇^2 α)^2 }.   (4)
It is convenient to rescale α = M^{1/2} ᾱ and define ν^2 = M/g, where M = mQ^2/ρ_0, so that Eq. (4) becomes

L = ν^2 (∂_τ ᾱ)^2/2 + 2[ Q(∂_x ᾱ) + M^{1/2}(∇ᾱ)^2/2 ]^2 + (∇^2 ᾱ)^2/2,   (5)
for which the bare inverse propagator is
⟨ᾱᾱ⟩^{−1} = ν^2 ω^2 + γ p_x^2 + (p^2)^2,   (6)
where γ = 4Q^2.
Here and in what follows we assume that Q lies along the x axis. This form of the correlator is preserved at T = 0 because of rotational invariance and since Eq. (4) is infrared finite at zero temperature. Indeed, an infinitesimal change ᾱ → ᾱ + B·r, where B·Q = 0, does not change the action. This implies the absence of a term ∼ p_⊥^2. To find corrections to the propagator we rewrite the last term in (5) as (2Q(∂_x α) + (∇α)^2)^2/(2M); spurious p^2 corrections to the self-energy are removed by shifting Q → Q_0 − ⟨(∇α)^2⟩/(2Q_0). We also distinguish between Q in M and the coefficient in front of ∂_x α, since these two quantities renormalize differently.
The crucial difference between our analysis and that of Ref.
[4] is their neglect of higher order terms in the inverse propagator, ∼ (p 2 ) 2 , Eq. (6), while we include them. We show that this term ensures that the Bose condensate is stable at zero temperature.
In D spatial dimensions the first correction to the self energy is
Σ^(1) = p_x^2 (2Q)^2 M T Σ_n ∫ [d^D p/(2π)^D] (2p_x^2 + p^2)^2 / [ν^2 ω_n^2 + 4Q^2 p_x^2 + (p^2)^2].   (7)

This integral converges in the infrared at zero temperature, T = 0, for D > 1, and diverges for D ≤ 3 at T ≠ 0. Since G^−1 = G_0^−1 − Σ, this singular diagram leads to a reduction of the longitudinal stiffness. At zero temperature in D = 2, the correction to the stiffness is δγ/γ ∼ −M/ν, with M/ν = (mQ_0^2 g/ρ_0)^{1/2} the small parameter of the expansion. We show that the stiffness is a nonanalytic function of T.
At zero temperature, the single-particle correlation function is

G(\tau, r) = \langle b(\tau, r)\, b^{+}(0, 0)\rangle \approx \rho_0\, e^{i\bar{Q}r}\, \langle e^{i\alpha(\tau,r)}\, e^{-i\alpha(0,0)}\rangle,    (8)

where \bar{Q} is the renormalized wave vector. When D = 2,

\langle b\rangle = \rho_0^{1/2}\,\langle e^{i\alpha}\rangle = \rho_0^{1/2}\exp\!\left[-\frac{M}{2}\int\frac{d\omega\, d^2p}{(2\pi)^3}\,\frac{1}{\nu^2\omega^2 + p^4 + 4Q^2 p^2\cos^2\varphi}\right] = \rho_0^{1/2}\exp\!\left[-\frac{M}{8\pi\nu}\int\frac{d\varphi}{2\pi}\,\ln\!\big(\Lambda/Q|\cos\varphi|\big)\right] \neq 0.    (9)
Thus the bosons spontaneously choose a wave vector on the circle |Q| = Q and condense. However, at T ≠ 0 the integral diverges in the infrared for D < 4, so there is no condensation.
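A short numerical sketch makes this contrast explicit (illustrative only; rescaled units with Q = ν = 1, an arbitrary UV cutoff, and NumPy/SciPy assumed available): the T = 0 phase-fluctuation integral over the propagator of Eq. (6) in D = 2 saturates as the infrared cutoff is removed, while the classical (zero-Matsubara) contribution at T ≠ 0 grows without bound.

```python
# Illustrative D = 2 phase-fluctuation integrals for the propagator of Eq. (6),
# 1/(nu^2 w^2 + 4 Q^2 px^2 + p^4), with Q = nu = 1 and UV cutoff Lambda.
# Quantum (T = 0): frequency integral done analytically, giving 1/(2 nu E(p)).
# Classical (T != 0, zero Matsubara mode): T / E(p)^2 per mode.
import numpy as np
from scipy import integrate

Q, nu, Lam = 1.0, 1.0, 10.0
E2 = lambda p, phi: 4 * Q**2 * (p * np.cos(phi))**2 + p**4   # gamma*px^2 + p^4

def quantum(p_ir):
    f = lambda p, phi: p / (2 * nu * np.sqrt(E2(p, phi))) / (2 * np.pi)**2
    val, _ = integrate.dblquad(f, 0.0, 2 * np.pi, p_ir, Lam)
    return val

def classical(p_ir, T=0.1):
    f = lambda p, phi: T * p / E2(p, phi) / (2 * np.pi)**2
    val, _ = integrate.dblquad(f, 0.0, 2 * np.pi, p_ir, Lam)
    return val

for p_ir in [1e-1, 1e-2, 1e-3]:
    print(f"IR cutoff {p_ir:7.0e}:  T=0 integral {quantum(p_ir):8.4f}"
          f"   classical integral {classical(p_ir):10.3f}")
# The T = 0 column saturates (the condensate survives); the classical column
# keeps growing as the infrared cutoff is lowered (no condensate at T != 0).
```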
To determine what happens at nonzero temperatures we concentrate on classical fluctuations, corresponding to zero Matsubara frequency. For D = 3 the first correction to the stiffness diverges logarithmically. The renormalization group equations are (γ = 4Q^2):

\frac{d\gamma}{d\xi} = -\gamma^{1/2} M; \qquad \frac{dM}{d\xi} = 10\,\gamma^{-1/2} M^2,    (10)

where ξ = \ln(Q_0/p)/(8\pi), γ = 4Q_0^2\,(M_0/M)^{1/10}, and M = M_0/\left[1 - (19M_0/16\pi Q_0)\ln(Q_0/p)\right]^{20/19}. This implies that for D = 3 at T ≠ 0 the longitudinal stiffness disappears at the momenta p_0 ∼ Q_0\exp\left[-16\pi Q_0/19M_0\right], where M_0 = TmQ_0^2/ρ_0.
At this scale the fluctuations of ∇α become of the order of Q 0 and the spectrum becomes effectively isotropic around Q.
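The following sketch simply integrates the printed flow equations (10) numerically, with arbitrary illustrative initial values; it is meant only to visualize how the one-loop flow runs away at a finite ξ* of order Q_0/M_0, i.e. at a finite momentum scale where the longitudinal stiffness is driven to zero.

```python
# Numerically integrate the flow equations (10),
#   d(gamma)/d(xi) = -sqrt(gamma)*M ,   dM/d(xi) = 10*M**2/sqrt(gamma),
# with illustrative initial values gamma_0 = 4*Q0**2 and M_0 << Q0.
import numpy as np
from scipy.integrate import solve_ivp

Q0, M0 = 1.0, 0.05
gamma0 = 4 * Q0**2

def flow(xi, y):
    gamma, M = y
    return [-np.sqrt(gamma) * M, 10.0 * M**2 / np.sqrt(gamma)]

# The one-loop flow runs away at a finite xi*: M grows without bound while the
# longitudinal stiffness gamma is driven down.  Stop once M has grown by 1e4.
runaway = lambda xi, y: y[1] - 1e4 * M0
runaway.terminal = True

sol = solve_ivp(flow, [0.0, 10.0], [gamma0, M0], events=runaway,
                max_step=0.01, rtol=1e-9)
xi_star = sol.t[-1]
print(f"flow runs away at xi* ~ {xi_star:.2f}  (of order Q0/M0 = {Q0/M0:.0f})")
print(f"corresponding momentum scale: ln(Q0/p0) = 8*pi*xi* ~ {8*np.pi*xi_star:.0f}")
```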
It is interesting that at nonzero frequency the infrared divergence in (7) is cut by the frequency itself. Therefore there is also a frequency scale above which the stiffness remains finite.
As for the broken rotational symmetry associated with the finite current J, the average order parameter remains finite, at least up to some critical temperature:

Q(T) = Q_0 − ⟨(∇α)^2⟩ = Q_0 − \mathrm{const}\cdot T.
Moats in cold nuclear/quarkyonic matter. In this section we would like to comment on the role of a moat spectrum for cold nuclear or quarkyonic matter [1, 2, 30-32]. Consider N_f flavors of massless quarks coupled to an SU(N_c) gauge theory. From the left- and right-handed quarks q^{ia}_{L,R}, one can form the gauge invariant quantity
\Phi^{ab}(x) = \bar{q}^{\,ia}_{L}\, q^{ib}_{R},    (11)
where i, j = 1, ..., N_c are indices for the fundamental representation of the SU(N_c) gauge group, and a, b = 1, ..., N_f for the flavor symmetry of SU(N_f)_L × SU(N_f)_R, which in vacuum breaks spontaneously to SU(N_f). There is also an axial U(1)_A symmetry which is broken dynamically by topologically nontrivial fluctuations, but this probably remains strongly broken until extremely high densities [71].
With dynamical quarks there is no precise measure of confinement, but at asymptotically high temperature or baryon density the pressure approaches that of a nearly ideal gas of quarks and gluons. Our interest here is what happens at low temperature as the quark chemical potential decreases, and one enters a quarkyonic phase [1, 2, 28-68]. While the free energy is approximately that of free quarks, the excitations near the edge of the Fermi surface are confined. As the chemical potential decreases further, quarkyonic matter becomes hadronic, with a free energy which is far from quark-like, and again excitations near the Fermi surface which are confined. This illustrates the basic continuity between hadronic and quarkyonic matter.
Studies in lower dimensional models show that at low temperature and nonzero density a spatially inhomogeneous solution arises in 1 + 1 [51, 67, 68, 72-80] and 2 + 1 dimensions [68, 80, 81]. For spatially inhomogeneous states pairing occurs between a particle at one edge of the Fermi surface, with momentum k_F, and a hole on the other edge, with momentum −k_F, Fig. (3) of Ref. [31]. Because the pairing is between a particle-hole pair, the condensate carries a net momentum 2k_F.
In a gauge theory a gauge invariant order parameter can be constructed in terms of the quark fields. Since the kink crystal is spatially varying, we take the quarks at different points, and following Deryagin, Grigoriev, and Rubakov [82], introduce

G^{ab}(x, y) = \bar{q}^{\,ia}_{L}(x)\left[\mathcal{P}\exp\left(ig\int_{y}^{x} A_\mu(z)\,dz^\mu\right)\right]^{ij} q^{jb}_{R}(y).    (12)

Gauge invariance is ensured by inserting the path-ordered (\mathcal{P}) exponential for the gauge field between \bar{q} and q. When the Fourier transform of the static operator,
G^{ab}(\vec{k}) = \int d^3x\; e^{i\vec{k}\cdot\vec{x}}\, G^{ab}(0, \vec{x};\, 0, 0),    (13)
acquires an expectation value at a given momentum, on the order of 2k_F, a kink crystal develops. It is also possible to look directly at the spatial variation of \Phi^{ab} in Eq. (11). The global symmetry can be enlarged by the spin degrees of freedom. In quarkyonic matter, the flavor SU(N_f) symmetry increases to an SU(2N_f) symmetry of spin and flavor when magnetic interactions can be ignored [30]. Similarly, in nuclear matter an increased spin-flavor symmetry is exact at infinite N_c [83], where it is related to the supermultiplet symmetry of Wigner [84, 85]. Our analysis, which here is entirely qualitative, is very similar in either case.
In greater than one spatial dimension, the direction of the density wave is chosen spontaneously, and the Fermi surface is covered by patches of kink condensates [1, 2, 30-32]. Because kink crystals are spatially periodic, they spontaneously break translational symmetry along the condensate axis, and generate phonons as the associated Goldstone modes. There are non-Abelian phonons, associated with flavor rotations of the matrix field G^{ab}, and an Abelian phonon, associated with the overall U(1) phase of det(G) [30, 32]. In greater than one spatial dimension, the stiffness of the phonons vanishes in the transverse direction, as in Eq. (6). At nonzero temperature, the absence of the transverse stiffness leads to strong fluctuations which generate a finite correlation length for the non-Abelian phonons, while the Abelian phonon remains massless [1, 30, 32]. As the temperature goes to zero, the correlation length for the non-Abelian phonons diverges exponentially, Eq. (21) of Ref. [30]. In contrast, in a phenomenological O(N) sigma model with a moat spectrum, the correlation length for the non-Abelian phonons remains finite even at zero temperature [86]. It is not clear if the difference is relevant, as the mass gap at zero temperature is much smaller than at nonzero temperature. Further, the chiral symmetry is only approximate in QCD, which thus generates a small mass gap for the flavored phonons in any case. The Abelian phonon is related only to fermion number, and so is always massless.
The phonons can play an essential role in transport properties in neutron/quarkyonic stars. Consider, for example, cooling through the emission of neutrinos [87][88][89][90][91][92]. In that case, the flavored phonon can decay into a virtual nucleon pair, and thereby through the weak interaction into a lepton neutrino pair [93]:
\mathcal{L}_W \sim i g_W\left[\bar{e}\,(\vec{Q}\cdot\vec{\gamma})(1-\gamma_5)\,\nu_e + \bar{\mu}\,(\vec{Q}\cdot\vec{\gamma})(1-\gamma_5)\,\nu_\mu\right] \times \mathrm{Tr}\!\left(G^{+}\,\vec{Q}\cdot\nabla G\,\tau^{+}\right),    (14)
where τ + is a Pauli matrix acting on flavor indices and G is the field for the non-Abelian phonon [30]. The Abelian phonon only has diagonal couplings to nucleons, and so only produces neutrinos (and leptons) through processes of second order in the weak interactions. This decay process is analogous to neutrino emission by pion condensates [87][88][89][90][91][92].
How do the nucleons near the Fermi surface contribute to the transport properties? In the quarkyonic phase, a model with a confining potential reduces to QCD in 1 + 1 dimensions at nonzero density. The phase diagram of this model is not known, as there are only results for a single, heavy quark by Bringoltz [74]. For a Nambu-Jona-Lasinio model in 1 + 1 dimensions, using conformal symmetry and the truncated spectrum approach, it has been shown that the theory could be a non-Fermi liquid with gapless but incoherent nucleons [46, 51]. Nucleon operators are expressible in terms of the bosonic fields following the rules of bosonization. Existence of this non-Fermi liquid regime depends upon the value of a parameter, K, which is the coefficient of the kinetic term for the Abelian phonon. Making the simplest assumption that K = 1, corresponding to weak interactions, the NJL model at nonzero density is a non-Fermi liquid, a type of "strange" metal familiar from high-T_c superconductivity [94] and holography [95].
If this applies to the quarkyonic phase, the baryons do not contribute to the transport properties, which instead are dominated by Abelian and non-Abelian phonons. In the nuclear phase, this is less clear: as Goldstone bosons, the phonons only couple to the nucleons (near the Fermi surface) through derivative interactions [96]. Such soft interactions are unlikely to produce a non-Fermi liquid.
We conclude by noting that while considerable effort has been devoted to finding explicit solutions for pion condensates [10-23], kaon condensates [24-27], and quarkyonic chiral spirals [31-68], the detailed dynamics necessarily involves the effect of fluctuations, especially from the non-Abelian and Abelian phonons. Lastly, the possibility of quarkyonic matter forming a non-Fermi liquid suggests that it is well worth trying to understand the phase diagram of QCD in 1 + 1 dimensions.
SUPPLEMENTARY MATERIAL
In this Section for the sake of completeness we repeat the calculations for the O(N) nonlinear sigma model from [2].
We consider the Lagrangian density in D + 1 dimensions
\mathcal{L} = \frac{N}{2g}\left[(\partial_\tau \mathbf{n})^2 + \frac{1}{m^2}\big((\nabla^2 + Q^2)\mathbf{n}\big)^2\right], \qquad \sum_{a=1}^{N} n_a^2 = 1.    (15)
We will treat this model in the large-N approximation. The Green's function is

\langle n_a(-\omega_n, -p)\, n_a(\omega_n, p)\rangle = \frac{g}{N}\,\frac{1}{\omega_n^2 + \frac{1}{m^2}(p^2 - Q^2)^2 + M^2}.    (16)

The saddle point condition is
1/g = T\sum_n \int\frac{d^D p}{(2\pi)^D}\,\frac{1}{\omega_n^2 + \frac{1}{m^2}(p^2 - Q^2)^2 + M^2}.    (17)
At T = 0 we have

2/g = \int\frac{d^D p}{(2\pi)^D}\,\frac{1}{\left[\frac{1}{m^2}(p^2 - Q^2)^2 + M^2\right]^{1/2}}.    (18)
The gap M is finite for any D; in particular, for D = 2 we have

M = \frac{(p_{\max}^2 - Q^2)^{1/2}\, Q}{m}\,\exp\left(-4\pi/mg\right).    (19)
Hence the spectrum is gapped but retains its moat-like form, unlike the U(1) model.
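As an illustration of Eqs. (18)-(19), the following sketch (illustrative units with m = Q = 1, an arbitrary UV cutoff p_max, and NumPy/SciPy assumed available) solves the D = 2, T = 0 gap equation numerically for a few couplings; the resulting gap shows the exponential sensitivity ∼ exp(−4π/mg) of Eq. (19).

```python
# Solve the D = 2, T = 0 gap equation (18),
#   2/g = \int d^2p/(2pi)^2 [ (p^2 - Q^2)^2/m^2 + M^2 ]^(-1/2),
# for the gap M (illustrative parameter values only).
import numpy as np
from scipy import integrate, optimize

m, Q, p_max = 1.0, 1.0, 3.0

def rhs(M):
    # the angular integration is trivial: d^2p/(2pi)^2 = p dp/(2pi)
    f = lambda p: p / (2 * np.pi) / np.sqrt((p**2 - Q**2)**2 / m**2 + M**2)
    val, _ = integrate.quad(f, 0.0, p_max, points=[Q], limit=200)
    return val

for g in [2.0, 1.5, 1.2, 1.0]:
    M = optimize.brentq(lambda M: rhs(M) - 2.0 / g, 1e-12, 10.0)
    print(f"g = {g:4.2f}:  gap M = {M:.3e},   exp(-4*pi/(m*g)) = "
          f"{np.exp(-4 * np.pi / (m * g)):.3e}")
```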
A.M.T. was supported by the U.S. Department of Energy, Office of Science, Materials Sciences and Engineering Division under contract DE-SC0012704. R.D.P. was supported by the U.S. Department of Energy under contract DE-SC001270 and by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Co-design Center for Quantum Advantage (C 2 QA) under contract DE-SC001270. A.M.T. and R.D.P. were also supported by B.N.L. under the Lab Directed Research and Development program 18-036. A.M.T. is grateful to L. Glazman for early discussions; R.D.P. thanks J. Schaffner-Bielich and J. Lattimer for discussions. We thank E. Lake, S. Sur, X.-G. Zhang and G. Chen for discussions of their work.
[1] Robert D. Pisarski, Alexei M. Tsvelik, and Semeon Valgushev, "How transverse thermal fluctuations disorder a condensate of chiral spirals into a quantum spin liquid," Phys. Rev. D 102, 016015 (2020), arXiv:2005.10259 [hep-ph].
[2] Robert D. Pisarski, "Remarks on nuclear matter: how an ω0 condensate can spike the speed of sound, and a model of Z(3) baryons."
[...] Phys. Rev. D 7, 953-964 (1973).
[15] A. B. Migdal, "Pi condensation in nuclear matter," Phys. Rev. Lett. 31, 257-260 (1973).
[16] Arkady B. Migdal, "Pion Fields in Nuclear Matter," Rev. Mod. Phys. 50, 107-172 (1978).
[17] Arkady B. Migdal, E. E. Saperstein, M. A. Troitsky, and D. N. Voskresensky, "Pion degrees of freedom in nuclear matter," Phys. Rept. 192, 179-437 (1990).
[18] H. Kleinert, "No pion condensate in nuclear matter due to fluctuations," Phys. Lett. B 102, 1-5 (1981).
[19] G. Baym, B. L. Friman, and G. Grinstein, "Fluctuations and long range order in finite temperature pion condensates," Nucl. Phys. B 210, 193-209 (1982).
[20] K. Kolehmainen and G. Baym, "Pion condensation at finite temperature. Simple models including thermal excitations of the pion field," Nucl. Phys. A 382, 528-541 (1982).
[21] G. G. Bunatian and I. N. Mishustin, "Thermodynamical theory of pion condensation," Nucl. Phys. A 404, 525-550 (1983).
[22] T. Takatsuka and R. Tamagaki, "π0 condensation in dense symmetric nuclear matter at finite temperature," Prog. Theor. Phys. 77, 362-375 (1987).
[23] H. Kleinert and B. Van den Bossche, "No massless pions in Nambu-Jona-Lasinio model due to chiral fluctuations," arXiv:hep-ph/9908284 [hep-ph].
[24] D. B. Kaplan and A. E. Nelson, "Strange Goings on in Dense Nucleonic Matter," Phys. Lett. B 175, 57-63 (1986).
[25] G. E. Brown, Chang-Hwan Lee, Mannque Rho, and Vesteinn Thorsson, "From kaon-nuclear interactions to kaon condensation," Nucl. Phys. A 567, 937-956 (1994), arXiv:hep-ph/9304204.
[26] G. E. Brown and Mannque Rho, "From chiral mean field to Walecka mean field and kaon condensation," Nucl. Phys. A 596, 503-514 (1996), arXiv:nucl-th/9507028.
[27] Gerald E. Brown, Chang-Hwan Lee, and Mannque Rho, "Recent Developments on Kaon Condensation and Its Astrophysical Implications," Phys. Rept. 462, 1-20 (2008), arXiv:0708.3137 [hep-ph].
[28] Larry McLerran and Robert D. Pisarski, "Phases of cold, dense quarks at large N(c)," Nucl. Phys. A 796, 83-100 (2007), arXiv:0706.2191 [hep-ph].
[29] Larry McLerran and Sanjay Reddy, "Quarkyonic Matter and Neutron Stars," Phys. Rev. Lett. 122, 122701 (2019), arXiv:1811.12503 [nucl-th].
[30] Robert D. Pisarski, Vladimir V. Skokov, and Alexei M. Tsvelik, "Fluctuations in cool quark matter and the phase diagram of Quantum Chromodynamics," (2018), arXiv:1801.08156 [hep-ph].
[31] Toru Kojo, Yoshimasa Hidaka, Larry McLerran, and Robert D. Pisarski, "Quarkyonic Chiral Spirals," Nucl. Phys. A 843, 37-58 (2010), arXiv:0912.3800 [hep-ph].
[32] Toru Kojo, Robert D. Pisarski, and A. M. Tsvelik, "Covering the Fermi Surface with Patches of Quarkyonic Chiral Spirals," Phys. Rev. D 82, 074015 (2010), arXiv:1007.0248 [hep-ph].
[33] Toru Kojo, Yoshimasa Hidaka, Kenji Fukushima, Larry D. McLerran, and Robert D. Pisarski, "Interweaving Chiral Spirals," Nucl. Phys. A 875, 94-138 (2012), arXiv:1107.2124 [hep-ph].
[34] Toru Kojo, "Chiral Spirals from Noncontinuous Chiral Symmetry: The Gross-Neveu model results," Phys. Rev. D 90, 065030 (2014), arXiv:1406.4630 [hep-ph].
[35] Dominik Nickel, "How many phases meet at the chiral critical point?" Phys. Rev. Lett. 103, 072301 (2009), arXiv:0902.1778 [hep-ph].
[36] Dominik Nickel, "Inhomogeneous phases in the Nambu-Jona-Lasinio and quark-meson model," Phys. Rev. D 80, 074025 (2009), arXiv:0906.5295 [hep-ph].
[37] Michael Buballa and Stefano Carignano, "Inhomogeneous chiral condensates," Prog. Part. Nucl. Phys. 81, 39-96 (2015), arXiv:1406.1367 [hep-ph].
[38] Stefano Carignano, Michael Buballa, and Bernd-Jochen Schaefer, "Inhomogeneous phases in the quark-meson model with vacuum fluctuations," Phys. Rev. D 90, 014033 (2014), arXiv:1404.0057 [hep-ph].
[39] Yoshimasa Hidaka, Kazuhiko Kamikado, Takuya Kanazawa, and Toshifumi Noumi, "Phonons, pions and quasi-long-range order in spatially modulated chiral condensates," Phys. Rev. D 92, 034003 (2015), arXiv:1505.00848 [hep-ph].
[40] Tong-Gyu Lee, Eiji Nakano, Yasuhiko Tsue, Toshitaka Tatsumi, and Bengt Friman, "Landau-Peierls instability in a Fulde-Ferrell type inhomogeneous chiral condensed phase," Phys. Rev. D 92, 034024 (2015), arXiv:1504.03185 [hep-ph].
[41] Michael Buballa and Stefano Carignano, "Inhomogeneous chiral symmetry breaking in dense neutron-star matter," Eur. Phys. J. A 52, 57 (2016), arXiv:1508.04361 [nucl-th].
[42] Jens Braun, Felix Karbstein, Stefan Rechenberger, and Dietrich Roscher, "Crystalline ground states in Polyakov-loop extended Nambu-Jona-Lasinio models," Phys. Rev. D 93, 014032 (2016), arXiv:1510.04012 [hep-ph].
[43] S. Carignano, E. J. Ferrer, V. de la Incera, and L. Paulucci, "Crystalline chiral condensates as a component of compact stars," Phys. Rev. D 92, 105018 (2015), arXiv:1505.05094 [nucl-th].
[44] Achim Heinz, Francesco Giacosa, Marc Wagner, and Dirk H. Rischke, "Inhomogeneous condensation in effective models for QCD using the finite-mode approach," Phys. Rev. D 93, 014007 (2016), arXiv:1508.06057 [hep-ph].
[45] Stefano Carignano, Michael Buballa, and Wael Elkamhawy, "Consistent parameter fixing in the quark-meson model with vacuum fluctuations," Phys. Rev. D 94, 034023 (2016), arXiv:1606.08859 [hep-ph].
[46] P. Azaria, R. M. Konik, Ph. Lecheminant, T. Palmai, G. Takacs, and A. M. Tsvelik, "Particle Formation and Ordering in Strongly Correlated Fermionic Systems: Solving a Model of Quantum Chromodynamics," Phys. Rev. D 94, 045003 (2016), arXiv:1601.02979 [hep-th].
[47] Prabal Adhikari and Jens O. Andersen, "Consistent regularization and renormalization in models with inhomogeneous phases," Phys. Rev. D 95, 036009 (2017), arXiv:1608.01097 [hep-ph].
[48] Prabal Adhikari and Jens O. Andersen, "Chiral density wave versus pion condensation in the 1+1 dimensional NJL model," Phys. Rev. D 95, 054020 (2017), arXiv:1610.01647 [hep-th].
[49] Jens O. Andersen and Patrick Kneschke, "Inhomogeneous phases at finite density in an external magnetic field," (2017), arXiv:1710.08341 [hep-ph].
[50] Prabal Adhikari, Jens O. Andersen, and Patrick Kneschke, "Inhomogeneous chiral condensate in the quark-meson model," Phys. Rev. D 96, 016013 (2017), arXiv:1702.01324 [hep-ph].
[51] Andrew J. A. James, Robert M. Konik, Philippe Lecheminant, Neil J. Robinson, and Alexei M. Tsvelik, "Non-perturbative methodologies for low-dimensional strongly-correlated systems: From non-abelian bosonization to truncated spectrum methods," Rept. Prog. Phys. 81, 046002 (2018), arXiv:1703.08421 [cond-mat.str-el].
[52] Stefano Carignano, Luca Lepori, Andrea Mammarella, Massimo Mannarelli, and Giulia Pagliaroli, "Scrutinizing the pion condensed phase," Eur. Phys. J. A 53, 35 (2017), arXiv:1610.06097 [hep-ph].
[53] T. G. Khunjua, K. G. Klimenko, R. N. Zhokhov, and V. C. Zhukovsky, "Inhomogeneous charged pion condensation in chiral asymmetric dense quark matter in the framework of NJL2 model," Phys. Rev. D 95, 105010 (2017), arXiv:1704.01477 [hep-ph].
[54] T. G. Khunjua, K. G. Klimenko, and R. N. Zhokhov, "Dense baryon matter with isospin and chiral imbalance in the framework of NJL4 model at large Nc: duality between chiral symmetry breaking and charged pion condensation," Phys. Rev. D 97, 054036 (2018), arXiv:1710.09706 [hep-ph].
[55] Jens O. Andersen and Patrick Kneschke, "Chiral density wave versus pion condensation at finite density and zero temperature," Phys. Rev. D 97, 076005 (2018), arXiv:1802.01832 [hep-ph].
[56] Stefano Carignano, Marco Schramm, and Michael Buballa, "Influence of vector interactions on the favored shape of inhomogeneous chiral condensates," Phys. Rev. D 98, 014033 (2018), arXiv:1805.06203 [hep-ph].
[57] Michael Buballa and Stefano Carignano, "Inhomogeneous chiral phases away from the chiral limit," (2018), arXiv:1809.10066 [hep-ph].
[58] T. G. Khunjua, K. G. Klimenko, and R. N. Zhokhov, "Dualities in dense quark matter with isospin, chiral, and chiral isospin imbalance in the framework of the large-Nc limit of the NJL4 model," Phys. Rev. D 98, 054030 (2018), arXiv:1804.01014 [hep-ph].
[59] T. G. Khunjua, K. G. Klimenko, and R. N. Zhokhov, "Chiral imbalanced hot and dense quark matter: NJL analysis at the physical point and comparison with lattice QCD," Eur. Phys. J. C 79, 151 (2019), arXiv:1812.00772 [hep-ph].
[60] Stefano Carignano and Michael Buballa, "Inhomogeneous chiral condensates in three-flavor quark matter," Phys. Rev. D 101, 014026 (2020), arXiv:1910.03604 [hep-ph].
[61] T. G. Khunjua, K. G. Klimenko, and R. N. Zhokhov, "Dualities and inhomogeneous phases in dense quark matter with chiral and isospin imbalances in the framework of effective model," JHEP 06, 006 (2019), arXiv:1901.02855 [hep-ph].
[62] T. G. Khunjua, K. G. Klimenko, and R. N. Zhokhov, "Charged pion condensation and duality in dense and hot chirally and isospin asymmetric quark matter in the framework of the NJL2 model," Phys. Rev. D 100, 034009 (2019), arXiv:1907.04151 [hep-ph].
[63] Michael Thies, "Phase structure of the (1+1)-dimensional Nambu-Jona-Lasinio model with isospin," Phys. Rev. D 101, 014010 (2020), arXiv:1911.11439 [hep-th].
[64] Michael Thies, "First-order phase boundaries of the massive 1+1 dimensional Nambu-Jona-Lasinio model with isospin," (2020), arXiv:2002.01190 [hep-th].
[65] Laurin Pannullo, Julian Lenz, Marc Wagner, Björn Wellegehausen, and Andreas Wipf, "Inhomogeneous phases in the 1+1 dimensional Gross-Neveu model at finite number of fermion flavors," Acta Phys. Polon. Supp. 13, 127 (2020), arXiv:1902.11066 [hep-lat].
[66] Laurin Pannullo, Julian Lenz, Marc Wagner, Björn Wellegehausen, and Andreas Wipf, "Lattice investigation of the phase diagram of the 1+1 dimensional Gross-Neveu model at finite number of fermion flavors," in 37th International Symposium on Lattice Field Theory (2019), arXiv:1909.11513 [hep-lat].
[67] Julian Lenz, Laurin Pannullo, Marc Wagner, Björn Wellegehausen, and Andreas Wipf, "Inhomogeneous phases in the Gross-Neveu model in 1+1 dimensions at finite number of flavors," (2020), arXiv:2004.00295 [hep-lat].
[68] Rajamani Narayanan, "Phase diagram of the large N Gross-Neveu model in a finite periodic box," (2020), arXiv:2001.09200 [hep-th].
[69] S. A. Brazovskii, "Phase transition of an isotropic system to a nonuniform state," Zh. Eksp. Teor. Fiz., 175-185 (1975).
[70] An-Chang Shi, "Nature of anisotropic fluctuation modes in ordered systems," Journal of Physics: Condensed Matter 11, 10183-10197 (1999).
[71] Robert D. Pisarski and Fabian Rennecke, "Multi-instanton contributions to anomalous quark interactions," Phys. Rev. D 101, 114019 (2020), arXiv:1910.14052 [hep-ph].
[72] Verena Schon and Michael Thies, "Emergence of Skyrme crystal in Gross-Neveu and 't Hooft models at finite density," Phys. Rev. D 62, 096002 (2000), arXiv:hep-th/0003195.
[73] Oliver Schnetz, Michael Thies, and Konrad Urlichs, "Phase diagram of the Gross-Neveu model: Exact results and condensed matter precursors," Annals Phys. 314, 425-447 (2004), arXiv:hep-th/0402014.
[74] Barak Bringoltz, "Chiral crystals in strong-coupling lattice QCD at nonzero chemical potential," JHEP 03, 016 (2007), arXiv:hep-lat/0612010.
[75] Michael Thies, "From relativistic quantum fields to condensed matter and back again: Updating the Gross-Neveu phase diagram," J. Phys. A 39, 12707-12734 (2006), arXiv:hep-th/0601049.
[76] Gokce Basar and Gerald V. Dunne, "Self-consistent crystalline condensate in chiral Gross-Neveu and Bogoliubov-de Gennes systems," Phys. Rev. Lett. 100, 200404 (2008), arXiv:0803.1501 [hep-th].
[77] Gokce Basar and Gerald V. Dunne, "A Twisted Kink Crystal in the Chiral Gross-Neveu model," Phys. Rev. D 78, 065022 (2008), arXiv:0806.2659 [hep-th].
[78] Gokce Basar, Gerald V. Dunne, and Michael Thies, "Inhomogeneous Condensates in the Thermodynamics of the Chiral NJL(2) model," Phys. Rev. D 79, 105012 (2009), arXiv:0903.1868 [hep-th].
[79] P. Azaria, R. M. Konik, Ph. Lecheminant, T. Palmai, G. Takacs, and A. M. Tsvelik, "Particle Formation and Ordering in Strongly Correlated Fermionic Systems: Solving a Model of Quantum Chromodynamics," Phys. Rev. D 94, 045003 (2016), arXiv:1601.02979 [hep-th].
[80] R. Narayanan, "Relevance of the three-dimensional Thirring coupling at finite temperature and density," Phys. Rev. D 102, 016014 (2020), arXiv:2006.00608 [hep-th].
[81] Michael Buballa, Lennart Kurth, Marc Wagner, and Marc Winstel, "Regulator dependence of inhomogeneous phases in the 2+1-dimensional Gross-Neveu model," Phys. Rev. D 103, 034503 (2021), arXiv:2012.09588 [hep-lat].
[82] D. V. Deryagin, Dmitri Yu. Grigoriev, and V. A. Rubakov, "Standing wave ground state in high density, zero temperature QCD at large N(c)," Int. J. Mod. Phys. A 7, 659-681 (1992).
[83] David B. Kaplan and Martin J. Savage, "The Spin flavor dependence of nuclear forces from large n QCD," Phys. Lett. B 365, 244-251 (1996), arXiv:hep-ph/9509371.
[84] E. Wigner, "On the Consequences of the Symmetry of the Nuclear Hamiltonian on the Spectroscopy of Nuclei," Phys. Rev. 51, 106-119 (1937).
[85] E. Wigner, "On the Structure of Nuclei Beyond Oxygen," Phys. Rev. 51, 947-958 (1937).
[86] Using an O(N) linear sigma model with higher spatial derivatives, at T ≠ 0 the effective mass gap δm_eff^{T≠0} ∼ (λNT)^2/(Z^4 M) as Z → −∞, where M is a mass parameter for the higher spatial derivatives, λ the quartic coupling of the O(N) model, and λN ∼ 1 as N → ∞, Eq. (12) of Ref. [30]. At T = 0, δm_eff^{T=0} ∼ M exp(−π^2(−2Z)^{3/2}/(λN)) in the same limit, Eq. (12) of Ref. [2]. In the Supplementary Material, the analogous quantity is derived at zero temperature in a nonlinear sigma model, see Eq. (19). Thus in the linear sigma model the transition from nonzero temperature to zero temperature occurs when T ∼ M exp(−π^2(−2Z)^{3/2}/(2λN)), which, at least for large −Z, is very small.
[87] D. G. Yakovlev, A. D. Kaminker, Oleg Y. Gnedin, and P. Haensel, "Neutrino emission from neutron stars," Phys. Rept. 354, 1 (2001), arXiv:astro-ph/0012122.
[88] Dima G. Yakovlev and C. J. Pethick, "Neutron star cooling," Ann. Rev. Astron. Astrophys. 42, 169-210 (2004), arXiv:astro-ph/0402143.
[89] Dany Page, James M. Lattimer, Madappa Prakash, and Andrew W. Steiner, "Minimal cooling of neutron stars: A New paradigm," Astrophys. J. Suppl. 155, 623-650 (2004), arXiv:astro-ph/0403657.
[90] Dany Page, Madappa Prakash, James M. Lattimer, and Andrew W. Steiner, "Rapid Cooling of the Neutron Star in Cassiopeia A Triggered by Neutron Superfluidity in Dense Matter," Phys. Rev. Lett. 106, 081101 (2011), arXiv:1011.6142 [astro-ph.HE].
[91] Peter S. Shternin, Dmitry G. Yakovlev, Craig O. Heinke, Wynn C. G. Ho, and Daniel J. Patnaude, "Cooling neutron star in the Cassiopeia A supernova remnant: Evidence for superfluidity in the core," Mon. Not. Roy. Astron. Soc. 412, L108-L112 (2011), arXiv:1012.0045 [astro-ph.SR].
[92] A. Y. Potekhin, J. A. Pons, and Dany Page, "Neutron stars - cooling and transport," Space Sci. Rev. 191, 239-291 (2015), arXiv:1507.06186 [astro-ph.HE].
[93] Naoki Iwamoto, "Neutrino emissivities and mean free paths of degenerate quark matter," Annals Phys. 141, 1-49 (1982).
[94] Sung-Sik Lee, "Recent Developments in Non-Fermi Liquid Theory," Ann. Rev. Condensed Matter Phys. 9, 227-244 (2018), arXiv:1703.08172 [cond-mat.str-el].
[95] Sean A. Hartnoll, Andrew Lucas, and Subir Sachdev, "Holographic quantum matter," (2016), arXiv:1612.07324 [hep-th].
[96] The Lagrangian coupling the nucleons to the phonons is L_{NNχ} = \int d^3x\, N\left[\vec{Q}\cdot\partial\chi + \tfrac{1}{2}(\partial\chi)^2\right] N. The anisotropic coupling of the phonon is dictated by the spontaneous breaking of the rotational symmetry, as for the phonon Lagrangian. Since the kink crystal does not need to respect reflection symmetry along the axis of the kink crystal, a coupling linear in \vec{Q}\cdot\partial\chi is allowed.
|
[] |
[
"ARAPrototyper: Enabling Rapid Prototyping and Evaluation for Accelerator-Rich Architectures",
"ARAPrototyper: Enabling Rapid Prototyping and Evaluation for Accelerator-Rich Architectures"
] |
[
"Yu-Ting Chen [email protected] \nCenter for Domain-Specific Computing\nUniversity of California\nLos Angeles\n",
"Jason Cong [email protected] \nCenter for Domain-Specific Computing\nUniversity of California\nLos Angeles\n",
"Zhenman Fang [email protected] \nCenter for Domain-Specific Computing\nUniversity of California\nLos Angeles\n",
"Bingjun Xiao [email protected] \nCenter for Domain-Specific Computing\nUniversity of California\nLos Angeles\n",
"Peipei Zhou \nCenter for Domain-Specific Computing\nUniversity of California\nLos Angeles\n"
] |
[
"Center for Domain-Specific Computing\nUniversity of California\nLos Angeles",
"Center for Domain-Specific Computing\nUniversity of California\nLos Angeles",
"Center for Domain-Specific Computing\nUniversity of California\nLos Angeles",
"Center for Domain-Specific Computing\nUniversity of California\nLos Angeles",
"Center for Domain-Specific Computing\nUniversity of California\nLos Angeles"
] |
[] |
Compared to conventional general-purpose processors, accelerator-rich architectures (ARAs) can provide ordersof-magnitude performance and energy gains and are emerging as one of the most promising solutions in the age of dark silicon. However, many design issues related to the complex interaction between general-purpose cores, accelerators, customized on-chip interconnects, and memory systems remain unclear and difficult to evaluate.In this paper we design and implement the ARAPrototyper to enable rapid design space explorations for ARAs in real silicons and reduce the tedious prototyping efforts far down to manageable efforts. First, ARAPrototyper provides a reusable baseline prototype with a highly customizable memory system, including interconnect between accelerators and buffers, interconnect between buffers and last-level cache (LLC) or DRAM, coherency choice at LLC or DRAM, and address translation support. Second, ARAPrototyper provides a clean interface to quickly integrate users' own accelerators written in high-level synthesis (HLS) code. The whole design flow is highly automated to generate a prototype of ARA on an FPGA system-on-chip (SoC). Third, to quickly develop applications that run seamlessly on the ARA prototype, ARAPrototyper provides a system software stack, abstracts the accelerators as software libraries, and provides APIs for software developers. Our experimental results demonstrate that ARAPrototyper enables a wide range of design space explorations for ARAs at manageable prototyping efforts, which has 4,000X to 10,000X faster evaluation time than fullsystem simulations. We believe that ARAPrototyper can be an attractive alternative for ARA design and evaluation.Index Terms-accelerator-rich architecture, FPGA prototyping, hardware/software co-design, interconnect synthesis, system on chip, design reuse.
| null |
[
"https://arxiv.org/pdf/1610.09761v1.pdf"
] | 16,995,638 |
1610.09761
|
10983e512d54126e4e1eeb70964a81f8970abb1f
|
ARAPrototyper: Enabling Rapid Prototyping and Evaluation for Accelerator-Rich Architectures
Yu-Ting Chen [email protected]
Center for Domain-Specific Computing
University of California
Los Angeles
Jason Cong [email protected]
Center for Domain-Specific Computing
University of California
Los Angeles
Zhenman Fang [email protected]
Center for Domain-Specific Computing
University of California
Los Angeles
Bingjun Xiao [email protected]
Center for Domain-Specific Computing
University of California
Los Angeles
Peipei Zhou
Center for Domain-Specific Computing
University of California
Los Angeles
ARAPrototyper: Enabling Rapid Prototyping and Evaluation for Accelerator-Rich Architectures
1
Compared to conventional general-purpose processors, accelerator-rich architectures (ARAs) can provide ordersof-magnitude performance and energy gains and are emerging as one of the most promising solutions in the age of dark silicon. However, many design issues related to the complex interaction between general-purpose cores, accelerators, customized on-chip interconnects, and memory systems remain unclear and difficult to evaluate.In this paper we design and implement the ARAPrototyper to enable rapid design space explorations for ARAs in real silicons and reduce the tedious prototyping efforts far down to manageable efforts. First, ARAPrototyper provides a reusable baseline prototype with a highly customizable memory system, including interconnect between accelerators and buffers, interconnect between buffers and last-level cache (LLC) or DRAM, coherency choice at LLC or DRAM, and address translation support. Second, ARAPrototyper provides a clean interface to quickly integrate users' own accelerators written in high-level synthesis (HLS) code. The whole design flow is highly automated to generate a prototype of ARA on an FPGA system-on-chip (SoC). Third, to quickly develop applications that run seamlessly on the ARA prototype, ARAPrototyper provides a system software stack, abstracts the accelerators as software libraries, and provides APIs for software developers. Our experimental results demonstrate that ARAPrototyper enables a wide range of design space explorations for ARAs at manageable prototyping efforts, which has 4,000X to 10,000X faster evaluation time than fullsystem simulations. We believe that ARAPrototyper can be an attractive alternative for ARA design and evaluation.Index Terms-accelerator-rich architecture, FPGA prototyping, hardware/software co-design, interconnect synthesis, system on chip, design reuse.
I. INTRODUCTION
T HE scaling of conventional multicore processors has been limited by the power and utilization walls because most portions of future chips cannot be simultaneously powered up. This unpowered portion is referred to as dark silicon [1] [2]. Customized acceleration [3][4] [5][6] [7][8] [9][10] [11] [2] has proved to be one of the most promising solutions to address this issue. Compared to conventional general-purpose processors, these customized accelerators can provide ordersof-magnitude performance improvement and energy savings. Recently, more accelerators are being integrated into the general-purpose processors; this new architecture is referred to as the accelerator-rich architecture (ARA) [12][13] [14]. Due to the significant performance and energy gains, numerous ARA efforts have been reported from both academia (such as research in [12][13] [14]) and industry (such as the IBM wirespeed processors in server markets [15] and the Intel video streaming processors in consumer markets [16]).
However, the accelerator-rich architectures (ARAs) are still in the early stages of development and many design issues, especially system-level issues, remain unclear and difficult to evaluate. Examples include efficient accelerator resource management, design choices of interconnect between accelerators and scratchpad buffers, interconnect between scratchpad buffers and LLC or DRAM, efficient address translation support, etc. Therefore, a research platform that can enable rapid ARA design space explorations will be extremely useful.
In prior work, there are two major approaches used to explore the ARA design spaces: 1) full-system simulation [17][12] [13][18] [6] [8], and 2) FPGA prototyping [19][20] [5][21] [9]. As shown in Figure 1, full-system simulators are very flexible when changing configurations and require little development effort to conduct design space explorations. However, the simulation time is very long and usually three to four orders-of-magnitude slower than native execution. On the other hand, FPGA prototyping provides rapid evaluation from real silicons, and it has gained increased attention. An FPGA prototype is a realization of the targeted ASIC design, which allows users to run real-life applications on the prototype at native speed and helps developers to verify the robustness of the design before taping out a chip. However, the tedious efforts for existing FPGA prototyping flows have impeded the wide adoption of FPGA prototyping for architectural design space exploration. The goal of the ARAPrototyper is to reduce the prototyping efforts to the extent that is manageable and enable both rapid prototyping and rapid evaluation/verification for ARAs. The major burden of FPGA prototyping for full-system evaluation involves significant design, implementation, and verification efforts. A robust FPGA prototype developed from scratch usually needs a very long development cycle because it requires a wide range of background knowledge, such as hardware accelerator design, system software stack support (including drivers), and application programming interfaces (APIs) design. Existing FPGA prototypes, like LegUp [22] [23] [21] [24] and CoRAM [25] [26], take years of engineering efforts for initial development and continuous improvement. An FPGA prototype developed for architectural design space exploration purposes imposes more challenges. First, architects usually want to explore different designs of ARAs or improve their ARAs in an incremental way. To reduce their burden, we should design our ARAPrototyper such that our baseline ARA prototype is highly reusable and customizable to avoid rebuilding the system from scratch. Second, users may want to add their own accelerators into the reusable baseline prototype for system-level evaluation, which still requires hundreds of lines of HLS code simply for integration in state-of-the-art FPGA prototyping flows-such as our prior effort PARC [20]. Therefore, a decent automation flow with a clean customization interface should be provided so that users can change a few lines of code and push a button to generate their own ARAs.
In this paper we present our latest prototyping flow ARA-Prototyper, which enables rapid design space explorations for ARAs in native execution time. Compared to our first generation prototyping flow PARC [20], ARAPrototyper has been significantly improved (discussed in Section II-A) and has been published in two posters [27] [28]. The built-in optimizer for customized interconnect between accelerators, buffers, and LLC or DRAM, has been published in [29]. In this work we choose the modern Xilinx R Zynq SoC [30], which is composed of a dual-core ARM Cortex-A9 CPU and FPGA fabrics, 1 as our underlying prototyping platform. To reduce the prototyping efforts for ARA design space explorations, we provide the following features in ARAPrototyper.
1. We develop a reusable and highly customizable baseline prototype for users to evaluate the performance of their ARAs. First, a shared memory architecture has been provided as highly parameterized hardware templates in the baseline prototype. Users can easily configure the interconnect topology between the accelerators and buffers, the interconnect topology between buffers and LLC or DRAM, coherency choice at LLC or DRAM, and TLB (translationaside buffer) sizes in the ARA specification file without writing RTL or HLS codes. Second, to gain more insight into the performance evaluation, we add a few performance counters at the accelerator side to monitor DRAM and TLB accesses. We also leverage the existing performance counters on the ARM CPU. These can significantly reduce system design efforts and improve the quality of evaluation. 2. To further reduce the efforts of the accelerator design, we support the integration of accelerators that are written in high-level synthesis (HLS) into our ARAPrototyper. More importantly, we provide a clean accelerator integration interface for users to integrate their own accelerators by abstracting away common functionalities such as issuing memory access requests and invoking address translations. Users just have to specify a few parameters and invoke the computation kernels of their own accelerators. The whole flow to integrate users' own accelerators with their customized ARA prototype is highly automated. 3. We provide a system software stack that supports users in compiling and running their applications seamlessly on their customized ARA prototype. For users to quickly develop their applications that use those accelerators, we abstract accelerators as software libraries and provide userfriendly C/C++ APIs to manipulate accelerators. To demonstrate the above design space exploration capability of the ARAPrototyper, we choose the medical imaging pipeline [32] as our main application domain for case studies. In order to illustrate the manageable prototyping efforts, we further integrate existing HLS-synthesisable accelerators from the widely used accelerator benchmark suite MachSuite [33] into ARAPrototyper. Only a few lines of code (LOCs) are needed for the integration compared to hundreds of LOCs in recent ARA prototyping work such as PARC [20]. Finally, we also compare the evaluation time of ARAPrototyper to that of the state-of-the-art full-system ARA simulator PARADE [17] by running a set of common medical imaging applications with different input sizes. ARAPrototyper achieves a 4,000X to 10,000X faster evaluation time. We believe that ARAPrototyper can be an attractive alternative to current full-system simulators for rapid ARA design and evaluation. In summary, this paper makes the following contributions: 1. Rapid FPGA prototyping for ARA design space explorations by providing a highly customizable baseline prototype with performance counters, a clean interface and automation flow to integrate the users' own HLSsynthesisable accelerators, a system software stack and accelerator APIs to quickly develop applications that can run seamlessly on the prototype. 2. Rapid evaluation of ARA designs in native execution time, which is about 4,000X to 10,000X faster than the state-ofthe-art full-system ARA simulator PARADE. 3. Case studies demonstrating ARAPrototyper's capability for a wide range of ARA design space explorations, manageable prototyping efforts, and rapid evaluation time. 
Table I summarizes the evaluation methodologies in existing accelerator-related research. Basically, we can divide them into two major categories: simulation-based evaluation and FPGA prototyping-based evaluation. PARC [20] The simulation methodologies can be further divided into the following four categories: 1) pre-RTL simulation [34], 2) RTL simulation [14] [11], 3) cycle-accurate simulation [7][10] [2], and 4) full-system cycle-accurate simulation [17][12] [13][18] [6] [8]. First, except for the pre-RTL simulation, all other simulations take a very long evaluation time that is orders-of-magnitude slower than the native execution. Second, the pre-RTL simulator Aladdin [34] uses dynamic data dependence graphs to model an accelerator, where the model depends on the input changes. More importantly, Aladdin only simulates the accelerator itself and lacks integration with fullsystem simulators to enable system-level exploration. Third, except for PARADE [17], all (full-system) cycle-accurate simulators also need to implement the accelerator design in RTL, which results in tedious efforts. Finally, PARADE is the state-of-the-art full-system cycle-accurate ARA simulator that provides various design space exploration choices. PARADE extends the widely used gem5 [36] simulator with HLS support to reduce the efforts of modeling the accelerators. We will compare the evaluation time of our ARAPrototyper to PARADE in Section VI-C.
II. BACKGROUND AND MOTIVATION
Compared to the long-running simulation, FPGA prototyping [19] [9] is gradually gaining increased attention because it enables native measurement of the performance and power in real silicons. However, the tedious prototyping efforts impede the wide adoption of FPGA prototyping for ARA design and evaluation. In this paper we exploit full-system FPGA prototyping to enable rapid design space explorations for the emerging ARAs. Our goal of ARAPrototyper is to reduce the tedious prototyping efforts far down to manageable efforts.
A. Comparison to Recent Prototyping Work
In this subsection we compare the ARAPrototyper to its early version PARC [20], commercial accelerator design tools such as Xilinx SDSoC [37], and two most related prototyping work CoRAM [25] [26] and LegUp [22] [23][21] [24]. 1. PARC [20]. PARC is our first-generation FPGA prototype designed to evaluate the ARA architecture described in [13]. ARAPrototyper shares some similar methodologies to PARC: the integration with high-level synthesis flow, shared memory architecture, and accelerator API support. However, ARAPrototyper provides many more new features. First, ARAPrototyper significantly reduces the prototyping efforts (hundreds of LOCs to a few LOCs as compared in Section VI-D) by providing a clean accelerator integration interface and automation flow. Second, ARAPrototyper significantly enlarges the scope of design space explorations for ARAs: 1) it adds the customizable interconnect layer between buffers and DRAM ports to explore the efficiency of off-chip accesses; 2) it adds the coherency choice at either LLC or DRAM. Third, ARAPrototyper adds performance counter support to provide more insights into the performance evaluation. Finally, ARAPrototyper is implemented in the newer Xilinx R Zynq SoC board [30] and has stronger ARM processor support (PARC uses a much weaker MicroBlaze processor), and thus models a real-life ARA more closely. 2. Commercial tools [37]. FPGA vendors also provide tools to design and prototype customized SoCs. For example, designers can use Xilinx SDSoC [37] to build their own accelerators using FPGA fabrics that work together with hard ARM cores. However, it does not support most features that an ARA needs, such as the global accelerator manager, customized interconnect between accelerators, buffers, and DRAM, performance counters, to name just a few. ARAPrototyper provides a reusable baseline with highly customizable parameters for a typical ARA, and provides easy accelerator integration for rapid prototyping. 3. CoRAM [25] [26]. The goal of CoRAM is to provide a scalable and portable memory architecture so that designers can focus on the accelerator design instead of building the memory architecture from scratch. A 2D-mesh interconnect is used to provide the connectivity between CoRAM blocks, which is different from the partial crossbar architecture explored in ARAPrototyper. CoRAM provides the flexibility for designers to customize the on-chip SRAM blocks into caches, FIFOs or buffers. But designers still need to expend considerable effort to design these customized memories, which impedes the goal of rapid prototyping and evaluation.
In addition, the full-system evaluation capability is not supported. Instead, ARAPrototyper provides the capability to observe interactions between hardware and OS, such as the performance impacts on TLB misses, which enlarges the scope of design space explorations. 4. LegUp [22][23][21] [24]. LegUp takes a standard C program as input and automatically compiles the program into a hybrid architecture with a MIPS soft processor and customized accelerators. The more recent update [21] uses an ARM processor in the Altera FPGA-SoC and can take OpenMP and pthread functions as input. LegUp can perform self-profiling on the processor and identify program sections that would benefit from hardware acceleration. The identified sections are synthesized by its own HLS engine. Compared to LegUp, ARAPrototyper takes a different design philosophy. ARAPrototyper allows users to design the accelerators themselves (also adopted in [23]) or leverage existing accelerators that other hardware developers provided. More importantly, ARAPrototyper models the emerging ARA architectures that have the global accelerator management (GAM), customizable interconnect between accelerators and buffers, interconnect between buffers and DRAMs, coherency choice at LLC or DRAM, etc. In addition, ARAPrototyper adds the performance counter support to provide more insights into ARA design space explorations. This is totally from a different perspective and none of these ARAPrototyper features are supported by LegUp.
III. THE BASELINE ARA PROTOTYPE We first present an overview of the ARA that we are prototyping, based on the architecture proposed in [13] [18]. As shown in Figure 2, the ARA mainly contains two planes: 1) the accelerator plane, and 2) the processor plane. The accelerator plane is composed of the heterogeneous accelerators, the ARA memory system to support the high memory demand of accelerators, and IOMMU for address translation. The processor plane is composed of a conventional multicore processor with a multilevel cache. From a system perspective, the user applications are launched in the processor plane, and the compute-intensive tasks can be offloaded to the accelerator plane. The system software stack acts as the interface between the two planes. It provides the services of reservations, starts, and releases for the accelerators. The system software stack is implemented in the privileged mode and transparent to users.
Next, in our baseline ARA prototype, we will present the detailed design of the customizable ARA memory system in the accelerator plane and the system software stack connecting the two planes. To gain more insights into the performance evaluation, we also add a few more performance counters in the accelerator plane, while we can leverage the existing performance counters for the processor plane. Finally, we will introduce some important features of Xilinx Zynq SoC [30], which is used for our ARA prototyping.
A. ARA Memory System
The ARAPrototyper can generate a shared memory (buffer) architecture for heterogeneous accelerators to share the onchip memory resources, which is similar to the architectures discussed in [18] [14]. To share on-chip memory resources, we provide a customized two-layer interconnect, which can be synthesized automatically by specifying the parameters in the hardware templates. We also provide the flexibility if users desire to fully customize the interconnects, even to support private buffers. Figure 3 presents the accelerator plane and its ARA memory system design in detail. The major components include 1) heterogeneous accelerators, 2) homogeneous shared buffers, 3) direct memory access controllers (DMACs), 4) physical memory ports, 5) a customized partial crossbar between accelerators and buffers, 6) a customized interleaved network between buffers and DMACs, and 7) an input/output memory management unit (IOMMU) and a dedicated TLB. We can have different types of accelerators and different numbers of accelerators for each type. Each type of accelerator has its own input and output port demands. Each port can connect to one or multiple buffers based on the generated partial crossbar topology.
The ARAPrototyper provides a pool of homogeneous buffers to be shared by the heterogeneous accelerators. Before computation, an accelerator needs to send requests to IOMMU to perform page translations. After that, IOMMU assigns corresponding DMACs to issue memory requests to fetch data through physical memory ports (MPs). The off-chip long burst requests are interleaved with the interleaved network to minimize possible conflicts. The memory requests are at the page granularity (4KB). The buffer size is 16KB by default, but can be configured by users.
1) Customizable Optimal Partial Crossbar:
The goal of the partial crossbar is to provide sufficient connectivity between the accelerators and the shared buffers. The partial crossbar avoids the extra arbitration cycles that occur in a conventional bus; therefore, a deeply pipelined accelerator can achieve an initiation interval (II) as low as one with the partial crossbar support. Figure 4 demonstrates a real interconnect topology generated by ARAPrototyper. In this example, the accelerator plane contains six heterogeneous accelerators. The numbers inside each set of parentheses are the buffer IDs assigned to that accelerator, which together form the topology of the customized partial crossbar. When an accelerator is reserved by an application, it has the privilege of using the assigned buffers as its own local buffers. The accelerator can fetch one element from each buffer per cycle (II = 1) since a dedicated connection is built.
ARAPrototyper provides a built-in optimization flow [29] for the customized partial crossbar. This optimizer takes the number of ports of each accelerator and the number of shared buffers as input. Designers also need to provide the maximum number of simultaneously active accelerators as a constraint: this number influences 1) the power budget and 2) the complexity of the partial crossbar, which reflect two important design criteria, dynamic power and area. Our optimizer guarantees an optimal crossbar with the minimum number of cross points under the given input and constraints. The key idea is to first guarantee crossbar switches for the accelerators with the largest memory bank demand (say their number is c, the number of simultaneously active accelerators), where each of their memory ports maps to exactly one memory bank. For the remaining accelerators, each memory port maps to c memory banks, thus guaranteeing full connectivity. The details of this optimizer are presented in [29] and omitted here due to space constraints. In our first-generation prototyping flow PARC [38], we could only generate an optimal crossbar topology when all accelerators had the same number of ports, i.e., homogeneous port demands. In ARAPrototyper, we have generalized the optimal partial crossbar design to accelerators with heterogeneous port demands. Finally, the resulting buffer demand is also reported by the built-in optimizer.
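To make the two-step heuristic concrete, the following is a minimal C++ sketch, not the actual optimizer from [29]: the data structures, the round-robin placement of the remaining ports, and the assumption that c >= 1 and all port counts are nonzero are simplifications for illustration only.

#include <algorithm>
#include <vector>

struct Acc { int id; int ports; };

// For each accelerator (in descending port-demand order), return the list of
// buffer banks reachable from each of its ports.
std::vector<std::vector<std::vector<int>>> buildPartialCrossbar(std::vector<Acc> accs, int c) {
    std::sort(accs.begin(), accs.end(),
              [](const Acc& a, const Acc& b) { return a.ports > b.ports; });
    std::vector<std::vector<std::vector<int>>> conn(accs.size());
    int nextBank = 0;
    // Step 1: the c most demanding accelerators get one dedicated bank per port,
    // so any of them can run with II = 1 and no sharing conflicts.
    for (int i = 0; i < (int)accs.size() && i < c; ++i)
        for (int p = 0; p < accs[i].ports; ++p)
            conn[i].push_back({nextBank++});
    int totalBanks = nextBank;  // this is also the buffer demand the optimizer reports
    // Step 2: every port of the remaining accelerators reaches c banks, so that
    // any c simultaneously active accelerators can be given disjoint buffers.
    for (int i = c; i < (int)accs.size(); ++i)
        for (int p = 0; p < accs[i].ports; ++p) {
            std::vector<int> banks;
            for (int k = 0; k < c; ++k)
                banks.push_back((p * c + k) % totalBanks);  // simplified placement
            conn[i].push_back(banks);
        }
    return conn;
}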
Private buffer architecture support. Although we mainly target the shared buffer architecture explained above, ARAPrototyper also supports a private buffer architecture in which each accelerator has its own buffers without sharing. Users can simply set the number of shared buffers equal to the total number of ports across all accelerators. In this case, the shared buffer architecture degenerates into private buffers while still benefiting from the interleaved network. This can be used to evaluate an ARA with abundant buffer resources and a large power budget.
2) Customizable Interleaved Network: The main purpose of the interleaved network is to minimize possible conflicts among long off-chip burst requests. An accelerator usually issues multiple requests simultaneously to prefetch data from the off-chip DRAM to on-chip buffers for near-future computation. For example, a stencil computation requires multiple data elements, e.g., five or seven, for a single output, and these data are prefetched into buffers in advance. If the simultaneous requests are not distributed evenly across the physical memory ports, significant performance degradation can occur. First, an accelerator can start to work only when all required data have been prefetched into its buffers. Second, a memory request is at page granularity (4KB), and thus its latency is large, so an uneven distribution of requests can seriously delay the pending requests. Figure 4 shows how the interleaved network distributes four simultaneous accesses across four DMACs. Note that the topology of the interleaved network depends on the topology of the customized partial crossbar. In ARAPrototyper, we support two design strategies for design space exploration: 1) interleaving the requests within an accelerator (intra-accelerator), and 2) interleaving the requests across accelerators (inter-accelerator).
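The steering decision itself is simple; the sketch below shows, under assumed request and DMAC identifiers, how a page-sized burst could be mapped to a DMAC under the two strategies. It illustrates the two policies, not the generated interconnect logic.

enum class Interleave { IntraAcc, InterAcc };

// Decide which DMAC services a page-sized burst request under each strategy.
int pickDmac(Interleave mode, int accId, int reqIdxWithinAcc, int numDmacs) {
    if (mode == Interleave::IntraAcc)
        // Consecutive bursts of one accelerator fan out over all DMACs, so its
        // simultaneous prefetches are not serialized behind one controller.
        return reqIdxWithinAcc % numDmacs;
    // Inter-accelerator: spread accelerators over DMACs for fairness, at the
    // cost of one accelerator's own bursts sharing a single controller.
    return accId % numDmacs;
}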
3) Coherency Choice at LLC or DRAM: The ARAPrototyper supports two types of coherency. Users can select either one in our flow. First, the ARA memory system can be coherent with the last-level cache (LLC) residing in the processor plane. In this case, users do not need to worry about the coherency. Second, the ARA memory system can directly exchange data with the off-chip DRAM (i.e., coherent at DRAM). Compared to the LLC coherent case, the burst DMA transfer may provide higher memory bandwidth because of larger burst sizes and more physical memory ports. However, users need to invalidate the corresponding cache lines if the data is updated in the DRAM.
4) TLB Support in IOMMU:
Since accelerators in an ARA share physical memory with the processor and use virtual memory for better programmability, a hardware IOMMU and a dedicated TLB are provided in the accelerator plane to support virtual-to-physical address translation. The TLB size is configurable by users. We leverage the system software stack to handle TLB misses, as explained in Section III-B4. To gain more insight, we also add two performance counters to monitor TLB accesses and TLB misses. Since, in our case, data is accessed consecutively in a streaming fashion, we can also use the TLB access counter to calculate the total DRAM accesses and the achieved memory bandwidth of the accelerator plane. One can add a dedicated DRAM access counter if necessary.

B. System Software Stack

Figure 5 presents an overview of the ARA system software stack. ARAPrototyper can automatically generate the related software modules based on the ARA specification file. The five major components in the system software stack are: 1) global accelerator manager (GAM), 2) dynamic buffer allocator (DBA), 3) coherence manager, 4) TLB miss handler, and 5) performance monitor (PM). Next we present more details of these components. Users may further customize the software stack based on their needs.

1) Global Accelerator Manager: GAM is responsible for a) interfacing with user applications, b) accelerator resource management and task scheduling, and c) requesting buffer resources. User applications talk to GAM through the provided APIs, which will be discussed in Section V. In GAM, we use a table to keep track of the available accelerators of each type. Incoming requests from user applications are scheduled on a first-come, first-served basis. GAM requests the shared buffer resources from the dynamic buffer allocator before reserving a target accelerator. As observed in our PARC [20] work, GAM runs on top of a separate ARM CPU core, which is sufficient for managing the FPGA accelerators used in our prototype. This design is simpler than the dedicated hardware GAM used for managing ASIC accelerators in PARADE [17].
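As a concrete illustration of this bookkeeping, the following is a minimal C++ sketch of a first-come, first-served GAM. The free-accelerator table, the pending-request queue, and the grant hook are assumed data structures for illustration, not the generated GAM source, and the DBA interaction is only indicated in a comment.

#include <deque>
#include <map>
#include <string>
#include <vector>

struct ReserveReq { int appId; std::string accType; };

class GlobalAccManager {
public:
    void request(const ReserveReq& r) { pending.push_back(r); schedule(); }
    void release(const std::string& type, int accId) {
        freeAccs[type].push_back(accId);
        schedule();
    }
private:
    void schedule() {
        while (!pending.empty()) {
            const ReserveReq& r = pending.front();
            std::vector<int>& pool = freeAccs[r.accType];
            if (pool.empty()) break;  // strict FCFS: the head request blocks the queue
            // Before granting, the real GAM first asks the DBA for the buffers the
            // accelerator needs (Section III-B2); that step is omitted here.
            int accId = pool.back();
            pool.pop_back();
            grantToApp(r.appId, accId);  // hypothetical notification hook
            pending.pop_front();
        }
    }
    void grantToApp(int /*appId*/, int /*accId*/) { /* send the handle back to the app */ }

    std::map<std::string, std::vector<int>> freeAccs;  // accelerator type -> free IDs
    std::deque<ReserveReq> pending;                    // first-come, first-served queue
};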
2) Dynamic Buffer Allocator: In the shared memory/buffer architecture discussed in Section III-A, a buffer bank can be shared by multiple accelerator ports. DBA is in charge of dynamic buffer assignment at runtime based on the requests from user applications. Static assignment, which was used in our first-generation prototyping flow PARC [20], cannot handle such dynamic cases and limits the scalability of the framework for evaluation.
DBA receives buffer requests from GAM. As shown in Figure 6, DBA uses a list structure, called the task list, to store the requests that have not yet been processed. With information about the incoming tasks, DBA can implement different allocation policies to influence task scheduling; users can modify the DBA allocation policy to their needs, such as throughput-driven or deadline-driven scheduling, by manipulating the task list. In ARAPrototyper, we provide a starvation-free buffer allocation policy, as illustrated in Figure 6. In the example, tasks arrive in numerical order, and Acc5 could starve because Acc2, Acc3, and Acc4 keep occupying the buffers to serve the continuous stream of incoming tasks. To prevent starvation, we use two flags for each buffer: occupied and reserved. A buffer can only be allocated by DBA when it is neither occupied nor reserved. When a buffer is assigned to an accelerator, it is marked as occupied; the reserved flag is used when a buffer is occupied but another accelerator would like to reserve it. Starvation is resolved by granting the "reserve" privilege only to the task at the head of the task list, which guarantees that the head task can always occupy or reserve the buffers it requires. After that, the algorithm greedily allocates buffers to the remaining tasks in the task list in order until no feasible allocation can be found. This algorithm is lightweight and its overhead is negligible.
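The following is a minimal sketch of one allocation pass under this policy. The Task and Buffer records are assumed for illustration, and whether later infeasible tasks are skipped or end the pass is a policy detail; the sketch skips them.

#include <vector>

struct Buffer { bool occupied = false; bool reserved = false; };
struct Task   { int bufsNeeded = 0; bool launched = false; };

// One allocation pass over the task list; run whenever buffers are freed or
// new requests arrive.
void allocatePass(std::vector<Task>& taskList, std::vector<Buffer>& bufs) {
    bool headSeen = false;
    for (Task& task : taskList) {
        if (task.launched) continue;
        bool isHead = !headSeen;       // first unlaunched task is the head
        headSeen = true;
        // A buffer is usable if it is neither occupied nor reserved; the head may
        // also claim buffers it reserved in an earlier pass.
        std::vector<int> usable;
        for (int b = 0; b < (int)bufs.size(); ++b)
            if (!bufs[b].occupied && (!bufs[b].reserved || isHead))
                usable.push_back(b);
        if ((int)usable.size() >= task.bufsNeeded) {
            for (int i = 0; i < task.bufsNeeded; ++i) {
                bufs[usable[i]].occupied = true;   // grant and launch
                bufs[usable[i]].reserved = false;
            }
            task.launched = true;
        } else if (isHead) {
            // The head cannot run yet: reserve whatever is free so later tasks
            // cannot take it; this is what prevents starvation.
            for (int id : usable) bufs[id].reserved = true;
        }
        // Later tasks that do not fit are simply skipped in this greedy pass.
    }
}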
3) Coherency Manager: ARAPrototyper offers a coherency manager for coarse-grained, software-based coherence handling. When users write data directly to DRAM for higher memory bandwidth, the overlapping pages residing in the multilevel caches of the processor need to be invalidated. We abstract the cache invalidation details in the coherency manager of our system software, so users only need to call the coherency manager to handle possible coherency issues.

4) TLB Miss Handler: For the TLB misses arising from the accelerators, we currently use a software-based handler. To reduce the overhead of communication between the IOMMU and the TLB miss handler in privileged mode, the IOMMU groups multiple TLB misses and sends them to the handler together. Instead of using the slow kernel API to do page translation, we write our own version by leveraging the ARM architecture support for page table walks. Table II shows our profiling results on the average TLB miss handling time: our efficient walker reduces the miss penalty from 4278 cycles to 458 cycles, and thus the overhead of TLB misses in an ARA can be significantly reduced. We are also considering a hardware-based page walker, which would scale better when the number of accelerators is large. However, the hardware-based walker on our Zynq board [30] can incur three sequential DRAM accesses (600 cycles) per TLB miss because of the walk over multilevel page tables.

5) Performance Monitor: To provide more insight for system bottleneck analysis, we add performance counters in the IOMMU so that TLB hit/miss events and memory bandwidth can be monitored on-the-fly, as mentioned in Section III-A4. We add a PM module in the system software stack to handle requests from applications and interact with the IOMMU to monitor or reset the performance counters. These performance counters provide more in-depth performance characterization than simple runtime numbers; the impact of different architecture parameters can also be observed and analyzed.
In addition to using the accelerator-side performance counters supported by PM, users can also use OProfile [39] to obtain the performance counter information inside CPU cores. We have successfully ported OProfile on top of our ARA baseline prototype under the Zynq platform.
C. Prototyping Platform: Xilinx Zynq SoC
We choose the Xilinx Zynq ZC706 evaluation board [30] with 1GB DRAM as our underlying prototyping platform. Figure 8 shows the architecture of the Zynq SoC. It is composed of FPGA fabric for accelerator implementation and a dual-core ARM for the system software implementation. The FPGA contains around 2MB of on-chip block RAM, which can be used to implement the shared buffers. User applications can be launched on the ARM processor with Linux support. The Zynq architecture has the following advantages for supporting ARAPrototyper. 1. Faster processor cores. The hard ARM cores run at up to 800MHz, which is much more efficient than soft Microblaze cores synthesized from the FPGA fabric, and Linux runs smoothly on them. The system software stack provided by ARAPrototyper, including the global accelerator manager, dynamic buffer allocator, coherence manager, TLB miss handler, and performance monitor, can all leverage the faster processor. 2. A coherent LLC. The dual-core ARM provides a shared L2 cache; the shared buffers can be kept coherent with the L2 cache through the accelerator-coherent port (ACP). This provides an alternative ARA design opportunity (as described in Section III-A3). 3. A fast ASIC on-chip memory controller. The ASIC on-chip memory controller provides higher memory bandwidth than a memory controller synthesized in FPGA. To efficiently exploit the available memory bandwidth, Zynq provides four high-performance (HP) ports in the FPGA fabric, which gives us opportunities to explore the topology of the interleaved network to better utilize the off-chip memory bandwidth.
Our prior PARC work [20] was prototyped on the ML605 board with a Virtex 6 FPGA. Compared to the Virtex 6, which has only FPGA fabric, the Zynq SoC enables a wider range of ARA design explorations.
IV. DESIGN AUTOMATION FLOW AND ARA CUSTOMIZATION INTERFACE
The main challenge in doing architectural design space exploration through FPGA prototyping is the long development cycle for each generation of an ARA, which requires extensive coding in RTL. To further reduce the prototyping effort, we develop a highly automated design flow for users to customize the baseline ARA prototype and integrate their own accelerators. Users only have to configure an ARA specification XML file to customize their ARA, and specify a few parameters in the accelerator integration interface to add their own accelerators written in HLS. Our design flow automatically generates users' customized ARAs and deploys them on the underlying FPGA prototyping platforms.
A. Design Automation Flow
We classify the components in the ARA prototype into the following three categories, as shown in Figure 7. 1. Platform-specific modules are mainly bound to the hard modules of the FPGA chip and evaluation platform (board), such as the dual-core ARM processor. ARAPrototyper can adapt to different platforms, so users spend minimal effort on platform issues. 2. Platform-independent modules include the two-layer interconnects, shared buffers, IOMMU and TLB, and DMACs, which are the major components of the ARA memory system. ARAPrototyper provides highly parameterized hardware templates for these platform-independent components; users can easily customize them in the ARA specification file explained in Section IV-B. 3. User-designed accelerators. ARAPrototyper provides a group of highly optimized accelerators from the medical imaging pipeline that will be further explained in Section VI. Users can also develop their own accelerators in HLS; we provide a clean accelerator integration interface (explained in Section IV-C) for users to easily add their own accelerators to ARAPrototyper. Figure 7 presents our design automation flow, which enables ARAPrototyper to maximize the reuse of hardware modules and user-designed accelerators when developing ARAs. The flow begins with the ARA specification file, and all following steps can be executed automatically with a single "make" invocation. In the left branch, we apply ARA memory system optimizations to the platform-independent modules using the configurations in the ARA specification file; this is combined with our hardware templates to create the ARA memory system. In the middle branch, HLS tools are applied to the user-designed accelerators coded in C/C++ to generate RTL designs. In the right branch, the platform information is used to generate the platform-specific modules. Depending on the target platform, e.g., the Xilinx Zynq FPGA in our case, the flow can be seamlessly integrated with the corresponding back-end process (e.g., the Xilinx PlanAhead flow for bitstream generation).
B. ARA Specification File
<system>
  <ACCs>
    <acc type="gradient" num="2" num_params="5">
      <port size="16K" num="6" />
    </acc>
    <acc type="segmentation" num="1" num_params="13">
      <port size="16K" num="8" />
    </acc>
    <acc type="rician" num="1" num_params="7">
      <port size="16K" num="12" />
    </acc>
    <acc type="gaussian" num="1" num_params="7">
      <port size="16K" num="5" />
    </acc>
  </ACCs>
The ARA specification file is provided for users to specify components and configure the parameters in the accelerator plane. Users can easily evaluate their new accelerators by integrating them into the reusable baseline prototype and perform system-level design space explorations. It is composed of six major sections: (1) accelerator specification, (2) shared buffer and DMAC specification, (3) interconnect specification, (4) IOMMU specification, (5) coherence specification, and (6) target frequency. The ARA specification file is recorded in XML format, and users can easily modify it on top of the XML template we provide, which covers all six sections. Listing 1 is an example of an ARA specification file. In section "ACCs," users specify the name, the number of duplications, the number of input parameters, the number of ports (buffers), and the needed buffer size of each type of accelerator. In section "SharedBuffers," the total number of buffers, the size of a buffer, and the number of DMACs can be configured. In section "Interconnects," ARAPrototyper allows users to configure the interconnects between accelerators and shared buffers, and the interconnects between shared buffers and DMACs. Here "connectivity=3" means the generated partial crossbar guarantees a feasible crossbar configuration in which at least any three accelerators can have their own buffer resources at a given time; the "auto" field is set to "1" to use the built-in optimizer. ARAPrototyper also provides the flexibility for users to specify the interconnects themselves. In section "IOMMU," users can configure the TLB size and eviction policy. In section "CoherentCache," users can specify whether the coherent L2 cache in the dual-core ARM is used. In section "AccFrequency," the clock frequency of the accelerator plane can be configured. Once this file is prepared, the ARAPrototyper automation flow can be invoked for automated system generation. Compared to our earlier flow PARC [20], we keep sections (1) and (2), but significantly enrich the configurations of the shared memory architecture and other design features in sections (3) to (6).
C. Accelerator Integration Interface
To further reduce the development cycle of adding users' own accelerators, ARAPrototyper supports the integration of accelerators developed in HLS. Moreover, ARAPrototyper provides a clean accelerator integration interface that requires only a few lines of code to integrate users' own accelerators.
Accelerator development in HLS. Design productivity of accelerators can be significantly improved by raising the level of design abstraction beyond RTL. HLS tools [40][41] enable automatic synthesis of high-level, untimed or partially timed specifications (such as in C, C++, or SystemC) to low-level cycle-accurate RTL code. As reported in [40], the code density can be easily reduced by 7 to 10X when moved to high-level specification in C, C++, or SystemC; and at the same time, resource usage can also be reduced by 11% to 31% in an HLS solution, compared to a hand-coded RTL design.
Accelerator integration interface. To allow users to easily integrate existing accelerators into ARAPrototyper, we provide the following flow. First, designers specify the accelerator port information in the ARA specification file, such as the number of parameters sent from the CPU and the number of buffers demanded. Next, our tool automatically generates the port names and the corresponding HLS pragmas for the control and data ports between the accelerator, IOMMU, and CPU using the ARA specification file. This generated file, in HLS-compatible C format, is called the accelerator integration template, as shown in Figure 9. Designers then place the computation kernel and the explicit read and write memory requests in the corresponding locations in the template. After that, our flow automatically generates ARAPrototyper-compatible HLS code.
Control and data ports are generated as function parameters in the HLS code. There are three kinds of ports. First, the input parameters sent from the CPU (such as "vaddr_port0") are generated; these parameters are sent from the CPU through the AXI-Lite port and are stored in the registers of the accelerator. Second, the communication channel ("IOMMU_FIFO"), realized as a FIFO, is generated; the accelerator uses this FIFO to send read and write requests to the IOMMU to fetch data from DRAM (or the L2 cache) into its own buffers and to write data back from its buffers to DRAM (or L2). Third, the ports to the input and output buffers, such as "port0" and "port1", are generated; these ports are connected to the shared buffers through the automatically synthesized crossbar described in Section III-A1.
The only changes that designers need to make are to specify the memory requests for reading data from DRAM (or the L2 cache) and writing data back to DRAM (or L2) in the accelerator integration template. For example, a memory request ("memory_request0") needs to be specified explicitly before the computational kernel reads data from its own buffer; similarly, the output results need to be written back after computation finishes in "memory_request1". This involves changing only a few lines of code (LOCs). As shown in Figure 9, after the existing accelerator computation kernel is plugged in, only the two lines with "Req_Length0" and "Req_Length1" (shown in red and bold italic font) are added. Detailed prototyping efforts (LOCs) will be presented in Section VI-D.
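Since Figure 9 is not reproduced here, the following is a hypothetical HLS-compatible C++ sketch of what such a generated template could look like. The port names, the IOMMU request encoding, and the pragmas are assumptions for illustration; only the two lines marked memory_request0/1 correspond to what the designer fills in.

#include <ap_int.h>
#include <hls_stream.h>

// Request record sent to the IOMMU over the FIFO (the format is an assumption).
struct IommuReq { ap_uint<64> vaddr; ap_uint<32> len; ap_uint<1> is_write; };

void my_acc(ap_uint<64> vaddr_port0, ap_uint<64> vaddr_port1,
            hls::stream<IommuReq>& IOMMU_FIFO,
            float port0[4096], float port1[4096]) {
#pragma HLS INTERFACE s_axilite port=vaddr_port0   // parameters sent from the CPU
#pragma HLS INTERFACE s_axilite port=vaddr_port1   // over AXI-Lite into registers
#pragma HLS INTERFACE ap_fifo   port=IOMMU_FIFO    // request channel to the IOMMU

    // memory_request0: prefetch Req_Length0 bytes into the input buffer (port0).
    IommuReq rd = {vaddr_port0, ap_uint<32>(4096 * sizeof(float)), 0};
    IOMMU_FIFO.write(rd);

    // ---- user computation kernel plugged in here: reads port0, writes port1 ----
    for (int i = 0; i < 4096; ++i) {
#pragma HLS PIPELINE II=1
        port1[i] = port0[i] * 2.0f;
    }

    // memory_request1: write Req_Length1 bytes of results back from port1.
    IommuReq wr = {vaddr_port1, ap_uint<32>(4096 * sizeof(float)), 1};
    IOMMU_FIFO.write(wr);
}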
V. APPLICATION DEVELOPMENT API
To enable rapid development of applications that use the accelerators in ARAPrototyper, we abstract accelerators as software libraries and provide user-friendly C/C++ APIs. With the information provided by the ARA specification file, ARAPrototyper automatically generates the header file of accelerator APIs for programmers, similar to [20]. For each type of accelerator, we provide the following APIs for fine-grained accelerator control: 1) reserve(), 2) check_reserved(), 3) send_param(), 4) check_done(), and 5) free(). Users develop their applications with the generated C++ classes and member functions to manipulate the accelerators in the ARA. After an application is developed, users simply set up a g++ cross compiler to compile it into an ARM executable, which can be seamlessly executed on the ported Linux on the Zynq board. Figure 10(a) is an application example using an accelerator. It first uses the reserve() function to request the reservation of an accelerator from GAM. After the reservation is confirmed, the required parameters are sent through the send_param() function and the accelerator is started. The application should periodically check the status of the accelerator with check_done(); once the accelerator finishes its task, it should be freed for future use. With these APIs, a programmer can explore more complicated settings, such as using multiple accelerators simultaneously in a user application.
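A minimal sketch of this fine-grained usage pattern (in the spirit of Figure 10(a)) follows. The class name, the exact send_param() signature, and the busy-wait loops are illustrative assumptions on top of the generated header, not its exact contents.

#include "gradient_acc.h"   // header generated by ARAPrototyper (assumed name)

int run_gradient(float* in, float* out, int n) {
    GradientAcc acc;
    acc.reserve();                      // ask GAM for a free gradient accelerator
    while (!acc.check_reserved()) { }   // wait until GAM confirms the reservation
    acc.send_param(in, out, n);         // pass parameters and start the task
    while (!acc.check_done()) { }       // poll, or do other CPU work meanwhile
    acc.free();                         // release the accelerator for other apps
    return 0;
}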
Moreover, our latest ARAPrototyper adds a simplified API, called run(), intended for software developers who do not want to dig into the hardware accelerator details. This API covers the functionality from reserving to releasing the accelerator in a single function. Figure 10(b) is a code example that achieves the same functionality as the code in Figure 10(a).
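Under the same assumptions as the previous sketch, the simplified equivalent collapses to:

GradientAcc acc;
acc.run(in, out, n);   // reserve + send_param + wait for completion + free, in one call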
As presented in Section III-B5, ARAPrototyper provides a PM module to monitor performance counters added in our prototype. We provide several APIs built on top of the PM module so that designers can use these APIs to monitor those key performance counters for analyzing and improving the ARA design. Figure 10(c) shows how TLB accesses and misses can be monitored by using the provided APIs.
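A small sketch of such counter-based profiling (in the spirit of Figure 10(c)) is shown below; the PM function names and header are illustrative assumptions, since the text describes the API only at a high level.

#include <cstdio>
#include "gradient_acc.h"   // assumed generated header, as above
#include "ara_pm.h"         // assumed header exposing the PM APIs

void profile_tlb(float* in, float* out, int n) {
    pm_reset_counters();                            // clear the IOMMU TLB counters
    GradientAcc acc;
    acc.run(in, out, n);
    unsigned long accesses = pm_read_tlb_accesses();
    unsigned long misses   = pm_read_tlb_misses();
    // With 4KB pages and streaming accesses, accesses * 4KB approximates the data
    // moved by the accelerator plane (Section III-A4).
    std::printf("TLB miss rate: %.2f%%\n", accesses ? 100.0 * misses / accesses : 0.0);
}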
VI. EXPERIMENTAL RESULTS
In this section we present a quantitative evaluation of the rapid evaluation time and manageable prototyping efforts of ARAPrototyper for ARA design space explorations. We choose the medical imaging processing pipeline as our target application domain, where customized accelerators can achieve orders-of-magnitude energy gains. To demonstrate the capability and usage of ARAPrototyper, we conduct a number of case studies for ARA design explorations.
A. Target Domain: Medical Imaging
The target application domain we choose to accelerate is primarily the medical imaging processing pipeline, one of the most important application domains in personalized medical care. This pipeline is used to process the raw data obtained from computerized tomography (CT) [32]. Our motivating scenario is a doctor showing the CT analysis interactively to patients on a tablet; current mobile processors without accelerators cannot provide real-time and energy-efficient solutions for this.
The medical imaging pipeline can be divided into the following stages. After image reconstruction, it 1) removes noise and blur; 2) aligns the current image with previous images from the same individual; and 3) segments a region of interest for diagnosis [32]. These three tasks can be implemented by four accelerator kernels: gradient, gaussian, rician, and segmentation. By default, we use 128 slices of 128*128 images as the kernel input. In the following subsections, we demonstrate the benefits of ARAPrototyper using these four accelerator kernels in a number of case studies.
B. Overall ARA Performance and Energy-Efficiency
First of all, we present the overall performance and energy-efficiency of the ARA. Table III presents the performance, power, and energy-efficiency results of the aforementioned four medical imaging kernels on state-of-the-art Intel Xeon and ARM processors, on our ARA FPGA prototype, and for a projected ARA on ASIC. We use OpenMP to implement a 24-thread parallel version of each kernel on the Intel Xeon processor; the result on the ARM Cortex-A9 uses a single thread. All kernels are compiled using gcc with the -O2 option. Note that for the ARA version, in this experiment we use only a single processing element (PE) for each kernel (except gradient, which has two PEs); PEs can be easily duplicated. As shown in Table III, such an ARA FPGA prototype achieves 3.9X to 65.4X better energy-efficiency than the 24-thread Intel Xeon CPU. According to the report in [42], the power gap between FPGA and ASIC is around 14X and the delay gap is around 3.4X to 4.6X. If our ARA were implemented in a 45nm ASIC, 217X to 3,661X energy savings over the Intel Xeon CPU would be expected.
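As a worked example of how these ratios arise (assuming energy is estimated as power times runtime, with TDP used as the Xeon power figure): for gradient, the Xeon consumes roughly 190W × 0.030s ≈ 5.7J, while the ARA FPGA prototype consumes 3.1W × 0.33s ≈ 1.02J, i.e., about 5.6X better energy-efficiency, matching Table III. The ASIC projection then scales the FPGA numbers by the roughly 14X power gap and 3.4X to 4.6X delay gap from [42] (3.1W/14 ≈ 0.22W, and 0.33s scaled into the 0.08s range), which yields the two-orders-of-magnitude larger savings reported in the last column.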
C. Evaluation Time
To demonstrate the rapid evaluation time of ARAPrototyper, we compare it against the state-of-the-art full-system ARA simulator PARADE [17] for a typical ARA configuration. Figure 11 compares the execution time of two common medical imaging applications with different input sizes on ARAPrototyper and PARADE. For the larger input size, PARADE takes a couple of days to simulate a single ARA configuration, while ARAPrototyper runs the same configuration in about a minute. We should mention that our flow generation time (from the ARA configuration to the FPGA bitstream) is around four hours, but this is a one-time effort per ARA configuration, which can then run multiple applications with multiple inputs. Overall, native executions on our FPGA prototype are 4,000X to 10,000X faster than full-system simulations, and we believe ARAPrototyper can be an attractive alternative for design space explorations.
D. Prototyping Efforts
To demonstrate the manageable prototyping efforts of ARAPrototyper, we present the lines of code (LOCs) that users have to change or add to customize their own ARA using the existing accelerators in the prototype, or to integrate their own accelerators.
We first present the LOCs for users to configure their own ARA by leveraging our reusable baseline prototype and the existing accelerators in the prototype. As shown in Table V, users simply configure the ARA specification file with up to 33 lines of XML code to set up the parameters of the shared memory architecture and the operating frequency; no C/C++ description or RTL code is required. Users can start the push-button ARAPrototyper flow after the specification file is set and obtain an FPGA prototype in hours. Table V also lists the LOCs of the RTL automatically generated from the baseline prototype, which amount to more than 37,000 lines; this reflects the huge engineering effort required if everything were built from scratch.
Next, we present the LOCs for users to integrate their own accelerators. To demonstrate the reduced prototyping effort of ARAPrototyper compared to our first-generation prototyping flow PARC [20], we also include PARC for a quantitative comparison. Table IV presents the LOCs needed to integrate our medical imaging accelerators and third-party MachSuite [33] accelerator kernels into PARC and ARAPrototyper, including the total generated RTL code, total HLS C/C++ code, kernel-only HLS code, and integration-only code. We include eight more accelerator kernels from the widely used third-party accelerator benchmark suite MachSuite to better illustrate the manageable prototyping effort. (We do not include the MachSuite accelerators in further studies because their performance is far below optimal; however, users can engage in further accelerator microarchitecture explorations based on ARAPrototyper.) As shown in Table IV, compared to the hundreds of LOCs for accelerator integration in PARC, users only need to add a few LOCs (most of the time fewer than 10) to integrate their own accelerators into ARAPrototyper, thanks to its clean accelerator integration interface and automation flow.
E. Design Space Exploration
To demonstrate the capability and usage of ARAPrototyper, we conduct the following case studies for ARA design explorations.
1) Private vs. Shared Buffer Architecture: First, users can configure the ARAPrototyper to achieve either a private or shared buffer architecture (as explained in Section III-A1).
To demonstrate this, we use an example ARA with a total of five accelerators. In the private buffer architecture, each accelerator needs its own buffer resources, regardless of how many accelerators are simultaneously powered on at runtime. In the shared architecture, given the maximum number of simultaneously powered-on accelerators, we can allocate the minimum buffer resources that support any combination of running accelerators. Figure 12 shows that the shared buffer architecture uses much less physical buffer resource (and thus less area and power) when not all accelerators run simultaneously. On the other hand, if the shared buffer architecture is designed to support at most four simultaneous accelerators but users need to run five tasks, it degrades performance by 12.6% compared to the private buffer architecture (while using 15.6% less buffer resources), based on our profiling.
2) Interleaved Network: Inter-Acc vs. Intra-Acc: Second, ARAPrototyper provides the flexibility for users to evaluate different interconnects between buffers and DRAM. Users can (statically) configure this interconnect to interleave requests across accelerators, to achieve fairness among accelerators, or within an accelerator, to achieve better performance for that accelerator, as explained in Section III-A2. Figure 13(a) presents the performance of the inter-accelerator and intra-accelerator interleaved networks for our medical imaging accelerators. As discussed in Section III-A2, intra-accelerator interleaving prevents the case in which all long-burst requests from an accelerator are issued to the same DMAC.
To gain more insight into this performance speedup, we further compare the achieved bandwidth of both cases, which can be obtained using the performance counters added in ARAPrototyper. As shown in Figure 13(b), intra-accelerator interleaving achieves better bandwidth than inter-accelerator interleaving in our case, and thus better performance. We also observe that the available memory bandwidth is not the performance bottleneck: when we launch two, three, or four accelerators simultaneously, the utilized memory bandwidth still increases. Note that gaussian is a special case since it only fetches four pages of data, and thus the impact of the interleaving strategy on it is negligible.

3) Coherency Choices: Third, ARAPrototyper provides the flexibility of choosing coherency at either the LLC or DRAM depending on the application locality. Figure 14(a) presents the performance of both coherency choices. In our case, coherency at DRAM achieves up to 1.7X performance speedup compared to coherency at the LLC. The major reason is that our medical imaging applications behave in a streaming fashion and have poor locality at the LLC. Another reason is a limitation of the current Zynq board, where the LLC has only one port while the DRAM has four ports. As a result, coherency at DRAM achieves higher bandwidth, as shown in Figure 14(b).

4) Impact of TLB Sizes: Fourth, ARAPrototyper provides the flexibility for users to configure different TLB sizes. In addition, the performance counters in ARAPrototyper report the number of TLB accesses and misses for further performance analysis, and the (software) TLB miss handling penalty can also be collected in our system software stack. Figure 15(a) and Figure 15(b) present the TLB miss rate and the TLB miss handling penalty (as a percentage of total execution time) for different TLB sizes. In our case, we choose 32K TLB entries in the design, since the TLB miss rate and miss handling penalty stop shrinking beyond this point. Another point worth mentioning is that TLB misses can cause a penalty of up to 24% of the whole execution time in an ARA, due to the streaming access behavior and the accelerated computation; therefore, address translation support needs more attention when designing an ARA than it does for a general-purpose CPU.

5) Accelerator Microarchitecture Exploration: Finally, without loss of generality, users can conduct accelerator microarchitecture explorations such as 1) algorithm-level changes and 2) HLS pragma tuning. ARAPrototyper makes such explorations easier by providing more profiling statistics through performance counters and pointing out the optimization directions. In this subsection we demonstrate a data reuse optimization for accelerators, motivated by the low computation percentage of total execution time profiled in the initial accelerator design. As shown in Figure 16(a), in our initial accelerator design, before the data reuse optimization, the computation ratio is below 40% for most accelerators, which suggests the accelerators are not fully utilized but are waiting for data. Therefore, we apply the data reuse optimization presented in [43] to the accelerators. After this optimization, the computation ratio increases significantly, in most cases to above 80%. As shown in Figure 16(b), the data reuse optimization achieves up to 6X performance speedup.
VII. CONCLUSIONS
In this work we designed and implemented ARAPrototyper to enable rapid design space explorations for ARAs on FPGA prototypes with manageable effort. Designers can easily integrate their HLS-compatible accelerator designs into our reusable baseline prototype with a few lines of code, and customize their own ARAs with up to 33 lines of XML code. The memory system of our ARA prototype is highly customizable and enables numerous design space explorations, with additional insights provided by our added performance counters. Furthermore, we provide user-friendly APIs and the underlying system software stack for users to quickly develop their applications and deploy them seamlessly on our prototype. Finally, ARAPrototyper achieves 4,000X to 10,000X faster evaluation time than full-system simulations. We believe that ARAPrototyper can be an attractive alternative for ARA design and evaluation.
Fig. 1: Position of ARAPrototyper: rapid prototyping and evaluation for ARA design space exploration (DSE).
Fig. 2: ARA overview: accelerator plane and processor plane.
Fig. 3: Accelerator plane and the ARA memory system.
Fig. 4: A real example of the interconnect topology generated from ARAPrototyper.
Fig. 5: System software stack and the interactions with the ARA and user applications.
Fig. 6: Dynamic buffer allocation: a starvation-free scheme.
Listing 1: An example ARA specification file created for design space exploration with four types of accelerators via ARAPrototyper.
Fig. 7: ARAPrototyper design automation flow.
Fig. 8: The prototyping platform: Xilinx Zynq SoC.
Fig. 9: Accelerator integration template in HLS-compatible C.
Fig. 10: Code examples of using APIs to develop applications.
Fig. 11: Evaluation time on ARAPrototyper and PARADE [17].
Fig. 12: Buffer consumption: private vs. shared buffer architecture.
Fig. 13: Evaluation of (a) performance and (b) memory bandwidth for inter-acc and intra-acc interleaving networks.
Fig. 14: Evaluation of (a) performance and (b) memory bandwidth for different coherency choices.
Fig. 15: The impact of TLB sizes on (a) TLB miss rates and (b) TLB miss penalty over total runtime.
Fig. 16: Evaluation of the accelerator data reuse optimization [43]: (a) the ratio of computation in total runtime; (b) performance speedup.
TABLE I: Evaluation methodologies in existing accelerator-related research.
Full-system | Methodology | Related work
N | Pre-RTL simulation | Aladdin [34]
N | RTL simulation with SPICE models | AccStore [14], Sonic Millip3De [11]
N | Cycle-accurate simulation | H.264 [7], Convolution Engines [10], Conservation Cores [2]
Y | Full-system cycle-accurate simulation | Walker [8], DySER [6], ARC [13], CHARM [12], BiN [18], PARADE [17]
N | FPGA prototyping | LegUp [22][23][24], FPCA [35], CoRAM [25][26]
Y | Full-system FPGA prototyping | DySER [19], TSSP [9], LINQits [5]
TABLE II: Average TLB miss penalty; kernel APIs vs. software page table walk (Acc@100MHz).
          | Microblaze (PARC [20]) | Cortex-A9   | Cortex-A9
Method    | Kernel APIs            | Kernel APIs | pgtwalk
Freq.     | 100MHz                 | 667MHz      | 667MHz
Cycles    | 4975                   | 4278        | 458
Time (us) | 49.75                  | 6.41        | 0.69
TABLE III: Performance and energy comparison over 1) Intel Xeon (Haswell), 2) ARM Cortex-A9, 3) ARA FPGA prototype, and 4) projected ARA on ASIC.
                          | Xeon               | Cortex-A9 | ARA on FPGA            | ARA on ASIC
Config. / Freq.           | 24 threads, 1.9GHz | 667MHz    | Acc@100MHz, CPU@667MHz | 45nm
Power                     | 190W (TDP)         | 1.1W      | 3.1W                   | 0.22W
gradient: Runtime(s)      | 0.030              | 2.48      | 0.33                   | 0.08
gradient: Energy Eff.     | 1x                 | 2.07x     | 5.56x                  | 311x
rician: Runtime(s)        | 0.036              | 2.60      | 0.57                   | 0.14
rician: Energy Eff.       | 1x                 | 2.39x     | 3.89x                  | 217x
gaussian: Runtime(s)      | 0.14               | 6.31      | 0.34                   | 0.08
gaussian: Energy Eff.     | 1x                 | 3.87x     | 25.59x                 | 1432x
segmentation: Runtime(s)  | 0.40               | 13.67     | 0.37                   | 0.09
segmentation: Energy Eff. | 1x                 | 4.03x     | 65.38x                 | 3661x
TABLE IV: Lines of code (LOCs) to integrate medical imaging and third-party MachSuite kernels into PARC/ARAPrototyper, including total generated RTL code, total HLS C/C++ code, kernel-only HLS code, and integration-only code.
Domain | Accelerator | Total RTL | Total HLS | Kernel HLS | PARC Integration | ARAPrototyper Integration
Medical Imaging | gaussian | 15107 | 513 | 363 | 150 | 5
Medical Imaging | gradient | 32538 | 778 | 616 | 162 | 6
Medical Imaging | segmentation | 63857 | 1304 | 1070 | 234 | 8
Medical Imaging | rician | 42291 | 1140 | 850 | 290 | 12
MachSuite [33] (third-party) | FFT/TRANSPOSE | 17072 | 530 | 412 | 118 | 4
MachSuite [33] (third-party) | GEMM/NCUBED | 3201 | 121 | 23 | 98 | 3
MachSuite [33] (third-party) | GEMM/BLOCKED | 5226 | 158 | 20 | 138 | 5
MachSuite [33] (third-party) | KMP/KMP | 3593 | 167 | 45 | 122 | 4
MachSuite [33] (third-party) | MD/KNN | 7023 | 243 | 53 | 190 | 7
MachSuite [33] (third-party) | SORT/MERGE | 2996 | 128 | 54 | 74 | 2
MachSuite [33] (third-party) | SPMV/CRS | 4080 | 160 | 18 | 142 | 5
MachSuite [33] (third-party) | VITERBI/VITERBI | 4212 | 177 | 35 | 142 | 5
TABLE V: Lines of code (LOCs) to customize users' own ARA prototype using existing accelerators.
Input (# lines of XML code): ARA description file, 33 lines.
Automatically generated components (# lines of RTL code): IOMMU, 21407; crossbar, 1526; top module, 14253; total, 37186.
According to the Xilinx UltraScale MPSoC roadmap [31], the next generation of Zynq boards will include a quad-core ARM CPU and UltraScale FPGA fabric, which will enable design space explorations for even larger-scale ARAs.
Dark silicon and the end of multicore scaling. H. Esmaeilzadeh, E. Blem, R. St. Amant, K. Sankaralingam, and D. Burger, Proceedings of the 38th Annual International Symposium on Computer Architecture, ser. ISCA '11, 2011, pp. 365-376.
Conservation cores: Reducing the energy of mature computations. G Venkatesh, J Sampson, N Goulding, S Garcia, V Bryksin, J Lugo-Martinez, S Swanson, M B Taylor, Proceedings of the Fifteenth Edition of ASPLOS on Architectural Support for Programming Languages and Operating Systems, ser. ASPLOS XV. the Fifteenth Edition of ASPLOS on Architectural Support for Programming Languages and Operating Systems, ser. ASPLOS XVG. Venkatesh, J. Sampson, N. Goulding, S. Garcia, V. Bryksin, J. Lugo- Martinez, S. Swanson, and M. B. Taylor, "Conservation cores: Reducing the energy of mature computations," in Proceedings of the Fifteenth Edi- tion of ASPLOS on Architectural Support for Programming Languages and Operating Systems, ser. ASPLOS XV, 2010, pp. 205-218.
Fpga-accelerated 3d reconstruction using compressive sensing. J Chen, J Cong, M Yan, Y Zou, Proceedings of the ACM/SIGDA International Symposium on Field Programmable Gate Arrays, ser. FPGA '12. the ACM/SIGDA International Symposium on Field Programmable Gate Arrays, ser. FPGA '12J. Chen, J. Cong, M. Yan, and Y. Zou, "Fpga-accelerated 3d reconstruc- tion using compressive sensing," in Proceedings of the ACM/SIGDA International Symposium on Field Programmable Gate Arrays, ser. FPGA '12, 2012, pp. 163-166.
Analysis and architecture design of an hdtv720p 30 frames/s h.264/avc encoder. L.-G Chen, Chen, IEEE Trans. Cir. and Sys. for Video Technol. 166Chen, and L.-G. Chen, "Analysis and architecture design of an hdtv720p 30 frames/s h.264/avc encoder," IEEE Trans. Cir. and Sys. for Video Technol., vol. 16, no. 6, pp. 673-688, Sep. 2006.
Linqits: Big data on little clients. E S Chung, J D Davis, J Lee, Proceedings of the 40th Annual International Symposium on Computer Architecture, ser. ISCA '13. the 40th Annual International Symposium on Computer Architecture, ser. ISCA '13E. S. Chung, J. D. Davis, and J. Lee, "Linqits: Big data on little clients," in Proceedings of the 40th Annual International Symposium on Computer Architecture, ser. ISCA '13, 2013, pp. 261-272.
Dyser: Unifying functionality and parallelism specialization for energy-efficient computing. V Govindaraju, C.-H Ho, T Nowatzki, J Chhugani, N Satish, K Sankaralingam, C Kim, IEEE Micro. 325V. Govindaraju, C.-H. Ho, T. Nowatzki, J. Chhugani, N. Satish, K. Sankaralingam, and C. Kim, "Dyser: Unifying functionality and parallelism specialization for energy-efficient computing," IEEE Micro, vol. 32, no. 5, pp. 38-51, Sep. 2012.
Understanding sources of inefficiency in general-purpose chips. R Hameed, W Qadeer, M Wachs, O Azizi, A Solomatnikov, B C Lee, S Richardson, C Kozyrakis, M Horowitz, Proceedings of the 37th Annual International Symposium on Computer Architecture, ser. ISCA '10. the 37th Annual International Symposium on Computer Architecture, ser. ISCA '10R. Hameed, W. Qadeer, M. Wachs, O. Azizi, A. Solomatnikov, B. C. Lee, S. Richardson, C. Kozyrakis, and M. Horowitz, "Understanding sources of inefficiency in general-purpose chips," in Proceedings of the 37th Annual International Symposium on Computer Architecture, ser. ISCA '10, 2010, pp. 37-47.
Meet the walkers: Accelerating index traversals for inmemory databases. O Kocberber, B Grot, J Picorel, B Falsafi, K Lim, P Ranganathan, Proceedings of the 46th Annual IEEE/ACM International Symposium on Microarchitecture, ser. MICRO-46. the 46th Annual IEEE/ACM International Symposium on Microarchitecture, ser. MICRO-46O. Kocberber, B. Grot, J. Picorel, B. Falsafi, K. Lim, and P. Ran- ganathan, "Meet the walkers: Accelerating index traversals for in- memory databases," in Proceedings of the 46th Annual IEEE/ACM International Symposium on Microarchitecture, ser. MICRO-46, 2013, pp. 468-479.
Thin servers with smart pipes: Designing soc accelerators for memcached. K Lim, D Meisner, A G Saidi, P Ranganathan, T F Wenisch, Proceedings of the 40th Annual International Symposium on Computer Architecture, ser. ISCA '13. the 40th Annual International Symposium on Computer Architecture, ser. ISCA '13K. Lim, D. Meisner, A. G. Saidi, P. Ranganathan, and T. F. Wenisch, "Thin servers with smart pipes: Designing soc accelerators for mem- cached," in Proceedings of the 40th Annual International Symposium on Computer Architecture, ser. ISCA '13, 2013, pp. 36-47.
Convolution engine: Balancing efficiency & flexibility in specialized computing. W. Qadeer, R. Hameed, O. Shacham, P. Venkatesan, C. Kozyrakis, and M. A. Horowitz, Proceedings of the 40th Annual International Symposium on Computer Architecture, ser. ISCA '13, 2013, pp. 24-35.
Sonic millip3de: A massively parallel 3d-stacked accelerator for 3d ultrasound. R Sampson, M Yang, S Wei, C Chakrabarti, T F Wenisch, Proceedings of the 2013 IEEE 19th International Symposium on High Performance Computer Architecture (HPCA), ser. HPCA '13. the 2013 IEEE 19th International Symposium on High Performance Computer Architecture (HPCA), ser. HPCA '13R. Sampson, M. Yang, S. Wei, C. Chakrabarti, and T. F. Wenisch, "Sonic millip3de: A massively parallel 3d-stacked accelerator for 3d ultrasound," in Proceedings of the 2013 IEEE 19th International Sympo- sium on High Performance Computer Architecture (HPCA), ser. HPCA '13, 2013, pp. 318-329.
Accelerator-rich architectures: Opportunities and progresses. J Cong, M A Ghodrat, M Gill, B Grigorian, K Gururaj, G , Proceedings of the 51st Annual Design Automation Conference, ser. DAC '14. the 51st Annual Design Automation Conference, ser. DAC '14New York, NY, USAACM180ReinmanJ. Cong, M. A. Ghodrat, M. Gill, B. Grigorian, K. Gururaj, and G. Rein- man, "Accelerator-rich architectures: Opportunities and progresses," in Proceedings of the 51st Annual Design Automation Conference, ser. DAC '14. New York, NY, USA: ACM, 2014, pp. 180:1-180:6.
Architecture support for accelerator-rich cmps. J Cong, M A Ghodrat, M Gill, B Grigorian, G Reinman, Proceedings of the 49th Annual Design Automation Conference, ser. DAC '12. the 49th Annual Design Automation Conference, ser. DAC '12J. Cong, M. A. Ghodrat, M. Gill, B. Grigorian, and G. Reinman, "Architecture support for accelerator-rich cmps," in Proceedings of the 49th Annual Design Automation Conference, ser. DAC '12, 2012, pp. 843-849.
The accelerator store: A shared memory framework for accelerator-based systems. M J Lyons, M Hempstead, G.-Y. Wei, D Brooks, ACM Trans. Archit. Code Optim. 84M. J. Lyons, M. Hempstead, G.-Y. Wei, and D. Brooks, "The accelerator store: A shared memory framework for accelerator-based systems," ACM Trans. Archit. Code Optim., vol. 8, no. 4, pp. 48:1-48:22, Jan. 2012.
Introduction to the wire-speed processor and architecture. H Franke, J Xenidis, C Basso, B M Bass, S S Woodward, J D Brown, C L Johnson, IBM J. Res. Dev. 541H. Franke, J. Xenidis, C. Basso, B. M. Bass, S. S. Woodward, J. D. Brown, and C. L. Johnson, "Introduction to the wire-speed processor and architecture," IBM J. Res. Dev., vol. 54, no. 1, pp. 27-37, Jan. 2010.
Haswell: A family of ia 22nm processors. N Kurd, M Chowdhury, E Burton, T Thomas, C Mozak, B Boswell, M Lal, A Deval, J Douglas, M Elassal, A Nalamalpu, T Wilson, M Merten, S Chennupaty, W Gomes, R Kumar, Solid-State Circuits Conference Digest of Technical Papers (ISSCC). 14N. Kurd, M. Chowdhury, E. Burton, T. Thomas, C. Mozak, B. Boswell, M. Lal, A. Deval, J. Douglas, M. Elassal, A. Nalamalpu, T. Wilson, M. Merten, S. Chennupaty, W. Gomes, and R. Kumar, "Haswell: A family of ia 22nm processors," in Solid-State Circuits Conference Digest of Technical Papers (ISSCC), 2014 IEEE International, ser. ISSCC '14, Feb 2014, pp. 112-113.
Parade: A cycle-accurate full-system simulation platform for accelerator-rich architectural design and exploration. J Cong, Z Fang, M Gill, G Reinman, IEEE/ACM International Conference on Computer-Aided Design (ICCAD 2015). J. Cong, Z. Fang, M. Gill, and G. Reinman, "Parade: A cycle-accurate full-system simulation platform for accelerator-rich architectural de- sign and exploration," 2015 IEEE/ACM International Conference on Computer-Aided Design (ICCAD 2015), pp. 380-387, Nov. 2015.
Bin: A buffer-in-nuca scheme for accelerator-rich cmps. J Cong, M A Ghodrat, M Gill, C Liu, G Reinman, Proceedings of the 2012 ACM/IEEE International Symposium on Low Power Electronics and Design, ser. ISLPED '12. the 2012 ACM/IEEE International Symposium on Low Power Electronics and Design, ser. ISLPED '12J. Cong, M. A. Ghodrat, M. Gill, C. Liu, and G. Reinman, "Bin: A buffer-in-nuca scheme for accelerator-rich cmps," in Proceedings of the 2012 ACM/IEEE International Symposium on Low Power Electronics and Design, ser. ISLPED '12, 2012, pp. 225-230.
Design, integration and implementation of the dyser hardware accelerator into opensparc. J Benson, R Cofell, C Frericks, C.-H Ho, V Govindaraju, T Nowatzki, K Sankaralingam, Proceedings of the 2012 IEEE 18th International Symposium on High-Performance Computer Architecture, ser. HPCA '12. the 2012 IEEE 18th International Symposium on High-Performance Computer Architecture, ser. HPCA '12Washington, DC, USAIEEE Computer SocietyJ. Benson, R. Cofell, C. Frericks, C.-H. Ho, V. Govindaraju, T. Nowatzki, and K. Sankaralingam, "Design, integration and implementation of the dyser hardware accelerator into opensparc," in Proceedings of the 2012 IEEE 18th International Symposium on High-Performance Computer Architecture, ser. HPCA '12. Washington, DC, USA: IEEE Computer Society, 2012, pp. 1-12.
Accelerator-rich cmps: From concept to real hardware. Y.-T Chen, J Cong, M Ghodrat, M Huang, C Liu, B Xiao, Y Zou, IEEE 31st International Conference on, ser. ICCD '13. Computer Design (ICCD)Y.-T. Chen, J. Cong, M. Ghodrat, M. Huang, C. Liu, B. Xiao, and Y. Zou, "Accelerator-rich cmps: From concept to real hardware," in Computer Design (ICCD), 2013 IEEE 31st International Conference on, ser. ICCD '13, Oct 2013, pp. 169-176.
Automating the design of processor/accelerator embedded systems with legup high-level synthesis. B Fort, A Canis, J Choi, N Calagar, R Lian, S Hadjis, Y T Chen, M Hall, B Syrowik, T Czajkowski, S Brown, J Anderson, Embedded and Ubiquitous Computing (EUC). 12th IEEE International Conference onB. Fort, A. Canis, J. Choi, N. Calagar, R. Lian, S. Hadjis, Y. T. Chen, M. Hall, B. Syrowik, T. Czajkowski, S. Brown, and J. Anderson, "Automating the design of processor/accelerator embedded systems with legup high-level synthesis," in Embedded and Ubiquitous Computing (EUC), 2014 12th IEEE International Conference on, Aug 2014, pp. 120-129.
Legup: High-level synthesis for fpga-based processor/accelerator systems. A Canis, J Choi, M Aldham, V Zhang, A Kammoona, J H Anderson, S Brown, T Czajkowski, Proceedings of the 19th ACM/SIGDA International Symposium on Field Programmable Gate Arrays, ser. FPGA '11. the 19th ACM/SIGDA International Symposium on Field Programmable Gate Arrays, ser. FPGA '11A. Canis, J. Choi, M. Aldham, V. Zhang, A. Kammoona, J. H. An- derson, S. Brown, and T. Czajkowski, "Legup: High-level synthesis for fpga-based processor/accelerator systems," in Proceedings of the 19th ACM/SIGDA International Symposium on Field Programmable Gate Arrays, ser. FPGA '11, 2011, pp. 33-36.
From software to accelerators with legup high-level synthesis. A Canis, J Choi, B Fort, R Lian, Q Huang, N Calagar, M Gort, J J Qin, M Aldham, T Czajkowski, S Brown, J Anderson, Compilers, Architecture and Synthesis for Embedded Systems (CASES), 2013 International Conference on, ser. CASES '13. A. Canis, J. Choi, B. Fort, R. Lian, Q. Huang, N. Calagar, M. Gort, J. J. Qin, M. Aldham, T. Czajkowski, S. Brown, and J. Anderson, "From software to accelerators with legup high-level synthesis," in Compilers, Architecture and Synthesis for Embedded Systems (CASES), 2013 International Conference on, ser. CASES '13, Sept 2013, pp. 1-9.
Impact of fpga architecture on resource sharing in high-level synthesis. S Hadjis, A Canis, J H Anderson, J Choi, K Nam, S Brown, T Czajkowski, Proceedings of the ACM/SIGDA International Symposium on Field Programmable Gate Arrays, ser. FPGA '12. the ACM/SIGDA International Symposium on Field Programmable Gate Arrays, ser. FPGA '12S. Hadjis, A. Canis, J. H. Anderson, J. Choi, K. Nam, S. Brown, and T. Czajkowski, "Impact of fpga architecture on resource sharing in high-level synthesis," in Proceedings of the ACM/SIGDA International Symposium on Field Programmable Gate Arrays, ser. FPGA '12, 2012, pp. 111-114.
Coram: An in-fabric memory architecture for fpga-based computing. E S Chung, J C Hoe, K Mai, Proceedings of the 19th ACM/SIGDA International Symposium on Field Programmable Gate Arrays, ser. FPGA '11. the 19th ACM/SIGDA International Symposium on Field Programmable Gate Arrays, ser. FPGA '11New York, NY, USAACME. S. Chung, J. C. Hoe, and K. Mai, "Coram: An in-fabric memory architecture for fpga-based computing," in Proceedings of the 19th ACM/SIGDA International Symposium on Field Programmable Gate Arrays, ser. FPGA '11. New York, NY, USA: ACM, 2011, pp. 97-106.
Prototype and evaluation of the coram memory architecture for fpgabased computing. E S Chung, M K Papamichael, G Weisz, J C Hoe, K Mai, Proceedings of the ACM/SIGDA International Symposium on Field Programmable Gate Arrays, ser. FPGA '12. the ACM/SIGDA International Symposium on Field Programmable Gate Arrays, ser. FPGA '12New York, NY, USAACME. S. Chung, M. K. Papamichael, G. Weisz, J. C. Hoe, and K. Mai, "Prototype and evaluation of the coram memory architecture for fpga- based computing," in Proceedings of the ACM/SIGDA International Symposium on Field Programmable Gate Arrays, ser. FPGA '12. New York, NY, USA: ACM, 2012, pp. 139-142.
Aracompiler: a prototyping flow and evaluation framework for accelerator-rich architectures. Y.-T Chen, J Cong, B Xiao, IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS 2015). Y.-T. Chen, J. Cong, and B. Xiao, "Aracompiler: a prototyping flow and evaluation framework for accelerator-rich architectures," in IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS 2015),, March 2015, pp. 157-158.
Araprototyper: Enabling rapid prototyping and evaluation for accelerator-rich architecture (abstact only). Y.-T Chen, J Cong, Z Fang, P Zhou, Proceedings of the 2016 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, ser. FPGA '16. the 2016 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, ser. FPGA '16New York, NY, USAACMY.-T. Chen, J. Cong, Z. Fang, and P. Zhou, "Araprototyper: Enabling rapid prototyping and evaluation for accelerator-rich architecture (abstact only)," in Proceedings of the 2016 ACM/SIGDA International Sympo- sium on Field-Programmable Gate Arrays, ser. FPGA '16. New York, NY, USA: ACM, 2016, pp. 281-281.
Interconnect synthesis of heterogeneous accelerators in a shared memory architecture. Y.-T Chen, J Cong, Low Power Electronics and Design. Y.-T. Chen and J. Cong, "Interconnect synthesis of heterogeneous accelerators in a shared memory architecture," in Low Power Electronics and Design (ISLPED), 2015 IEEE/ACM International Symposium on, July 2015, pp. 359-364.
Xilinx Zynq Platform. Xilinx. [Online]. Available: http://www.xilinx.com/products/silicon-devices/soc/zynq-7000
Xilinx Ultrascale Zynq. Xilinx. [Online].
Platform characterization for domain-specific computing. A Bui, K.-T Cheng, J Cong, L Vese, Y.-C Wang, B Yuan, Y Zou, Design Automation Conference (ASP-DAC), 2012 17th Asia and South Pacific. A. Bui, K.-T. Cheng, J. Cong, L. Vese, Y.-C. Wang, B. Yuan, and Y. Zou, "Platform characterization for domain-specific computing," in Design Automation Conference (ASP-DAC), 2012 17th Asia and South Pacific, Jan 2012, pp. 94-99.
MachSuite: Benchmarks for Accelerator Design and Customized Architectures. B Reagen, R Adolf, Y S Shao, G.-Y. Wei, D Brooks, IEEE International Symposium on Workload Characterization (IISWC). B. Reagen, R. Adolf, Y. S. Shao, G.-Y. Wei, and D. Brooks, "MachSuite: Benchmarks for Accelerator Design and Customized Architectures," in IEEE International Symposium on Workload Characterization (IISWC), 2014, pp. 110-119.
Aladdin: A pre-rtl, power-performance accelerator simulator enabling large design space exploration of customized architectures. Y S Shao, B Reagen, G.-Y. Wei, D Brooks, Proceeding of the 41st Annual International Symposium on Computer Architecuture, ser. ISCA '14. eeding of the 41st Annual International Symposium on Computer Architecuture, ser. ISCA '14Y. S. Shao, B. Reagen, G.-Y. Wei, and D. Brooks, "Aladdin: A pre-rtl, power-performance accelerator simulator enabling large design space exploration of customized architectures," in Proceeding of the 41st Annual International Symposium on Computer Architecuture, ser. ISCA '14, 2014, pp. 97-108.
A fully pipelined and dynamically composable architecture of cgra. J Cong, H Huang, C Ma, B Xiao, P Zhou, Field-Programmable Custom Computing Machines (FCCM), 2014 IEEE 22nd Annual International Symposium on. IEEEJ. Cong, H. Huang, C. Ma, B. Xiao, and P. Zhou, "A fully pipelined and dynamically composable architecture of cgra," in Field-Programmable Custom Computing Machines (FCCM), 2014 IEEE 22nd Annual Inter- national Symposium on. IEEE, 2014, pp. 9-16.
The gem5 simulator. N Binkert, B Beckmann, G Black, S K Reinhardt, A Saidi, A Basu, J Hestness, D R Hower, T Krishna, S Sardashti, R Sen, K Sewell, M Shoaib, N Vaish, M D Hill, D A Wood, SIGARCH Comput. Archit. News. 392N. Binkert, B. Beckmann, G. Black, S. K. Reinhardt, A. Saidi, A. Basu, J. Hestness, D. R. Hower, T. Krishna, S. Sardashti, R. Sen, K. Sewell, M. Shoaib, N. Vaish, M. D. Hill, and D. A. Wood, "The gem5 simulator," SIGARCH Comput. Archit. News, vol. 39, no. 2, pp. 1-7, Aug. 2011.
SDSoC Development Environment. Xilinx, Xilinx, "SDSoC Development Environment." [Online]. Available: http://www.xilinx.com/products/design-tools/software-zone/sdsoc.html
Optimization of interconnects between accelerators and shared memories in dark silicon. J Cong, B Xiao, Proceedings of the International Conference on Computer-Aided Design, ser. ICCAD '13. the International Conference on Computer-Aided Design, ser. ICCAD '13J. Cong and B. Xiao, "Optimization of interconnects between accel- erators and shared memories in dark silicon," in Proceedings of the International Conference on Computer-Aided Design, ser. ICCAD '13, 2013, pp. 630-637.
A system profiler for linux. Oprofile, OProfile, "A system profiler for linux." [Online]. Available: http://oprofile.sourceforge.net/about/
High-level synthesis for fpgas: From prototyping to deployment. J Cong, B Liu, S Neuendorffer, J Noguera, K Vissers, Z Zhang, IEEE Transactions on. 304Computer-Aided Design of Integrated Circuits and SystemsJ. Cong, B. Liu, S. Neuendorffer, J. Noguera, K. Vissers, and Z. Zhang, "High-level synthesis for fpgas: From prototyping to deployment," Computer-Aided Design of Integrated Circuits and Systems, IEEE Trans- actions on, vol. 30, no. 4, pp. 473-491, April 2011.
Vivado High Level Synthesis. Xilinx, Xilinx, "Vivado High Level Synthesis." [Online]. Available: http://www.xilinx.com/products/design-tools/vivado/integration/esl- design/index.htm
Measuring the gap between fpgas and asics. I Kuon, J Rose, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems. 262I. Kuon and J. Rose, "Measuring the gap between fpgas and asics," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 26, no. 2, pp. 203-215, Feb 2007.
An optimal microarchitecture for stencil computation acceleration based on non-uniform partitioning of data reuse buffers. J Cong, P Li, B Xiao, P Zhang, Proceedings of the 51st Annual Design Automation Conference, ser. DAC '14. the 51st Annual Design Automation Conference, ser. DAC '14New York, NY, USAACM77J. Cong, P. Li, B. Xiao, and P. Zhang, "An optimal microarchitecture for stencil computation acceleration based on non-uniform partitioning of data reuse buffers," in Proceedings of the 51st Annual Design Automation Conference, ser. DAC '14. New York, NY, USA: ACM, 2014, pp. 77:1-77:6.
|
[] |
[
"Finite-momentum Bose-Einstein condensates in shaken 2D square optical lattices",
"Finite-momentum Bose-Einstein condensates in shaken 2D square optical lattices"
] |
[
"M Di Liberto \nInstitute for Theoretical Physics\nUtrecht University\nLeuvenlaan 43584CEUtrechtthe Netherlands\n\nScuola Superiore di Catania\nUniversità di Catania\nVia Valdisavoia 9I-95123CataniaItaly\n",
"O Tieleman \nInstitute for Theoretical Physics\nUtrecht University\nLeuvenlaan 43584CEUtrechtthe Netherlands\n",
"V Branchina \nDepartment of Physics\nUniversity of Catania\n\n\nINFN\nSezione di Catania\nVia Santa Sofia 64I-95123CataniaItaly\n",
"C Morais \nInstitute for Theoretical Physics\nUtrecht University\nLeuvenlaan 43584CEUtrechtthe Netherlands\n",
"Smith "
] |
[
"Institute for Theoretical Physics\nUtrecht University\nLeuvenlaan 43584CEUtrechtthe Netherlands",
"Scuola Superiore di Catania\nUniversità di Catania\nVia Valdisavoia 9I-95123CataniaItaly",
"Institute for Theoretical Physics\nUtrecht University\nLeuvenlaan 43584CEUtrechtthe Netherlands",
"Department of Physics\nUniversity of Catania\n",
"INFN\nSezione di Catania\nVia Santa Sofia 64I-95123CataniaItaly",
"Institute for Theoretical Physics\nUtrecht University\nLeuvenlaan 43584CEUtrechtthe Netherlands"
] |
[] |
We consider ultracold bosons in a 2D square optical lattice described by the Bose-Hubbard model. In addition, an external time-dependent sinusoidal force is applied to the system, which shakes the lattice along one of the diagonals. The effect of the shaking is to renormalize the nearest-neighbor hopping coefficients, which can be arbitrarily reduced, can vanish, or can even change sign, depending on the shaking parameter. It is therefore necessary to account for higher-order hopping terms, which are renormalized differently by the shaking, and introduce anisotropy into the problem. We show that the competition between these different hopping terms leads to finite-momentum condensates, with a momentum that may be tuned via the strength of the shaking. We calculate the boundaries between the Mott-insulator and the different superfluid phases, and present the time-of-flight images expected to be observed experimentally. Our results open up new possibilities for the realization of bosonic analogs of the FFLO phase describing inhomogeneous superconductivity.
|
10.1103/physreva.84.013607
|
[
"https://arxiv.org/pdf/1104.4350v1.pdf"
] | 118,268,255 |
1104.4350
|
e4cc7b302d59adae9bc249959980a96742f9cc34
|
Finite-momentum Bose-Einstein condensates in shaken 2D square optical lattices
M Di Liberto
Institute for Theoretical Physics
Utrecht University
Leuvenlaan 43584CEUtrechtthe Netherlands
Scuola Superiore di Catania
Università di Catania
Via Valdisavoia 9I-95123CataniaItaly
O Tieleman
Institute for Theoretical Physics
Utrecht University
Leuvenlaan 43584CEUtrechtthe Netherlands
V Branchina
Department of Physics
University of Catania
INFN
Sezione di Catania
Via Santa Sofia 64I-95123CataniaItaly
C Morais
Institute for Theoretical Physics
Utrecht University
Leuvenlaan 43584CEUtrechtthe Netherlands
Smith
Finite-momentum Bose-Einstein condensates in shaken 2D square optical lattices
(Dated: January 20, 2013)
PACS numbers: 03.75.-b, 03.75.Lm, 67.85.-d, 67.85.Hj
We consider ultracold bosons in a 2D square optical lattice described by the Bose-Hubbard model. In addition, an external time-dependent sinusoidal force is applied to the system, which shakes the lattice along one of the diagonals. The effect of the shaking is to renormalize the nearest-neighbor hopping coefficients, which can be arbitrarily reduced, can vanish, or can even change sign, depending on the shaking parameter. It is therefore necessary to account for higher-order hopping terms, which are renormalized differently by the shaking, and introduce anisotropy into the problem. We show that the competition between these different hopping terms leads to finite-momentum condensates, with a momentum that may be tuned via the strength of the shaking. We calculate the boundaries between the Mott-insulator and the different superfluid phases, and present the time-of-flight images expected to be observed experimentally. Our results open up new possibilities for the realization of bosonic analogs of the FFLO phase describing inhomogeneous superconductivity.
I. INTRODUCTION
Ultracold atoms in optical lattices are ideal systems to simulate the quantum behavior of condensed matter because the lattice geometry, the type of atoms (bosons or fermions), and their interactions can be manipulated in a perfectly clean environment [1]. Furthermore, they provide a perfect testing ground for a wide variety of theoretical models. One of the most prominent examples is the Bose-Hubbard model, which has been studied extensively theoretically (e.g. Refs. [2,3]) and realised experimentally [4,5].
More recently, much interest has been devoted to time-dependent, periodically stirred optical lattices, which allow for engineering synthetic gauge fields into the system [6][7][8]. In the presence of a staggered rotation, Dirac cones were shown to emerge for square optical lattices, thus simulating the behavior of anisotropic graphene when the system is loaded with fermions [9]. If instead a mixture of fermions and bosons is used, several properties of high-Tc superconductors can be reproduced [10,11]. Loading the same lattice with dipolar bosons leads to a supersolid phase with vortices [12].
Besides the interesting features that arise in the presence of rotation, a full range of new possibilities was shown to emerge by shaking the optical lattice. If the shaking frequency is much larger than the other characteristic energy scales in the problem, the parameters of the Hamiltonian are renormalized. This provides another tool to control the lattice parameters, and even enables the simulation of otherwise experimentally inaccessible lattice models [13][14][15]. In the Bose-Hubbard model, for instance, the superfluid-Mott-insulator transition has been driven by ramping the shaking perturbation and thus tuning the effective hopping parameter to zero [16]. Another fascinating experiment has revealed that magnetically frustrated systems can be realized with spinless bosons by applying elliptical shaking to a triangular lattice [17].
Here, we consider a 2D square lattice shaken along one diagonal and investigate the effect of next-nearest-neighbor (nnn) and next-next-nearest-neighbor (nnnn) hopping on the behavior of a bosonic system. In an effective description, the shaking perturbation leads to a renormalization of the nearest-neighbor (nn) hopping parameter, which can vanish or even become negative [13,15]. When this parameter is tuned to be very small, higher-order hopping terms, which are usually negligible, may become relevant and must therefore be included in the model. Although the nnn hopping coefficients are strictly zero in 2D optical lattices where the x- and y-directions are independent (separable potential), they are relevant for non-separable optical lattices. In this paper, we show that a tunable finite-momentum condensate can be realized in a certain range of parameters for a realistic and simple setup, thus bringing us a step further in the realization and control of finite-momentum Bose-Einstein condensates (BECs).
Finite-momentum condensates have recently attracted a great deal of attention. In the original proposals by Fulde, Ferrell, Larkin, and Ovchinnikov (FFLO), it was argued that finite-momentum Cooper pairs would lead to inhomogeneous superconductivity, with the superconducting order parameter varying spatially (the so-called FFLO phase) [18]. Early NMR experiments at high magnetic fields and low temperatures in the heavy-fermion compound CeCoIn5 have shown indications of an FFLO phase [19], although recent data suggest the existence of a more complex phase, where the exotic FFLO superconductivity coexists with an incommensurate spin-density wave [20]. For ultracold fermions with spin imbalance, on the other hand, the observation of the FFLO phase has recently been reported in 1D [21].
Earlier theoretical studies of a square-lattice toy model for a scalar field, which took into account non-trivial hopping beyond nearest neighbors, have shown that quantum phases may be generated in which the order parameter is modulated in space [22]. Finite-momentum condensates were also experimentally detected for bosons in more complex lattice geometries, such as the triangular lattice under elliptical shaking [17], or for more complex interactions, e.g. for spinor bosons in a trap in the presence of Zeeman and spin-orbit interactions [23]. With regard to bosons in a square lattice, it was recently shown that a staggered gauge field may lead to finite-momentum condensates [24]. In this case, the bosons condense either at zero momentum or in the corner of the Brillouin zone, and a first-order phase transition occurs between these two phases [24]. Here, we propose that finite-momentum condensates can be realized for bosons in a shaken square lattice and that we may tune the momentum of the condensate smoothly from 0 to (π, π) by varying the shaking parameter K0. To the best of our knowledge, this is the first time that such an effect has been predicted for optical lattices as originating solely from beyond-nearest-neighbor hopping terms. The interaction simply shifts the ground-state energy by a constant and does not change the condensation momentum.
In the following, we consider in Sec. II an extended Bose-Hubbard model which includes higher-order hopping coefficients for a non-separable 2D square optical lattice. After introducing a sinusoidal shaking force to the system, we show in Sec. III how the finite-momentum condensate arises and how the condensation momentum depends on the shaking. In Sec. IV we present a 3D phase diagram, with the Hubbard interaction U, the chemical potential µ, and the shaking parameter K0 as parameters, and indicate the parameters required for the realization of the tunable regime. Finally, we calculate the expected outcome of time-of-flight experiments in Sec. V and present our conclusions in Sec. VI.
II. THE MODEL
Before discussing the generic 2D problem, let us recall the behavior of 1D lattices and 2D separable lattices. A simple calculation shows that in 1D optical lattices of the form V (x) = (V 0 /2) cos(2kx) (V 0 is the potential depth andk = 2π/λ is the wave vector of the laser beam), nnn hopping coefficients do not change the position of the global minima in the single-particle spectrum but generate metastable states. 2D separable potentials do not introduce new physics from this point of view. The simplest non-separable potential in 2D is given by [25]
V(\mathbf{r}) = -V_0 \left\{ \sin^2[k(x+y)] + \sin^2[k(x-y)] + 2\alpha\, \sin[k(x+y)]\,\sin[k(x-y)] \right\}, \qquad (1)
where k = (√2/2) · 2π/λ (the factor √2/2 comes from a coordinate transformation corresponding to a rotation of the lattice by π/4), and we make the choice α = 1 in the remainder of this work. Had we chosen α = 0, the potential would have been separable, whereas for 0 < α < 1 the potential would correspond to a superlattice, with neighboring wells of different depths. As shown e.g. in Ref. [1], we can calculate the hopping coefficients from the exact band structure
E_n(\mathbf{q}) = \sum_{\mathbf{R}} t_n(\mathbf{R})\, e^{i\,\mathbf{q}\cdot\mathbf{R}}, \qquad (2)
where n is the band index, q is the quasimomentum, and R is a lattice vector. In this notation, t n (R) is the hopping coefficient between two sites separated by the lattice vector R in the n-th energy band. The non-separable optical potential generates hopping coefficients along directions other than those of the elementary lattice vectors of the lattice which were exactly zero for separable potentials. A lattice vector has the form R = mae x + nae y , where a = λ/ √ 2 is the lattice spacing, m and n are integers, and e x and e y are unit vectors in the x-and ydirections; R is indicated in short notation as (m, n). For the non-separable potential that we have introduced, we find non zero hopping terms also for pairs of sites identified by (1,1) or (2,1), which vanish for separable lattices. Table I shows the most relevant lowest-band hopping coefficients for shallow lattices. Higher-order hopping coefficients are neglected because they are at least ten times smaller than t and therefore not important, as will become clear afterwards. We will assume that the lowest-orbital Wannier functions are still even and real for this non-separable potential. As shown by Kohn [26], this can be proven for separable potentials; for non-separable ones it is also a reasonable conjecture, supported by numerical simulations, as shown in Ref. [27]. If we apply a driving sinusoidal force like the one studied in Ref. [13], but now along one of the diagonals, the shaking term in the co-moving reference frame that has to be added to the Hamiltonian reads
W(\tau) = K \cos(\omega\tau) \sum_{i,j} (i+j)\, n_{ij}, \qquad (3)
where ω is the shaking frequency, τ is the real time, and n_ij is the density operator at site (i, j). Following the approach discussed in Refs. [13,15], the non-interacting effective Hamiltonian for the quasienergy spectrum in the high-frequency limit ℏω ≫ U, t (and thus ℏω ≫ t′, t″) is
H^0_{\rm eff} = -t\, J_0(K_0) \sum_{\mathbf{r},\,\nu=x,y} a^\dagger_{\mathbf{r}}\, a_{\mathbf{r}\pm\mathbf{e}_\nu} + t'\, J_0(2K_0) \sum_{\mathbf{r}} a^\dagger_{\mathbf{r}}\, a_{\mathbf{r}\pm(\mathbf{e}_x+\mathbf{e}_y)} + t' \sum_{\mathbf{r}} a^\dagger_{\mathbf{r}}\, a_{\mathbf{r}\pm(\mathbf{e}_x-\mathbf{e}_y)} + t''\, J_0(2K_0) \sum_{\mathbf{r},\,\nu=x,y} a^\dagger_{\mathbf{r}}\, a_{\mathbf{r}\pm 2\mathbf{e}_\nu}, \qquad (4)
where the shaking parameter is K_0 = K/ℏω. The Bessel function J_0(x) has a node at x ≈ 2.4048; hence, when the nn hopping coefficient t_eff = t J_0(K_0) is negligible, the higher-order ones are not. Note that the hopping coefficient along the diagonal perpendicular to the shaking direction is not affected by the shaking.
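To make this Bessel-function renormalization concrete, the short numerical sketch below evaluates the effective hopping amplitudes of Eq. (4) as a function of K_0 and locates the node of J_0 at which the nn hopping is switched off. The bare hopping values are read off Table I at V_0 = 3 E_r purely for illustration; the script is an aid to the reader and not part of the original analysis.

```python
# Illustrative sketch (not from the paper): Bessel-function renormalization of the
# hopping amplitudes in Eq. (4).  Bare values t, t', t'' are taken from Table I at V0 = 3 Er.
import numpy as np
from scipy.special import j0
from scipy.optimize import brentq

t, tp, tpp = 1.06e-3, -5.89e-6, 1.06e-6   # units of Er; Table I quotes -t = -1.06e-3

K0 = np.linspace(0.0, 3.0, 301)
t_eff   = t   * j0(K0)       # nearest-neighbor hopping, suppressed by J0(K0)
tp_eff  = tp  * j0(2 * K0)   # (1,1) hopping along the shaken diagonal
tpp_eff = tpp * j0(2 * K0)   # (2,0) hopping; the perpendicular diagonal keeps the bare t'

K0_node = brentq(j0, 2.0, 3.0)   # node of J0: the nn hopping vanishes here
print(f"J0 node at K0 = {K0_node:.4f}")
print(f"|t_eff / t'| at the node: {abs(t * j0(K0_node) / tp):.2e}")
```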
III. TUNABLE FINITE-MOMENTUM CONDENSATE
The effective Hamiltonian is diagonal in reciprocal space and the single-particle spectrum reads
E_{\mathbf{k}} = -2t\, J_0(K_0)\,[\cos(k_x) + \cos(k_y)] + 2t'\, J_0(2K_0)\,\cos(k_x + k_y) + 2t'\,\cos(k_x - k_y) + 2t''\, J_0(2K_0)\,[\cos(2k_x) + \cos(2k_y)], \qquad (5)
where k ν = k · e ν and we have set the lattice constant to unity. The spectrum has an absolute minimum at the center of the Brillouin zone (k = 0) when K 0 < 2.4048−δ and at the four corners of the Brillouin zone when K 0 > 2.4048 + δ. In the interval 2.4048 − δ < K 0 < 2.4048 + δ, two symmetric minima develop along one diagonal of the Brillouin zone at ±k 0 , as shown in Fig. 2. We may determine the precise position of these two minima by studying the first derivative of the single-particle spectrum for k = k x = k y . The non-trivial minima are given by the solution of the equation
\cos(ka) = \frac{J_0(K_0)}{2\, J_0(2K_0)\,(t_1 + 2 t_2)} \equiv f(K_0). \qquad (6)
where t_1 = t′/t and t_2 = t″/t. We have found that for V_0 = 2E_r, 3E_r, and 4E_r, the second derivative of the single-particle spectrum shows that Eq. (6) corresponds to a true minimum, while for V_0 = 1E_r it is a maximum. The largest interval Σ = 2δ of the shaking parameter K_0 for which the non-trivial minima appear has been found to be at lattice depth V_0 = 2.2E_r, where δ = 0.0045, and hence the condensation momentum is finite for 2.4003 < K_0 < 2.4093. Since we expect the bosons to condense at the minimum of the single-particle spectrum, the condensation momentum given by Eq. (6) is a function of the shaking parameter K_0 and smoothly evolves from k = 0 at the left edge of Σ to k = (π, π) at the right edge of Σ, see Fig. 3. The two minima in the Σ region are inequivalent because they are not connected by reciprocal lattice vectors, and we thus need to take both into account for evaluating the condensation momentum. The arccosine shape of the evolution curve can be explained by linearising Eq. (6) around K_0 = 2.4048, which is a good approximation because δ ≪ 1.
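The solution of Eq. (6) is straightforward to evaluate numerically. The sketch below does so for hopping ratios obtained from Table I at V_0 = 3 E_r; this choice of depth, and the sample values of K_0, are assumptions made here only for illustration (the paper quotes the widest tunable window at V_0 = 2.2 E_r).

```python
# Illustrative sketch: condensation momentum from Eq. (6), cos(k a) = f(K0),
# with hopping ratios t1 = t'/t and t2 = t''/t taken from Table I at V0 = 3 Er.
import numpy as np
from scipy.special import j0

t, tp, tpp = 1.06e-3, -5.89e-6, 1.06e-6
t1, t2 = tp / t, tpp / t

def condensation_momentum(K0, a=1.0):
    f = j0(K0) / (2.0 * j0(2.0 * K0) * (t1 + 2.0 * t2))
    if f >= 1.0:        # minimum still at the zone center
        return 0.0
    if f <= -1.0:       # minimum already at the zone corner (pi, pi)
        return np.pi / a
    return np.arccos(f) / a   # non-trivial minimum along the zone diagonal

for K0 in (2.402, 2.4048, 2.408):
    print(f"K0 = {K0:.4f}  ->  k0*a = {condensation_momentum(K0):.3f}")
```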
The non-interacting ground state of the tunable-momentum SF phase with momenta ±k 0 is
|G\rangle = \sum_{n=0}^{N} \frac{c_n}{\sqrt{n!\,(N-n)!}}\, (a^\dagger_{\mathbf{k}_0})^{n}\, (a^\dagger_{-\mathbf{k}_0})^{N-n}\, |0\rangle = \sum_{n=0}^{N} c_n\, |n_{\mathbf{k}_0}, (N-n)_{-\mathbf{k}_0}\rangle, \qquad (7)
where the coefficients c_n obey the normalization condition \sum_n |c_n|^2 = 1. The ground state is thus (N+1)-fold degenerate, where N is the number of particles. We stress that there is a close similarity between our system and a BEC of magnons (or triplons). In dimerized antiferromagnets, the magnons condense at a non-zero wavevector k_0 = (π/a, π/a) for applied magnetic fields H which lie between two critical values, H_c1 < H < H_c2 [28]. In addition, the magnon dispersion of a two-leg antiferromagnetic ladder with frustrated nnn couplings along the legs shows a minimum that is incommensurate with the lattice spacing [29].
IV. PHASE DIAGRAM
Let us now consider an additional term to the Hamiltonian (4), which describes the local interactions between the bosons
H_{\rm int} = \frac{U}{2} \sum_{\mathbf{r}} n_{\mathbf{r}}\,(n_{\mathbf{r}} - 1). \qquad (8)
We will treat the interactions between the atoms in a perturbative way and study their effect on the ground state degeneracy. By applying first-order perturbation theory, we find that the correction to the ground-state energy N E k0 is given by
\langle m, N-m |\, H_{\rm int}\, | n, N-n \rangle = \frac{U}{2 N_s} \left[ -2n^2 + 2nN + N(N-1) \right] \delta_{mn}, \qquad (9)
where N_s is the number of lattice sites. The matrix element is in diagonal form and the eigenvalues form an upside-down parabola in n. This means that the minima sit at the edges of the interval n ∈ [0, N] and that they are degenerate. Interactions have thus only partially removed the degeneracy; the perturbative (degenerate) ground state to zeroth order is
|G\rangle = \frac{c_+}{\sqrt{N!}}\, (a^\dagger_{\mathbf{k}_0})^{N}\, |0\rangle + \frac{c_-}{\sqrt{N!}}\, (a^\dagger_{-\mathbf{k}_0})^{N}\, |0\rangle \qquad (10)
and has energy
\langle H \rangle = \langle H_0 + H_{\rm int} \rangle = N E_{\mathbf{k}_0} + \frac{U}{2 N_s}\, N(N-1). \qquad (11)
Eq. (10) shows that the ground state is a superposition of two degenerate states in which all the particles have momentum k 0 or −k 0 . These two states are entangled and behave in a very similar way to the states found by Stanescu et al. [30] for condensates with spin-orbit coupling.
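A quick way to visualize this structure is to build the diagonal first-order correction of Eq. (9) on the (N+1)-dimensional degenerate manifold and verify that its two lowest entries sit at n = 0 and n = N. The snippet below is a minimal sketch of that check, with arbitrary illustrative values of U, N_s, and N.

```python
# Illustrative sketch: first-order interaction correction of Eq. (9) on the manifold
# |n_{k0}, (N-n)_{-k0}>, n = 0..N.  The correction is an inverted parabola in n, so the
# minima lie at the edges n = 0 and n = N, i.e. the two Fock states entering Eq. (10).
import numpy as np

def first_order_correction(N, U=1.0, Ns=100):
    n = np.arange(N + 1)
    return U / (2.0 * Ns) * (-2.0 * n**2 + 2.0 * n * N + N * (N - 1))

corr = first_order_correction(N=20)
print(np.argsort(corr)[:2])              # indices of the two lowest corrections: 0 and N
print(np.isclose(corr[0], corr[-1]))     # True: the residual degeneracy is two-fold
```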
One can generalize the approach described in Ref. [31] to calculate the MI-SF phase boundaries, taking into account higher-order hopping terms. The outcome is
\bar\mu_{\pm} = \frac{\bar U}{2}\,(2N_0 - 1) + \frac{\varepsilon_{\mathbf{k}_0}}{2} \pm \frac{1}{2}\sqrt{\varepsilon_{\mathbf{k}_0}^2 + 2\,(2N_0 + 1)\,\bar U\,\varepsilon_{\mathbf{k}_0} + \bar U^2}, \qquad (12)
where μ̄ = µ/2t, Ū = U/2t, ε_{k_0} = E_{k_0}/2t, and k_0 is the condensation momentum, which depends on the shaking parameter K_0. Plotting μ̄_± then gives the phase diagram, which is shown in Figs. 4 and 5. We note that the condensation momentum is not changed by the interactions. This can be seen e.g. by doing first-order perturbation theory calculations: in the presence of interactions, the energy per particle is shifted by an amount N U/2N_s, which is momentum independent.
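For a quick reproduction of the lobe shape, the boundaries of Eq. (12) can be evaluated directly. The sketch below uses an illustrative value ε_{k_0} = −1 (in units of 2t) rather than the dispersion minimum computed from Eq. (5), so the resulting numbers are indicative only.

```python
# Illustrative sketch: Mott-lobe boundaries of Eq. (12), all energies in units of 2t.
# eps_k0 = E_{k0}/2t < 0 at the band minimum; the value below is an assumption.
import numpy as np

def lobe_boundaries(Ubar, eps_k0, N0=1):
    rad = np.sqrt(eps_k0**2 + 2.0 * (2.0 * N0 + 1.0) * Ubar * eps_k0 + Ubar**2)
    base = 0.5 * Ubar * (2 * N0 - 1) + 0.5 * eps_k0
    return base - 0.5 * rad, base + 0.5 * rad

N0, eps_k0 = 1, -1.0
# The lobe tip (critical U/2t below which only the SF phase survives, cf. Fig. 6)
# is where the square root in Eq. (12) vanishes:
Uc = abs(eps_k0) * (2 * N0 + 1 + 2.0 * np.sqrt(N0 * (N0 + 1)))
print(f"critical U/2t for the N0 = 1 lobe: {Uc:.3f}")

Ubar = np.linspace(Uc + 1e-6, 5 * Uc, 200)
mu_lo, mu_hi = lobe_boundaries(Ubar, eps_k0, N0)   # plot vs Ubar to obtain Fig. 5-type lobes
```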
V. EXPERIMENTAL CONSIDERATIONS
The lobe with unit filling N_0 = 1 yields a critical value of Ū below which only the SF phase is allowed, see Fig. 6. Typical values of U are too large to allow us to probe the tunable-momentum SF with ordinary experimental setups. However, we can decrease U by reducing the s-wave scattering length with Feshbach resonances, which are available for both rubidium isotopes as well as for other alkali atoms. We remark that although the range of K_0 in which the condensation momentum is tunable is very small, the required precision is well within experimental control [32].
The quantum phases discussed above could be observed experimentally through the usual time-of-flight experiments. These experiments measure the momentum-space density distribution

n(\mathbf{k}) = \langle \psi^\dagger(\mathbf{k})\, \psi(\mathbf{k}) \rangle = N\, |W(\mathbf{k})|^2 \left[ |c_+|^2\, \delta_{\mathbf{k},\mathbf{k}_0} + |c_-|^2\, \delta_{\mathbf{k},-\mathbf{k}_0} \right], \qquad (13)

where W(k) is the Fourier transform of the Wannier function and we adopted the coherent-state approximation for the SF ground state. The delta functions select the positions of the peaks in the absorption image and are a clear signal of the presence of such a condensate. When the image is recorded and the first atom is measured to have one of the two momenta, the wave function collapses onto that state, showing only one peak. An array of identical 2D systems would reveal a pattern with both peaks, and we can study the effect of the non-separability of the optical potential on the Wannier functions in reciprocal space. In Fig. 7, we show a qualitative indication of the time-of-flight image described by Eq. (13). We have lumped together the effects of a hypothetical external trap and the Fourier transform of the Wannier function into a Gaussian filter, suppressing the peak heights in higher Brillouin zones. In addition, we have modeled the broadening of the peaks by replacing every peak by a highly localised Gaussian. It is instructive to compare this pattern with predictions for other systems, such as the time-of-flight images given in e.g. Refs. [9,12] for finite-momentum superfluids and supersolids. These phases have a pattern clearly different from that of Fig. 7, because the positions of the peaks differ. Hence, we can be sure that we would have an unambiguous signal of the measurement of the tunable-momentum superfluid phase from experiments.
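A toy version of the expected absorption image can be generated directly from Eq. (13). The sketch below reproduces the qualitative construction described above (Bragg peaks at ±k_0 and their reciprocal-lattice copies, a Gaussian envelope standing in for |W(k)|² and the trap, and a small Gaussian broadening of each peak); all parameter values are assumptions chosen for illustration.

```python
# Illustrative sketch of the time-of-flight pattern of Eq. (13); parameters are assumptions.
import numpy as np

k0 = np.array([2 * np.pi / 5, 2 * np.pi / 5])   # condensation momentum, as in Fig. 7
kx, ky = np.meshgrid(np.linspace(-2 * np.pi, 2 * np.pi, 400),
                     np.linspace(-2 * np.pi, 2 * np.pi, 400))

def peak(center, width=0.08):
    """Narrow Gaussian standing in for a broadened delta peak."""
    return np.exp(-((kx - center[0])**2 + (ky - center[1])**2) / (2 * width**2))

n_k = np.zeros_like(kx)
for mx in (-1, 0, 1):                 # add reciprocal-lattice copies of the +/-k0 peaks
    for my in (-1, 0, 1):
        G = 2 * np.pi * np.array([mx, my])
        n_k += 0.5 * peak(k0 + G) + 0.5 * peak(-k0 + G)   # |c+|^2 = |c-|^2 = 1/2 assumed

n_k *= np.exp(-(kx**2 + ky**2) / (2 * 2.5**2))   # Gaussian envelope ~ |W(k)|^2 and trap
# n_k can now be rendered (e.g. with matplotlib's imshow) and compared with Fig. 7.
```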
VI. DISCUSSIONS AND CONCLUSIONS
In conclusion, we have explored the possibility to generate finite-momentum condensates in optical lattices under shaking, where the suppression of hopping can be tuned by the shaking. This opens up the possibility to investigate the role of higher-order hopping. To look for nontrivial condensation points lying inside the first Brillouin zone, we have studied non-separable optical potentials in 2D square lattices. By applying the shaking along the diagonal of the lattice, we found that in a small region of the shaking parameter, where the nn tunneling is suppressed, two kinds of higher-order-hopping coefficients govern the dynamics of the condensate. In this region, we have unveiled an intermediate phase, for which the condensation point varies continuously from the center to the edge of the Brillouin zone as we tune the shaking parameter. There are two minima in the single-particle spectrum and they are symmetric with respect to the center of the Brillouin zone. In addition, we found that small interactions between the particles force the ground state to be a superposition of two possible Fock states: one where all the particles condense in one minimum and the other where all the particles condense in the second minimum.
Finally, we note that the tunable-momentum condensate can be measured experimentally if the on-site interaction is reduced significantly. This can be achieved with present state-of-the-art experimental techniques, and we hope that our results can stimulate further experiments in this direction. This work yields new insights for the realization and control of finite-momentum condensates and opens new possibilities in the search for bosonic analogs of the FFLO superconducting phase.
FIG. 1. Non-separable optical potential V(x, y) given by Eq. (1) with α = 1.
FIG. 2. Single-particle spectrum at V_0 = 3E_r for K_0 = 2.4048, and contour plot.
FIG. 3. Evolution of the minimum of the single-particle spectrum, in units of the lattice spacing a, as a function of the shaking parameter K_0 at V_0 = 3E_r (only the positive branch is considered).
FIG. 4. Phase boundaries for V_0 = 3E_r, where μ̄ ≡ µ/2t and Ū ≡ U/2t: (a) lobe with N_0 = 1; (b) lobes with N_0 = 1, 2 in the region of the tunable finite-momentum condensate.
FIG. 5. Phase boundaries between the Mott-insulator and the superfluid phase for V_0 = 3E_r at fixed K_0 and filling factor N_0 = 1: (a) K_0 = 1; (b) K_0 = 2.405; (c) K_0 = 3. Note the different scales for each plot.
FIG. 6. Critical value Ū_c = (U/2t)_c at V_0 = 3E_r as a function of the shaking parameter K_0; the inset shows the region 2.4016 ≤ K_0 ≤ 2.4081 where the tunable finite-momentum condensate is generated.
FIG. 7. Time-of-flight picture expected from experiment as a signal of the finite-momentum-condensate phase. The black square indicates the first Brillouin zone, and the condensation momentum represented in this image is k_0 = (2π/5, 2π/5).
TABLE I. Relevant hopping matrix elements (in units of the recoil energy E_r) of the lowest band for shallow lattices.

V_0/E_r    (1,0) ↔ −t         (1,1) ↔ t′         (2,0) ↔ t″
1.0        −2.45 × 10^{-2}    −8.89 × 10^{-4}    8.88 × 10^{-4}
2.0        −4.52 × 10^{-3}    −6.65 × 10^{-5}    2.27 × 10^{-5}
3.0        −1.06 × 10^{-3}    −5.89 × 10^{-6}    1.06 × 10^{-6}
4.0        −2.97 × 10^{-4}    −6.74 × 10^{-7}    7.86 × 10^{-8}
ACKNOWLEDGMENTS
It is a pleasure to acknowledge A. Hemmerich, E. Arimondo, D. Makogon, A. Eckardt, N. Goldman, and I. Spielman for fruitful discussions. This work was partially supported by the Netherlands Organization for Scientific Research (NWO) and the Scuola Superiore di Catania.
. I Bloch, J Dalibard, W Zwerger, Rev. Mod. Phys. 80885I. Bloch, J. Dalibard, and W. Zwerger, Rev. Mod. Phys. 80, 885 (2008).
. M P A Fisher, P B Weichman, G Grinstein, D S Fisher, Phys. Rev. B. 40546M. P. A. Fisher, P. B. Weichman, G. Grinstein, and D. S. Fisher, Phys. Rev. B 40, 546 (1989).
. D Jaksch, C Bruder, J I Cirac, C W Gardiner, P Zoller, Phys. Rev. Lett. 813108D. Jaksch, C. Bruder, J. I. Cirac, C. W. Gardiner, and P. Zoller, Phys. Rev. Lett. 81, 3108 (1998).
. M Greiner, O Mandel, T Esslinger, T W Hänsch, I Bloch, Nature. 41539M. Greiner, O. Mandel, T. Esslinger, T. W. Hänsch, and I. Bloch, Nature 415, 39 (2002).
. I B Spielman, W D Phillips, J V Porto, Phys. Rev. Lett. 9880404I. B. Spielman, W. D. Phillips, and J. V. Porto, Phys. Rev. Lett. 98, 080404 (2007).
. R Bhat, L D Carr, M J Holland, Phys. Rev. Lett. 9660405R. Bhat, L. D. Carr, and M. J. Holland, Phys. Rev. Lett. 96, 060405 (2005).
. R A Williams, S Al-Assam, C J Foot, Phys. Rev. Lett. 10450404R. A. Williams, S. Al-Assam, and C. J. Foot, Phys. Rev. Lett. 104, 050404 (2010).
. N Gemelke, E Sarajlic, S Chu, arXiv:1007.2677N. Gemelke, E. Sarajlic, and S. Chu, arXiv:1007.2677 .
. Lih-King Lim, A Hemmerich, C. Morais Smith, Phys. Rev. A. 8123404Lih-King Lim, A. Hemmerich, and C. Morais Smith, Phys. Rev. A 81, 023404 (2010).
. Lih-King Lim, A Lazarides, A Hemmerich, C. Morais Smith, EPL. 8836001Lih-King Lim, A. Lazarides, A. Hemmerich, and C. Morais Smith, EPL 88, 36001 (2009).
. Lih-King Lim, A Lazarides, A Hemmerich, C. Morais Smith, Phys. Rev. A. 8213616Lih-King Lim, A. Lazarides, A. Hemmerich, and C. Morais Smith, Phys. Rev. A 82, 013616 (2010).
. O Tieleman, A Lazarides, C. Morais Smith, Phys. Rev. A. 8313627O. Tieleman, A. Lazarides, and C. Morais Smith, Phys. Rev. A 83, 013627 (2011).
. A Eckardt, C Weiss, M Holthaus, Phys. Rev. Lett. 95260404A. Eckardt, C. Weiss, and M. Holthaus, Phys. Rev. Lett. 95, 260404 (2005).
. A Eckardt, P Hauke, P Soltan-Panahi, C Becker, K Sengstock, M Lewenstein, EPL. 8910010A. Eckardt, P. Hauke, P. Soltan-Panahi, C. Becker, K. Sengstock, and M. Lewenstein, EPL 89, 10010 (2010).
. A Hemmerich, Phys. Rev. A. 8163626A. Hemmerich, Phys. Rev. A 81, 063626 (2010).
. A Zenesini, H Lignier, D Ciampini, O Morsch, E Arimondo, Phys. Rev. Lett. 102100403A. Zenesini, H. Lignier, D. Ciampini, O. Morsch, and E. Arimondo, Phys. Rev. Lett 102, 100403 (2009).
. H Lignier, C Sias, D Ciampini, Y Singh, A Zenesini, O Morsch, E Arimondo, Phys. Rev. Lett. 99220403H. Lignier, C. Sias, D. Ciampini, Y. Singh, A. Zen- esini, O. Morsch, and E. Arimondo, Phys. Rev. Lett. 99, 220403 (2007).
. J Struck, C Ölschläger, R Le Targat, P Soltan-Panahi, A Eckardt, M Lewenstein, P Windpassinger, K Sengstock, arXiv:1103.5944J. Struck, C.Ölschläger, R. Le Targat, P. Soltan- Panahi, A. Eckardt, M. Lewenstein, P. Windpassinger, and K. Sengstock and, arXiv:1103.5944 .
. P Fulde, R A Ferrell, Phys. Rev. 135550P. Fulde and R. A. Ferrell, Phys. Rev. 135, A550 (1964).
. A J Larkin, Y N Ovchinnikov, JETP. 20762A. J. Larkin and Y. N. Ovchinnikov, JETP 20, 762 (1965).
. A Bianchi, R Movshovich, C Capan, P G Pagliuso, J L Sarrao, Phys. Rev. Lett. 91187004A. Bianchi, R. Movshovich, C. Capan, P. G. Pagliuso, and J. L. Sarrao, Phys. Rev. Lett. 91, 187004 (2003).
. H A Radovan, N A Fortune, T P Murphy, S T Hannahs, E C Palm, S W Tozer, D Hall, Nature. 42551H. A. Radovan, N. A. Fortune, T. P. Murphy, S. T. Han- nahs, E. C. Palm, S. W. Tozer, and D. Hall, Nature 425, 51 (2003).
. M Kenzelmann, Th, C Strässle, M Niedermayer, B Sigrist, M Padmanabhan, A D Zolliker, M. Kenzelmann, Th. Strässle, C. Niedermayer, M. Sigrist, B. Padmanabhan, M. Zolliker, A. D.
. R Bianchi, E D Movshovich, J L Bauer, J D Sarrao, Thompson, Science. 3211652Bianchi, R. Movshovich, E. D. Bauer, J. L. Sarrao, and J. D. Thompson, Science 321, 1652 (2008).
. K Kumagai, H Shishido, T Shibauchi, Y Matsuda, Phys. Rev. Lett. 106137004K. Kumagai, H. Shishido, T. Shibauchi, and Y. Matsuda, Phys. Rev. Lett. 106, 137004 (2011).
. Y Liao, A S C Rittner, T Paprotta, W Li, G B Partridge, R G Hulet, S K Baur, Nature. 467567Y. Liao, A. S. C. Rittner, T. Paprotta, W. Li, G. B. Partridge, R. G. Hulet, and S. K. Baur, Nature 467, 567 (2010).
. V Branchina, H Mohrbach, J Polonyi, Phys. Rev. D. 6045006V. Branchina, H. Mohrbach, and J. Polonyi, Phys. Rev. D 60, 045006 (1999).
. Y.-J Lin, I B Jiménez-Garcia, Spielman, Nature. 47183Y.-J. Lin, Jiménez-Garcia, and I. B. Spielman, Nature 471, 83 (2011).
. Lih-King Lim, C Morais Smith, A Hemmerich, Phys. Rev. Lett. 100130402Lih-King Lim, C. Morais Smith, and A. Hemmerich, Phys. Rev. Lett. 100, 130402 (2008).
. A Hemmerich, C. Morais Smith, Phys. Rev. Lett. 99113002A. Hemmerich and C. Morais Smith, Phys. Rev. Lett 99, 113002 (2007).
. W Kohn, Phys. Rev. 115809W. Kohn, Phys. Rev. 115, 809 (1959).
. N Marzari, D Vanderbilt, Phys. Rev. B. 5612847N. Marzari and D. Vanderbilt, Phys. Rev. B 56, 12847 (1997).
. T Giamarchi, C Rüegg, O Tchernyshyov, Nature Physics. 4198T. Giamarchi, C. Rüegg, and O. Tchernyshyov, Nature Physics 4, 198 (2008).
. A A Tsirlin, Phys. Rev. B. 82144426A. A. Tsirlin et al., Phys. Rev. B 82, 144426 (2010).
. T D Stanescu, B Anderson, V Galitski, Phys. Rev. A. 7823616T. D. Stanescu, B. Anderson and V. Galitski, Phys. Rev. A 78, 023616 (2008).
. D Van Oosten, P Van Der Straten, H T C Stoof, Phys. Rev. A. 63536601D. van Oosten, P. van der Straten, and H. T. C. Stoof, Phys. Rev. A 63, 0536601 (2001).
. E Arimondo, private communicationE. Arimondo, private communication .
|
[] |
[
"Single Higgs boson production at a photon-photon collider: a 2HDM/MSSM comparison",
"Single Higgs boson production at a photon-photon collider: a 2HDM/MSSM comparison"
] |
[
"David López-Val [email protected] \nInstitut für Theoretische Physik\nUniversität Heidelberg\nPhilosophenweg 16D-69120HeidelbergGermany\n"
] |
[
"Institut für Theoretische Physik\nUniversität Heidelberg\nPhilosophenweg 16D-69120HeidelbergGermany"
] |
[
"LC11 Proceeedings Frascati Physics Series"
] |
We consider the loop-induced production of a single Higgs boson from direct γγscattering at a photon collider. A dedicated analysis of the total cross section < σ γγ→h > (for h = h 0 , H 0 , A 0 ), and the relative strength of the effective hγγ coupling r ≡ g γγh /g γγH SM , is carried out within the general Two-Higgs-Doublet Model (2HDM) and the Minimal Supersymmetric Standard Model (MSSM). We systematically survey representative regions over the parameter space, in full agreement with brought-to-date theoretical and phenomenological restrictions, and obtain production rates up to 10 4 Higgs boson events per 500fb −1 of integrated luminosity. We identify trademark phenomenological profiles for the different γγ → h channels and trace them back to the distinctive dynamical features characterizing each of these models -most significantly, the enhancement potential of the Higgs self-interactions in the general 2HDM. The upshot of our results illustrates the possibilities of γγ-physics and emphasizes the relevance of linear colliders for the Higgs boson research program.
| null |
[
"https://arxiv.org/pdf/1202.1075v1.pdf"
] | 119,206,697 |
1202.1075
|
d55eb7d3af8d50ba86c348d4df7a82661c21f71f
|
Single Higgs boson production at a photon-photon collider: a 2HDM/MSSM comparison
David López-Val [email protected]
Institut für Theoretische Physik
Universität Heidelberg
Philosophenweg 16D-69120HeidelbergGermany
Single Higgs boson production at a photon-photon collider: a 2HDM/MSSM comparison
LC11 Proceeedings Frascati Physics Series
We consider the loop-induced production of a single Higgs boson from direct γγscattering at a photon collider. A dedicated analysis of the total cross section < σ γγ→h > (for h = h 0 , H 0 , A 0 ), and the relative strength of the effective hγγ coupling r ≡ g γγh /g γγH SM , is carried out within the general Two-Higgs-Doublet Model (2HDM) and the Minimal Supersymmetric Standard Model (MSSM). We systematically survey representative regions over the parameter space, in full agreement with brought-to-date theoretical and phenomenological restrictions, and obtain production rates up to 10 4 Higgs boson events per 500fb −1 of integrated luminosity. We identify trademark phenomenological profiles for the different γγ → h channels and trace them back to the distinctive dynamical features characterizing each of these models -most significantly, the enhancement potential of the Higgs self-interactions in the general 2HDM. The upshot of our results illustrates the possibilities of γγ-physics and emphasizes the relevance of linear colliders for the Higgs boson research program.
Introduction
The LHC is now truly laying siege to the Higgs boson. The diphoton and gauge-boson-pair excesses recently reported by ATLAS and CMS [1] may indeed constitute, if confirmed, a first solid trace of its existence. In the meantime, the currently available data keeps narrowing down the mass range and the phenomenological portrait under which the Higgs boson may manifest itself. On the other hand, strong theoretical motivation supports the idea that Electroweak Symmetry Breaking (EWSB) is realized by some mechanism beyond that of the Standard Model (SM), viz. beyond that of a single, fundamental spinless field. One canonical example of the latter is the general 2HDM [2]. Here, the addition of a second scalar SU_L(2) doublet tailors a rich and revealing phenomenology [3]. The 2HDM can be fully specified in terms of the physical Higgs boson masses; the ratio tan β ≡ ⟨H_2^0⟩/⟨H_1^0⟩ of the two Vacuum Expectation Values (VEVs) giving masses to the up- and down-like quarks; the mixing angle α between the two CP-even states, h^0, H^0; and, finally, one genuine Higgs boson self-coupling, which we shall denote λ_5. The Higgs sector of the MSSM corresponds to a particular (supersymmetric) realization of the general (unconstrained) 2HDM [4]. For further details we refer the reader to Ref. [5], where all the notation, model setup and restrictions are discussed at length. Following the eventual discovery of the Higgs boson(s) at the LHC, it will be of crucial importance to address the precise experimental determination of its quantum numbers, mass spectrum and couplings to other particles. A linear collider (linac) can play a central role in this enterprise [6]. Dedicated studies have exhaustively sought the phenomenological imprints of the basic 2HDM Higgs boson production modes, such as e.g. i) triple Higgs, e+e− → 3h [7]; ii) inclusive Higgs-pair through EW gauge boson fusion, e+e− → V*V* → 2h+X [8]; iii) exclusive Higgs-pair, e+e− → 2h [5,9]; and iv) associated Higgs/gauge boson, e+e− → hV [10], with h ≡ h^0, A^0, H^0, H^± and V ≡ Z^0, W^± 2. As a common highlight, all these studies report sizable production rates and large quantum effects, arising from the potentially enhanced Higgs self-interactions. These self-couplings, unlike their MSSM analogues, are not anchored by the gauge symmetry, and may thus be strengthened as much as allowed by the unitarity bounds. Interestingly enough, Higgs boson searches at an e+e− collider may benefit from alternative operation modes, particularly from γγ scattering. In this vein, single (γγ → h) and double (γγ → 2h) Higgs boson production are examples of γγ-induced processes which operate entirely at the quantum level. The effective (loop-mediated) Higgs/photon interaction g_γγh can be regarded as a direct probe of non-standard (charged) degrees of freedom coupled to the Higgs sector. The aforementioned single Higgs channels have been considered in the framework of the SM [12], the 2HDM [13] and the MSSM [14,15] and are known to exhibit excellent experimental prospects, not only due to the clean environment inherent to a linac machine, but also owing to the high attainable γγ luminosity and the possibility to tune the γ-beam polarization as a strategy to enlarge the signal-versus-background ratios 3.
Numerical analysis 2.1 Computational setup
In this contribution we present a fully updated analysis of the process γγ → h (h = h^0, H^0, A^0) and undertake a comparison of the 2HDM versus the MSSM results. We focus our attention on the following two quantities: i) the total, spin-averaged cross section,
\langle \sigma_{\gamma\gamma\to h} \rangle(s) = \sum_{\{ij\}} \int_0^1 d\tau\, \frac{d\mathcal{L}^{ee}_{ij}}{d\tau}\, \hat\sigma_{\eta_i \eta_j}(\hat s), \qquad (1)
where \hat\sigma_{\eta_i \eta_j} stands for the "hard" scattering cross section, \hat s = τ s being the partonic center-of-mass energy, while d\mathcal{L}^{ee}_{ij}/dτ denotes the (differential) photon luminosity distributions, by which we describe the effective e^± → γ conversion of the primary linac beam. In turn, η_{i,j} accounts for the respective polarization of the resulting photon beams; and ii) the γγh coupling strength, r ≡ g_γγh/g_γγH_SM, which we normalize to the SM, identifying h^0 ≡ H_SM. We compare the distinct phenomenological patterns that emerge from the 2HDM and the MSSM and spell out the specific dynamical features that may help to disentangle both models. Further details may be found in Refs. [13,14].
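The structure of Eq. (1) is a one-dimensional convolution that is easy to prototype numerically. The sketch below folds a placeholder hard cross section with a placeholder luminosity spectrum, summing over photon-helicity combinations; both input functions and all numbers are assumptions for illustration and not the actual spectra or FeynArts/FormCalc amplitudes used in this work.

```python
# Illustrative sketch of the luminosity convolution in Eq. (1); all inputs are placeholders.
import numpy as np
from scipy.integrate import quad

s = 500.0**2   # (sqrt(s) = 500 GeV)^2

def dL_dtau(tau, i, j):
    # placeholder differential photon-photon luminosity for helicity combination (i, j)
    return np.exp(-5.0 * tau) * (1.0 - tau)

def sigma_hard(shat, i, j):
    # placeholder polarized hard cross section for gamma gamma -> h, in fb
    return 10.0 * np.exp(-(np.sqrt(shat) - 120.0)**2 / (2.0 * 30.0**2))

def sigma_total(s):
    total = 0.0
    for (i, j) in [(+1, +1), (+1, -1), (-1, +1), (-1, -1)]:
        val, _ = quad(lambda tau: dL_dtau(tau, i, j) * sigma_hard(tau * s, i, j), 0.0, 1.0)
        total += val
    return total

print(f"<sigma(gamma gamma -> h)> ~ {sigma_total(s):.2f} fb  (placeholder inputs)")
```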
Throughout our study we make use of the standard algebraic and numerical packages FeynArts, FormCalc and LoopTools [18]. Updated experimental constraints (stemming from the EW precision data, low-energy flavor physics, and the Higgs mass regions ruled out by the LEP, Tevatron and LHC direct searches), as well as the theoretical consistency conditions (to wit: perturbativity, unitarity and vacuum stability), are duly taken into account - cf. [19][20][21][22][23][24]. The photon luminosity distributions are obtained from [25], while the MSSM Higgs mass spectrum is provided by FeynHiggs [26].
Profiling γγ → h within the 2HDM
The upshot of our numerical analysis is displayed on the left panels of Figs. 1 -2. There we illustrate the behavior of < σ γγ→h > and the ratio r over representative regions of the 2HDM parameter space. For definiteness, we perform our calculation for a type-I 2HDM structure and for relatively light Higgs boson masses (as quoted in the Figure caption). The pinpointed trends, however, do not critically depend on the previous assumptions -see Ref. [13,14] for an extended discussion. Our results neatly illustrate the interplay of the charged Higgs boson, W ± boson and fermion loops, whose respective contributions to g γγh undergo a highly characteristic destructive interference. The strength of the Higgs self-coupling λ hH + H − , which is primarily modulated by tan β and λ 5 , determines whether the overall rates may become enhanced (r > 1) or suppressed (r < 1) relatively to the SM expectations. Scenarios yielding r > 1 could in principle be met for λ hH + H − ∼ O(10 3 ) GeV and M H ± ∼ O(100) GeV (due to strongly boosted H ± -mediated loops) or tan β < 1 (which enhances the top-mediated loops through the Higgs-top Yukawa coupling, g h 0 tt ∼ sin α/ sin β). In practice, however, both situations are disfavored by the combined effect of the unitarity and vacuum stability conditions, together with the flavor physics constraints (mostly from B 0 d −B 0 d ). Instead, the 2HDM regions with λ hH + H − ∼ O(10 2 ) GeV give rise to a trademark suppression of the single Higgs boson rates, and pull the relative hγγ coupling strength down to values of r ∼ −50%. Away from these largely subdued domains, we retrieve total cross sections in the ballpark of < σ γγ→h >∼ 1 − 50 fb -this is to say, up to a few thousand neutral, CP -even, single Higgs boson events, for the light (h 0 ) and the heavy (H 0 ) states alike. Finally, if the Higgs self-interactions are even weaker -or, alternatively, the charged Higgs bosons are very massive -then the H ± -mediated corrections become subleading.
In such instances we are left with r ≃ 1, as a reflection of the fact that the g_γγh coupling is now essentially determined by a SM-like combination of W^± and fermion-mediated loops. It is also worth underlining the complementary nature of the production rates for the two neutral CP-even Higgs channels γγ → h^0/H^0, which ensues from the inverse correlation of the respective couplings to the charged Higgs, namely of λ_{h^0 H^+ H^-} with respect to λ_{H^0 H^+ H^-} - see the σ_{h^0} and σ_{H^0} curves in panels a-d of Fig. 1. We also observe that the results for γγ → H^0 tend to be slightly above the SM yields, whereas γγ → h^0 usually stays below. This follows from the kinematic structure of the total cross section,
\langle \sigma_{\gamma\gamma\to h} \rangle \sim M_h^4/M_W^2, which implies σ_{H^0} > σ_{h^0} since M_{H^0} > M_{h^0} ≡ M_{H_SM}.
In contrast, and owing to its CP -odd nature, γγ → A 0 is essentially featureless and entails a minor numerical impact.
Profiling γγ → h within the MSSM
Let us now turn our attention to the MSSM. On the right panels of Figs. 1-2 we survey the behavior of the aforementioned quantities ⟨σ_{γγ→h}⟩ and r for the representative MSSM parameter setups quoted below [17]:

               M_{A^0} [GeV]   M_SUSY [GeV]   µ [GeV]   X_t ≡ A_t − µ/tan β [GeV]   M_2 [GeV]   M_3 [GeV]
no-mixing      400             2000           200       0                           200         1600
small α_eff    300             800            2000      −1100                       500         500
We note that GUT relations between M_1 and M_2, as well as universal trilinear couplings (A_t = A_b = A_τ), are assumed throughout. Likewise, we duly account for the impact of the different sets of constraints, most significantly stemming from B(b → sγ) (dashed areas, in yellow) and the Higgs boson and squark mass bounds settled by direct exclusion limits. In this SUSY setup, non-standard contributions to the effective g_hγγ interaction may emerge from a twofold origin. On the one hand we have a panoply of 2HDM one-loop diagrams mediated by the interchange of virtual charged Higgs bosons. In the present framework, however, these terms no longer bear any enhancement capabilities, since the corresponding Higgs self-interactions are completely tied to the gauge couplings - as a consequence of the underlying SUSY invariance. On the other hand we find the squark-mediated quantum corrections. Their imprints on g_γγh are mostly visible for relatively light squarks (with masses of a few hundred GeV), hand in hand with sizable mass splittings between their respective left- and right-handed components and large trilinear couplings to the Higgs bosons 4. In practice, however, the combination of the different experimental restrictions effectively tames the abovementioned enlargement power.
We can thus conclude that realistic MSSM scenarios encompass rather mild departures from the SM loop-induced mechanism (r ∼ −5%), rendering overall production rates again in the ballpark of < σ γγ→h >∼ O(10) fb for the lightest CP -even state h = h 0 -while its heavier companions H 0 , A 0 lie typically one order of magnitude below [14].
Discussion and concluding remarks
In this contribution we have reported on single Higgs boson production through γγ scattering at a TeV-range linear collider. The process γγ → h is driven by an effective, loop-induced hγγ interaction, a mechanism that is directly sensitive to the eventual presence of new charged degrees of freedom. We have computed the total cross section, ⟨σ_{γγ→h}⟩, alongside the effective (normalized) coupling strength r ≡ g_γγh/g_γγH_SM, within both the 2HDM and the MSSM. We have disclosed characteristic phenomenological profiles and spelt out their main differences, which mostly stem from the respective Higgs self-interaction structures. In the MSSM, the aforementioned self-couplings are anchored by the gauge symmetry, while in the 2HDM they can be as large as permitted by the combined set of experimental and theoretical restrictions - most significantly unitarity. We have identified a sizable depletion of ⟨σ_{γγ→h}⟩ (corresponding to values of r ∼ −50%) for those 2HDM configurations in which a relatively large λ_{hH^+H^-} interaction is capable of boosting the H^±-mediated contribution to g_γγh, and subsequently of maximizing the destructive interference that operates between the different H^±, W^± and fermion-mediated loops. A smoking gun of underlying 2HDM physics would thus manifest itself here as a missing number of single Higgs boson events. On the MSSM side, departures from the SM are comparably much more tempered (r ∼ −5%) and essentially driven by the squark-mediated corrections, which are relatively suppressed by the mass scale of the exchanged SUSY particles and further weakened by the stringent experimental bounds. An additional distinctive feature of both models might manifest itself through the simultaneous observation of γγ → h^0 and γγ → H^0. Situations where both channels yield O(10^3) events per 500 fb^{-1} could only be attributed to a non-standard, non-SUSY Higgs sector, since the mass splitting between the two neutral, CP-even Higgs states is typically enforced to be larger in the MSSM - so that the corresponding γγ → H^0 rates are comparably smaller.
The clean environment of a linac offers excellent prospects for the tagging and identification of the single Higgs boson final states through the corresponding decay products. The latter should arise in the form of either i) highly energetic, back-to-back heavy-quark dijets (h → jj, with jj ≡ cc̄, bb̄); ii) lepton tracks from gauge boson decays (h → W^+W^- → 2l + missing E_T, Z^0Z^0 → 4l); or iii) in the specific case of the MSSM, and if kinematically allowed, also the Higgs decays
into chargino pairs (h → χ̃_1 χ̃_2 → jj + missing E_T). Precise Higgs boson mass measurements could then be conducted upon the reconstruction of the dijet - or dilepton - invariant masses and should broaden the present coverage of the LHC. For instance, they would make it possible to sidestep the so-called "LHC wedge", namely the M_{A^0} ≳ 200 GeV and tan β ∼ O(10) domains of the MSSM parameter space [28]. The dominant backgrounds, corresponding to the processes γγ → bb̄/W^+W^-, could be handled not only by means of standard kinematic cuts, but also through a suitable tuning of the photon beam polarization [15].
A future generation of linac machines, and of γγ facilities in particular, should therefore be instrumental for a precise experimental reconstruction of the EWSB mechanism; namely for the measurement of the Higgs boson mass, couplings and quantum numbers, if not for the discovery of the Higgs boson itself -if its mass and/or its coupling pattern fell beyond the reach of the LHC and the e + e − colliders. Photon-photon physics may well furnish a most fruitful arena in which to carry the Higgs boson research program to completion.
Figure 1: Left panels (a-d): Total spin-averaged cross section σ_{γγ→h}(s) and number of Higgs boson events, as a function of tan β (a,b) and sin α (c,d) within the 2HDM. The shaded (resp. dashed) areas are excluded by unitarity (resp. B^0_d - B̄^0_d mixing). The Higgs boson masses are fixed as follows: M_{h^0} = 115 GeV; M_{H^0} = 165 GeV; M_{A^0} = 100 GeV; M_{H^±} = 105 GeV, with λ_5 = 0. Right panels (e-f): σ_{γγ→h}(s) within the MSSM, as a function of tan β, for both the no-mixing and the small-α_eff benchmark points [17]. The dashed regions are ruled out by b → sγ data. The linac center-of-mass energy is kept at √s = 500 GeV.
Figure 2: Contour plots of the ratio r ≡ g_{γγh^0}/g_{γγH_SM} that measures the effective γγh^0 coupling strength normalized to the SM, for representative parameter space configurations, comparing the 2HDM (left panel) and MSSM (right panel). The 2HDM calculation is carried out assuming type-I Higgs/fermion Yukawa couplings, λ_5 = 0 and the same set of Higgs boson masses as in Fig. 1. The yellow strips on the left plot denote the lower and upper bounds ensuing from unitarity, while the grey vertical band displays the restrictions stemming from B^0_d - B̄^0_d. As for the MSSM parameter setup, we employ tan β = 2, M_{A^0} = 600 GeV, µ = 500 GeV, A_t = 1800 GeV, M_2 = 500 GeV. The dashed area is ruled out by b → sγ. The linac center-of-mass energy is kept at √s = 500 GeV.
2 For related work in the context of MSSM Higgs boson production see e.g. [11].
3 Analogue studies for the γγ → hh mode are available e.g. in Ref. [16].
4 The phenomenological implications of this kind of Yukawa and Yukawa-like couplings have been addressed in the past in a wide variety of processes, see e.g. [27].
Acknowledgements
It is a pleasure to thank Joan Solà for the fruitful and enduring collaboration over the past years. I would also like to express my gratitude to the organizers of the LC 2011 workshop at ETC-Trento for the kind invitation to present this review, and for the kind atmosphere and enlightening time we all shared at the meeting.
|
[] |
[
"Resonant Spin and Charge Hall Effects in 2D Electron Gas with Unequal Rashba and Dresselhaus Spin-Orbit Couplings under a Perpendicular Magnetic Field",
"Resonant Spin and Charge Hall Effects in 2D Electron Gas with Unequal Rashba and Dresselhaus Spin-Orbit Couplings under a Perpendicular Magnetic Field"
] |
[
"Degang Zhang \nInstitute of Solid State Physics\nCollege of Physics and Electronic Engineering\nSichuan Normal University\n610101ChengduChina\n\nSichuan Normal University\n610101ChengduChina\n\nTexas Center for Superconductivity\nDepartment of Physics\nUniversity of Houston\n77204HoustonTXUSA\n",
"C S Ting \nTexas Center for Superconductivity\nDepartment of Physics\nUniversity of Houston\n77204HoustonTXUSA\n"
] |
[
"Institute of Solid State Physics\nCollege of Physics and Electronic Engineering\nSichuan Normal University\n610101ChengduChina",
"Sichuan Normal University\n610101ChengduChina",
"Texas Center for Superconductivity\nDepartment of Physics\nUniversity of Houston\n77204HoustonTXUSA",
"Texas Center for Superconductivity\nDepartment of Physics\nUniversity of Houston\n77204HoustonTXUSA"
] |
[] |
We have investigated the complex two-dimensional electron system with unequal Rashba and Dresselhaus spin-orbit interactions in the presence of a perpendicular magnetic field. The spin polarizations are obtained in a wide range of magnetic fields. It is shown that such a system is hard to magnetize. We also find that the resonant charge and spin Hall conductances occur simultaneously at a certain magnetic field, at which two (nearly) degenerate Landau levels are partly filled. The resonant Hall effects are universal in this type of semiconductor material, and could have potential application for semiconductor spintronics.
| null |
[
"https://arxiv.org/pdf/1510.01012v1.pdf"
] | 119,111,315 |
1510.01012
|
10159d9c5378be0c429e90282e2dfeaada226741
|
Resonant Spin and Charge Hall Effects in 2D Electron Gas with Unequal Rashba and Dresselhaus Spin-Orbit Couplings under a Perpendicular Magnetic Field
5 Oct 2015
Degang Zhang
Institute of Solid State Physics
College of Physics and Electronic Engineering
Sichuan Normal University
610101ChengduChina
Sichuan Normal University
610101ChengduChina
Texas Center for Superconductivity
Department of Physics
University of Houston
77204HoustonTXUSA
C S Ting
Texas Center for Superconductivity
Department of Physics
University of Houston
77204HoustonTXUSA
Resonant Spin and Charge Hall Effects in 2D Electron Gas with Unequal Rashba and Dresselhaus Spin-Orbit Couplings under a Perpendicular Magnetic Field
5 Oct 2015
We have investigated the complex two-dimensional electron system with unequal Rashba and Dresselhaus spin-orbit interactions in the presence of a perpendicular magnetic field. The spin polarizations are obtained in a wide range of magnetic fields. It is shown that such a system is hard to magnetize. We also find that the resonant charge and spin Hall conductances occur simultaneously at a certain magnetic field, at which two (nearly) degenerate Landau levels are partly filled. The resonant Hall effects are universal in this type of semiconductor material, and could have potential application for semiconductor spintronics.
The spin Hall effect in two-dimensional electron gases (2DEG) has been the focus of theoretical and experimental investigations in the condensed matter community due to its potential application in designing quantum devices [1][2][3][4][5][6]. Such an effect is usually generated by the Rashba or Dresselhaus spin-orbit coupling in semiconductor materials [7,8,9]. In Ref. [10], Shen et al. discovered an interesting resonant spin Hall conductance in a 2DEG with Rashba spin-orbit coupling in the presence of a perpendicular magnetic field, due to the crossing of two nearest neighbor Landau levels. The resonant phenomenon can be observed only if this degeneracy happens at the Fermi level. We note that many semiconductor materials possess both the Rashba and Dresselhaus spin-orbit interactions, which originate from the lack of structure and bulk inversion symmetry [7,8], respectively. The competition among the Rashba and Dresselhaus spin-orbit couplings and the Zeeman energy therefore leads to a complex energy spectrum [11], which could produce novel physical properties in such materials. The eigenfunctions associated with the Landau levels are infinite series, in contrast to the finite series obtained for the pure Rashba or Dresselhaus spin-orbit system. In Ref. [12], we calculated the transport properties of the special 2DEG with equal Rashba and Dresselhaus couplings, which possesses an equal-distant-like energy spectrum. The resonant spin Hall effect and the accompanying resonant charge Hall effect also appear near the crossing between nearest neighbor Landau levels. However, the more general 2DEG with unequal Rashba and Dresselhaus couplings has not been investigated until now. In this work, we study the transport properties of this 2DEG, so that such semiconductor materials can be understood thoroughly.
The Hamiltonian for a single electron with spin-1/2 in a plane under a perpendicular magnetic field is described by
$$H = \frac{1}{2m^*}\mathbf{\Pi}^2 - \frac{1}{2} g_s \mu_B B \sigma_z + \alpha\,(\sigma_y \Pi_x - \sigma_x \Pi_y) + \beta\,(\sigma_x \Pi_x - \sigma_y \Pi_y), \qquad (1)$$
where $\mathbf{\Pi} = \mathbf{p} + \frac{e}{c}\mathbf{A}$, $\sigma_i$ ($i = x, y, z$) are the Pauli matrices for the electron spin, $g_s$ is the Lande g-factor, $\mu_B$ is the Bohr magneton, and $\alpha$ and $\beta$ represent the Rashba and the Dresselhaus spin-orbit couplings, respectively. Here we have chosen the Landau gauge $\mathbf{A} = yB\hat{x}$. Because $[p_x, H] = 0$, $p_x = k$ is a good quantum number.
The Hamiltonian (1) with a pure Rashba coupling (i.e. β = 0) was solved by Rashba over fifty years ago [7]. The Landau levels of a 2DEG with a pure Dresselhaus coupling (i.e. α = 0) can be obtained by the unitary transformation $\sigma_x \to \sigma_y$, $\sigma_y \to \sigma_x$ and $\sigma_z \to -\sigma_z$, which maps a 2DEG with Rashba coupling α, Dresselhaus coupling β, and Lande g-factor $g_s$ to a 2DEG with Rashba coupling β, Dresselhaus coupling α, and Lande g-factor $-g_s$ [13,14]. When the Rashba and Dresselhaus couplings coexist, the Hamiltonian (1) becomes extremely complicated and its exact solution was not obtained for a long time. Fortunately, the eigenvalue problem was solved by using unitary transformations and introducing two bosonic annihilation operators $b_{k\sigma} = \frac{1}{\sqrt{2}\,l_c}\big[y + \frac{c}{eB}(k + ip_y) + 2|a_R a_D|\,u_\sigma\big]$ and the corresponding creation operators $b^\dagger_{k\sigma} = (b_{k\sigma})^\dagger$, with the cyclotron radius $l_c = \sqrt{\hbar c/eB}$, $a_R = \alpha m^* l_c/\hbar^2$, $a_D = \beta m^* l_c/\hbar^2$, $u_\sigma = \frac{\sqrt{2}}{2}\sigma[1 - i\,\mathrm{sgn}(\alpha\beta)]$, and the orbital index σ = ±1 [11]. In contrast to the pure Rashba or Dresselhaus coupling case, the orbital space of the electrons is divided into two independent infinite-dimensional subspaces described by the occupation number representations $\Gamma_\sigma$ associated with $b_{k\sigma}$ and $b^\dagger_{k\sigma}$. The Hamiltonian (1) can then be rewritten as $H = H_{-1} \oplus H_{1}$, where $H_\sigma$ is the sub-Hamiltonian in $\Gamma_\sigma$. The eigenstates of the Hamiltonian (1) can be expressed as infinite series in terms of the free Landau levels $\phi_{mk\sigma}$ (i.e. $b^\dagger_{k\sigma}\phi_{mk\sigma} = \sqrt{m+1}\,\phi_{m+1\,k\sigma}$, $b_{k\sigma}\phi_{mk\sigma} = \sqrt{m}\,\phi_{m-1\,k\sigma}$ and $\langle\phi_{m'k\sigma'}|\phi_{mk\sigma}\rangle = \delta_{mm'}\delta_{\sigma\sigma'}$) in each $\Gamma_\sigma$ and the physical parameters. The exact solution of the Hamiltonian (1) is now widely accepted [14,15].
We note that the Rashba coupling α can be adjusted by a gate voltage perpendicular to the 2DEG plane [16,17]. Therefore, in experiments, an arbitrary ratio of two kinds of spin-orbit couplings can be obtained in different samples by changing the gate voltage. The relative strength of the Rashba and Dresselhaus couplings can be extracted from the photocurrent measurements [18,19]. Usually the coefficients α and β have the same order of magnitude in quantum wells such as GaAs while in narrow gap compounds such as InAs, α dominates [16][17][18][19].
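To give a feeling for the scales that enter Eqs. (2) and (3) below, the following minimal sketch (not part of the paper) evaluates the cyclotron energy, the magnetic length and the Zeeman parameter for the material parameters quoted later in the text (α = 4.0 × 10⁻¹¹ eV m, m* = 0.05 m_e, g_s = 4); the expression used for the dimensionless coupling a_R is an assumption consistent with the definitions transcribed above, not a quotation from the paper.

```python
# Minimal sketch: characteristic scales for the quoted material parameters.
# The formula a_R = alpha * m* * l_c / hbar^2 is an assumption (see lead-in).
import numpy as np
from scipy.constants import hbar, m_e, e

alpha = 4.0e-11 * e          # Rashba coupling, converted from eV m to J m
m_eff = 0.05 * m_e           # effective mass, kg
g_s = 4.0

for B in (5.0, 7.7, 10.0):   # magnetic field in tesla
    l_c = np.sqrt(hbar / (e * B))            # magnetic length (SI form)
    hw = hbar * e * B / m_eff                # cyclotron energy hbar*omega
    a_R = alpha * m_eff * l_c / hbar**2      # assumed dimensionless coupling
    g = g_s * m_eff / (2.0 * m_e)            # Zeeman parameter g of Eq. (2)
    print(f"B = {B:5.2f} T: l_c = {l_c*1e9:.2f} nm, "
          f"hbar*omega = {hw/e*1e3:.2f} meV, a_R = {a_R:.3f}, g = {g:.2f}")
```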
When |β| = ρ|α|, the eigenvalue for the n-th Landau level with spin index s and orbital index σ is given by $E_{ns} = \hbar\omega\,\epsilon_{ns}$ [11], where $\omega = \frac{eB}{m^*c}$ and
$$\epsilon_{ns} = n - \frac{\rho\big[2\rho a_R^2(1-\Delta_{ns}^2) - g\Delta_{ns}\big]}{1-\rho^2\Delta_{ns}^2} + \frac{1}{2}(-1)^s\sqrt{\bigg[1 - \frac{4\rho a_R^2(1-\rho^2)\Delta_{ns} + g(1+\rho^2\Delta_{ns}^2)}{1-\rho^2\Delta_{ns}^2}\bigg]^2 + 8n a_R^2(1-\rho)^2}. \qquad (2)$$
Here the spin index is s = 0, 1 for n ≠ 0 while s = 0 for n = 0, $g = \frac{g_s m^*}{2m_e}$, and the energy level parameter $\Delta_{ns}$ is determined by the following equation
$$\bigg\{(1-\rho)\Big[2(1+\rho)(1-\rho\Delta_{ns}^2) - \frac{g\Delta_{ns}}{a_R^2}\Big]\Big[\epsilon_{ns} - n - \frac{1}{2} + a_R^2(1+\rho^2)\Big] + (1+\Delta_{ns})(1-\rho\Delta_{ns})\Big(1-\rho^2 - \frac{g}{2a_R^2}\Big)\bigg\}\sqrt{\rho}\,|a_R| = 0. \qquad (3)$$
Obviously, when ρ = 0, the eigenvalue (2) is nothing but that of a 2DEG with pure Rashba spin-orbit interaction [7]. If ρ = 1, the energy difference $E_{n0} - E_{n1}$ is independent of n, which means that the energy spectrum has an equal-distant-like structure. The corresponding eigenstate is
$$|nks\sigma\rangle = \frac{1}{N_{ns\sigma}}\sum_{m=0}^{+\infty}\begin{pmatrix} 1 & \sqrt{\rho}\,\Delta_{ns}T^*_\sigma \\ -\sqrt{\rho}\,\Delta_{ns}T_\sigma & 1 \end{pmatrix}\begin{pmatrix} \alpha^{n\sigma}_{ms} \\ T_\sigma\,\beta^{n\sigma}_{ms} \end{pmatrix} u^m_\sigma\,\phi_{mk\sigma}, \qquad (4)$$
where $N_{ns\sigma}$ is the normalization constant,
$$|N_{ns\sigma}|^2 = (1 + \rho\Delta_{ns}^2)\sum_{m=0}^{\infty}\big(|\alpha^{n\sigma}_{ms}|^2 + |\beta^{n\sigma}_{ms}|^2\big), \qquad T_\sigma = \frac{\sqrt{2}}{2}\sigma(\mathrm{sgn}\,a_D + i\,\mathrm{sgn}\,a_R),$$
and the components $\alpha^{n\sigma}_{ms}$ and $\beta^{n\sigma}_{ms}$ satisfy
$$\alpha^{n\sigma}_{ns} = \frac{\sqrt{2n}\,|a_R|\,C_{ns}(C_{ns}-B_{ns})}{C_{ns}D_{ns}(\epsilon_{ns}-n-\lambda) + C_{ns}\zeta_{ns} + \rho A_{ns}\eta_{ns}}\,\beta^{n\sigma}_{n-1\,s}, \qquad \beta^{n\sigma}_{ns} = \frac{B_{ns}D_{ns}(\epsilon_{ns}-n-\lambda+1) + B_{ns}\zeta_{ns} + \rho A_{ns}\eta_{ns}}{\sqrt{2n}\,|a_R|\,B_{ns}(B_{ns}-C_{ns})}\,\alpha^{n\sigma}_{n-1\,s} \qquad (5)$$
at $m = n \neq 0$, and
$$\begin{pmatrix} \sqrt{2m\rho}\,|a_R|\,A_{ns}(C_{ns}-B_{ns}) & \sqrt{2m}\,|a_R|\,C_{ns}(C_{ns}-B_{ns}) \\ -B_{ns}D_{ns}(\epsilon_{ns}-m-\lambda+1) - B_{ns}\zeta_{ns} - \rho A_{ns}\eta_{ns} & \sqrt{\rho}\,[A_{ns}D_{ns}(\epsilon_{ns}-m-\lambda+1) - A_{ns}\zeta_{ns} + B_{ns}\eta_{ns}] \end{pmatrix}\begin{pmatrix} \alpha^{n\sigma}_{m-1\,s} \\ \beta^{n\sigma}_{m-1\,s} \end{pmatrix} = \begin{pmatrix} C_{ns}D_{ns}(\epsilon_{ns}-m-\lambda) + C_{ns}\zeta_{ns} + \rho A_{ns}\eta_{ns} & -\sqrt{\rho}\,[A_{ns}D_{ns}(\epsilon_{ns}-m-\lambda) - A_{ns}\zeta_{ns} + C_{ns}\eta_{ns}] \\ \sqrt{2m\rho}\,|a_R|\,A_{ns}(C_{ns}-B_{ns}) & \sqrt{2m}\,|a_R|\,B_{ns}(C_{ns}-B_{ns}) \end{pmatrix}\begin{pmatrix} \alpha^{n\sigma}_{ms} \\ \beta^{n\sigma}_{ms} \end{pmatrix}, \qquad (6)$$
at $m \neq n$ and at $m = n = 0$. Here, we have defined $A_{ns} = (1-\Delta_{ns})(1-\rho\Delta_{ns})$, $B_{ns} = \rho(1-\Delta_{ns}^2)$, $C_{ns} = 1-\rho^2\Delta_{ns}^2$, $D_{ns} = 1+\rho\Delta_{ns}^2$, $\zeta_{ns} = \frac{1}{2}g(1-\rho\Delta_{ns}^2) + 4\rho a_R^2(1+\rho)\Delta_{ns}$, $\eta_{ns} = 2a_R^2(1+\rho)(1-\rho\Delta_{ns}^2) - g\Delta_{ns}$, and $\lambda = 2\rho a_R^2 + \frac{1}{2}$. Note that $\alpha^{n\sigma}_{-1\,s} = \beta^{n\sigma}_{-1\,s} = 0$, and that $\alpha^{n\sigma}_{ms} = \beta^{n\sigma}_{ms} = 0$ when $m \to +\infty$. Solving Eqs. (2) and (3), we can obtain the energy level parameter $\Delta_{ns}$ and the corresponding Landau level $E_{ns}$ at arbitrary magnetic field. In our calculations, we have chosen the typical material parameters ρ = 0.5, α = 4.0 × 10⁻¹¹ eV m, n_e = 1.9 × 10¹⁶ /m², g_s = 4, and m* = 0.05 m_e [18,19]. In Fig. 1(a), we plot the Landau levels (n ≤ 12) as a function of inverse magnetic field 1/B. Obviously, two nearest neighbor energy levels $E_{n0}$ and $E_{n+1\,1}$ have a crossing at a certain magnetic field. The larger n, the stronger the magnetic field at the crossing. Fig. 1(b) shows the energy level parameter $\Delta_{ns}$ associated with $E_{ns}$. We note that $\Delta_{n1}$ (n > 0) and $\Delta_{00}$ vary slowly with increasing 1/B while $\Delta_{n0}$ (n > 0) has a sharp change. When 1/B → 0, $\Delta_{n0} = -1.1298$ and $\Delta_{n1} = 1.1298$. Interestingly, the curves of $\Delta_{ns}$ with n > 0 meet at a point with a value of 0.9985 when 1/B ≈ 0.1487 T⁻¹.
Substituting $\Delta_{ns}$ and $E_{ns}$ into Eqs. (5) and (6), we get $\alpha^{n\sigma}_{ms}$ and $\beta^{n\sigma}_{ms}$, i.e. the eigenstates (4) for the complex 2DEG with ρ = 0.5 in a wide range of magnetic fields. Then the expectation value of the spin polarization per electron in this system is
$$S_z = \frac{\hbar}{4\nu}\sum_{nms\sigma}\frac{|a_R|\,f(E_{ns})}{|N_{ns\sigma}|^2}\Big\{(1-\rho\Delta_{ns}^2)\big(|\alpha^{n\sigma}_{ms}|^2 - |\beta^{n\sigma}_{ms}|^2\big) + 2\sqrt{\rho}\,\Delta_{ns}\big[\alpha^{n\sigma}_{ms}(\beta^{n\sigma}_{ms})^* + \beta^{n\sigma}_{ms}(\alpha^{n\sigma}_{ms})^*\big]\Big\}, \qquad (7)$$
where ν is the filling factor and f(x) is the Fermi distribution function. The first term of the expression for $S_z$ is the contribution from the Rashba and Dresselhaus spin-orbit couplings separately, while the second one is due to their coexistence, which plays a crucial role in the magnetization of this 2DEG. In Fig. 2(a), we present the curve of the spin polarization $S_z$ as a function of the inverse magnetic field at zero temperature. Because the Rashba and Dresselhaus spin-orbit interactions have different symmetries and compete with the Zeeman energy, this 2DEG has a complex magnetization behavior, as shown in Fig. 2(a). We note that when 1/B → 0, $S_z \sim -0.28$. Therefore, such a system is hard to magnetize. In order to calculate the spin and charge transport properties of the 2DEG, we apply a small electric field E along the y axis. An electron then acquires an extra potential energy $H' = eEy$, which can be treated as a perturbation term. The operator y can be expressed in terms of the bosonic operators $b_{k\sigma}$ and $b^\dagger_{k\sigma}$ in each subspace $\Gamma_\sigma$, i.e.
$$y = \frac{\sqrt{2}}{2}\,l_c\,(b^\dagger_{k\sigma} + b_{k\sigma}) - \frac{ck}{eB} - \sqrt{2\rho}\,|a_R|\,u_\sigma.$$
The charge current operator of a single electron in $\Gamma_\sigma$ reads $j_{c\sigma} = -e v_{x\sigma}$, while the corresponding out-of-plane spin current operator in $\Gamma_\sigma$ is $j^{s_z}_\sigma = \frac{\hbar}{4}(\sigma_z v_{x\sigma} + v_{x\sigma}\sigma_z)$. Here, the electron velocity in $\Gamma_\sigma$ along the x axis is $v_{x\sigma} = \frac{1}{i\hbar}[x, H_\sigma + H'_\sigma] = \frac{\hbar}{\sqrt{2}\,m^* l_c}\big[b^\dagger_{k\sigma} + b_{k\sigma} + \sqrt{2}\,|a_R|(\mathrm{sgn}\,\alpha\,\sigma_y + \rho\,\mathrm{sgn}\,\beta\,\sigma_x - \sqrt{\rho}\,\sigma\,l_c)\big]$.
Expanding the expectation value of the spin or charge current operator to first order in the electric field E, the spin or charge Hall conductance, i.e. the coefficient of the linear term, can be expressed as [10]
$$G_{s_z,c} = \frac{1}{E}\sum_{nn'kss'\sigma}\frac{(H'_\sigma)^{n'ks'\sigma}_{nks\sigma}\,(j^{s_z}_\sigma, j_{c\sigma})^{nks\sigma}_{n'ks'\sigma}}{E_{ns} - E_{n's'}}\,f(E_{ns}) + \mathrm{h.c.}, \qquad (8)$$
where the matrix elements $(H'_\sigma)^{n'ks'\sigma}_{nks\sigma} = \langle n'ks'\sigma|H'_\sigma|nks\sigma\rangle$, $(j^{s_z}_\sigma)^{n'ks'\sigma}_{nks\sigma} = \langle n'ks'\sigma|j^{s_z}_\sigma|nks\sigma\rangle$, and $(j_{c\sigma})^{n'ks'\sigma}_{nks\sigma} = \langle n'ks'\sigma|j_{c\sigma}|nks\sigma\rangle$ can be obtained by using the eigenvalue (2), the energy level parameter (3), and the eigenstate (4). Obviously, $G_{s_z}$ and $G_c$ are highly nonlinear functions of the material parameters ρ, |a_R|, g, and the magnetic field B, which reveal complex transport characteristics in this system.
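The schematic structure of Eq. (8) can be made explicit with a short sketch (not the authors' code). It assumes that the matrix elements of H' and of the current operator between the exact eigenstates have already been tabulated, with the quantum numbers (n, k, s, σ) folded into a single level index; the arrays below are placeholders, not values computed from Eqs. (2)-(6).

```python
# Schematic zero-temperature evaluation of the linear-response sum in Eq. (8).
import numpy as np

def hall_conductance(E_levels, Hp, J, E_field, mu):
    """E_levels: level energies; Hp[m, n] = <m|H'|n>, J[n, m] = <n|j|m>;
    returns the coefficient of the term linear in the applied field."""
    f = (E_levels < mu).astype(float)              # T = 0 occupation factors
    G = 0.0
    for n in range(len(E_levels)):
        for m in range(len(E_levels)):
            if m == n or np.isclose(E_levels[n], E_levels[m]):
                continue                           # (near-)degenerate terms need separate care
            term = Hp[m, n] * J[n, m] / (E_levels[n] - E_levels[m]) * f[n]
            G += term + np.conj(term)              # the "+ h.c." of Eq. (8)
    return G / E_field
```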
After a tedious but straightforward calculation we depict the curves of $G_{s_z}$ and $G_c$ as a function of 1/B at zero temperature in Figs. 2(b) and 2(c), respectively. It is clear that $G_{s_z}$ and $G_c$ simultaneously possess a resonant peak near $B_c \sim (1/0.13)$ T, at which the Landau levels $E_{40}$ and $E_{51}$ cross. When B → (1/0.115) T, the electrons begin to fill the energy level $E_{51}$, and $G_{s_z}$ and $G_c$ sharply increase. However, at $B_c$, $E_{40}$ and $E_{51}$ are filled fully while $E_{61}$ is filled partly, and $G_{s_z}$ and $G_c$ decrease rapidly. Therefore, the resonant spin and charge Hall conductances are produced by partly filling two nearest neighbor Landau levels near their crossing point. Such resonant phenomena also happen in the 2DEG with pure Rashba or equal Rashba and Dresselhaus spin-orbit coupling [10,12].
In summary, we have calculated the spin polarization and the spin and charge Hall conductances of a 2DEG with unequal Rashba and Dresselhaus spin-orbit interactions under a perpendicular magnetic field, by employing the exact solution of the Hamiltonian (1). The coexistence of the resonant spin and charge Hall effects appears in the vicinity of the crossing point of two nearest neighbor Landau levels which are not fully filled by electrons. We also note that the spin polarization and both Hall conductances are determined only by the magnitudes of the spin-orbit couplings α and β and are independent of their signs. It is expected that these resonant phenomena could have potential applications for semiconductor spintronics.
This work was supported by the Sichuan Normal University, the "Thousand Talented Program" of Sichuan Province, China, and the Texas Center for Superconductivity at the University of Houston and by the Robert A. Welch Foundation under grant No. E-1146.
FIG. 1: (Color online). (a) Landau energy levels (n ≤ 12) of an electron as a function of inverse magnetic field 1/B, in units of ℏω. The solid and dashed lines denote s = 0 and s = 1, respectively. (b) The corresponding energy level parameters. The material parameters used are ρ = 0.5, α = 4.0 × 10⁻¹¹ eV m, m* = 0.05 m_e, and g = 0.1.
FIG. 2: S_z (in units of ℏ/2), G_{s_z} and G_c as a function of 1/B at zero temperature. Here the electron density is n_e = 1.9 × 10¹⁶ m⁻² and the other physical parameters are the same as in Fig. 1.
[1] S. Murakami, N. Nagaosa, and S. C. Zhang, Science 301, 1348 (2003).
[2] J. Sinova, D. Culcer, Q. Niu, N. A. Sinitsyn, T. Jungwirth, and A. H. MacDonald, Phys. Rev. Lett. 92, 126603 (2004).
[3] Y. K. Kato, R. C. Myers, A. C. Gossard, and D. D. Awschalom, Science 306, 1910 (2004).
[4] J. Wunderlich, B. Kaestner, J. Sinova, and T. Jungwirth, Phys. Rev. Lett. 94, 047204 (2005).
[5] V. Sih, R. C. Myers, Y. K. Kato, W. H. Lau, A. C. Gossard, and D. D. Awschalom, Nature Physics 1, 31 (2005).
[6] S. O. Valenzuela and M. Tinkham, Nature 442, 176 (2006).
[7] E. I. Rashba, Sov. Phys. Solid State 2, 1109 (1960).
[8] G. Dresselhaus, Phys. Rev. 100, 580 (1955).
[9] S. Datta and B. Das, Appl. Phys. Lett. 56, 665 (1990).
[10] Shun-Qing Shen, Michael Ma, X. C. Xie, and Fu-Chun Zhang, Phys. Rev. Lett. 92, 256603 (2004).
[11] Degang Zhang, J. Phys. A: Math. Gen. 39, L477 (2006).
[12] Degang Zhang, Yao-Ming Mu, and C. S. Ting, Appl. Phys. Lett. 92, 212103 (2008).
[13] Shun-Qing Shen, Yun-Juan Bao, Michael Ma, X. C. Xie, and Fu-Chun Zhang, Phys. Rev. B 71, 155316 (2005).
[14] Fu-Chun Zhang and Shun-Qing Shen, Inter. J. Mod. Phys. B 22, 94 (2008).
[15] B. Estienne, S. M. Haaker, and K. Schoutens, New J. Phys. 13, 045012 (2011).
[16] J. Nitta, T. Akazaki, H. Takayanagi, and T. Enoki, Phys. Rev. Lett. 78, 1335 (1997).
[17] J. B. Miller, D. M. Zumbuhl, C. M. Marcus, Y. B. Lyanda-Geller, D. Goldhaber-Gordon, K. Campman, and A. C. Gossard, Phys. Rev. Lett. 90, 076807 (2003).
[18] S. D. Ganichev, V. V. Bel'kov, L. E. Golub, E. L. Ivchenko, P. Schneider, S. Giglberger, J. Eroms, J. De Boeck, G. Borghs, W. Wegscheider, D. Weiss, and W. Prettl, Phys. Rev. Lett. 92, 256601 (2004).
[19] S. Giglberger, L. E. Golub, V. V. Bel'kov, S. N. Danilov, D. Schuh, Ch. Gerl, F. Rohlfing, J. Stahl, W. Wegscheider, D. Weiss, W. Prettl, and S. D. Ganichev, Phys. Rev. B 75, 035327 (2007).
|
[] |
[
"Analysis of the MOST light curve of the heavily spotted K2IV component of the single-line spectroscopic binary II Pegasi ⋆",
"Analysis of the MOST light curve of the heavily spotted K2IV component of the single-line spectroscopic binary II Pegasi ⋆"
] |
[
"Michal Siwak \nDepartment of Astronomy and Astrophysics\nUniversity of Toronto\n50 St. George StM5S 3H4TorontoOntarioCanada\n",
"† ",
"Slavek M Rucinski \nDepartment of Astronomy and Astrophysics\nUniversity of Toronto\n50 St. George StM5S 3H4TorontoOntarioCanada\n",
"Jaymie M Matthews \nDepartment of Physics & Astronomy\nUniversity of British Columbia\n6224 Agricultural RoadV6T 1Z1VancouverB.CCanada\n",
"Rainer Kuschnig \nDepartment of Physics & Astronomy\nUniversity of British Columbia\n6224 Agricultural RoadV6T 1Z1VancouverB.CCanada\n\nInstitut für Astronomie\nUniversität Wien\nTürkenschanzstrasse 17A-1180WienAustria\n",
"David B Guenther \nInstitute for Computational Astrophysics\nDepartment of Astronomy and Physics\nSaint Marys University\nB3H 3C3HalifaxN.SCanada\n",
"Anthony F J Moffat \nDépartment de Physique\nCentre de Recherche en Astrophysique du Québec\nUniversité de Montréal\nSuccursale: Centre-VilleC.P.6128, H3C 3J7MontréalQCCanada\n",
"Dimitar Sasselov \nHarvard-Smithsonian Center for Astrophysics\n60 Garden Street02138CambridgeMAUSA\n",
"Werner W Weiss \nInstitut für Astronomie\nUniversität Wien\nTürkenschanzstrasse 17A-1180WienAustria\n"
] |
[
"Department of Astronomy and Astrophysics\nUniversity of Toronto\n50 St. George StM5S 3H4TorontoOntarioCanada",
"Department of Astronomy and Astrophysics\nUniversity of Toronto\n50 St. George StM5S 3H4TorontoOntarioCanada",
"Department of Physics & Astronomy\nUniversity of British Columbia\n6224 Agricultural RoadV6T 1Z1VancouverB.CCanada",
"Department of Physics & Astronomy\nUniversity of British Columbia\n6224 Agricultural RoadV6T 1Z1VancouverB.CCanada",
"Institut für Astronomie\nUniversität Wien\nTürkenschanzstrasse 17A-1180WienAustria",
"Institute for Computational Astrophysics\nDepartment of Astronomy and Physics\nSaint Marys University\nB3H 3C3HalifaxN.SCanada",
"Départment de Physique\nCentre de Recherche en Astrophysique du Québec\nUniversité de Montréal\nSuccursale: Centre-VilleC.P.6128, H3C 3J7MontréalQCCanada",
"Harvard-Smithsonian Center for Astrophysics\n60 Garden Street02138CambridgeMAUSA",
"Institut für Astronomie\nUniversität Wien\nTürkenschanzstrasse 17A-1180WienAustria"
] |
[
"Mon. Not. R. Astron. Soc"
] |
Continuous photometric observations of the visible component of the single-line, K2IV spectroscopic binary II Peg carried out by the MOST satellite during 31 consecutive days in 2008 have been analyzed. On top of spot-induced brightness modulation, eleven flares were detected of three distinct types characterized by different values of rise, decay and duration times. The flares showed a preference for occurrence at rotation phases when the most spotted hemisphere is directed to the observer, confirming previous similar reports. An attempt to detect a grazing primary minimum caused by the secondary component transiting in front of the visible star gave a negative result. The brightness variability caused by spots has been interpreted within a cold spot model. An assumption of differential rotation of the primary component gave a better fit to the light curve than a solid-body rotation model.
|
10.1111/j.1365-2966.2010.17109.x
|
[
"https://arxiv.org/pdf/1009.1171v1.pdf"
] | 119,202,227 |
1009.1171
|
df68513cf4a98fca8985f14bdd82e2f2fe64cd70
|
Analysis of the MOST light curve of the heavily spotted K2IV component of the single-line spectroscopic binary II Pegasi ⋆
2010. September 2010
Michal Siwak
Department of Astronomy and Astrophysics
University of Toronto
50 St. George StM5S 3H4TorontoOntarioCanada
†
Slavek M Rucinski
Department of Astronomy and Astrophysics
University of Toronto
50 St. George StM5S 3H4TorontoOntarioCanada
Jaymie M Matthews
Department of Physics & Astronomy
University of British Columbia
6224 Agricultural RoadV6T 1Z1VancouverB.CCanada
Rainer Kuschnig
Department of Physics & Astronomy
University of British Columbia
6224 Agricultural RoadV6T 1Z1VancouverB.CCanada
Institut für Astronomie
Universität Wien
Türkenschanzstrasse 17A-1180WienAustria
David B Guenther
Institute for Computational Astrophysics
Department of Astronomy and Physics
Saint Marys University
B3H 3C3HalifaxN.SCanada
Anthony F J Moffat
Départment de Physique
Centre de Recherche en Astrophysique du Québec
Université de Montréal
Succursale: Centre-VilleC.P.6128, H3C 3J7MontréalQCCanada
Dimitar Sasselov
Harvard-Smithsonian Center for Astrophysics
60 Garden Street02138CambridgeMAUSA
Werner W Weiss
Institut für Astronomie
Universität Wien
Türkenschanzstrasse 17A-1180WienAustria
Analysis of the MOST light curve of the heavily spotted K2IV component of the single-line spectroscopic binary II Pegasi ⋆
Mon. Not. R. Astron. Soc
000, September 2010. Accepted 28 May 2010; received -; in original form - (MN LaTeX style file v2.2). Keywords: stars: individual: II Peg - RS CVn-type - flares - star spots - rotation
Continuous photometric observations of the visible component of the single-line, K2IV spectroscopic binary II Peg carried out by the MOST satellite during 31 consecutive days in 2008 have been analyzed. On top of spot-induced brightness modulation, eleven flares were detected of three distinct types characterized by different values of rise, decay and duration times. The flares showed a preference for occurrence at rotation phases when the most spotted hemisphere is directed to the observer, confirming previous similar reports. An attempt to detect a grazing primary minimum caused by the secondary component transiting in front of the visible star gave a negative result. The brightness variability caused by spots has been interpreted within a cold spot model. An assumption of differential rotation of the primary component gave a better fit to the light curve than a solid-body rotation model.
INTRODUCTION
Studies of II Peg (HD 224085, Lalande 46867) began with the spectroscopic analysis of Sanford (1921). He found II Peg to be a late-type (K2), single-line spectroscopic binary (SB1) and determined its first orbital elements. Well defined, regular light variations were noticed by Chugainov (1976) who explained them by rotation of the primary component of II Peg with a cold spot on its surface. He also observed flares and concluded that this is a BY Dra-type binary system. However, subsequent photometric and spectroscopic observations by Rucinski (1977) and Vogt (1979) led these authors to conclude that the star is more akin to RS CVn-type systems, although with an invisible less-massive component; the distinguishing features of the BY Dra and RS CVn type binaries (with, respectively, dwarf and sub-giant components) were being defined at that time. Changes of equivalent width of the Hα line with orbital phase analyzed by Bopp & Noah (1980a) confirmed the RS CVn classification.
⋆ Based on data from the MOST satellite, a Canadian Space Agency mission, jointly operated by Dynacon Inc., the University of Toronto Institute of Aerospace Studies, and the University of British Columbia, with the assistance of the University of Vienna.
† E-mail: [email protected]
Since the 1980's, II Peg was one of the most frequently observed RS CVn-type stars. A model of multiple spots was used for analysis of the available light-curve data sets for the first time by Bopp & Noah (1980b). The most detailed and complete study of II Peg was presented in a series of four papers by Berdyugina et al. (1998a,b, 1999a,b). In the current paper we utilize most of the parameters derived in the first paper of this series which was based on high-resolution spectra used to define a high-quality radial velocity orbit. In brief, the essential physical parameters of the primary (visible) star were found to be: T_eff = 4,600 ± 100 K, log g = 3.2 ± 0.2, [Fe/H] = −0.4 ± 0.1, v sin i = 22.6 ± 0.5 km/s, R1 = 3.4 ± 0.2 R⊙, spectral type K2IV, with ephemeris for conjunction (visible star behind), Tconj = 2,449,582.9268(48) + 6.724333(10) × E, where E is an integer number of orbits. From the analysis of TiO bands and simultaneous photometric observations, the fictitious, entirely unspotted visual magnitude of the primary star was estimated at a relatively bright level Vu = 6.9; we return to this matter later in the paper as it affects the results of our spot modelling. The orbital inclination was estimated at 60° ± 10°, leading to the primary mass M1 = 0.8 ± 0.1 M⊙ and implying that the secondary star is probably a main-sequence, late-type dwarf (M0-M3V) with mass M2 ≈ 0.4 ± 0.1 M⊙. The presence of a white dwarf in this binary system was previously excluded by Udalski & Rucinski (1982) on the basis of ultraviolet observations made by the IUE spacecraft. Berdyugina et al. (1998b) presented multi-epoch images of the primary component, obtained by means of the Doppler imaging technique. They found that the spot distribution and spot parameters obtained from the spectral analysis are in good accordance with those derived solely from analysis of photometric observations. Berdyugina et al. (1999b) discussed the "flip-flop" phenomenon, i.e. a shift of the maximum spot-activity to the opposite side of the stellar surface. The authors also concluded that - because the largest active area tends to be located on the hemisphere facing the secondary star - this component may play an important role in the magnetic phenomena in the system.
The current paper presents analysis of continuous observations of II Peg conducted using the MOST satellite during 31 days in September and October 2008 (Section 2), a circumstance which permitted us to address the following issues: (1) Study of frequency and orbital-phase localization/orientation of flares in the system (Section 3); (2) A search for grazing eclipses caused by the secondary (Section 4); (3) Determination of the differential rotation of the visible star as its minute signatures are better defined for a long observing run (Section 5).
OBSERVATIONS AND DATA REDUCTION
The optical system of the MOST satellite consists of a Rumak-Maksutov f/6 15 cm reflecting telescope. The custom broad-band filter covers the spectral range of 380 -700 nm with effective wavelength falling close to Johnson's V band. The pre-launch characteristics of the mission are described by Walker et al. (2003) and the initial post-launch performance by Matthews et al. (2004).
II Peg was observed from 15th September to 16th October 2008, in HJD = 2, 454, 725 − 2, 454, 756, during 439 satellite orbits over 30.877 days. The individual exposures were 30 sec long. Only low stray-light orbital segments were used, lasting typically 25 min of the full 103 min satellite orbit. In spite of the high background, telemetry and South Atlantic Anomaly breaks, the almost continuous light curve is very well defined (Figure 1).
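As a rough illustrative check of the observing statistics quoted above (not a calculation from the paper), 439 usable orbital segments of about 25 min each correspond to roughly 7.6 d of on-target data, i.e. a duty cycle of about 25 per cent over the 30.877 d run.

```python
# Illustrative check of the observing statistics quoted above (assumed values).
n_orbits, seg_min, run_days = 439, 25.0, 30.877
on_target_days = n_orbits * seg_min / (60 * 24)
print(f"{on_target_days:.1f} d of data, duty cycle {on_target_days / run_days:.0%}")
# -> about 7.6 d of data, duty cycle ~25%
```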
Because II Peg is usually close to or slightly fainter than 7th magnitude, it was observed in the direct-imaging mode of the satellite (Walker et al. 2003). The CCD camera does not have a mechanical shutter, which limits the possibilities of obtaining calibration frames as is commonly practised during ground-based observations. However, Rowe et al. (2006a,b) proposed an excellent calibration procedure: Because the background level caused by the Earth stray light usually changes very significantly during the orbital motion of the satellite, it is possible to determine both the dark-level and the flat-field information for pixels within small images (rasters) around stars on a per-pixel basis. We removed first the background gradient visible in most frames and caused by a nonuniform level of the stray light illumination and then reconstructed the dark and flat-field information for individual pixels on the basis of all available frames. The final steps were standard dark and flat-field corrections. This approach resulted in a considerable improvement of the photometric quality of the data. The implementation used our own scripts written in the IDL software environment. Aperture photometry was made by means of DAOPHOT II procedures (Stetson 1987), as distributed by the IDL-astro library. In spite of the above careful reductions, we still observed linear correlations between the star flux and the sky background level, most probably caused by a small photometric nonlinearity of the electronic system. The correlations showed a trend with time which could be approximated by simple linear functions of time. Corrections for the correlations produced a smooth light curve of II Peg with formal scatter of about 0.002 - 0.004 mag. However, the light curve may contain slow (10 days or longer), smooth, systematic trends at a level of about 0.01 magnitude which cannot be characterized and eliminated using the available data.
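A minimal sketch of the final decorrelation step described above (the authors used their own IDL scripts; this is an assumed re-implementation, not their pipeline): fit a linear dependence of the stellar flux on the sky background, allow the correlation to drift slowly with time, and remove it.

```python
# Sketch: remove a slowly drifting linear flux-background correlation.
# time, flux and sky are assumed to be NumPy arrays of equal length.
import numpy as np

def decorrelate(time, flux, sky):
    t0 = time - time.mean()
    A = np.column_stack([np.ones_like(time), sky, sky * t0])   # constant + sky + sky*time
    coeff, *_ = np.linalg.lstsq(A, flux, rcond=None)
    model = A @ coeff
    return flux - (model - model.mean())                        # keep the mean flux level
```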
The nearby, constant, simultaneously observed stars in the unvignetted region of the CCD, GSC 02258-01385 and GSC 02258-01152, and the low-amplitude δ Scuti-type star GSC 02258-00981, discovered by MOST, were used to determine transformations between the MOST and Johnson V magnitude systems. The VT and BT magnitudes taken from the TYCHO-2 catalogue were used, after conversion to the standard Johnson BV system. The maximum brightness magnitude of II Peg during the first half of the MOST observations was estimated at Vmax = 7.45 ± 0.02 (random) with the additional uncertainty of the system transformation of ±0.03 mag. We estimate that the combined uncertainty of the maximum V-magnitude of II Peg during the MOST observations does not exceed ±0.06. We note that the unspotted model prediction of Berdyugina et al. (1998a) was appreciably brighter, Vu = 6.9. We present the light curve of II Peg in Figure 1, where the V magnitudes are as determined above while the orbital phases were calculated by means of the ephemeris determined by Berdyugina et al. (1998a), as quoted in Section 1. The accumulated uncertainty of the orbital period over E = 765 − 769 epochs between the original determination and the MOST observations results in a very small uncertainty of the phase, ±0.002, which can be neglected in the present context.
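The text does not specify which Tycho-to-Johnson conversion was adopted; a common approximate relation, shown here only as an illustrative sketch, is V = VT − 0.090 (BT − VT).

```python
# Sketch of the Tycho-to-Johnson conversion step mentioned above, using the
# standard approximate relation; the authors' exact transformation may differ.
def tycho_to_johnson_V(BT, VT):
    return VT - 0.090 * (BT - VT)
```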
We note that during the MOST observations, the upper envelope of the light curve corresponding to the Vmax level slowly decreased from 7.45 to 7.46, while the amplitude of light changes, ∆V , decreased from 0.145 to 0.12 magnitude. It is interesting to note that the light curve obtained by MOST is similar in its shape, maximum level and amplitude to the light curve obtained by Kaluzny (1984): Vmax=7.46, ∆V =0.12, as presented by Byrne et al. (1989) and Mohin & Raveendran (1993).
FLARES
Previous observations
The astronomical literature contains a few previous reports of several very different flares observed for II Peg, including cases of non-detection even for long monitoring intervals:
(1) Bopp & Noah (1980a) observed sudden Hα enhancements which slowly decayed on time scales of days;
(2) Doyle et al. (1991) simultaneously detected a flare in X-rays and the Johnson U filter - the latter had a duration of more than 36 min;
(3) Mathioudakis et al. (1992) detected ten flares during 57.4 h of optical monitoring in the Johnson U and B filters, finding the rate of one flare per 5.9 h; (4a) Doyle et al. (1993) observed two flares in their ultraviolet spectra, with one lasting about 3 hours; (4b) The same work reported three optical flares, lasting 10.52, 101.00, and 9.08 min, with amplitudes 0.066 (B-band), 0.371 (U-band) and 0.207 (U-band) magnitude, respectively; (5) Mohin & Raveendran (1993) found one flare in their optical spectra; they summarized the results obtained by other authors and concluded that II Peg shows a tendency to flare mainly when close to its minimum light; (6) Henry et al. (1996) estimated a flaring rate of one flare per 4.45 h, which agrees well with the Mathioudakis et al. (1992) result. They noted that Byrne et al. (1994) monitored II Peg in 1992 in a U filter and found no optical flares. Henry et al. (1996) concluded that II Peg appears to exhibit long-term changes in the level of optical flare activity; (7) Berdyugina et al. (1999a) observed two flares in optical spectra, with rise times of a few hours, and very long decline times of 1.5 and 3 d. From Hα emission line profiles, they estimated that the flares had taken place above the visible pole, probably in connection with a large, single active region; (8) Frasca et al. (2008) found a strong flare in spectra obtained close to light minimum, with a duration time of at least 2 d.
MOST results
Three types of flares (Fig. 2) were detected during the MOST observations:
- Four "short" flares (nos. 1, 2, 9, 10) lasting about one or two MOST orbits (2 - 4 h), and with an amplitude of about 0.01 mag;
- Six "long" flares, very similar in shape, rise and decay time, and with amplitudes of about 0.04 mag; these flares were used to form a "mean flare" (see below);
- One particularly long-lasting flare, with a duration of about one full day.
Because of the high-background data gaps at 103 min intervals and the typical MOST orbit coverage of 25 min, none of the ten flares of the first two types was observed from start to end. In particular, the four short flares could not be analyzed sufficiently thoroughly. We attempted to construct a "mean flare" (in flux units) from the six, apparently more commonly occurring, long-duration flares, as shown in Figure 3. Because flare no.6 is affected by the preceding unusual flare no.5, we used the remaining five flares (i.e. nos. 3, 4, 7, 8, 11) for the construction. First, we expressed their intensities in continuum flux units (as defined by the underlying slow light variations caused by rotation of the spotted star, removed by dividing the data by low-order polynomials fitted to the quiescent parts of the light curve) and then we matched the individual flare start times manually to an uncertainty of about ±2 min. Then all flares were simply plotted together without any further scaling; the partially observed flares nos. 3 and 8 contributed only the decaying parts to such a mean flare.
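A sketch of this mean-flare construction (names and the polynomial degree are illustrative, not the authors' choices): divide each flare segment by a low-order polynomial fitted to the quiescent parts of the light curve, then shift its time axis to the manually matched flare start time.

```python
# Sketch of the mean-flare construction described above.
import numpy as np

def to_continuum_units(t, flux, quiet_mask, deg=2):
    trend = np.polyfit(t[quiet_mask], flux[quiet_mask], deg)   # quiescent, spot-modulated trend
    return flux / np.polyval(trend, t)

def shift_to_onset(t, t_start):
    return t - t_start   # time since the estimated flare start (matched to ~2 min)
```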
The estimated rise time from the flat continuum to the maximum of the mean flare was found to be 25 ± 4 min. The decline time, from the maximum back to the flat continuum, varied in the range of 5 to 10 h. The half-maximum duration time of the mean flare was one hour (T0.5 ≈ 59 min, as defined in Kunkel (1973)), spanning the range between 50 min for flare no.8 and 74 min for flare no.7. The duration time was shorter for flare no.11, but it could not be uniquely determined from the available data. All the flares considered here most probably did not occur on the secondary component of II Peg, the M-dwarf, for which values of T0.5 of the order of hundreds of seconds would be expected (Kunkel 1973); the location was most likely the primary component or somewhere in the space between the stars. In general, the first two types of flares observed by MOST were similar to those observed before by Doyle et al. (1993) (items (4a) and (4b) in the previous section). The long-lasting flare (no.5), which started at HJD = 2,454,738.6, close to light minimum, may be an analogue of flares observed to date as enhancements of optical spectral lines by Bopp & Noah (1980a), Berdyugina et al. (1999a) and Frasca et al. (2008). It differs markedly in its shape, rise time (6 h) and decay time (at least 18 h, possibly 24 h) from the remaining ten flares observed by MOST. It is shown among the other flares in Figure 2 and magnified in Figure 4.
Eleven flares in the time span of 30.877 d give a flaring rate of about one flare per 2.8 d. However, due to the breaks in the MOST observations, we cannot neglect the possibility of overlooking very short flares lasting only a few minutes; these would be flares similar to the two shortest observed by Doyle et al. (1993) and all of those observed by Mathioudakis et al. (1992).
The phase distribution of flares
The phase distribution of flares in relation to the spot-modulated light curve can be inspected in detail in Figure 5. The flares appear not to be uniformly distributed in orbital phase: as many as five flares appeared within the light minimum, in the orbital phase interval 0.75 - 0.85. This supports the conclusion of Mohin & Raveendran (1993) that flares in II Peg are concentrated close to light minimum when the most heavily spotted side of the visible star is directed toward the observer. However, a Kolmogorov-Smirnov test for the deviation of the phase distribution from uniformity gave a probability of 0.28 that the two distributions are identical. While this is a small number, it is not small enough to prove this assertion, which still requires confirmation.
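A sketch of such a uniformity test, as a one-sample Kolmogorov-Smirnov test of the flare phases against a uniform distribution on [0, 1); the phase values below are placeholders, not the measured ones.

```python
# Sketch of the uniformity test quoted above (placeholder phase values).
from scipy.stats import kstest

flare_phases = [0.12, 0.31, 0.47, 0.58, 0.76, 0.78, 0.80, 0.82, 0.84, 0.91, 0.20]
statistic, p_value = kstest(flare_phases, "uniform")
print(statistic, p_value)   # a p-value of ~0.28 would mean uniformity cannot be rejected
```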
A SEARCH FOR ECLIPSES
High-precision photometry from space carries a potential for detection of eclipses caused by transits of the undetected secondary companion over the visible star. We estimated the expected depths and durations of the primary eclipse in the Johnson V-band for several values of inclination using the Wilson-Devinney light-curve synthesis code (Wilson 1996) for the physical parameters obtained by Berdyugina et al. (1998a); this is shown in Fig. 6 and Fig. 7.
LIGHT CURVE ANALYSIS
In this investigation we modeled the MOST light curve in terms of dark spots which are internally invariable in time. We used the program StarSpotz, which was successfully applied to ǫ Eri (Croll et al. 2006) and κ 1 Cet (Walker et al. 2007), where differential rotation of the stars was found. The reader is directed to these papers for details of the model. The program is based on the program SpotModel (Ribárik 2002) which utilizes the analytical models developed by Budding (1977) and Dorren (1987).
The assumption of the internal invariability of spots is most likely not fulfilled in the case of II Peg. As Doppler images obtained during 1994-2002 reveal (Berdyugina et al. 1998b, 1999b), the spots may constantly change their properties over time, in time scales of several rotation periods. The shortest time scales for appreciable spot changes could be as short as 2 months, which is comparable with the length of the MOST run of 31 days. The global, progressive light curve shape changes (Fig. 1) can be easily explained by differential rotation of the stellar surface with small, random changes of the spots well averaged over the time of observations. A more detailed investigation of differential rotation should be supported by simultaneous high-resolution spectroscopic observations. Without them, as in the current investigation, we are unable to assert whether spots really remained sufficiently constant during the MOST observations. Additionally, whatever method of spot shape restoration is used (including the maximum entropy method), the results will always be subject to limitations imposed by the instrumental effects mentioned in Section 2.
Light curve modelling
After first trial runs it turned out that at least two cold spots on the hemisphere directed to the observer are necessary to give a reasonable explanation of the light variations. Compatible with the non-detection of eclipses, we assumed: i = 60°, R1 = 3.4 R⊙ and v sin i = 22.6 km/s, as determined by Berdyugina et al. (1998a) (see also fig. 6 of their paper). Also, as fixed parameters for the light curve models, we assumed the linear limb-darkening coefficient u = 0.817, adopted using the tables of Díaz-Cordovés et al. (1995). We also fixed the photospheric and spot temperatures at Tphot = 4,600 K and Tspot = 3,600 K, based on the results of Berdyugina et al. (1998b, 1999b). This assumption was required to fix the spot-to-photosphere flux ratio f for the bandpass of the MOST observations. f = 0.077 ± 0.015 has been evaluated by means of the SPECTRUM program (Gray 2001) and Kurucz's atmosphere models (Kurucz 1993) using the MOST filter bandpass. The same value of f was assumed for all spots. The remaining parameters of the model were, for each spot: (1) the initial moment t (in hjd ≡ HJD − 2,454,700), when the spot is exactly facing the observer, (2) the rotation period of the spot p in days, (3) the latitude φ, (4) the diameter r in degrees, and (5) the value of unspotted flux Fu = 1.26. The latter value was calculated assuming the value of unspotted magnitude equal to Vu = 7.20, as determined by Chugainov (1976) at the time when he observed a flat maximum, and also adopted by Mohin & Raveendran (1993). We note that Berdyugina et al. (1998b) suggested Vu = 6.9, but for such a high brightness it is impossible to obtain a physically plausible fit to the light curve: We would have to postulate that we observed II Peg with almost the whole surface covered by black spots. As we described in Section 2, we observed Vmax = 7.45 ± 0.06, based on the MOST instrumental system, after transformation to the V-band using nearby stars. If the unspotted magnitude for II Peg is Vu = 7.20, the unspotted flux at the time of the MOST observations would be Fu = 1.26 ± 0.07 (using normalization of Fu = 1 for Vmax = 7.45). This leads however to the difficulty of large radii of both spots with some overlap, which is not admitted by the model. We solved this problem assuming a third, very large (r3 = 89°) circular spot, covering practically the whole hemisphere directed away from the observer. This spot remained constant and - because of the low inclination - only partially visible. It represented the non-variable part of the spotted photosphere. This assumption is strongly supported by results obtained from the Doppler imaging technique: According to Neff et al. (1995), O'Neal & Neff (1997), Marino et al. (1999) and O'Neal et al. (1998), spots are always visible and they cover between 35 to 64 per cent of the hemisphere projected toward the observer. As mentioned in Section 2, the amplitude of light changes observed by MOST was ΔV = 0.145 − 0.12 magnitude. This was close to the smallest value noticed to date which, according to Mohin & Raveendran (1993), means that during the MOST observations spots covered a large fraction of the stellar surface.
Table 1. Results of the light curve models for a rigidly and a differentially rotating star for i = 60°. The ranges in resulting parameters for the two extreme values of the normalization parameter Fu, 1.19 and 1.33 (see the text), are given in brackets as estimates of parameter uncertainties. a - assumed as constant during modelling; b - determined using the constraints R1 = 3.4 R⊙ and v sin i = 22.6 km/s; c - calculated using Eq. (1). Columns (1) and (2): proximity effects neglected; columns (3) and (4): proximity effects accounted for. (The third, constant spot has φ3 = −90° (assumed) and r3 = 89° in all solutions; the full table body is not reproduced here.)
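The unspotted flux level quoted above follows directly from the magnitude difference; a worked check (illustrative only):

```python
# With V_u = 7.20 and the light curve normalised to F = 1 at V_max = 7.45 +/- 0.06,
# the unspotted flux is F_u = 10**(0.4*(V_max - V_u)).
for V_max in (7.39, 7.45, 7.51):
    F_u = 10 ** (0.4 * (V_max - 7.20))
    print(f"V_max = {V_max:.2f}  ->  F_u = {F_u:.2f}")
# prints roughly 1.19, 1.26, 1.33, the range used in Table 1
```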
The light curve used in the model utilized mean points formed for individual MOST orbits, after removal of all stellar flares. Typically 40 - 70 points contributed to one MOST-orbit average point. The formal normalized flux error (σ) per point is about 0.0011 (median), with a full range of 0.0007 - 0.0022. Please note that, as discussed in Section 2, the light curve may contain a roughly 10 day-long smooth trend, a few times larger in amplitude than the formal normalized flux errors given above.
The spot model using the StarSpotz program assumes spherical stars. II Peg is a binary system where some light modulation is expected from ellipsoidal and reflection effects, the so-called proximity effects. They are small in an absolute sense (less than 0.02 mag, see Figure 6) but cannot be neglected as, for the assumed i = 60°, they reach almost 9 per cent (0.012 mag) of the observed spot-modulation amplitude. Because of the uncertainty in the inclination (±10°), not all physical parameters of II Peg are fully known; this affects predictions of the proximity effects. To assess the impact of the proximity effects on the final results, particularly on the differential rotation of the visible star, we used two light curves, corrected and uncorrected for the proximity effects. Each of the two light curves was analyzed assuming a rigidly and a differentially rotating stellar surface.
The individual spot rotation periods p1,2 were assumed to depend on the stellar latitude φ1,2, the rotational period on the equator Peq and differential rotation coefficient k through:
$$p_i(\phi_i) = P_{\rm eq}/(1 - k\sin^2\phi_i), \qquad (1)$$
where i = 1, 2. The search procedure for k consisted of two steps: first, for the assumed R = 3.4 R⊙, the proper value of Peq = 6.5940 ± 0.0005 d was found, reproducing the observed v sin i = 22.6 km/s; then, for the value of the unspotted flux level Fu = 1.26, the differential rotation coefficient k returning the smallest value of the reduced, weighted χ² was derived. The formula given by Eq.
(1) corresponds to the solar-type differential rotation law. Due to the low quality of our fits, most probably dominated by the inadequacies of the spot model (see Section 5.2), we did not consider other possible types of differential rotation.
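An illustrative sketch of this two-step search (not the authors' code): fix P_eq from the adopted radius, inclination and v sin i, then scan the coefficient k of Eq. (1), refitting the spot model at each k and keeping the smallest χ²; chi2_of_k below is a placeholder for the StarSpotz fit.

```python
# Sketch of the two-step differential-rotation search described above.
import numpy as np

R_SUN_KM = 6.957e5

def equatorial_period(R1_Rsun=3.4, vsini_kms=22.6, incl_deg=60.0):
    v_eq = vsini_kms / np.sin(np.radians(incl_deg))             # km/s
    return 2.0 * np.pi * R1_Rsun * R_SUN_KM / v_eq / 86400.0    # days, ~6.59 d here

def spot_period(P_eq, k, phi_deg):
    return P_eq / (1.0 - k * np.sin(np.radians(phi_deg)) ** 2)  # Eq. (1)

def best_k(k_grid, chi2_of_k):
    chi2 = np.array([chi2_of_k(k) for k in k_grid])             # chi2_of_k wraps the spot fit
    return k_grid[int(np.argmin(chi2))]
```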
Results of the light curve modelling
The results of modelling are presented in Table 1. One can see in Figure 8 that the fit obtained for the case of differential rotation describes the light curve better than the solid-body rotation model. This applies to the general light curve evolution, and in particular to the progressive amplitude decrease, as discussed in Section 2. We also note that only for the differential-rotation models does the larger spot face the secondary star, similarly to what was obtained by Berdyugina et al. (1998b, 1999b).
To estimate the systematic errors of our models resulting from the large uncertainty of Vmax of ±0.06 mag, we repeated our solutions for two values of the unspotted flux level, Fu = 1.19 and Fu = 1.33. This choice affects the solutions strongly and the resulting spread in parameters can be taken as an indication of the uncertainty in our solutions. Note that the order in the range limits given in Table 1 is sometimes inverted but the first value always corresponds to the smaller value of Fu.
In general, the residuals -typically at a level of 0.004 of the mean flux -are much larger than formal errors of individual data points (typically 0.001). This has driven values of the formally derived, reduced, weighted χ 2 (Table 1) to values well above unity indicating systematic trends in residuals, most probably reflecting the difference between the true and the circular shape of spots which was assumed in the model. Because of the dominance of the systematic deviations over the random noise in the values of χ 2 , this parameter has only an indicative utility. Nevertheless, for each pair of solutions, with included (columns 3, 4) or excluded (columns 1, 2) proximity effects, the differential-rotation solution appears to be always better than the solid-rotation one. Taking these considerations into account, we select the solution in the last column of Table 1 as the final one, and we plot it in Figure 8.
Comparison with other results
Henry et al. (1995) determined the differential rotation parameter, k = 0.005 ± 0.001, for II Peg using several multi-epoch light curves. Our result of k = 0.0245 (+0.0155, −0.0020) is in better accordance with the linear relation between parameters of RS CVn-type stars (Eq. 9 in Henry et al. (1995)): log k = −2.12(12) + 0.76(6) × log Prot − 0.57(16) × F, where F = Rstar/R_Roche. Using the parameters listed in Table 1, we have Prot ≈ 6.7 d, F = 3.4/7.1 ≈ 0.48, leading to a prediction of k = 0.017. However, when we take into account the scatter visible in fig. 28 of Henry et al. (1995), a broad range of 0.002 < k < 0.066 is admitted for this value of Prot. Interestingly, the value of k determined in this paper for II Peg is similar to that estimated for the apparently single, but even faster rotating giant, FK Com (k = 0.016 for P = 2.4 d) by Korhonen et al. (2002).
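A quick numerical check of that prediction (illustrative only), plugging P_rot = 6.7 d and F = 3.4/7.1 into the Henry et al. (1995) relation:

```python
import numpy as np

P_rot, F = 6.7, 3.4 / 7.1
k_pred = 10 ** (-2.12 + 0.76 * np.log10(P_rot) - 0.57 * F)
print(round(k_pred, 3))   # ~0.017, as quoted in the text
```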
CONCLUSIONS
Analysis of the almost-continuous, one month-long photometric monitoring of II Pegasi by the MOST satellite permits us to formulate the following conclusions:
(i) Eleven flares were observed: one lasting about 24 h and six moderately long ones, typically lasting 5 to 10 hours. The characteristics of the four shortest flares were difficult to estimate.
(ii) The primary eclipse of the visible star by its companion (probably an M dwarf) was not detected, which gives an upper limit of 76° for the orbital inclination of the system.
(iii) From the analysis of the dark-spot modulated light curve, assuming i = 60°, R1 = 3.4 R⊙, v sin i = 22.6 km/s (Berdyugina et al. 1998a) and the absence of internal variability of the spots during the MOST observations, we obtained an estimate of the parameter measuring the differential rotation of the primary component of II Peg: k = 0.0245 (+0.0155/−0.0020). The error of k reflects the major uncertainty in the unspotted brightness of the star, so the value of k remains preliminary; it will improve with future refinements of the assumed stellar parameters entering the model.
Figure 2. Enlargements of eight light-curve segments with eleven flares of II Peg. Groups of data points within each figure correspond to individual MOST orbits.

Figure 3. The mean flare made from five similar, long-lasting flares (numbers 3, 4, 7, 8, 11), expressed in flux units. Time zero and maximum (1.037 continuum flux units) correspond to the estimated moment and mean maximum amplitude of the flare.

Figure 4. The long-lasting flare no. 5 together with the subsequent flare no. 6.

Figure 5. Individual flares are indicated by vertical marks along the horizontal (orbital phase) axis. Two sections of the light curve of II Peg, from the beginning (orbital phases 0.9–2.1) and the end (phases 3.9–5.1) of the MOST monitoring are shown for comparison. The light curve evolved mainly due to a drift in longitude of the spots at a rate faster than the orbital motion.

Figure 6. Synthetic light curves for II Peg computed using the stellar parameters obtained by Berdyugina et al. (1998a) for four different values of the orbital inclination.

Figure 8. The fit to the light curve of II Peg (corrected for proximity effects) by the model with solid-body rotation (k = 0, column 3 of Table 1) and with differential rotation (k = 0.0245, column 4).
Figure 1. The light curve of II Peg in magnitudes (magnitude versus HJD − 2454700 and orbital phase). The horizontal scale is in heliocentric Julian Days (lower edge) and in orbital phase units (upper edge), with zero phase for the conjunction with the visible star behind. The phases were calculated by means of the ephemeris determined by Berdyugina et al. (1998a) as given in Section 1.

Figure 7. Segments of the MOST data close to the predicted times of primary eclipse. The segments for four consecutive conjunctions (which show spot evolution in time) have been shifted down for clarity by the indicated magnitude amounts (+0.03, +0.06, +0.09, +0.12 mag); the first conjunction is on top. The start (90s, 77s) and end (77e, 90e) phases of the eclipse, estimated for i = 90° and i = 77°, are represented by four vertical lines. Note that, as discussed in Sec. 2, the current spectroscopic ephemeris is very accurate and predicts the conjunction phase to ±0.002.
http://idlastro.gsfc.nasa.gov/contents.html
ACKNOWLEDGMENTS

MS acknowledges the Canadian Space Agency Post-Doctoral position grant to SMR within the framework of the Space Science Enhancement Program. The Natural Sciences and Engineering Research Council of Canada supports the research of DBG, JMM, AFJM, and SMR. Additional support for AFJM comes from FQRNT (Québec). RK is supported by the Canadian Space Agency and WWW is supported by the Austrian Space Agency and the Austrian Science Fund. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France and NASA's Astrophysics Data System (ADS) Bibliographic Services. Special thanks are due to Drs. Dorota Kozieł-Wierzbowska and Staszek Zoła for their attempts to detect the primary eclipse using photometric observations of II Peg at the Jagiellonian University Observatory in Cracow, Poland, and to Mr. Bryce Croll for his permission to use his spot modelling software.
REFERENCES

Bopp B.W., Noah P.V., 1980a, PASP, 92, 333
Bopp B.W., Noah P.V., 1980b, PASP, 92, 717
Berdyugina S.V., Jankov S., Ilyin I., Tuominen I., Fekel F.C., 1998a, A&A, 334, 863
Berdyugina S.V., Berdyugin A.V., Ilyin I., Tuominen I., 1998b, A&A, 340, 437
Berdyugina S.V., Ilyin I., Tuominen I., 1999a, A&A, 349, 863
Berdyugina S.V., Berdyugin A.V., Ilyin I., Tuominen I., 1999b, A&A, 350, 626
Budding E., 1977, Ap&SS, 48, 207
Byrne P.B., Panagi P., Doyle J.G., Englebrecht C.A., McMahan R., Marang F., Wegner G., 1989, A&A, 214, 227
Byrne P.B., Lanzafame A.C., Sarro L.M., Ryans R., 1994, MNRAS, 270, 427
Chugainov P.F., 1976, Krymskaia Astrof. Obs., Izvestiia, 54, 89
Croll B., Walker G., Kuschnig R., Matthews J., Rowe J., Walker A., Rucinski S., Hatzes A., Cochran W., Robb R., Guenther D., Moffat A., Sasselov D., Weiss W., 2006, ApJ, 648, 607
Díaz-Cordovés J., Claret A., Giménez A., 1995, A&AS, 110, 329
Dorren J.D., 1987, ApJ, 320, 756
Doyle J.G., Kellett B.J., Byrne P.B., Avgoloupis S., Mavridis L.N., Seiradakis J.H., Bromage G.E., Tsuru T., Makishima K., McHardy I.M., 1991, MNRAS, 248, 503
Doyle J.G., Mathioudakis M., Murphy H.M., Avgoloupis S., Mavridis L.N., Seiradakis J.H., 1993, A&A, 278, 499
Frasca A., Biazzo K., Tas G., Evren S., Lanzafame A.C., 2008, A&A, 479, 557
Gray R.O., 2001, http://phys.appstate.edu/spectrum/spectrum.html, Department of Physics and Astronomy, Appalachian State University
Henry G.W., Eaton J.A., Hamer J., Hall D.S., 1995, ApJSS, 97, 513
Henry G.W., Newsom M.S., 1996, PASP, 108, 242
Kaluzny J., 1984, IBVS, No. 2627
Korhonen H., Berdyugina S.V., Tuominen I., 2002, A&A, 390, 179
Kunkel W.E., 1973, ApJSS, 213, 25
Kurucz R., 1993, Atomic data for opacity calculations, Kurucz CD-ROM No. 1-18, Cambridge, Mass., Smithsonian Astrophysical Observatory
Marino G., Rodonò M., Leto G., Cutispoto G., 1999, A&A, 352, 189
Matthews J.M., Kuschnig R., Guenther D.B., Walker G.A.H., Moffat A.F.J., Rucinski S.M., Sasselov D., Weiss W.W., 2004, Nature, 430, 51
Mathioudakis M., Doyle J.G., Avgoloupis S., Mavridis L.N., Seiradakis J.H., 1992, MNRAS, 255, 48
Mohin S., Raveendran A.V., 1993, A&A, 277, 155
O'Neal D., Neff J.E., 1997, AJ, 113, 1129
O'Neal D., Saar S.H., Neff J.E., 1998, ApJ, 501, L73
Neff J.E., O'Neal D., Saar S.H., 1995, ApJ, 452, 879
Udalski A., Rucinski S.M., 1982, AcA, 32, 315
Ribárik G., 2002, Occasional Technical Notes from Konkoly Observatory, No. 12
Rucinski S.M., 1977, PASP, 89, 280
Rowe J.F., Matthews J.M., Kuschnig R., et al., 2006a, Mem. S.A.It., 77, 282
Rowe J.F., Matthews J.M., Seager S., et al., 2006b, ApJ, 646, 1241
Sanford R.F., 1921, ApJ, 53, 201
Stetson P.B., 1987, PASP, 99, 191
Vogt S.S., 1979, PASP, 91, 616
Walker G., Matthews J., Kuschnig R., Johnson R., Rucinski S., Pazder J., Burley G., Walker A., et al., 2003, PASP, 115, 1023
Walker G., Croll B., Kuschnig R., Walker A., Rucinski S., Matthews J., Guenther D., Moffat A., Sasselov D., Weiss W., 2007, ApJ, 659, 1611
Wilson R.E., 1996, Documentation of Eclipsing Binary Computer Model
GROUPS OF OUTER TYPE E 6 WITH TRIVIAL TITS ALGEBRAS

Skip Garibaldi
Holger P. Petersson

9 Nov 2005

Abstract. In two 1966 papers, J. Tits gave a construction of exceptional Lie algebras (hence implicitly exceptional algebraic groups) and a classification of possible indexes of simple algebraic groups. For the special case of his construction that gives groups of type E6, we connect the two papers by answering the question: Given an Albert algebra A and a separable quadratic field extension K, what is the index of the resulting algebraic group?
The exceptional simple algebraic groups are organized in a chain of inclusions A 1 ⊂ A 2 ⊂ G 2 ⊂ D 4 ⊂ F 4 ⊂ E 6 ⊂ E 7 ⊂ E 8 . One approach to proving something about a group of type, say, E 6 , is to attempt to make use of known facts about the groups of types appearing earlier in the chain. Essentially everything is known about groups of types A 1 (corresponding to quaternion algebras), A 2 by [KMRT, §19], and G 2 (corresponding to octonion algebras). Quite a lot is known about groups of type F 4 , corresponding to Albert algebras. In contrast, very little is known about groups of type E 6 . Two of the main results are Tits's construction in [Ti 66a] and the classification of possible indexes [Ti 66b].
The version of Tits's construction studied here takes an Albert algebra A and a quadratic étale algebra K and produces a simply connected group G(A, K) of type E 6 . (See §3 below for background on Albert algebras.) The purpose of this paper is to answer the question:
(0.1)
What is the index of the group G(A, K)?
The hard part of answering this question is treated by the following theorem. We fix an arbitrary base field k.
0.2. Theorem. The following are equivalent:
(1) The group G(A, K) is isotropic.
(2) k × K is (isomorphic to) a subalgebra of A.
(3) A is reduced and there exists a 2-Pfister bilinear form γ such that γ · f 3 (A) = f 5 (A) and γ · [K] = 0.
Conditions (1)-(3) are implied by:
(4) A has a nonzero nilpotent element. Furthermore, if A is split by K, then (1)-(3) are equivalent to (4).
Once one knows that G(A, K) is isotropic, it is not difficult to determine the index of G(A, K)-see Prop. 2.3 and 4.8-so we have completely settled Question (0.1).
When K is "split" (i.e., K = k × k, equivalently, G(A, K) has type 1 E 6 ), the theorem is a triviality. Indeed, conditions (1) through (3) are equivalent to the statement "A is reduced". When K is split, we define the statement "A is split by K" to mean that A is split as an Albert k-algebra.
The main theorem shows the flavor of the paper; it mixes algebraic groups (in (1)), Jordan algebras (in (2) and (4)), and-essentially-quadratic forms (in (3)). The core of our proof is Jordan-theoretic. We prove (1) implies (2) or (4) in Cor. 5.3 and Propositions 7.2 and 8.1. We prove that (4) implies (2) in 6.3, (2) implies (3) in 6.2, and (3) implies (1) in 9.7. The last claim is proved in Example 5.4 and 10.1.
As side benefits of the proof, we obtain concrete descriptions of the projective homogeneous varieties for groups of type 2 E 6 in §5 and we easily settle an open question from a 1969 paper of Veldkamp [V 69] in 5.5.
1. Notation and reminders
Recall that the (Tits) index [Ti 66b,2.3] of an (affine, semisimple) algebraic group is its Dynkin diagram plus two other pieces of information: the Galois action on the diagram and circles indicating the maximal k-split torus in the group.
The Tits algebras of an algebraic group G are the k-algebras End G (V ) as V varies over k-irreducible representations of G. We say that G has trivial Tits algebras if End G (V ) is a (commutative) field for every V .
Below, K will always denote a quadratic étale k-algebra with nontrivial k-automorphism ι, and A is an Albert k-algebra.
Throughout, we use the notation ⟨α_1, ..., α_n⟩ for the diagonal matrix with α_i in the (i, i) entry and for the symmetric bilinear form with that Gram matrix. We write ⟨⟨α_1, ..., α_n⟩⟩ for the Pfister bilinear form ⟨1, −α_1⟩ ⊗ ··· ⊗ ⟨1, −α_n⟩. "Pfister form" means "Pfister quadratic form". We write I^n k for the module of quadratic forms generated by the n-Pfister forms over the Witt ring of symmetric bilinear forms (this agrees with the usual notation in characteristic ≠ 2). We write [K] for the 1-Pfister form given by the norm K → k; a similar convention applies to the norm of an octonion k-algebra.
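For concreteness (our unwinding of the notation just introduced, not a display from the original), a 2-fold Pfister bilinear form expands as

```latex
\langle\!\langle \alpha_1, \alpha_2 \rangle\!\rangle
  = \langle 1, -\alpha_1 \rangle \otimes \langle 1, -\alpha_2 \rangle
  = \langle 1,\, -\alpha_1,\, -\alpha_2,\, \alpha_1\alpha_2 \rangle ,
```

a symmetric bilinear form of rank 4; in general an n-fold Pfister form has rank 2^n.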
For n ∈ N, we write H^q(k, n) for the group denoted by H^q(k, Z/nZ(q−1)) in [GMS]. When n is not divisible by the characteristic of k, it is the Galois cohomology group H^q(k, μ_n^{⊗(q−1)}). There is a bijection between q-fold Pfister forms (up to isomorphism) and symbols in H^q(k, 2). In characteristic different from 2, this is a direct consequence of Voevodsky's proof of the Milnor Conjecture, and in characteristic 2 it is in [AB]. We will write, for example, [K] also for the symbol in H^1(k, 2) corresponding to the norm K → k, and similarly for an octonion k-algebra.
2. Rost invariants
Let G be a quasi-simple, simply connected group over a field k. There is a canonical map r G : H 1 (k, G) → H 3 (k, n G ) known as the Rost invariant, where n G is a natural number depending on G, see [GMS] for details. The map is "functorial in k". In this section, we relate the Rost invariant with the index of isotropic groups of type 2 E 6 with trivial Tits algebras.
There is a class ν ∈ H^1(k, Aut(G)°) such that the twisted group G_ν is quasi-split. Moreover, this property uniquely determines ν, as can be seen from a twisting argument and the fact that the kernel of the map H^1(k, Aut(G_ν)°) → H^1(k, Aut(G_ν)) is zero.
If G has trivial Tits algebras, then there is an η ∈ H^1(k, G) that maps to ν and we write a(G) ∈ H^3(k, n_G) for the Rost invariant r_G(η).

2.1. Lemma. The element a(G) depends only on the isomorphism class of G (and not on the choice of η).

Proof. Fix a particular η. Every inverse image of ν is of the form ζ · η for some ζ ∈ Z^1(k, Z(G)). Write τ for the "twisting" isomorphism H^1(k, G) → H^1(k, G_η). The centers of G and G_η are canonically identified, and we have τ(ζ · η) = ζ · τ(η) = ζ · 1. Since the Rost invariant is compatible with twisting, r_G(ζ · η) = r_{G_η}(τ(ζ · η)) + r_G(η) = r_{G_η}(ζ · 1) + r_G(η). But G_η is quasi-split, so the image ζ · 1 of ζ in H^1(k, G_η) is trivial.
2.2. Examples. In the following examples, G always denotes a quasi-simple, simply connected group with trivial Tits algebras.
(a) Let G be of type 1 D n for n = 3 or 4, so that G is isomorphic to Spin(q) for some 2n-dimensional quadratic form q in I 3 k. The invariant a (G) is the Arason invariant of q.
In the case n = 3, the Arason-Pfister Hauptsatz implies that q is hyperbolic, so G is split and a(G) is zero.
In the case n = 4, q is similar to a 3-Pfister form. It follows that a (G) is zero if and only if G is split if and only if G is isotropic. (b) Let G be of type 2 D n for n = 3 or 4, with associated separable quadratic extension K/k. The group G is isomorphic to Spin(q) for q a 2n-dimensional quadratic form such that
q − [K] is in I 3 k; the Arason invariant of q − [K] is a(G).
We claim that a(G) is a symbol in H 3 (k, 2). When n = 3, q − [K] is an 8-dimensional form in I 3 k, so it is similar to a Pfister form. In the case n = 4, q − [K] is a 10-dimensional form in I 3 k, hence it is isotropic [Ti 90,4.4.1(ii)]. The Hauptsatz implies that q − [K] is isomorphic to α γ ⊥ H for some α ∈ k × and some 3-Pfister γ, where H denotes a hyperbolic plane. This shows that a(G) is a symbol in both cases.
We next observe that a (G) is not killed by K if and only if n = 4 and G is k-anisotropic. To see this, we may assume that n = 4 and G is k-anisotropic by (a); we will show that G is anisotropic over K. Suppose not, i.e., that G is split by K, that is, q is hyperbolic over K. It follows that γ is isomorphic to β[K] for some 2-Pfister bilinear form β [HL,4.2(iii)]. Since α γ ⊥ [K] is isomorphic to q ⊥ H, γ represents −αλ for some nonzero λ represented by [K]. The roundness of γ and [K] gives that α γ is isomorphic to −1 β[K]. Thus, q ⊥ H has Witt index at least 2. This contradicts the hypothesis that G is k-anisotropic, completing the proof of the observation. (c) Let G be anisotropic of type 2 A 5 with associated separable quadratic extension K/k; G is isomorphic to the special unitary group of a K/k-hermitian form deduced from a 6-dimensional symmetric bilinear form β over k. Note that a(G) lives in H 3 (k, 2) and is the Arason
invariant of β[K]. Suppose that a(G) is a symbol. Since G is split by K, a(G) is of the form [K] · (λ) · (µ) for some λ, µ ∈ k × ; that is, β[K]
is congruent to the corresponding 3-Pfister γ modulo I 4 k. The function field of γ makes the 12-dimensional form β[K] hyperbolic by the Hauptsatz, so the anisotropic part of β[K] is isomorphic to τ ⊗ γ for some bilinear form τ . In particular, β[K] is isotropic, contradicting the anisotropy of G. We conclude that a (G) is not a symbol.
The purpose of the preceding examples was to prepare the proof of the following proposition.
2.3. Proposition. Let G be an isotropic group of type 2 E 6 with trivial Tits algebras; write K for the associated quadratic extension of k. Then a(G) is in H^3(k, 2) and the index of G is given by Table 2.4.

Table 2.4.
index                                             | condition
quasi-split                                       | a(G) = 0
k-rank 2: orbits {α_1, α_6} and {α_2} circled     | a(G) is a nonzero symbol killed by K
k-rank 1: only the orbit {α_1, α_6} circled       | a(G) is a symbol not killed by K
k-rank 1: only the orbit {α_2} circled            | a(G) is not a symbol

A group G from the first three rows of the table is completely determined by the value of a(G).
2.5. Proposition. Let G and G ′ be quasi-simple, simply connected groups of type 2 E 6 whose indexes are in the first three rows of Table 2.4. If a(G) equals a(G ′ ), then G and G ′ are isomorphic.
Proof. Fix a maximal k-split torus S in G, a maximal k-torus T containing it, and a set of simple roots for G with respect to T. Since G is simply connected, the group of cocharacters of T is identified with the coroot lattice. Write S_1 for the rank 1 torus corresponding to α̌_1 + α̌_6 ("corresponding to the circle around the α_1 and α_6 vertices in the index"); it is k-defined by [BT, Cor. 6.9]. Put G_1 for the derived subgroup of Z_G(S_1); it is simply connected of type 2 D 4 with trivial Tits algebras.
Define a subgroup G ′ 1 of G ′ in an analogous manner. Since G 1 and G ′ 1 are strongly inner forms of each other, G ′ 1 is isomorphic to G 1 twisted by some 1-cocycle α ∈ Z 1 (k, G 1 ). The semisimple anisotropic kernel of G ′ lies in G ′ 1 , hence is the semisimple anisotropic kernel of G twisted by α. Tits's Witt-type theorem implies that G ′ is isomorphic to G twisted by α.
Since a(G) equals a(G ′ ), the Rost invariant r G (α) is zero by a twisting argument. But the inclusion of G 1 into G arises from the natural inclusion of root systems and so has Rost multiplier one. Hence r G 1 (α) is also zero. Moreover, since G 1 has trivial Tits algebras, r G 1 (ζ · α) is zero for every 1-cocycle ζ with values in the center of G 1 .
Fix an isomorphism of G 1 with Spin(q) for some 8-dimensional quadratic form q and write q ζ·α for the quadratic form obtained by twisting q via the image of ζ · α in Z 1 (k, SO(q)); the forms so obtained are precisely the forms λ q for λ ∈ k × . Pick a ζ (and hence a λ) so that q and λ q α represent a common element of k. Then q − λ q α is an isotropic 16-dimensional form in I 4 k, hence it is hyperbolic. Thus λ q α is represented by the trivial class in H 1 (k, SO(q)); it follows that α is in the image of the map H 1 (k, Z(G 1 )) → H 1 (k, G 1 ) and that G ′ 1 is isomorphic to G 1 . Applying Tits's Witt-type theorem again, we find that G and G ′ are isomorphic.
2.6. Remark (char k ≠ 2). Let G be as in Prop. 2.5 and let γ be the 3-Pfister form corresponding to a(G). We claim that the group G 1 in the proof of the proposition is isomorphic to Spin(γ K ), where γ K denotes the K-associate of γ as defined in [KMRT, p. 499]. In particular, the semisimple anisotropic kernel of G is the semisimple anisotropic kernel of Spin(γ K ). To prove the claim, let G ′ be the quasi-split strongly inner form of G, so G ′ 1 is the spin group of a quasi-split quadratic form q. Fix α ∈ H 1 (k, Spin(q)) such that q α is isomorphic to γ K . The twisted group G ′ α is as in the first three lines of the table and a(G ′ α ) equals the Arason invariant of q α − q. This form is Witt-equivalent to δ γ for δ ∈ k × such that K = k( √ δ), hence a(G ′ α ) equals a(G) and G ′ α is isomorphic to G by Prop. 2.5. Since the semisimple anisotropic kernels of G ′ α and Spin(q α ) agree, the claim follows. The following remarks on the proposition make some forward references, but we will not refer to them elsewhere in the paper.
2.7. Remark. The hypothesis that the indexes of both G and G ′ are in the first three rows of the table is crucial. For example, take G to be the real Lie group EIII and let G ′ be the compact real Lie group of type E 6 . The index of G is in the second row of the table, but G ′ is anisotropic. Nonetheless, a(G) and a(G ′ ) both equal the unique nonzero element of H 3 (R, 2), as can be seen by combining 4.8 and [J 71, p. 120].
2.8. Remark. Given a symbol γ ∈ H^3(k, 2), there is a unique corresponding octonion algebra C. The index of the group G := G(H_3(C, ⟨1, 1, −1⟩), K) appears in the first three rows of Table 2.4 by the main theorem for every K, and a(G) = γ by 4.8. Combined with Propositions 2.3 and 2.5, we conclude that every group whose index appears in the first three rows of Table 2.4 is isomorphic to G(H_3(C, ⟨1, 1, −1⟩), K) for some octonion algebra C and some K. This is approximately the content of [V 68, 3.3] and [V 69, 3.2].
Assuming the main theorem, we can rephrase the conclusion above as:
Conditions (1)-(3) in Th. 0.2 are equivalent to (2.9) G(A, K) is isomorphic to G(A ′ , K) for some Albert k-algebra A ′ with nonzero nilpotents.
3. Albert algebra reminders

3.1. Arbitrary Albert algebras. Albert algebras are Jordan algebras of degree 3 and hence may all be obtained from cubic forms with adjoint and base point in the sense of [McC 69]. More specifically, given an Albert algebra A over k, there exist a cubic form N : A → k (the norm) and a quadratic map ♯ : A → A, x → x^♯ (the adjoint) which, together with the unit element 1 ∈ A, satisfy the relations N(1) = 1, 1^♯ = 1,
x^{♯♯} = N(x)x,   (3.2)
(DN)(x)y = T(x^♯, y),   1 × x = T(x)1 − x   (3.3)
in all scalar extensions, where T = −(D² log N)(1) : A × A → k (the trace form) stands for the logarithmic hessian of N at 1, T(x) = T(x, 1), and x × y = (x + y)^♯ − x^♯ − y^♯ is the bilinearization of the adjoint. The U-operator of A is then given by the formula
U_x y = T(x, y)x − x^♯ × y.   (3.4)
The quadratic trace, defined by S(x) := T(x^♯), is a quadratic form with bilinearization
S(x, y) = T(x)T(y) − T(x, y).   (3.5)
It relates to the adjoint by the formula
x^♯ = x² − T(x)x + S(x)1,   (3.6)
where, as in [J 68, 1.5], the powers of x ∈ A are defined by x 0 = 1, x 1 = 1, and x n+2 = U x x n for n ≥ 0. For future use, we recall the following identities from [McC 69].
x^♯ × (x × y) = N(x)y + T(x^♯, y)x,   (3.7)
T(x^♯, x) = 3N(x),   (3.8)
x^♯ × x = [S(x)T(x) − N(x)]1 − S(x)x − T(x)x^♯,   (3.9)
and
x^♯ × (y × z) + (x × y) × (x × z) = T(x^♯, y)z + T(x^♯, z)y + T(y × z, x)x.   (3.10)
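As a quick consistency check of these identities (our verification, not part of the original text): evaluating at x = 1 and using N(1) = 1, 1^♯ = 1, and T(1) = S(1) = 3 (the latter follow from (3.3) and S(x) = T(x^♯)), the right-hand side of (3.9) becomes

```latex
[S(1)T(1) - N(1)]\,1 - S(1)\,1 - T(1)\,1^{\sharp}
  = (9 - 1)\,1 - 3\cdot 1 - 3\cdot 1 = 2\cdot 1 ,
```

which agrees with the left-hand side 1^♯ × 1 = (1 + 1)^♯ − 1^♯ − 1^♯ = 4·1 − 2·1 = 2·1.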
Just as in general Jordan rings, the Jordan triple product derives from the U -operator (3.4) through linearization:
{x, y, z} := U_{x,z} y := (U_{x+z} − U_x − U_z) y = T(x, y)z + T(y, z)x − (z × x) × y.   (3.11)
Finally, the generic minimum polynomial of x in the sense of [JK] is
m_x(t) = t³ − T(x)t² + S(x)t − N(x) ∈ k[t],   (3.12)
so by [JK, p. 219] we have
x³ − T(x)x² + S(x)x − N(x)1 = 0 = x⁴ − T(x)x³ + S(x)x² − N(x)x,   (3.13)
where the second equation is a trivial consequence of the first only for char k ≠ 2 because then we are dealing with linear Jordan algebras. An element x ∈ A is invertible if and only if N(x) ≠ 0. At the other extreme, x ∈ A is said to be singular if x^♯ = 0 ≠ x.
As an ad-hoc definition, singular idempotents will be called primitive.
The following lemma collects well-known properties of the Peirce decomposition relative to primitive idempotents, using the labelling of [Lo 75]; we refer to [Fa, Lemma 1.5].

3.14. Lemma. Let e ∈ A be a primitive idempotent and put f = 1 − e.
(a) The Peirce components of A relative to e are described by the relations
A_2(e) = ke,  A_1(e) = {x ∈ A | T(x) = 0, e × x = 0},  A_0(e) = {x ∈ A | e × x = T(x)f − x}.
(b) The restriction S_e of S to A_0(e) is a quadratic form with base point f over k whose associated Jordan algebra agrees with A_0(e) as a subalgebra of A.
(c) x ♯ = S(x)e for all x ∈ A 0 (e).
3.15. Reduced Albert algebras. An Albert k-algebra A is reduced if it is not division, i.e., if it contains nonzero elements that are not invertible. Every such algebra is isomorphic to a Jordan algebra H 3 (C, Γ) as defined in the following paragraph.
Let C be an octonion (or Cayley) k-algebra (see [KMRT,§33.C] for the definition and elementary properties) and fix a diagonal matrix Γ := γ 1 , γ 2 , γ 3 ∈ GL 3 (k). We write H 3 (C, Γ) for the vector space of 3-by-3 matrices x with entries in C that are Γ-hermitian (x = Γ −1 x t Γ) and have scalars down the diagonal. It is spanned by the diagonal unit vectors e ii (1 ≤ i ≤ 3) and by the hermitian matrix units x i [jl] := γ l x i e jl + γ j x i e lj (x i ∈ C), where (ijl) here and in the sequel always varies over the cyclic permutations of (123). It has a Jordan algebra structure derived from a cubic form with adjoint and base point as in 3.1: Writing N C for the norm, T C for the trace and x → x for the conjugation of C, we let
x = α i e ii + x i [jl] , y = β i e ii + y i [jl] , (α i , β i ∈ k, x i , y i ∈ C)
be arbitrary elements of A and set
N(x) = α_1α_2α_3 − γ_jγ_l α_i N_C(x_i) + γ_1γ_2γ_3 T_C(x_1x_2x_3),   (3.16)
x^♯ = (α_jα_l − γ_jγ_l N_C(x_i)) e_ii + (γ_i x_j x_l − α_i x_i)[jl],   (3.17)
as well as 1 = e ii . Then (N, ♯, 1) is a cubic form with adjoint and base point whose associated trace form is
T (x, y) = α i β i + γ j γ l N C (x i , y i ) . (3.18)
The Jordan algebra structure on H 3 (C, Γ) is defined to be the one associated with (N, ♯, 1).
Specializing y to 1 in (3.18) yields
T (x) = α i , (3.19)
whereas linearizing (3.17) leads to the relation
x × y = (α_jβ_l + β_jα_l − γ_jγ_l N_C(x_i, y_i)) e_ii + (γ_i (x_j y_l + y_j x_l) − α_i y_i − β_i x_i)[jl].   (3.20)
Furthermore, the quadratic trace S of A and its polarization by (3.17), (3.19), (3.20) have the form
S(x) = α j α l − γ j γ l N C (x i ) , (3.21) S(x, y) = α j β l + β j α l − γ j γ l N C (x i , y i ) .
The triple F := (e 11 , e 22 , e 33 ) is a frame, i.e., a complete orthogonal system of primitive idempotents, in A with Peirce components J ii (F ) = Re ii , J jl (F ) = C [jl]. Comparing this with (3.21), we obtain a natural isometry
S| J jl (F ) ∼ = −γ j γ l . N C . (3.22)
3.23. The split Albert algebra. Of particular interest is the split Albert algebra A d = H 3 (C d , 1), where C d stands for the split octonion algebra of Zorn vector matrices over k [KMRT, Ch. VIII, Exercise 5] and 1 is the 3-by-3 unit matrix. Albert algebras are exactly the k-forms of A d , i.e., they become isomorphic to A d over the separable closure of k. For any Albert algebra A over k as in 3.1, the equation N = 0 defines a singular hypersurface in P(A), its singular locus being given by the system of quadratic equations x ♯ = 0 in P(A).
3.24. The invariants f_3 and f_5. Letting A be a reduced Albert algebra over k as in 3.15, we briefly describe the cohomological mod-2 invariants of A in a characteristic-free manner; for char k ≠ 2, see [GMS]. Since C is uniquely determined by A up to isomorphism [Fa, Th. 1.8], so is its norm N_C, which, being a 3-fold Pfister form, gives rise to the first mod-2 invariant f_3(A) ∈ H^3(k, 2) of A. On the other hand, Γ is not uniquely determined by A since, for example, it may be multiplied by nonzero scalars and its components may be permuted arbitrarily as well as multiplied by nonzero square factors without changing the isomorphism class of A. This allows us to assume γ_2 = 1. Then, as in [P, 2.1, 4.1], we may consider the 5-fold Pfister form ⟨⟨−γ_1, −γ_3⟩⟩ N_C, giving rise to the second mod-2 invariant
f_5(A) = f_3(A) · (−γ_1) · (−γ_3) ∈ H^5(k, 2)   (3.25)
of A. Translating Racine's characteristic-free version [Ra 72, Th. 3] of Springer's classical result [J 68, p. 381, Th. 6] to the cohomological setting, it follows in all characteristics that reduced Albert algebras are classified by their mod-2 invariants f_3, f_5.
3.26. Nilpotent elements. Let A be an Albert algebra over k. An element x ∈ A is said to be nilpotent if x n = 0 for some n ∈ N. Combining [JK,p. 222,Th. 2(vi)] with (3.12), we conclude that
x ∈ A is nilpotent if and only if T (x) = S(x) = N (x) = 0. (3.27)
In this case x 3 = x 4 = 0 by (3.13). Hence A contains nonzero nilpotents if and only if x 2 = 0 for some nonzero element x ∈ A. We also conclude from (3.6), (3.27) that x ∈ A satisfies x 2 = 0 if and only if x ♯ = 0 and T (x) = 0. Finally, by [P, 4.4], an Albert algebra contains nonzero nilpotent elements if and only if it is reduced and f 5 (A) = 0.
In order to bring Jordan-theoretic techniques to bear in the proof of Theorem 0.2, we will translate hypothesis (1) into a condition on the Albert algebra A. To do so, we extend the results of [CG,§13] to the case where k is arbitrary.
3.28. Definition. A subspace X in an Albert algebra A is an inner ideal if U x A ⊆ X for all x ∈ X. A subspace X of A is singular if x ♯ = 0 for all x ∈ X. A subspace X of A is a hyperline if it is of the form x × A for some nonzero x with x ♯ = 0.
The nonzero, proper inner ideals of an Albert algebra A are the singular subspaces (of dimensions 1 through 6) and the hyperlines (of dimension 10), see [McC 71,p. 467] and [Ra 77, Th. 2].
For an inner ideal X of A, we define ψ(X) to be the set of all y ∈ A satisfying (3.29)
{X, y, A} ⊆ X and (3.30)
U A U y X ⊆ X.
The definition of ψ in [CG] consisted of only condition (3.29). Condition (3.30) was suggested by Erhard Neher, for the purposes of including characteristic 2. He points out that when k has characteristic not 2, (3.30) follows from (3.29) by applying the identity JP13. (The notation JPxx refers to the Jordan pair identities as numbered in [Lo 75].) 3.31. Lemma (Neher). If X is an inner ideal of A, then ψ(X) is also an inner ideal.
Proof. Clearly ψ(X) is closed under scalar multiplication. Let y, z be elements of ψ(X). To prove that y + z is in ψ(X), it suffices to show that U A U y,z X ⊆ X, which follows from JP13:
U a U y,z x = {a, y, {a, z, x}} − {U a y, z, x} ∈ X.
Fix y ∈ ψ(X) and a ∈ A. By JP7, we have
{X, U y a, A} ⊆ {{a, y, X}, y, A} + {a, U y X, A} ⊆ X,
So U y a satisfies condition (3.29). Also, by JP3,
U A U Uya X = U A U y U a U y X ⊆ X, so U y a satisfies condition (3.30).
We now check that the descriptions of ψ(X) for various X given in [CG,§13] are valid in all characteristics. Consider first the case where X is 1-dimensional singular, i.e., an "α 1 -space" in the language of [CG]. For y ∈ X × A, we have {X, y, A} ⊆ X by [CG,13.8]. The argument in the last paragraph of the proof of [CG,Prop. 13.19] shows that U y X is zero. Therefore, ψ(X) contains X × A, and as in [CG,13.6]-using that ψ(X) is an inner ideal-we conclude that ψ(X) equals X × A.
Similar arguments easily extend the proofs of 13.9-13.19 of [CG] to include the case where k has arbitrary characteristic. (See also Remarks 5.5 and 13.22 of that paper.) For every x in a proper inner ideal X and every y ∈ ψ(X), we find a posteriori that U y x is zero. That is, (3.30) is superfluous. 3.32. In summary, for a proper, nonzero inner ideal X in A, the set ψ(X) of y ∈ A satisfying (3.29) is an inner ideal of A whose dimension is given by the following table. The 5-dimensional inner ideals come in two flavors; we write "5 ′ " to indicate a 5-dimensional maximal singular subspace. dim X 1 2 3 5 ′ 6 10 dim ψ(X) 10 5 ′ 3 2 6 1 For X of dimension = 6, the space ψ(X) is the set of all y ∈ A such that {X, y, A} is zero. For X of dimension 6, ψ(X) is a 6-dimensional inner ideal such that {X, ψ(X), A} equals X.
4. The groups G(A, K)
This section defines the groups G(A, K) using hermitian Jordan triples. These triples were previously studied in an analytic setting [Lo 77, 2.9]; translating to a purely algebraic situation in a natural way, we arrive at the following concept.

4.1. Definition. A hermitian Jordan triple over k is a triple (K, V, P) consisting of a quadratic étale k-algebra K (with conjugation ι), a free K-module V of finite rank, and a quadratic map P :
V → Hom K (V ι , V ), where V ι is the K-module V with scalar multiplication twisted by ι, such that (V, P )
is an ordinary Jordan triple over k in the sense of, e.g., [Lo 75,1.13]. In particular,
P v : V → V is an ι-semilinear map depending K-quadratically on v ∈ V . A homomorphism (K, V, P ) → (K ′ , V ′ , P ′ ) of hermitian Jordan triples is a pair (φ, h) such that φ : K ∼ − → K ′ is a k-isomorphism and h : V → V ′ is a φ-semilinear map such that h(P x y) = P h(x) h(y) for all x, y ∈ V .
It is easy to see that Jordan pairs are basically the same as hermitian Jordan triples (K, V, P ) with K split.
4.2. Example. Starting with a Jordan k-algebra J and a quadratic étale k-algebra K with conjugation ι, we obtain a hermitian Jordan triple as output. Namely, take V = J ⊗ k K and define P by
P x y := U x (Id J ⊗ι)y.
We denote this hermitian Jordan triple by T (J, K).
A hermitian Albert triple is a triple T(A, K), where A is an Albert k-algebra. We call the triple T_d := T(A_d, k × k) the split Albert triple or simply the split triple.
We will now explicitly describe the automorphism group of a hermitian Jordan triple T (J, K). Following [J 81, 1.7, p. 1.23], the structure group of J, denoted by Str (J), is the subgroup of GL (J) consisting of all bijective linear maps g : J → J that may be viewed as isomorphisms from J onto an appropriate isotope or, equivalently, that have the property that there exists a bijective linear map g † : J → J satisfying the relation U g(x) = gU x (g † ) −1 for all x ∈ J; obviously it is a group scheme. The assignment g → g † is an order 2 automorphism of the structure group of J.
4.3. Lemma. The group of k-automorphisms of T (J, K) is generated by the element (ι, Id J ⊗ι) of order 2 and by the elements (Id K , g) for g ∈ Str(J)(K) satisfying g † = (Id J ⊗ι)g(Id J ⊗ι).
Proof. (ι, Id J ⊗ι) is clearly an automorphism of T (J, K). Conversely, multiplying any automorphism (φ, h) of T (J, K) by (ι, Id J ⊗ι) if necessary, we are allowed to assume that φ is the identity on K. The equation
h(P x y) = P h(x) h(y) is equivalent to U h(x) = hU x (Id J ⊗ι)h −1 (Id J ⊗ι)
, which gives the claim.
4.4. Since the trace form of an Albert algebra A is a non-degenerate symmetric bilinear form, it follows from [McC 69, p. 502] that the structure group of A agrees with its group of norm similarities; viewed as a group scheme, it is a reductive algebraic group with center of rank 1 and its semisimple part is simply connected of type E 6 [S, 14.2].
Below, we only consider hermitian Jordan triples T = (V, K, P ) that become isomorphic to T d over a separable closure k sep of k. We write G(T ) for the semisimple part of the identity component of Aut(T ); when T is of the form T (A, K), we will write G(A, K) for short. This group is simply connected of type E 6 because it is isomorphic to the semisimple part of Str(A d ) over k sep . It acts on the vector space underlying T , and over a separable closure this representation is isomorphic to a direct sum of the usual representation of E 6 on A d and its contragradient. Since the direct sum of these two representations is defined over k, the group has trivial Tits algebras. Also, G(T ) is of type 2 E 6 if and only if K is a field.
4.5. Relation with Tits's construction. We now observe that the Lie algebra g of G(A, K) is precisely the one obtained from A and K using Tits's construction from [Ti 66a]. For the duration of this subsection, we assume that k has characteristic ≠ 2, 3. (Tits makes this assumption in his paper, so it is harmless for our purposes.) Suppose first that K is k × k. From the first paragraph of 4.4, we see that g consists of those linear transformations on A that leave the cubic form N "Lie invariant". By [J 59, p. 189, Th. 5], this is the same as the Lie algebra R_0(A) ⊕ D(A), where R_0(A) is the vector space generated by the transformations "right multiplication by a trace zero element of A" and D(A) denotes the derivation algebra of A; this is what one obtains from Tits's construction. Now consider the case where K is a field. We may view the vector space A ⊗ K underlying T(A, K) as the subspace of A_K × A_K fixed by the map (x, y) → (ιy, ιx). In this way, we can view G(A, K) as the group whose k-points are those (f,
f † ) ∈ G(A K , K × K)(K) such that ιf † ι = f . The differential of f → f † is d → −d * .
It follows that the Lie algebra g is the subalgebra of R 0 (A K ) ⊕ D(A K ) consisting of elements fixed by the map d → −ιd * ι. When d is right multiplication by an element of A K , we have d * = d because the bilinear form T is associative [J 68,p. 227,Cor. 4]. The derivation algebra D(A K ) is spanned by commutators of right multiplications, hence is fixed elementwise by the map d → −d * . Consequently, the Lie algebra of G
(A, K) is identified with √ λ R 0 (A) ⊕ D(A), where λ ∈ k × satisfies K = k( √ λ)
. This is the Lie algebra constructed by Tits. (Jacobson [J 71] and Ferrar [Fe 69] denote this Lie algebra by E 6 (A) λ and L (A) λ respectively.) By Galois descent, the map T → G(T ) produces (up to isogeny) all groups of type E 6 with trivial Tits algebras over every field k. But over some fields, it suffices to consider only the triples of the form T (A, K).
4.6. Example. Every group of type 1 E 6 with trivial Tits algebras can be realized as the group of linear transformations on an Albert algebra A preserving the cubic norm, hence is of the form G(A, k × k). (In characteristic ≠ 2, 3, this follows from [G, 3.4].) We now list a few examples of fields k such that every group of type 2 E 6 with trivial Tits algebras is, up to isogeny, of the form G(A, K).
(a) When k is a local field [Kne] or a finite field, every group of type 2 E 6 is quasi-split. (b) For k the real numbers, up to isogeny the three groups of type 2 E 6 are of the form G(A, C) as A varies over the three isomorphism classes of Albert R-algebras [J 71, pp. 119, 120]. (c) When k is a number field, as can be seen by using the Hasse Principle, cf. [Fe 78, 3.2, 6.4].
For f an automorphism of an Albert algebra A, we have f † = f , so Aut(A) is a subgroup of G(A, K) for all K.
4.7. Example. The group G(A d , K) contains the split group Aut(A d ) of type F 4 , hence has k-rank at least 4. The classification of possible indexes gives that G(A d , K) is quasi-split for all K, including the case K = k × k.
4.8. Rost invariant of G(A, K). The inclusion of Aut(A) in G(A, K) has Rost multiplier one [D, p. 194] in the language of [GMS], hence the composition
H^1(k, Aut(A)) → H^1(k, G(A, K)) −−r_{G(A,K)}−−→ H^3(k, n_{G(A,K)})
agrees with the Rost invariant relative to Aut(A). In the definition of a(G) at the start of §2, one can take the image of A d as η by the preceding example. We find: a(G(A, K)) = −r F 4 (A) ∈ H 3 (k, 6), where the right side denotes the negative of the usual Rost invariant of the Albert algebra A. In particular, the mod-2 portion of a(G(A, K)) is the symbol f 3 (A).
If one knows that a group G(A, K) is isotropic, then combining the previous paragraph and Prop. 2.3 gives the index of G(A, K).
4.9. Example. Fix a group G of type 2 E 6 with trivial Tits algebras whose index is from the bottom row of Table 2.4. By Prop. 2.3, a(G) is not a symbol, hence G is not of the form G(A, K) for any Albert algebra A.
We remark that the use of the Rost invariant simplifies this example dramatically, compare [Fe 69, pp. 64, 65].
5. Homogeneous projective varieties
In this section, we relate the Tits index of a group G(T ) to inner ideals in the triple T . This is done by describing the k-points on the homogeneous projective varieties for the groups.
A projective variety Z is homogeneous for G(T ) if G(T ) acts on Z and G(T )(k sep ) acts transitively on Z(k sep ), for k sep a separable closure of k.
There is a bijection between the collection of such varieties defined over k (up to an obvious notion of equivalence) and Galois-stable subsets of vertices in the Dynkin diagram [BT,6.4]. There are two common ways of normalizing the bijection: we normalize so that the trivial variety corresponds to the empty set and the largest homogeneous variety (i.e., the variety of Borel subgroups) corresponds to the full Dynkin diagram.
Write T = (K, V, P ). A K-submodule X of V is an inner ideal in T if P x V ⊆ X for every x ∈ X. We write [ ] for the bilinearization of P , i.e., [x, y, z] := P x+z y − P x y − P z y.
Since P is a quadratic map and P x is ι-semilinear for all x ∈ V , the ternary product [ ] is linear in the outer two slots and ι-semilinear in the middle slot.
5.1. Example. When K is a field, the inner ideals of T (A, K) are the same as the inner ideals of the Albert K-algebra A K . To see this, note that for a K-submodule X of A K , we have P
X A K = U X (Id A ⊗ι)A K = U X A K .
When K is not a field, i.e., when K is k × k, the K-module underlying
T (A, K) is A × A. A K-subspace X of A × A is of the form X 1 × X 2 for X i a k-subspace of A.
The same argument as in the previous paragraph gives that X is an inner ideal of T (A, K) if and only if X 1 and X 2 are inner ideals of A.
Below, we only consider inner ideals that are free K-modules, and by rank we mean the rank as a K-module. In the case where K is a field, the previous example gives: The rank one (resp., 10) inner ideals of T (A, K) are of the form Kx (resp., x × A K ) where x ∈ A K is singular.
5.2. Proposition. The projective homogeneous variety for G(T) corresponding to the subset S of the Dynkin diagram has k-points the inner ideals as in the table below.

S          | k-points are inner ideals
{α_1, α_6} | X ⊂ Y s.t. rank X = 1, rank Y = 10, and [X, Y, V] = 0
{α_3, α_5} | X ⊂ Y s.t. rank X = 2, rank Y = 5, and [X, Y, V] = 0
{α_4}      | X s.t. rank X = 3 and [X, X, V] = 0
{α_2}      | X s.t. rank X = 6 and [X, X, V] = X
With the information from the table, it should be no trouble to describe the homogeneous varieties corresponding to other opposition-stable subsets of the Dynkin diagram using the recipe in [CG,9.4]. (The opposition involution is the unique nonidentity automorphism of the Dynkin diagram.)
Proof. Since the claimed collections of subspaces form projective k-varieties on which G(T ) acts, it remains only to show that the action is transitive on k sep -points and that the stabilizer of a k sep -point is a parabolic subgroup of the correct type. Therefore, we assume that k is separably closed-in particular that T is split, i.e., isomorphic to T d -and note that G(T ) is the split simply connected group of type E 6 . The projective homogeneous variety Z 0 associated with the split group and S was described in sections 7 and 9 of [CG]. We prove the proposition by producing a G(T )(k)-equivariant bijection Z(k) ∼ − → Z 0 (k). Let X ⊂ Y be inner ideals as in the first line of the table. As in Example 5.1, we write X = X 1 × X 2 and similarly for Y , where X i and Y i are inner ideals in A d . The set Z 0 (k) consists of pairs X ′ ⊂ Y ′ where X ′ and Y ′ are inner ideals in A d of dimension 1 and 10 respectively. Define f :
Z → Z 0 by f (X ⊂ Y ) = (X 1 ⊂ Y 1 ); it is G-equivariant.
On the other hand, a pair X ′ ⊂ Y ′ in Z 0 (k) is the image of (X ′ , ψ(Y ′ )) ⊂ (Y ′ , ψ(X ′ )) under f . Indeed, ψ(Y ′ ) and ψ(X ′ ) are inner ideals of the appropriate dimension by 3.32, and
X ′ ⊂ Y ′ implies ψ(Y ′ ) ⊂ ψ(X ′ ).
The second and third lines of the table follow by similar reasoning. Now let X be as in the last line of the table. The k-points of Z 0 are the 6-dimensional singular subspaces of A d . Again writing X as (X 1 , X 2 ), we define f by f (X) = X 1 . The equation [X, X, V ] = X is equivalent to the equations {X i , X i+1 , A d } = X i for i = 1, 2. Clearly, X i+1 is a subset of ψ(X i ). But ψ(X i ) also has dimension 6 by 3.32, hence X i+1 = ψ(X i ). That is, X 2 is determined by X 1 . The conclusion now follows as in the case of the first line.
We now use the description of the projective homogeneous varieties of G(A, K) given in the proposition to give a concrete criterion for determining whether G(A, K) is isotropic.
5.3. Corollary. The group G(A, K) is isotropic if and only if there is a nonzero
x ∈ A K such that x ♯ = 0 and x ∈ ι(x) × A K .
Proof. Combining 4.8 and Prop. 2.3, we find that G(A, K) is isotropic if and only if the vertices α 1 and α 6 in its index are circled, i.e., if and only if the projective homogeneous variety corresponding to {α 1 , α 6 } has a k-point [BT,6.4]. Hence G(A, K) is isotropic if and only if there are inner ideals X ⊂ Y in A K of ranks 1 and 10 such that [X, Y, A K ] = 0.
Suppose first that such inner ideals X ⊂ Y exist. We have X = Kx for some nonzero x ∈ A K satisfying x ♯ = 0 and
{X, ι(Y ), A K } = [X, Y, A K ] = 0. Since (3.29) implies (3.30), ι(Y ) is contained in ψ(X) = x × A K . Comparing ranks using 3.32, we conclude that X ⊆ Y = ι(x) × A K .
Conversely, suppose that x ∈ A K satisfies the conditions displayed in the corollary. Then X := Kx and Y := ι(x) × A K are inner ideals of A K of the desired ranks such that X ⊂ Y . Also, [x, ι(x) × A K , A K ] = {x, x × A K , A K }, which is zero by (3.10) and (3.11).
5.4. Example. If A has nonzero nilpotent elements, then G(A, K) is isotropic.
Indeed, A has a nonzero element x such that x ♯ = 0 and T (x) = 0 by 3.26. Then ι(x) × (−1) = x by (3.3).
This example shows that in Theorem 0.2, (4) implies (1). We now resolve an open question from [V 69]. 5.5. Example. Consider again a group G as in Example 4.9, and write K for the associated quadratic extension. The group is of the form G(T ) for a triple T = (V, K, P ) where T contains an "α 2 -space", i.e., an inner ideal X of V such that dim X = 6 and [X, X, V ] = X.
Over K, T ⊗ K is isomorphic to a triple T (A, K × K) for some Albert K-algebra A. However, A contains a 6-dimensional singular subspace, so it is split [Ra 77, Th. 1]. That is, T ⊗ K is isomorphic to the split triple T (A d K , K × K). (Alternatively, one can see this using Tits's list of possible indexes for groups of type 1 E 6 .) By Galois descent, we may identify V with [CG,13.19].
{(a, tιa) | a ∈ A d K }, where t is some K-linear transformation of A d K . (Our tι is Veldkamp's T .) As in the proof of Prop. 5.2, we have tιX = ψ(X), where we have identified X with its first component in A K × A K . For x ∈ X, we have T (x, tιx) = 0 by
Veldkamp was concerned with geometries whose points were 1-dimensional singular K-subspaces of A d K . In his language, the last sentence of the previous paragraph shows that Kx is a weakly isotropic point when x is nonzero. Veldkamp asked on page 291 of [V 69] if it is possible to have a weakly isotropic point and no strongly isotropic point. We now observe that there is no strongly isotropic point in this example. Suppose that z ∈ A d K is nonzero and strongly isotropic, i.e., {z, tιz, A d K } is zero. In that case, taking Z = Ktιz and Y = z × A d K we find a pair of inner ideals of dimension 1 and 10 in T such that Z ⊂ Y and [Z, Y, V ] = 0. But this contradicts the hypothesis on the index of G(T ), so no such z can exist, i.e., we have produced an explicit example of a situation where there is a weakly isotropic point and no strongly isotropic point.
6. Embeddings of k × K
In this section, we assume that K is a separable quadratic field extension of k, and write ι for its nontrivial k-automorphism. We fix a representative δ ∈ k × for the discriminant of K/k. (We take the naive definition of discriminant from, say, [Lang]. Consequently, in characteristic 2, we can and do take δ = 1.) The purpose of this (long) section is to prove the following proposition, which in turn shows that (2) implies (3) in the main theorem, see 6.2.
6.1. Proposition. Let A be an Albert k-algebra. Then k × K is a subalgebra of A if and only if A is isomorphic to H_3(C, ⟨r, 1, δN_K(s)⟩) for some octonion algebra C and elements r ∈ k^× and s ∈ K^× such that T_K(s) ≠ 0.
The assumption that K is a field is harmless. If K is k × k, then we can take δ = 1 and the proposition is easily seen to hold so long as k is not the field with two elements.

6.2. Proof that (2) implies (3). Before proceeding with the proof of the proposition, we first observe that it shows that (2) implies (3) in the main theorem. Prop. 6.1 gives that A is isomorphic to H_3(C, ⟨r, 1, δN_K(s)⟩). That is, f_3(A) is [C] and f_5(A) equals [C] · (−r) · (−δN_K(s)) by (3.25). The 2-Pfister bilinear form γ := ⟨⟨−r, −δN_K(s)⟩⟩ satisfies (3).

6.3. Proof that (4) implies (2). The proposition also gives a proof that (4) implies (2) in the main theorem. Specifically, if A has a nilpotent, it is isomorphic to H_3(C, ⟨−1, 1, δN_K(s)⟩) for every δ ∈ k^× and every s ∈ K^× with nonzero trace, since both algebras have the same invariants f_3 = [C] and f_5 = 0. Then A contains k × K by the proposition.
6.4. Proof of 6.1: the easy direction. We suppose that A is H_3(C, Γ) for Γ as in the proposition and produce an explicit embedding of k × K in A. Fix t ∈ K^× such that
t² = δ, which implies T_K(t) = 0 and N_K(t) = −δ,   (6.5)
and define a map ϕ : k × K → A by
ϕ(α, a) := α e_{11} + T_K(s)^{-1} [ N_K(s, a) e_{22} + N_K(s, ι(a)) e_{33} + t^{-1}(ι(a) − a) 1_C[23] ]
for α ∈ k, a ∈ K. Note that t^{-1}(ι(a) − a) is in k in all characteristics by (6.5), i.e., the image of ϕ really is in A and not just in A_K. Also, ϕ sends 1 to 1. Hence it suffices to show that ϕ preserves norms. Plugging into the explicit formula for the norm from (3.16), we find
N[ϕ(α, a)] = (α / T_K(s)²) [ N_K(s, a) N_K(s, ι(a)) − N_K(s) (ι(a) − a)² ].
Writing out, for example, N_K(s, a) as sι(a) + ι(s)a, we find that N[ϕ(α, a)] is αN_K(a), as desired.
The converse implication of Prop. 6.1 takes a bit more work. Fortunately, we can rely on the proof of [PR 84, Th. 3.2] to simplify the task. For the sake of clarity, we still indicate the main steps of that proof insofar they are relevant for our purposes. 6.6. Proof of 6.1: the difficult direction. Assume now that E := k × K is a subalgebra of A as in the statement of Prop. 6.1. We let ι act on A K := A ⊗ K and E K := E ⊗ K ⊂ A K through the second factor. Since A inherits zero divisors from E, it is reduced. We write C for its coordinate algebra, forcing C K := C ⊗ K to be the coordinate algebra of A K .
Step 1. A coordinatization for E K . Since E K is split but E is not (because K is a field by hypothesis), there is a frame (e 1 , e 2 , e 3 ) of A K satisfying E K = Ke 1 ⊕ Ke 2 ⊕ Ke 3 with ι(e 1 ) = e 1 , ι(e 2 ) = e 3 , ι(e 3 ) = e 2 . This setup will remain fixed till the end of the proof. It implies E = {αe 1 + ae 2 + ι(a)e 3 | α ∈ k, a ∈ K} .
Step 2. A coordinatization for A_K. Following [PR 84, Lemma 3.5], we find a coordinatization of A_K having the form
A_K = H_3(C_K, ⟨1, s, ι(s)⟩) for some s ∈ K^×   (6.7)
such that e i = e ii for all i, the trace T K (s) is not zero, and there is an ι-semilinear involution τ K of C K that commutes with the conjugation of C K and satisfies the relation
ι(x) = ι(a 1 )e 1 + ι(a 3 )e 2 + ι(a 2 )e 3 + τ K (x 1 )[23] + τ K (x 3 )[31] + τ K (x 2 )[12]
for all x = a i e ii + x i [jl] ∈ A K . In particular,
A = { αe_1 + ae_2 + ι(a)e_3 + x_1[23] + x_2[31] + τ_K(x_2)[12] | α ∈ k, a ∈ K, x_1 ∈ C_K^{τ_K}, x_2 ∈ C_K },   (6.8)
where C τ K K denotes the subspace of C K consisting of elements fixed by τ K . is an ι-semilinear automorphism of C K , forcing B := C τ K to be a composition algebra over k such that C K = B ⊗ K.
Step 4. Peirce components of A. As in 6.4, we now fix t ∈ K × satisfying t 2 = δ, hence (6.5). Then the following relations hold:
A_12(F) = {x_2[31] + x_2[12] | x_2 ∈ B},   (6.11)
A_31(F) = {(st)x_2[31] + ι(st)x_2[12] | x_2 ∈ B}.   (6.12)
While (6.11) was established in [PR 84, Lemma 3.6b], the proof of (6.12) is a bit more involved and runs as follows. Standard facts about Peirce components and (6.8), (6.9) imply
A_31(F) = A_1(d_3) ∩ A_1(d_1) = A_1(d_3) ∩ A ∩ (C_K[31] + C_K[12]) = A_1(d_3) ∩ {x_2[31] + τ_K(x_2)[12] | x_2 ∈ C_K}.
For x_2 ∈ C_K, the element x := x_2[31] + τ_K(x_2)[12] satisfies
T_K(s)(d_3 × x) = (ι(s)e_22 + se_33 − 1[23]) × (x_2[31] + τ_K(x_2)[12]) = −(sτ_K(x_2) + ι(s)x_2)[31] − (ι(s)x_2 + sτ_K(x_2))[12]
by (3.20) and (6.7), and x belongs to A 1 (d 3 ) if and only if this expression is zero (Lemma 3.14a). Since ι(t) = −t by (6.5), we may apply (6.10) to conclude that this in turn is equivalent to the equation stτ (x 2 ) = ι(st)x 2 , i.e., to x 2 ∈ (st)B.
Step 5. Conclusion of proof. Using (3.21) and (6.7), it is straightforward to check the relations
S(x_2[31] + x_2[12]) = −T_K(s)N_B(x_2),   (6.13)
S((st)x_2[31] + ι(st)x_2[12]) = −T_K(s)N_K(s)δN_B(x_2)   (6.14)
for all x 2 ∈ B. Following [McC 66, p. 1077], we now choose a coordinatization of A that has the form
A = H 3 (C, Γ 0 ) for Γ 0 = γ 0 1 , 1, γ 0 3 ∈ GL 3 (k)
, and that matches F with the diagonal frame of H 3 (C, Γ 0 ). Comparing (6.13), (6.14) with (3.22), we conclude
−γ 0 1 N C ≅ S| A 12 (F ) ≅ −T K (s) N B , (6.15)
−γ 0 3 γ 0 1 N C ≅ S| A 31 (F ) ≅ −T K (s)N K (s)δ N B . (6.16)
In particular, the composition algebras B, C over k have similar norm forms and hence are isomorphic, allowing us to identify B = C from now on. Then (6.15) and (6.16) imply that A and H 3 (C, T K (s), 1, δN K (s) ) have the same mod-2 invariants, hence are isomorphic.
7. The case where T (x) = 0
In this and the following section, we will prove that (1) implies (2) in the main theorem. Suppose that (1) holds, so that by Cor. 5.3 A K contains a nonzero element x satisfying (7.1)
x ♯ = 0 and x ∈ ι(x) × A K .
In this section, we treat the case where x satisfies (7.1) and has trace zero. The next section treats the case where the trace of x is nonzero.
7.2. Proposition. Let A be an Albert k-algebra. There is a nonzero element x ∈ A K satisfying
x ♯ = 0, x ∈ ι(x) × A K , and T (x) = 0
if and only if A contains nonzero nilpotent elements.
Proof. We suppose that A K contains an x as in the statement of the proposition; the other direction was treated in Example 5.4. If K is equal to k × k, then A K = A × A and x = (x 1 , x 2 ) for x i ∈ A not both zero with x ♯ i = 0 and T (x i ) = 0. Therefore one of the elements x 1 , x 2 ∈ A is a nonzero nilpotent by 3.26. We are left with the case where K is a field.
For sake of contradiction, we further assume that A has no nonzero nilpotents. Write K = k[d] where d ∈ K has trace 1. Set δ := N K (d) ∈ k × , so that d 2 = d − δ. Write x = y + dz with y, z ∈ A not both zero. Because x ♯ = 0, we have:
(7.3) y ♯ = δz ♯ and z ♯ = −y × z.
Clearly,
(7.4) T (y) = T (z) = 0.
We first argue that neither y nor z is invertible in A. By (3.2) we have:
(7.5) N (y)y = (y ♯ ) ♯ = δ 2 (z ♯ ) ♯ = δ 2 N (z)z .
On the other hand, comparing
y ♯ × (y × z) = N (y)z + T (y ♯ , z)y (by (3.7)) = N (y)z + δT (z ♯ , z)y (by (7.3)) = N (y)z + 3δN (z)y (by (3.8)) with y ♯ × (y × z) = −δz ♯ × z ♯ (by (7.3)) = −2δz ♯♯ = −2δN (z)z , (by (3.2)) we obtain −2δN (z)z = N (y)z + 3δN (z)y . (7.6)
Similarly, δz ♯ × (z × y) = δN (z)y + δT (z ♯ , y)z (by (3.7)) = δN (z)y + T (y ♯ , y)z (by (7.3)) = δN (z)y + 3N (y)z (by (3.8))
and
δz ♯ × (z × y) = −δz ♯ × z ♯ (by (7.3)) = −2δz ♯♯ = −2δN (z)z (by (3.2))
imply −2δN (z)z = 3N (y)z + δN (z)y . (7.7) Subtracting (7.6) from (7.7), we conclude 2N (y)z = 2δN (z)y, which implies N (y)z = δN (z)y (7.8) for char k ≠ 2, while this follows directly from (7.6) for char k = 2. Thus (7.8) holds in full generality. By (7.5), assuming that one of the elements y, z is invertible, both are, and
N (y) 2 z ♯ = (N (y)z) ♯ = (δN (z)y) ♯ (by (7.8)) = δ 2 N (z) 2 y ♯ = δ 3 N (z) 2 z ♯ (by (7.3))
implies N (y) 2 = δ 3 N (z) 2 . (7.9) On the other hand,
δN (y)z ♯ = −δy × [N (y)z]
(by (7.3)) = −δ 2 N (z)y × y (by (7.8))
= −2δ 2 N (z)y ♯ = −2δ 3 N (z)z ♯ , (by (7.3))
which yields N (y) = −2δ 2 N (z) . (7.10) Hence char k ≠ 2, and comparing the square of (7.10) with (7.9) shows δ = 1/4. But then the minimum polynomial of d over k becomes X 2 − X + 1/4 = (X − 1/2) 2 , contradicting the fact that K is a field. We have thus established the relation N (y) = N (z) = 0 . (7.11)
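To spell out the step just used, comparing the square of (7.10) with (7.9) amounts to the following short computation (valid here because y and z were assumed invertible, so N(y), N(z) ≠ 0):

\[
N(y)^2 = 4\delta^4 N(z)^2 \quad\text{(square of (7.10))}, \qquad N(y)^2 = \delta^3 N(z)^2 \quad\text{(by (7.9))},
\]
\[
\text{hence}\quad 4\delta^4 N(z)^2 = \delta^3 N(z)^2, \quad\text{so}\quad 4\delta = 1, \quad\text{i.e.}\quad \delta = \tfrac14 .
\]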
By assumption, A does not contain nonzero nilpotent elements. Hence (7.4), (7.11) imply that S(y), S(z) cannot both be zero since, otherwise, y, z were both nilpotent by (3.27). But since S(y) = δS(z) by (7.3), we conclude S(y) and S(z) are both nonzero. On the other hand, (7.11) combines with (3.2) to yield y ♯♯ = z ♯♯ = 0, forcing
e = S(y) −1 y ♯ = S(z) −1 z ♯ (by (7.3))
to be a primitive idempotent of A. The relation
e × x = S(y) −1 (y × y ♯ ) + dS(z) −1 (z × z ♯ )
combined with (3.9), (7.4), and (7.11) gives:
e × x = −(y + dz) = T (x)(1 − e) − x.
Lemma 3.14a shows that x belongs to (A K ) 0 (e), hence so does ι(x). Now fix v ∈ A K such that x = ι(x) × v and write v = ae + v 1 + v 0 for a ∈ K and v i ∈ (A K ) i (e) with i = 0, 1. Since ι(x) is in (A K ) 0 (e), we may apply Lemma 3.14a again to conclude ι(x) × (ae) = −aι(x) ∈ (A K ) 0 (e). On the other hand, Lemma 3.14c gives ι(x) × v 0 = S(ι(x), v 0 )e ∈ (A K ) 2 (e). Finally, using the circle product a • b := {a, 1, b} = U a,b 1, we obtain
ι(x) × v 1 = ι(x) • v 1 − T (ι(x)) v 1 − T (v 1 )ι(x) + [T (ι(x)) T (v 1 ) − T (ι(x), v 1 )]1
by (3.5) and (3.6) linearized. But, as the Peirce decomposition is orthogonal relative to the generic trace, this is just ι(x) • v 1 , which is in (A K ) 1 (e). Decomposing a = α + βd with α, β ∈ k and comparing Peirce components of x = ι(x) × v relative to e, a short computation yields
y + dz = ι(x) × v = −aι(x) = −( αy + (α + δβ)z ) − d(βy − αz) ,
hence
(1 + α)y = −(α + δβ)z and (1 − α)z = −βy . (7.12)
To complete the proof, it suffices to show z = −2y, because this implies 0 = x ♯ = (1 − 2d) 2 y ♯ , hence 2d = 1, a contradiction. Now, if β = 0, then α = 1 and 2y = −z by (7.12). But if β = 0, then
βz ♯ = −βy × z = (1 − α)z × z
(by (7.3) and (7.12)) = 2(1 − α)z ♯ , which yields β = 2(1 − α), hence βz = 2(1 − α)z = −2βy by (7.12), and again we end up with z = −2y.
8. The case where T (x) ≠ 0
We now treat the other possible consequence of hypothesis (1).
8.1. Proposition. There is a nonzero element x ∈ A K satisfying
x ♯ = 0, x ∈ ι(x) × A K , and T (x) ≠ 0
if and only if k × K is a subalgebra of A.
Proof. First suppose that K is not a field, i.e., K = k × k. Then A K is identified with A × A and ι acts via the switch. If A K contains an element x = (x 1 , x 2 ) as in the statement of the proposition, then one of the elements x 1 , x 2 ∈ A is singular, hence A is reduced. It follows that k × k × k is a subalgebra. Conversely, if k × k × k is a subalgebra of A, then the standard basis vectors e 1 , e 2 , e 3 in k 3 form a complete orthogonal system of primitive idempotents in A. Putting x = (e 1 , e 2 ), we find that x ♯ = 0, x has trace one, and ι(x) × e 3 = (e 2 × e 3 , e 1 × e 3 ) = x. We are left with the case where K is a field. Suppose first that A K contains an element x as in the statement of the proposition. Then putting b := T (x) ∈ K × , the element e := b −1 x is a primitive idempotent in A K and
ι(e) = ι(b) −1 ι(x) ∈ ι(b) −1 (x × A K ) = e × ι(b) −1 b A K ⊆ e × A K .
But since e × e = 2e ♯ = 0 and f = 1 − e is in (A K ) 0 (e), Lemma 3.14a implies that the map x → e × x kills (A K ) 2 (e) + (A K ) 1 (e) and stabilizes (A K ) 0 (e). So ι(e) is in (A K ) 0 (e) and e, ι(e), c are orthogonal primitive idempotents in A K where c := 1 − e − ι(e). The idempotent c, remaining fixed under ι, belongs to A. Thus
k × K → A via (α, a) → αc + ae + ι(a)ι(e) ,
is an embedding of Jordan algebras over k.
Conversely, suppose that K is a field and k × K is a subalgebra of A. By hypothesis, A ⊗ K contains K × (K ⊗ K) = K × K × K as a subalgebra whose unit vectors u 1 , u 2 , u 3 form a complete orthogonal system of primitive idempotents in A ⊗ K such that u 1 is in A and ι interchanges u 2 and u 3 . Thus u 2 ∈ A ⊗ K is a singular element with trace 1 that satisfies
u 2 = u 3 × u 1 ∈ ι(u 2 ) × A K .
9. Sufficient condition for isomorphism
Let A be a reduced Albert algebra over k with coordinate algebra C. By 3.24, the 5-Pfister form corresponding to f 5 (A) may be written as N C ⊗γ for some 2-Pfister bilinear form γ. Since reduced Albert algebras are classified by their mod-2 invariants, we can write A as H 3 (C, γ). By the same token, given octonion algebras C, C ′ and 2-Pfister bilinear forms γ, γ ′ over k, the algebras H 3 (C, γ) and H 3 (C ′ , γ ′ ) are isomorphic if and only if N C ∼ = N C ′ and N C ⊗ γ ∼ = N C ′ ⊗ γ ′ . The goal of this section is to prove a weak analogue of this statement for hermitian Albert triples. 9.1. Proposition. Let K be a quadraticétale k-algebra, let C be an octonion k-algebra, and let γ and γ ′ be 2-Pfister bilinear forms.
If γ ⊗ N K is isomorphic to γ ′ ⊗ N K , then T (H 3 (C, γ), K) is isomorphic to T (H 3 (C, γ ′ ), K).
It is natural to wonder if a stronger result holds, namely if N C ⊗ N K ⊗ γ is isomorphic to N C ⊗ N K ⊗ γ ′ , are the triples T (H 3 (C, γ), K) and T (H 3 (C, γ ′ ), K) necessarily isomorphic? (The tensor products in the question only make sense in characteristic different from 2.) The answer is no, as Example 11.2 below shows.
The proposition will follow easily (see the end of this section) from an alternative construction of hermitian Albert triples that we now describe. First we claim that the general linear group GL 3 (k) acts on the split Albert algebra A d = H 3 (C d , 1) via φ g (j) := g j g t (g ∈ GL 3 (k), j ∈ A d ) in such a way that
N (φ g (j)) = (det g) 2 N (j) . (9.2)
To see this, it suffices to show g A d g t ⊆ A d and (9.2) for elementary matrices g ∈ GL 3 (k), which follows easily by brute force using (3.16). (This is the argument given in [J 61, §5].) Also, we have φ g † = φ g −t by the argument in [J 61, p. 77].
In particular, the map φ g , for g ∈ SL 3 (k), is an automorphism of the split hermitian Albert triple T d via φ g · (j 1 , j 2 ) = (gj 1 g t , g −t j 2 g −1 ). This gives a map
(9.3) G 2 × (SL 3 ⋊Z/2Z) → Aut(T d ),
and a corresponding map
(9.4) H 1 (k, G 2 ) × H 1 (k, SL 3 ⋊Z/2Z) → H 1 (k, Aut(T d )).
This last map takes an octonion k-algebra C, a quadraticétale k-algebra K, and a rank 3 K/k-hermitian form as inputs, and it gives a hermitian Jordan triple as output.
9.5. Lemma. For every quadratic étale k-algebra K, octonion k-algebra C, and 3-dimensional symmetric bilinear form γ, construction (9.4) sends the hermitian form deduced from γ to T (H 3 (C, γ), K).

Sketch of proof. The proof is essentially the same as the proof of [Fe 78, Prop. 3.2], so we omit it. One observes that construction (9.4) sends the hermitian form to T (A, K), where A is the γ-isotope of H 3 (C, 1). But A is isomorphic to H 3 (C, γ) by [J 68, p. 61, Th. 14] or [McC 66, p. 1077].

9.6. Example. In [V 68] and [V 69], Veldkamp considers groups of type 2 E 6 constructed by (9.4) from an octonion k-algebra C and a cocycle in Z 1 (K/k, SL 3 ⋊ Z/2Z) whose value at ι is ( 1, 1, −1 , 1). This cocycle corresponds to the K/k-hermitian form deduced from 1, 1, −1 . By the lemma, such a group is isomorphic to G(H 3 (C, 1, 1, −1 ), K), i.e., is one of the groups G(A, K) where A is isomorphic to H 3 (C, γ) with γ ⊗ N K hyperbolic. By Prop. 9.1, G(A, K) is isomorphic to G(H 3 (C, 1, −1, 1 ), K).

Vista. The center of SL 3 is identified with the center of the split group E 6 := Aut(T d ) via (9.3), and this gives a map
G 2 × Aut(PSL 3 ) → Aut(E 6 ).
This leads to a construction of groups of type E 6 (with possibly non-trivial Tits algebras), with inputs an octonion algebra, say C, and a central simple associative algebra of degree 3 with unitary involution fixing k, say (B, τ ). Tits's other construction of Lie algebras of type E 6 (corresponding to the E 6 in the bottom row of the "magic square" [J 71, p. 98]) uses identical inputs, and it is natural to guess that it produces the Lie algebra of the group arising from the construction above. In any case, one can ask: What is the index of the group constructed from given C and (B, τ )?
We remark that some of the techniques in this paper can be adapted to attack this question. For §4, hermitian Jordan triples should be replaced by a new type of algebraic structure in a manner completely analogous to the replacement of quadratic forms by algebras with orthogonal involution as in [KMRT,§5]. The description of the homogenous projective varieties in §5 can be translated directly to this new structure. We leave the details to the interested reader.
10. The case where A is split by K

10.1. In the notation of the main theorem, we always have (4) implies (1) by Example 5.4. Conversely, suppose that K splits A and (3) holds. In particular, A is reduced and we may write f 3 (A) as [K] · τ for some 2-Pfister bilinear form τ, hence f 5 (A) = f 3 (A) · γ = τ · (γ · [K]) = τ · 0 = 0.
That is, A has nonzero nilpotents. This proves the final sentence of Theorem 0.2.
We now provide an example to show that one really needs the hypothesis that K splits A in the implication that (3) implies (4).
10.2. Example. Take k = Q(x, y, z, u, d) to be the rational function field in five variables over Q, let C be the octonion k-algebra corresponding to the 3-Pfister form x, y, z , and put A := H 3 (C, d, 1, −u ) and K := k( √ d).
Then f 3 (A) = (x) · (y) · (z) and f 5 (A) = f 3 (A) · γ for γ = −d, u . Since (−d) · (d) = 0, we have γ · (d) = 0, i.e., (3) holds. On the other hand, the 5-Pfister x, y, z, −d, u is anisotropic by Springer's Theorem [Lam,§VI.1], so f 5 (A) is nonzero and A contains no nonzero nilpotents.
11. Final observations
We close this paper by applying our main theorem to give an easy criterion for when G(A, K) is isotropic over special fields.

11.1. Proposition. Let k be a SAP field of characteristic zero such that cd 2 k( √ −1) ≤ 2. A group G(A, K) is isotropic if and only if A is reduced and f 5 (A) · [K] = 0.
Every algebraic extension of Q (not necessarily of finite degree) and R((x)) satisfy the hypothesis of the proposition. See [BP] for a summary of basic properties of fields k as in the proposition.
The restriction on the characteristic is harmless. The prime characteristic analogue of the proposition is a corollary of the statement: If H 5 (k, 2) is zero and A is reduced, then G(A, K) is isotropic. To see this, note that the hypotheses imply that the subgroup Aut(A) of G(A, K) is itself isotropic.
Proof of the proposition. If G(A, K) is isotropic, then A is reduced and f 5 (A)· [K] is zero by the main theorem, so we assume that A is reduced and f 5 (A) · [K] is zero and prove the converse. By the SAP property, there is some γ 1 ∈ k × that is positive at every ordering where f 5 (A) is zero and negative at every ordering where f 5 (A) is nonzero. Put γ := −1, γ 1 . Because f 3 (A) divides f 5 (A), it follows that f 5 (A) equals f 3 (A) · γ and γ · [K] equals zero over every real-closure of k, which in turn implies those same equalities over k. We conclude that G(A, K) is k-isotropic by the main theorem.
We observed in Prop. 9.1 that one can change γ somewhat without changing the isomorphism class of G(H 3 (C, γ), K). Motivated by Prop. 11.1, one might hope that the main theorem still holds with (3) replaced by
(3 ′ ) A is reduced and f 5 (A) · [K] = 0.
(We remark that the expression f 5 (A) · [K] only makes sense when char k ≠ 2.) Clearly, (3) implies (3 ′ ). We now give an explicit example where (3 ′ ) holds but (3) does not. That is, such a replacement is not possible. We thank Detlev Hoffmann for showing us this example.

11.2. Example. Fix k 0 to be the purely transcendental extension Q(x, y, z, u, v, d) of the rationals (say). Let k denote the function field over k 0 of the 6-Pfister form x, y, z, u, v, d . Define a k-group G(A, K) via A := H 3 (C, −u, −v, uv ) and K := k( √ d),
where C is the octonion k-algebra with norm x, y, z . Clearly, (3 ′ ) holds over k.
On the other hand, take E to be the function field of q := x, y, z ′ + d over k, where x, y, z ′ denotes the pure part of the 3-Pfister. Clearly, f 3 (A) = (x) · (y) · (z) is killed by EK, that is, [K] divides f 3 (A) over E. We now argue that f 5 (A) is not zero over E, which will show that (3) fails over E hence also over k. Note that f 5 (A) = (x) · (y) · (z) · (u) · (v) is nonzero over k 0 and remains nonzero over k by Hoffmann's Theorem [Lam,X.4.34].
For the sake of contradiction, suppose that f 5 (A) is killed by E. Then by Cassels-Pfister [Lam, X.4.8], for a ∈ k × represented by q and b ∈ k × represented by f 5 (A), we have that ab · q is a subform of f 5 (A). Taking a = b = −x, we find that q is a subform of f 5 (A) over k. Computing in the Witt ring of k, it follows that 1, −d + x, y, z ⊗ −u, −v, uv is k-isotropic. But this form is k 0 -anisotropic and remains anisotropic over k because its dimension is 26 < 32 < 64 = dim x, y, z, u, v, d , again using Hoffmann's Theorem. This is a contradiction, which shows that (3) does not hold.
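As a check on the dimension count in the last step, reading the displayed form as ⟨1, −d⟩ ⊥ ⟨⟨x, y, z⟩⟩ ⊗ ⟨−u, −v, uv⟩ (this bracketing is an interpretation of the flattened notation above, consistent with the stated dimension):

\[
\dim\bigl(\langle 1,-d\rangle \perp \langle\!\langle x,y,z\rangle\!\rangle \otimes \langle -u,-v,uv\rangle\bigr) = 2 + 8\cdot 3 = 26 < 32 < 64 = \dim \langle\!\langle x,y,z,u,v,d\rangle\!\rangle .
\]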
Table 2.24. Tits indexes and their corresponding Rost invariants.

Proof. Consulting the list of possible indexes from [Ti 66b, p. 59], the only one missing from the table is the index pictured at this point (diagram not reproduced). But a group with such an index has nontrivial Tits algebras [Ti 71, 5.5.5], therefore the index of G is in the table. If G is not quasi-split, then the semisimple anisotropic kernel H of G is quasi-simple with trivial Tits algebras, and it follows from Tits's Witt-type theorem that a(G) equals a(H). The correspondence between the index and a(G) asserted by the table now follows from Example 2.2.
Acknowledgements. The authors are indebted to Detlev Hoffmann, Ottmar Loos, Erhard Neher, and Michel Racine for useful discussions on the subject of this paper.
References
[AB] R. Aravire and R. Baeza, Milnor's K-theory and quadratic forms over fields of characteristic 2, Comm. Alg. 20 (1992), 1087-1107.
[BP] E. Bayer-Fluckiger and R. Parimala, Classical groups and the Hasse principle, Ann. of Math. (2) 147 (1998), 651-693.
[BT] A. Borel and J. Tits, Groupes réductifs, Inst. Hautes Études Sci. Publ. Math. 27 (1965), 55-150.
[CG] M. Carr and S. Garibaldi, Geometries, the principle of duality, and algebraic groups, to appear in Expo. Math.
[Dy] E.B. Dynkin, Semisimple subalgebras of semisimple Lie algebras, Amer. Math. Soc. Transl. (2) 6 (1957), 111-244. [Russian original: Mat. Sbornik N.S. 30(72) (1952), 349-462.]
[Fa] J.R. Faulkner, Octonion planes defined by quadratic Jordan algebras, Mem. Amer. Math. Soc. 104 (1970).
[Fe 69] J.C. Ferrar, Lie algebras of type E6, J. Algebra 13 (1969), 57-72.
[Fe 78] J.C. Ferrar, Lie algebras of type E6. II, J. Algebra 52 (1978), no. 1, 201-209.
[G] R.S. Garibaldi, The Rost invariant has trivial kernel for quasi-split groups of low rank, Comment. Math. Helv. 76 (2001), no. 4, 684-711.
[GMS] S. Garibaldi, A.S. Merkurjev, and J-P. Serre, Cohomological invariants in Galois cohomology, University Lecture Series, vol. 28, Amer. Math. Soc., 2003.
[HL] D.W. Hoffmann and A. Laghribi, Quadratic forms and Pfister neighbors in characteristic 2, Trans. Amer. Math. Soc. 356 (2004), 4019-4053.
[J 59] N. Jacobson, Some groups of transformations defined by Jordan algebras. I, J. Reine Angew. Math. 201 (1959), 178-195 (= Coll. Math. Papers 63).
[J 61] N. Jacobson, Some groups of transformations defined by Jordan algebras. III, J. Reine Angew. Math. 207 (1961), 61-85 (= Coll. Math. Papers 65).
[J 68] N. Jacobson, Structure and representations of Jordan algebras, AMS Colloquium Publications, vol. 39, Amer. Math. Soc., Providence, RI, 1968.
[J 71] N. Jacobson, Exceptional Lie algebras, Lecture Notes in Pure and Applied Mathematics, vol. 1, Marcel-Dekker, New York, 1971.
[J 81] N. Jacobson, Structure theory of Jordan algebras, Lecture Notes in Mathematics, vol. 5, University of Arkansas, Fayetteville, Ark., 1981.
[JK] N. Jacobson and J. Katz, Generically algebraic quadratic Jordan algebras, Scripta Math. 29 (1973), no. 3-4, 215-227.
[KMRT] M.-A. Knus, A.S. Merkurjev, M. Rost, and J.-P. Tignol, The book of involutions, Colloquium Publications, vol. 44, Amer. Math. Soc., 1998.
[Kn] M. Kneser, Galois-Kohomologie halbeinfacher algebraischer Gruppen über p-adischen Körpern. II, Math. Z. 89 (1965), 250-272.
[Lam] T.Y. Lam, Introduction to quadratic forms over fields, Graduate Studies in Mathematics, vol. 67, Amer. Math. Soc., Providence, RI, 2005.
[Lang] S. Lang, Algebra, third ed., Springer-Verlag, 2002.
[Lo 75] O. Loos, Jordan pairs, Lecture Notes in Mathematics, vol. 460, Springer-Verlag, Berlin, 1975.
[Lo 77] O. Loos, Bounded symmetric domains and Jordan pairs, University of California-Irvine, 1977.
[McC 66] K. McCrimmon, A general theory of Jordan rings, Proc. Nat. Acad. Sci. U.S.A. 56 (1966), 1072-1079.
[McC 69] K. McCrimmon, The Freudenthal-Springer-Tits constructions of exceptional Jordan algebras, Trans. Amer. Math. Soc. 139 (1969), 495-510.
[McC 71] K. McCrimmon, Inner ideals in quadratic Jordan algebras, Trans. Amer. Math. Soc. 159 (1971), 445-468.
[P] H.P. Petersson, Structure theorems for Jordan algebras of degree three over fields of arbitrary characteristic, Comm. Alg. 32 (2004), 1019-1049.
[PR 84] H.P. Petersson and M.L. Racine, Springer forms and the first Tits construction of exceptional Jordan division algebras, Manuscripta Math. 45 (1984), 249-272.
[PR 95] H.P. Petersson and M.L. Racine, On the invariants mod 2 of Albert algebras, J. Algebra 174 (1995), no. 3, 1049-1072.
[Ra 72] M.L. Racine, A note on quadratic Jordan algebras of degree 3, Trans. Amer. Math. Soc. 164 (1972), 93-103.
[Ra 77] M.L. Racine, Point spaces in exceptional quadratic Jordan algebras, J. Algebra 46 (1977), 22-36.
[Sp] T.A. Springer, Jordan algebras and algebraic groups, Ergebnisse der Mathematik und ihrer Grenzgebiete, vol. 75, Springer-Verlag, 1973.
[Ti 66a] J. Tits, Algèbres alternatives, algèbres de Jordan et algèbres de Lie exceptionnelles. I. Construction, Nederl. Akad. Wetensch. Proc. Ser. A 69 = Indag. Math. 28 (1966), 223-237.
[Ti 66b] J. Tits, Classification of algebraic semisimple groups, Algebraic Groups and Discontinuous Subgroups, Proc. Symp. Pure Math., vol. IX, AMS, 1966, pp. 32-62.
[Ti 71] J. Tits, Représentations linéaires irréductibles d'un groupe réductif sur un corps quelconque, J. Reine Angew. Math. 247 (1971), 196-220.
[Ti 90] J. Tits, Strongly inner anisotropic forms of simple algebraic groups, J. Algebra 131 (1990), 648-677.
[V 68] F.D. Veldkamp, Unitary groups in projective octave planes, Compositio Math. 19 (1968), 213-258.
[V 69] F.D. Veldkamp, Unitary groups in Hjelmslev-Moufang planes, Math. Z. 108 (1969), 288-312.

Department of Mathematics & Computer Science, Emory University, Atlanta, GA 30322, USA
E-mail address: [email protected]
URL: http://www.mathcs.emory.edu/~skip/

Fachbereich Mathematik, FernUniversität in Hagen, D-58084 Hagen, Germany
E-mail address: [email protected]
URL: http://www.fernuni-hagen.de/MATHEMATIK/ALGGEO/Petersson/petersson.html
Intermolecular interactions and substrate effects for an adamantane monolayer on the Au(111) surface
Yuki Sakai,1,2 Giang D. Nguyen,2 Rodrigo B. Capaz,2,3 Sinisa Coh,2,4 Ivan V. Pechenezhskiy,2,4 Xiaoping Hong,2 Feng Wang,2,4 Michael F. Crommie,2,4 Susumu Saito,1 Steven G. Louie,2,4 and Marvin L. Cohen2,4
1 Department of Physics, Tokyo Institute of Technology, 2-12-1 Oh-okayama, Meguro-ku, Tokyo 152-8551, Japan
2 Department of Physics, University of California, Berkeley, California 94720, USA
3 Instituto de Fisica, Universidade Federal do Rio de Janeiro, Caixa Postal 68528, 21941-972 Rio de Janeiro, RJ, Brazil
4 Materials Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA
(Dated: May 22, 2014)
We study theoretically and experimentally the infrared (IR) spectrum of an adamantane monolayer on a Au(111) surface. Using a new STM-based IR spectroscopy technique (IRSTM) we are able to measure both the nanoscale structure of an adamantane monolayer on Au(111) as well as its infrared spectrum, while DFT-based ab initio calculations allow us to interpret the microscopic vibrational dynamics revealed by our measurements. We find that the IR spectrum of an adamantane monolayer on Au(111) is substantially modified with respect to the gas-phase IR spectrum. The first modification is caused by the adamantane-adamantane interaction due to monolayer packing and it reduces the IR intensity of the 2912 cm −1 peak (gas phase) by a factor of 3.5. The second modification originates from the adamantane-gold interaction and it increases the IR intensity of the 2938 cm −1 peak (gas phase) by a factor of 2.6, and reduces its frequency by 276 cm −1 . We expect that the techniques described here can be used for an independent estimate of substrate effects and intermolecular interactions in other diamondoid molecules, and for other metallic substrates.
I. INTRODUCTION
Diamondoids form a class of hydrocarbon molecules composed of sp 3 hybridized carbon atoms. They can be regarded as small pieces of diamond whose dangling bonds are terminated with hydrogen atoms. Diamondoids are known to exhibit negative electron affinity, and they have possible applications as electron emitters 1,2 and in other nanoscale devices. 3 Diamondoids have also attracted much interest because of their possible appearance in the interstellar medium. 4,5 The smallest diamondoid is adamantane (C 10 H 16 ), which has a highly symmetric cage-like structure (T d point group) illustrated in Fig. 1(a). Various theoretical and experimental results on infrared (IR) spectroscopy of both gas and solid adamantane have been reported in the literature. [5][6][7][8][9][10][11][12][13] Self-assembled monolayers of diamondoids larger than adamantane (i.e., tetramantane) on the Au(111) surface have been studied with scanning tunneling microscopy (STM). 3,14 The infrared spectrum of a functionalized adamantane on a Au(111) surface has been studied in Ref. 15; however, the functionalization prevents the adamantane molecules from being in direct contact with the Au(111) surface.
Therefore a detailed characterization of the adamantane monolayer-gold surface interaction is missing.
In this work we investigate a submonolayer of adamantane in direct contact with a Au(111) surface. We experimentally obtained the IR spectrum of a self-assembled adamantane island on a Au(111) surface by using a newly developed method which combines infrared spectroscopy and scanning tunneling microscopy. 14 The observed spectrum of the adamantane monolayer on Au(111) is significantly altered with respect to the gas and solid phase of adamantane. To account for this difference theoretically, we studied the IR spectrum within the framework of density functional theory (DFT) and density functional perturbation theory (DFPT). Our analysis reveals that intermolecular and molecule-substrate interactions cause mixing (hybridization) of the gas phase vibrational modes. As a result, IR-active vibrational modes of adamantane molecules on a gold substrate are found to be considerably different than those of the gas phase. For example, our calculations show that the intermolecular interaction in the adamantane monolayer reduces the IR intensity of one of the gas-phase IR peaks by a factor of 3.5. In addition, the interaction between adamantane molecules and the Au(111) substrate increases the IR intensity of another gas-phase mode by a factor of 2.6 and causes a significant redshift of 276 cm −1 for this mode. This paper is organized as follows: in Sec. II, we describe the details of our experiment.
In Sec. III we introduce the computational methods. In Sec. IV, we describe and analyze the theoretical and experimental IR spectra. In Sec. IV B 1, IV B 2 and IV B 3 we study theoretically the IR spectrum of a single adamantane molecule, the IR spectrum of an adamantane monolayer, and the IR spectrum of an adamantane monolayer on a Au (111) surface, respectively. In Sec. IV C we compare experimental and theoretical IR spectra.
Finally, in Sec. V we discuss our conclusions.
II. EXPERIMENTAL SETUP
Adamantane (Sigma-Aldrich, purity ≥ 99%) was deposited onto a clean Au(111) surface from adamantane powder held in a vacuum chamber by exposing the gold surface to adamantane vapor formed at room temperature. To achieve submonolayer molecule coverage the adamantane vapor flux was controlled with a leak valve. Before the deposition, it was necessary to precool a freshly cleaned gold crystal to 15 K to facilitate adsorption of the molecules onto the Au(111) surface. The precooled gold crystal was then transferred into a chamber with base pressure of ∼ 10 −11 Torr where the crystal was held in a room temperature manipulator during the adamantane deposition which lasted for about ten minutes (pressure rose to ∼ 10 −9 Torr when the adamantane valve was opened for deposition). After the deposition, the sample was immediately transferred into a homemade ultra-high vacuum variable temperature STM operating at T = 13-15 K for STM surface characterization. The adamantane molecules on Au(111) were observed to self-assemble into hexagonally packed molecular islands with a lattice constant of 7.5±0.2Å. Figure 1(b) shows a typical STM image of an adamantane island on a Au(111) surface.
IR absorption spectra of adamantane submonolayers on the Au(111) surface were obtained by using a recently developed technique referred to as infrared scanning tunneling microscopy (IRSTM). 14 IRSTM employs an STM tip in tunneling mode as a sensitive detector to measure the thermal expansion of a sample due to molecular absorption of monochromatic IR radiation. The surface thermal expansion of the sample, recorded as a function of IR frequency, yields the IR molecular absorption spectrum. Frequency-tunable IR excitation of the samples was achieved by using a homemade tunable mode-hop-free laser source based on a singly resonant optical parametric oscillator. 16 The detailed description of the IRSTM setup and the discussion of its performance are given elsewhere. 14
III. THEORETICAL CALCULATIONS
In this section, we describe the computational methods used in this work.
A. Geometry of adamantane on Au(111)
We start with a discussion of the orientation of an adamantane monolayer on the Au (111) surface as shown in Fig. 1. Based on the STM topography shown in Fig. 1(b), we model the molecular arrangement on a Au(111) surface. In our model, the adamantane molecules are arranged in a √ 7 × √ 7 structure as shown in Fig. 1(c). The intermolecular distance in this model is 7.40Å, which is close to the observed intermolecular distance 7.5 ± 0.2Å. In our calculations we place the adamantane molecules so that the three-fold axis of the molecule is perpendicular to the surface, with three bottom hydrogen atoms facing down toward the Au(111) surface. Because of its three-fold symmetry, this configuration is compatible with the hexagonal self-assembled island seen in the STM topography Fig. 1(b).
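For concreteness, the √7 × √7 periodicity can be checked directly from the primitive Au(111) surface lattice vectors. The short sketch below is only an illustration; the integer combination (2, 1) and (−1, 3) is one standard way to build a √7 supercell and is not necessarily the exact convention used by the authors, and the nearest-neighbor distance is simply the one implied by the 7.40 Å spacing quoted above.

import numpy as np

a = 7.40 / np.sqrt(7.0)                      # Au(111) nearest-neighbor distance implied by the model
a1 = a * np.array([1.0, 0.0])                # primitive surface lattice vectors (triangular lattice)
a2 = a * np.array([0.5, np.sqrt(3.0) / 2.0])
A1 = 2 * a1 + a2                             # sqrt(7) x sqrt(7) supercell vectors
A2 = -a1 + 3 * a2
print(np.linalg.norm(A1), np.linalg.norm(A2))         # both equal 7.40 (= sqrt(7) * a)
cosang = A1 @ A2 / (np.linalg.norm(A1) * np.linalg.norm(A2))
print(np.degrees(np.arccos(cosang)))                   # 60 degrees, i.e. a hexagonal supercell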
We determine the most stable adsorption geometry of adamantane by computing the total energy for various adsorption sites. We perform ab initio total energy calculations within the framework of DFT 17,18 to understand the properties of an adamantane monolayer on a Au(111) surface. We use the local density approximation (LDA) for the exchange and correlation energy functionals based on the quantum Monte-Carlo results of Ceperley and Alder 19 as parameterized by Perdew and Zunger. 20 Vanderbilt ultrasoft pseudopotentials 21 are adopted in combination with a plane wave basis with cut off energies of 30 and 360 Ry for the wavefunctions and charge density, respectively. A Brillouin zone integration is done on an 8 × 8 × 1 uniform k-grid. Gaussian smearing with a 0.01 Ry width is used for the calculation of metallic systems. We use the Quantum ESPRESSO package 22 to perform DFT calculations. We also use XCrySDen 23 to visualize the results. We model the Au(111) surface by a finite slab with a thickness of seven gold layers with seven gold atoms in each layer in a supercell geometry 24 . The primitive unit cell of the slab includes one adamantane molecule. Therefore, in total the primitive unit cell contains 75 atoms. The width of the vacuum region is 15.5 Å. Binding energies and molecule-surface distances of four different optimized geometries are listed in Table I. The name of each geometry is based on the position of the center of the molecule and the positions of the three bottom hydrogen atoms shown in Fig. 1(a). In the most stable geometry (hollow-atop) the center of the molecule is on the hollow site and three hydrogen atoms are close to the gold atoms in the topmost layer (see Fig. 1(c)). The calculated distance between the bottom hydrogen atoms and the gold surface is 2.29 Å after optimization of the atomic coordinates.
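For readers who want to reproduce the plane-wave settings listed above, a minimal Python snippet that writes a corresponding Quantum ESPRESSO pw.x input is sketched below. Only the cutoffs, k-grid, smearing, and atom count are taken from the text; the file names, pseudopotential names, and the omitted cell vectors and atomic coordinates are placeholders, not values from the paper.

qe_input = """
&CONTROL
  calculation = 'relax'
  prefix      = 'adamantane_au111'
  pseudo_dir  = './pseudo'
/
&SYSTEM
  ibrav    = 0
  nat      = 75        ! 49 Au (7 layers x 7 atoms) + C10H16 (26 atoms)
  ntyp     = 3
  ecutwfc  = 30.0      ! Ry, wave-function cutoff quoted in the text
  ecutrho  = 360.0     ! Ry, charge-density cutoff quoted in the text
  occupations = 'smearing'
  smearing    = 'gaussian'
  degauss     = 0.01   ! Ry
/
&ELECTRONS
  conv_thr = 1.0d-8
/
&IONS
/
ATOMIC_SPECIES
  Au 196.967 Au.UPF
  C   12.011 C.UPF
  H    1.008 H.UPF
K_POINTS automatic
  8 8 1 0 0 0
"""
# CELL_PARAMETERS (sqrt(7) x sqrt(7) in-plane cell with ~15.5 A of vacuum) and
# ATOMIC_POSITIONS (49 Au + adamantane in the hollow-atop geometry) must still be appended.
with open('adamantane_au111.relax.in', 'w') as f:
    f.write(qe_input)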
It is well known that LDA energy functionals do not correctly describe long-range interactions such as the van der Waals interaction. Thus we also cross-check our results with van der Waals density functionals (vdW-DFs). [25][26][27][28][29] In particular, we use an improved version 26 of the nonlocal vdW correlation functional together with Cooper exchange 27 . We optimize the four different structures to compare the energetics with those of the LDA. After the structural optimization with a vdW functional, the hollow-atop geometry remains the most stable configuration although the binding energy differences are reduced compared to the LDA (the binding energy difference between the hollow-atop geometry and the atop-bridge geometry is reduced from 111 meV/molecule (in LDA) to 44 meV/molecule). In addition, the distance between the molecule and surface using the vdW functional is increased from 2.29Å (in LDA) to 2.45Å in the hollow-atop geometry. Since both of these changes are not large, we expect that the use of the vdW-DF functionals throughout this work does not qualitatively affect the computed IR spectra.
B. Phonon and IR intensity calculation
Next, we describe the methods with which we compute the frequency and infrared intensity of the adamantane vibration modes, in various environments (gas, monolayer, and on the Au(111) substrate).
For this purpose we use density functional perturbation theory as described in Ref. 30.
All calculations are done only in the Brillouin zone center point (Γ). We define the phonon effective charge of the j th phonon branch as
Q α (ω j ) = Σ β,s Z αβ s U β s (ω j ) . (1)
Here Z is the Born effective charge tensor, U(ω j ) is the eigendisplacement vector, and ω j is the phonon frequency of the j th phonon branch. Cartesian vector components are represented by α and β, while s represents the atom index. The Born effective charge Z αβ s is defined as the first derivative of the force F β s acting on an atom s with respect to the electric field E α ,
Z αβ s = ∂F β s / ∂E α . (2)
The IR intensity of the j th phonon branch can be computed from the phonon effective charge 31
I IR (ω j ) = Σ α |Q α (ω j )| 2 . (3)
Once we obtain the IR intensities I IR (ω j ), we model the IR spectrum at any frequency ω by assuming a Lorentzian lineshape with a constant linewidth of 10 cm −1 (full width at half maximum). In general, all nine Cartesian components of the Born effective charge must be calculated to obtain the IR intensities. However, on the metallic surface, one can focus only on the components of the electric field perpendicular to the surface (α = z). [32][33][34] To compute the Born effective charge, we use a finite-difference approximation of Eq. 2, and we apply the electric field using a saw-tooth like potential in the direction perpendicular to the slab (z). Therefore we can obtain the Born effective charge by dividing the force induced by the electric field (∆F ) with the strength of the electric field (E z ).
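As an illustration of Eqs. (1)-(3) and of the Lorentzian broadening just described, the following short Python sketch computes a z-polarized IR spectrum from a set of Born effective charges and eigendisplacement vectors. It is only an illustration written for this text (not the authors' code); the only numerical input taken from the paper is the 10 cm −1 linewidth.

import numpy as np

def ir_spectrum_z(Z, U, omega, omega_grid, fwhm=10.0):
    # Z: (natoms, 3, 3) Born effective charge tensors Z^{alpha beta}_s.
    # U: (nmodes, natoms, 3) eigendisplacement vectors U^beta_s(omega_j).
    # omega: (nmodes,) phonon frequencies in cm^-1; omega_grid: output frequencies.
    # Phonon effective charge, Eq. (1): Q_alpha(omega_j) = sum_{beta,s} Z^{alpha beta}_s U^beta_s(omega_j)
    Q = np.einsum('sab,jsb->ja', Z, U)
    # On the metallic surface only the z component (alpha = z) is needed, Eq. (3):
    intensity = np.abs(Q[:, 2]) ** 2
    # Broaden each mode with a Lorentzian of constant FWHM (10 cm^-1 in the text):
    gamma = fwhm / 2.0
    lor = (gamma / np.pi) / ((omega_grid[None, :] - omega[:, None]) ** 2 + gamma ** 2)
    return intensity @ lor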
Using this method we calculate the IR spectra of a single molecule, of a molecular monolayer without a substrate, and of molecules on a Au(111) substrate. To track the changes in the IR spectrum due to intermolecular interaction and due to substrate effects, we perform the following interpolation procedure. We define an interpolated dynamical matrix D int and Born effective charge Z int between any two configurations A and B as
D int = (1 − λ)D A + λD B , Z int = (1 − λ)Z A + λZ B .(4)
Here D int = D A for λ = 0 and D int = D B for λ = 1 and is continuously tuned from A to B for 0 < λ < 1 (same for Z int ). Diagonalizing D int for each λ and using interpolated Z int we obtain interpolated IR intensity for each λ with 0 < λ < 1.
For the start and end configurations A and B, we use either no subscript, subscript M, or subscript Au to denote either an isolated molecule, molecules in an isolated molecular monolayer, or molecules in a monolayer on a Au(111) substrate, respectively.

In the interpolation procedure, we take into account only dynamical matrix elements of carbon and hydrogen atoms (neglecting the displacements of gold atoms). We find that the effect of gold atom displacements on vibrational frequencies is only 0.3 cm −1 and on the IR intensity less than 10%.
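The interpolation of Eq. (4) is simple enough to be written down explicitly. The sketch below is an illustration of the procedure as described in the text (not the authors' implementation); it assumes mass-weighted dynamical matrices, so that the eigenvectors returned by the diagonalization can be used directly as eigendisplacement vectors.

import numpy as np

def interpolated_ir(D_A, D_B, Z_A, Z_B, lam):
    # Eq. (4): linear interpolation of the dynamical matrix and Born charges.
    D = (1.0 - lam) * D_A + lam * D_B
    Z = (1.0 - lam) * Z_A + lam * Z_B
    w2, vecs = np.linalg.eigh(D)                  # eigenvalues ~ omega_j^2
    omega = np.sign(w2) * np.sqrt(np.abs(w2))     # keep unstable modes visible as negative values
    n_atoms = Z.shape[0]
    U = vecs.T.reshape(-1, n_atoms, 3)            # eigendisplacement vectors, one mode per row
    Qz = np.einsum('sb,jsb->j', Z[:, 2, :], U)    # z component of the phonon effective charge
    return omega, np.abs(Qz) ** 2

# Sweeping lam from 0 (configuration A) to 1 (configuration B) tracks how each IR peak evolves,
# which is how the interpolated spectra of Figs. 5 and 7 are read.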
In addition to the interpolation method we also quantitatively analyze the similarity of eigendisplacement vectors between various configurations. We do this by computing the norm of the inner product u A i |u B j 2 between i th phonon eigendisplacement vector u A i | in configuration A, and j th phonon eigendisplacement vector |u B j in configuration B. We use the same subscript convention as for the interpolation procedure (Au, M, or no subscript for molecules on the substrate, the molecular monolayer, or single molecule case respectively).
Since an adamantane molecule consists of 78 phonon modes, we simplify the analysis by only considering inner products between the 16 predominantly C-H stretching modes.
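The overlap analysis can be coded just as compactly. The following sketch is again only an illustration; it assumes the eigendisplacement vectors have already been restricted to the 16 predominantly C-H stretching modes and stored row-wise as unit vectors.

import numpy as np

def mode_overlaps(U_A, U_B):
    # U_A, U_B: (16, 3N) arrays of C-H stretching eigenvectors for two
    # configurations (gas, monolayer, or monolayer on Au), normalized.
    S = U_A @ U_B.T            # inner products <u^A_i | u^B_j>
    return np.abs(S) ** 2      # overlaps; values below 0.1 are neglected in Tables III and IV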
IV. RESULTS
This section is organized as follows. We present our experimentally obtained IR spectra of an adamantane submonolayer on a Au(111) surface in Sec. IV A. Next, in Sec. IV B we analyze theoretically obtained IR spectra. Finally, in Sec. IV C we compare theory and experiment.
A. Experimental IR spectra

TABLE II. Vibrational frequencies (in cm −1 ) and phonon effective charges Q z of the C-H stretching modes of a single adamantane molecule, compared with previous experiments in the gas phase 13 and solution phase. 10 We label phonon modes by ω 1 -ω 7 with one label corresponding to one irreducible representation (irrep). Only T 2 modes can be observed in gas-phase IR spectroscopy. A 1 , E, and T 2 are Raman active, while T 1 modes are inactive (both in IR and Raman).

Mode | Irrep | Frequency | Q z | Ref. 13 | Ref. 10
ω 1 | A 1 | 2924 | - | - | 2913
ω 2 | A 1 | 2891 | - | - | 2857
ω 3 | E | 2892 | - | - | 2900
ω 4 | T 1 | 2938 | - | - | -
ω 5 , ω 6 , ω 7 | T 2 | (entries not recovered)
B. Theoretical analysis of IR spectra
To understand the origin of IR spectrum modification of adamantane on Au(111) we compute IR spectra for an isolated adamantane molecule, for an adamantane monolayer, and for an adamantane monolayer placed on the Au(111) substrate. We discuss these three cases in the following three subsections of the paper.
Single molecule
We compute the vibrational frequencies and IR intensities of an isolated single molecule of adamantane by placing the molecule in a large unit cell (length of each side is 16Å) to minimize the interaction between periodic replicas. The calculated vibrational frequencies of the C-H bond stretching modes (and phonon effective charge) are listed in Table II and compared with the experimental frequencies from the literature. 10,13 Our calculation reproduces quite well the vibrational frequencies as compared to the experimental data. We find the largest discrepancy of ∼ 30 cm −1 for two modes (labeled ω 2 and ω 7 in Table II),
while the discrepancy for the other modes is only ∼ 10 cm −1 .
There are 16 C-H stretching modes in adamantane (equal to the number of C-H bonds).
Out of 16 modes, there are three modes corresponding to the T 2 irreducible representation, one T 1 , one E, and two A 1 modes. In the highly symmetric adamantane molecule (point group T d ) only these three T 2 modes are IR active while other C-H stretching modes are IR inactive. The IR active modes have the following approximate characters: asymmetric stretching of the C-H 2 bonds (labeled ω 5 in Table II) to simplify the comparison with the IR intensities of adamantane in molecular monolayer and monolayer on the Au(111) surface. We obtain three substantial IR absorption peaks of a single molecule adamantane in the frequency region from 2850 to 2950 cm −1 . The highest, middle, and lowest frequency peaks correspond to the ω 5 , ω 6 , and ω 7 modes, respectively.
The ω 6 peak has the largest IR intensity, followed by the ω 5 peak. This order of the IR intensities is consistent with a previous DFT result 12 .
Adamantane monolayer
Before analyzing the IR spectrum of an adamantane monolayer on a Au(111) surface, we first analyze the IR spectrum of a free adamantane monolayer (with the same intermolecular distance as in the monolayer on the gold substrate). with one label corresponding to one irreducible representation (irrep). Norm of the overlap between gas phase phonon eigenvector ( u j |) and monolayer phonon eigenvector (|u M i ) is also shown, indicating nature of the hybridization of the gas phase phonons into the monolayer phase phonons. In the table we neglect all overlaps whose norm is smaller than 0.1.
Gas phase overlap
Irrep Placing adamantane in the monolayer arrangement (see Sec. III A) lowers the symmetry of the system from T d (in the gas phase) to C 3v . This symmetry reduction is followed by splitting of the threefold degenerate T 2 representation (in T d ) to a twofold E representation and a onefold A 1 representation. The basis functions of the E representation are x and y, therefore they have no IR activity in the z direction (perpendicular to the monolayer). In contrast, the modes with A 1 representation are active along the z direction. In addition, symmetry reduction to the adamantane monolayer splits the T 1 representation into E and A 2 , both of which are IR inactive in the z direction (E is active in x and y). Therefore, the adamantane monolayer has in total five C-H stretching modes that are IR active along the z direction. We label these five modes corresponding to the A 1 representation with ω M 1 , ω M 2 , ω M 4 , ω M 5 , and ω M 6 .
Frequency Q z ω 1 ω 2 ω 3 ω 4 ω 5 ω 6 ω 7 ω
The blue dashed line in Fig. 4 shows the calculated IR spectrum of the adamantane monolayer. The changes in the IR spectrum of the adamantane monolayer compared to the single molecule are presented in more detail in Fig. 5 and in Table III. Figure 5 shows interpolated IR spectra between the single molecule and the molecular monolayer case (using the interpolation method described in Sec. III B). Table III shows the calculated vibrational frequencies, phonon effective charges, and inner products between phonon eigenvectors of gas and monolayer adamantane. The arrows indicate the evolution of the peaks from the isolated molecule case to the monolayer case.

From Fig. 5 and the inner products in Table III, we find that there is a one-to-one correspondence between the ω 2 , ω 4 , ω 5 , ω 7 modes of the single molecule adamantane phase and the ω M 2 , ω M 3 , ω M 4 , and ω M 6 modes of the monolayer adamantane phase, respectively. This correspondence is also evident in the similarity of the phonon eigendisplacement vectors of these two phases (compare Figs. 3(b), (c), (d), (f) and Figs. 6(b), (c), (d), (f)). Three of these modes (ω 2 , ω 5 , and ω 7 ) are redshifted by about 10 cm −1 in the monolayer phase (see arrows in Fig. 5), which is close to the redshifts found in the solid phase diamondoids. 5 The IR inactive mode ω 4 is redshifted by 7 cm −1 in the monolayer phase.
The remaining IR active C-H stretching modes in the adamantane monolayer (ω M 1 and ω M 5 ) result from a strong mixing of the ω 1 and ω 6 modes in the gas phase (see the inner products shown in Table III).
Monolayer on Au(111)
Finally, we study the vibrational properties of the adamantane monolayer on a Au(111) surface. Introduction of the Au(111) surface further reduces the symmetry of the system from C 3v (in the monolayer) to C 3 . Due to this symmetry reduction, both the IR active A 1 mode and the IR inactive A 2 mode having C 3v symmetry (monolayer) are changed to the IR active A representation having C 3 symmetry. The IR inactive E modes remain inactive in the C 3 symmetry along the z direction. The red solid line in Fig. 4 shows the calculated IR spectrum of a molecular monolayer on a gold substrate. Vibrational frequencies and phonon effective charges are shown in Table IV. The most notable difference compared to the spectrum of the isolated adamantane monolayer and gas phase molecule is the significant redshift of one of the modes, ω Au 6 (to 2664 cm −1 ), with a sizable increase in the phonon effective charge (0.40 e). Figure 7 shows the interpolated IR spectra between the isolated monolayer case and the monolayer on Au(111). This analysis shows that the ω Au 1 mode is nearly unaffected by the Au(111) substrate as it originates from the stretching of a topmost C-H bond, relatively far from the Au(111) surface. In fact, the eigendisplacement vectors of the ω Au 1 mode and ω M 1 mode are nearly the same (compare Fig. 6(a) and Fig. 8(a)).
Interpolation analysis of the remaining IR peaks is quite involved in this case. Therefore we turn to the analysis of the inner products of the phonon modes in the isolated monolayer and in the monolayer on Au(111). These inner products are shown in Table IV. From this table we find that the ω Au 1 mode originates from the ω M 1 mode. In addition, we find that the ω Au 3 mode has a one-to-one correspondence with the ω M 3 mode.

TABLE IV. Vibrational frequencies (in cm −1 ), phonon effective charges Q z (in elementary charge e), and inner products of C-H stretching modes of adamantane molecules on a Au(111) surface. The norm of the overlap between the adamantane monolayer phase phonon eigenvector ⟨u M j | and the monolayer-on-gold phonon eigenvector |u Au i ⟩ is also shown, indicating the nature of the hybridization of the monolayer phase phonons into the phonons of the monolayer on gold. In the table we neglect all overlaps whose norm is smaller than 0.1. (Columns: Irrep, Frequency, Q z , and monolayer phase overlaps with ω M 1 -ω M 6 .)
For the remaining IR active modes (ω Au 2 , ω Au 4 , ω Au 5 , and ω Au 6 ) we find strong influence by the monolayer-substrate interaction. Analyzing Table IV Fig. 8).
On the other hand, Table IV qualitative agreement between the two spectra, both in the peak position and in their relative intensities (the experimental vertical scale is chosen so that the peak height at 2846 cm −1 matches the theoretical peak height at 2851 cm −1 ). Agreement is even better after applying a correction to the calculated phonon frequencies (red line in Fig. 9) as we describe below in Sec. IV C 1.
We assign the relatively large experimentally obtained IR peak at 2912 cm −1 to the theoretically obtained ω Au 1 mode (2922 cm −1 , corrected frequency 2914 cm −1 , phonon effective charge 0.17 e). Furthermore, we assign the relatively weaker mode at 2846 cm −1 to the theoretically obtained ω Au 2 mode (2878 cm −1 , corrected frequency 2851 cm −1 , phonon effective charge 0.10 e). Remaining features in the experimental data (green line in Fig. 9) are not reproducible and therefore cannot be reliably assigned to the additional IR phonon modes. This is consistent with our theory, as the remaining IR active modes ω Au 3 , ω Au 4 , and ω Au 5 have a much smaller phonon effective charge (from 0.01 to 0.07 e).
Finally, our calculation predicts the existence of a significantly redshifted IR active mode ω Au 6 at 2664 cm −1 (corrected value is 2644 cm −1 ) with a large phonon effective charge (0.40 e). Although the frequency of this mode is currently outside of our experimentally attainable frequency range (from 2840 to 2990 cm −1 ), we expect that it will be accessible to future experimental probing.
Correction of the dynamical matrix
Here we present the method we use to correct the DFT-LDA IR spectrum of the adamantane monolayer on the Au(111) substrate (red line in Fig. 9). First we obtain the correction D corr to the calculated dynamical matrix of the adamantane gas phase so that it exactly reproduces the experimentally measured frequencies of adamantane gas and solution phase,
D corr = Σ i (∆ i 2 − 2ω i ∆ i ) |u i ⟩⟨u i | . (5)
Here ω i and |u i are the phonon frequencies and eigenvectors of the original dynamical matrix, while ∆ i is the difference between the computed and the measured adamantane gas and solution phase frequency. In the second step, we add this same correction matrix D corr to the dynamical matrix of the adamantane monolayer on the Au(111) surface. Finally, we use the eigenvalues and eigenvectors of the corrected dynamical matrix to compute the corrected IR spectrum.
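A compact numerical sketch of this two-step correction (again an illustration rather than the authors' code; it assumes the gas-phase eigenvectors are stored as columns and that ∆ i is the computed-minus-measured frequency difference, as in the text):

import numpy as np

def corrected_dynamical_matrix(D_monolayer_on_Au, omega_gas, U_gas, delta):
    # Eq. (5): D_corr = sum_i (delta_i^2 - 2 omega_i delta_i) |u_i><u_i|
    # omega_gas: (m,) computed gas-phase frequencies; delta: (m,) frequency differences;
    # U_gas: (3N, m) gas-phase eigenvectors, one per column.
    coeff = delta ** 2 - 2.0 * omega_gas * delta
    D_corr = (U_gas * coeff) @ U_gas.T
    # Second step: add the same correction to the dynamical matrix of the
    # adamantane monolayer on Au(111) and rediagonalize.
    return D_monolayer_on_Au + D_corr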
Our correction procedures improve the agreement of the calculated IR spectrum of adamantane on the Au(111) surface with the experimental spectrum (see Fig. 9). The theoretical peak position of the ω Au 1 mode is redshifted by 8 cm −1 (from 2922 cm −1 to 2914 cm −1 ) and sits closer to the experimental peak position (2913 ± 1 cm −1 ). Similarly,
the ω Au 2 mode is redshifted by 27 cm −1 (from 2878 cm −1 to 2851 cm −1 ), again closer to the experimental value (2846 ± 2 cm −1 ).
V. SUMMARY AND CONCLUSIONS
Our work combining IRSTM 14 measurements and ab initio calculation of the IR spectrum of an adamantane monolayer on Au(111) demonstrates the complex nature of adamantane-adamantane and adamantane-gold interactions. In Sec. IV B we have described in detail the effect of each of these interactions on the mixing (hybridization) of adamantane vibrational modes, the changes in their frequencies, and the IR intensities. Figure 10 summarizes our main results. The IR spectrum of the isolated adamantane molecule (black dashed line in Fig. 10) consists of three IR active C-H stretching modes (ω 5 , ω 6 , ω 7 ). The adamantane-adamantane interaction (packing effect) reduces the IR intensity of one of these modes (ω 6 ) by a factor of 3.5. On the other hand, the adamantane-gold interaction severely redshifts the gas phase ω 5 mode (by 276 cm −1 ), and increases its IR intensity by a factor of 2.6. In addition, both ω 5 and ω 6 are hybridized with IR inactive gas phase modes (ω 2 and ω 1 respectively). See Sections IV B 2, IV B 3 and Tables III, IV for more details.
In conclusion, we expect that these techniques can be used to study intermolecular and molecule-substrate effects of other molecular systems, including the use of other metallic substrates. In particular, we expect that the IR intensity reduction of the gas phase ω 6 mode (or equivalent, for other molecules) can be used as a direct measure of intermolecular interactions. Similarly, the increase in the IR intensity and the redshift of the ω 5 mode can be used as a direct measure of molecule-substrate interactions.
FIG. 1 .
1(Color online) (a) Model of molecular structure of an adamantane molecule. Hydrogen and carbon atoms are represented by white and gray spheres, respectively. (b) STM topography of a self-assembled island of adamantane molecules on a Au(111) surface (sample bias V sample = −1.0 V, current setpoint I = 100 pA, temperature T = 13 K). (c) Schematic picture of alignment of the molecules on the Au(111) surface. Gold atoms in the topmost layer, second layer, and third layer are represented by gold, blue, and green color spheres of different shades. Black lines show the supercell of the √ 7× √ 7 molecular alignment. Upper case letters indicate the location of adsorption sites: A for atop site, B for bridge site, and H for hollow site. Top view of an adamantane in the hollow-atop geometry is also illustrated (the center of the molecule is in the hollow site and the three bottom hydrogen atoms are in the atop site.) The three bottom hydrogen atoms can be seen at the bottom ofFig. 1(a), but not be seen inFig. 1(c)
Our calculations are performed within the framework of DFT 17,18 to understand the properties of an adamantane monolayer on a Au(111) surface. We use the local density approximation (LDA) for the exchange and correlation energy functionals based on the quantum Monte-Carlo results of Ceperley and Alder 19 as parameterized by Perdew and Zunger. 20 Vanderbilt ultrasoft pseudopotentials 21 are adopted in combination with a plane wave basis with cut off energies of 30 and 360 Ry for the wavefunctions and charge density, respectively. A Brillouin zone integration is done on an 8 × 8 × 1 uniform k-grid. Gaussian smearing with a 0.01 Ry width is used for the calculation of metallic systems. We use the Quantum ESPRESSO package 22 to perform DFT calculations. We also use XCrySDen 23 to visualize the results. We model the Au(111) surface by a finite slab with a thickness of seven gold layers with seven gold atoms in each layer in a supercell geometry 24 . The primitive unit cell of the slab includes one adamantane molecule. Therefore, in total the primitive unit cell contains 75 atoms. The width of the vacuum region is 15.5 Å. Binding energies and molecule-surface distances of four different optimized geometries are listed in Table I.
substrate, respectively. We first interpolate the spectra from the isolated molecule phase to the monolayer phase (M) to determine the effect of the intermolecular interactions. Finally to estimate the effect of the molecule-substrate interactions we interpolate the spectrum of the monolayer phase (M) to the case of molecules on the gold substrate (Au).
Figure 2 shows an experimentally measured IRSTM spectrum (green line) of 0.8 ML of adamantane adsorbed on a Au(111) surface. The spectrum was obtained by measuring the STM Z-signal under constant-current feedback conditions while sweeping the IR excitation from 2840 cm −1 to 2990 cm −1 (the spectrum shown was averaged over 15 frequency sweeps and background-corrected by subtracting a linear fit to the estimated bare gold contribution to the spectrum). Two IR absorption peaks for adamantane/Au(111) can clearly be seen at 2846 ± 2 and 2912 ± 1 cm −1 in Fig. 2. The other small peaks seen in Fig. 2 are not reproducible and thus we are not able to unambiguously relate them to the adamantane absorption. The black dashed lines in Fig. 2 show the IR peak positions of an adamantane molecule in the gas phase. 13 Comparing the green curve and the black dashed line in Fig. 2, it is clear that the IR spectrum of adamantane on the Au(111) surface is considerably different from the gas phase spectrum.
FIG. 2. (Color online) The green line shows the experimentally observed spectrum (averaged over 15 sweeps) of 0.8 ML of adamantane on a Au(111) surface with gold baseline signal subtracted. The IR absorption peaks are seen at 2846 ± 2 and 2912 ± 1 cm −1 . The vertical black dashed lines show the IR peak positions of an adamantane molecule in the gas phase as listed in Table II (2859, 2912, 2938 cm −1 ).
, symmetric C-H stretch mode (ω 6 ), and symmetric C-H 2 stretch mode (ω 7 ). Their eigendisplacement vectors are also shown in Figs. 3(d), (e), and (f). The calculated IR spectrum of an isolated adamantane molecule is indicated in Fig. 4 by a black dashed line. Here we compute only the α = z component of the IR intensities (Eq. 3)
FIG. 3 .FIG
3(Color online) Eigendisplacement vectors of (a) ω 1 , (b) ω 2 , (c) ω 4 , (d) ω 5 , (e) ω6 , and (f) ω 7 modes of an isolated adamantane molecule (notation is fromTable II). Gray and white spheres represent carbon and hydrogen atoms, respectively. Blue arrows indicate the displacements of the atoms (mostly hydrogen atoms). Some of the atoms and eigendisplacements are overlapping since the molecule is shown from a high-symmetry direction. . 4. (Color online) Calculated IR spectrum of an isolated adamantane molecule (black dashed line), an adamantane monolayer (blue dotted line), and an adamantane monolayer on Au(111) (red solid line). The IR spectra are displayed in arbitrary units (chosen so that the intensity of the IR peak around 2650 cm −1 equals 1.0). The black vertical line in the figure separates lower and higher frequency regimes (there are no IR active modes in the intermediate regime between 2730and 2840 cm −1 ). Here we show only the IR spectra close to the C-H stretching mode frequencies (the closest IR mode not corresponding to C-H stretching is below 1500 cm −1 ).
FIG
. 5. (Color online) Interpolated IR spectra between the single molecule adamantane (solid black line) and the adamantane monolayer (solid blue line). The dashed (λ = 0.2), dotted (λ = 0.4), and chain (λ = 0.8) lines are interpolated spectra between two cases (using techniques from Sec. III B).
are between 0.33 and 0.63) which also affects their eigendisplacement patterns (compare Figs. 3(a) and (e) and Figs. 6(a) and (e)). In addition, the ω M 5 mode is redshifted by about 20 cm −1 with respect to the ω 1 and ω 6 mode in the gas phase.
of an adamantane monolayer without a gold substrate.
Analyzing Table IV, we find that the ω Au 2 mode of the monolayer on the Au(111) substrate is a mixture of the ω M 2 , ω M 5 , and ω M 6 modes of the isolated monolayer phase. Similarly, the ω Au 4 mode is composed of the ω M 2 , ω M 4 , and ω M 5 modes, while the ω Au 5 mode comes mostly from the ω M 6 mode with some admixture of the ω M 5 mode. The IR intensity of the modes ω Au 2 , ω Au 4 , and ω Au 5 is small, since these modes involve only a small amount of C-H bond stretching perpendicular to the surface (see the phonon eigendisplacements in Fig. 8).
On the other hand, Table IV shows a rather remarkable change in the frequency and IR intensity of the ω Au 6 mode. This mode is a mixture of the ω M 4 (2934 cm −1 ) and ω M 2 (2879 cm −1 ) modes of the isolated monolayer phase and its frequency is redshifted to 2664 cm −1 . Furthermore, its IR intensity is increased from 0.05 e 2 to 0.16 e 2 (effective charge in Table IV changes from 0.22 e to 0.40 e) compared to the ω M 4 mode. The ω Au 6 mode consists of the in-phase perpendicular vibration of the three bottom hydrogen atoms near the Au(111) surface (see Fig. 8(f)), therefore it is not unexpected that this mode will be significantly affected by the Au(111) surface. Comparison of the charge density of the isolated adamantane monolayer to the charge density of the monolayer on the Au(111) surface reveals that the molecule-surface interaction reduces the electron charge density on the adamantane C-H bonds. We speculate that this reduction of charge density within the C-H bonds is responsible for the decrease in the ω Au 6 mode frequency as well as the increase of its effective charge.
FIG. 7. (Color online) Interpolated IR spectra going from an isolated molecular monolayer to a monolayer on Au(111). The blue solid line shows the IR spectrum of the isolated molecular monolayer, while the red solid line shows the IR spectrum of the monolayer on the Au(111) substrate. The dashed (λ = 0.1), dotted (λ = 0.2), and chain (λ = 0.4) lines show interpolated spectra between the two cases. The arrow shows the transition from ω M 1 to ω Au 1 . We also indicate the peak position of the relatively weak ω Au 2 mode. The severely redshifted peak ω Au 6 is not shown in this figure (see left panel of Fig. 4).
FIG. 8. (Color online) Eigendisplacement vectors of (a) ω Au 1 , (b) ω Au 2 , (c) ω Au 3 , (d) ω Au 4 , (e) ω Au 5 , and (f) ω Au 6 modes of an adamantane monolayer on the Au(111) surface. Yellow spheres illustrate Au atoms in the topmost layer of the Au(111) surface.
FIG. 9 .
9(Color online) Uncorrected (blue dashed line) and corrected (red solid line) theoretical IR spectra and experimentally observed IR spectrum (green solid line). The frequency region between 2730 cm −1 and 2840 cm −1 is not shown. The vertical scale is chosen so that the theoretical and experimental peaks around 2850 cm −1 have almost the same height. Left and right vertical axes correspond to theoretical and experimental IR intensity. Corrected theoretical values of ω Au 1 , ω Au 2 , ω Au 4 , ω Au 5 , and ω Au 6 modes are 2914, 2851, 2893, 2869, and 2644 cm −1 , respectively.
The black dashed line in Fig. 10 shows the calculated isolated adamantane gas phase IR spectrum, while the red line shows the severely modified spectrum of the adamantane monolayer on the Au(111) surface. The green line shows the experimental spectrum of an adamantane submonolayer on Au(111).
FIG. 10. (Color online) Summary of our main results. There are three IR active C-H stretching modes in the isolated (gas phase) adamantane molecule (black dashed line). The interaction between the neighboring adamantane molecules in the monolayer (packing effect) reduces the IR intensity of the central IR active mode by a factor of 3.5. The interaction between the monolayer and the Au(111) surface (Au substrate effect) reduces the highest frequency gas phase mode by 276 cm −1 and it increases its IR intensity by a factor of 2.6. The remaining third IR active gas phase mode is affected both by the Au(111) substrate and by the packing effect. See Sec. IV B for more detailed analysis. The calculated IR spectrum of an adamantane monolayer on Au(111) is shown with a red line, while the experimental spectrum is shown with a green line.
ACKNOWLEDGMENTS
Computational resources were provided by the DOE at Lawrence Berkeley National Laboratory's NERSC facility. Numerical calculations were also carried out on the TSUBAME2.0 supercomputer in the Tokyo Institute of Technology. Theoretical part of the work was supported by NSF Grant No. DMR-10-1006184 (structural determination) and by the Nanomachines Program at the Lawrence Berkeley National Lab funded by the office of Basic Energy Sciences, DOE under Contract No. DE-AC02-05CH11231 (infrared spectra simulations and analyses). The experimental part of the study was supported by the Nanomachines
TABLE I. Theoretical binding energies and surface-molecular distances of several adsorption geometries obtained within the LDA approximation. The binding energies in the Table are computed as the sum of the total energy of the gold slab and the isolated adamantane molecule minus the total energy of an adamantane adsorbed on a Au(111) surface system. The convention for the binding sites of Au(111) (atop, bridge, hollow) is as in Fig. 1(c). In our naming convention (for example atop-bridge), the first part (atop) represents the position of the center of the adamantane molecule, and the second part (bridge) represents the position of the three bottom hydrogen atoms of adamantane.
Geometry | Distance (Å) | Binding energy (meV/molecule)
atop-bridge | 2.25 | 379
atop-hollow | 2.25 | 384
hollow-atop | 2.29 | 490
hollow-hollow | 2.32 | 355
TABLE II. Calculated vibrational frequencies (in cm −1 ) and phonon effective charge Q z (in elementary charge e) of an isolated adamantane molecule from 2850 cm −1 to 2950 cm −1 . Experimental values are shown for the gas phase (a) and for adamantane in solution (b). Mode assignment is based on Ref. 11.
Mode | Irrep | Frequency | Q z | Expt. (gas) a | Expt. (solution) b
ω 5 | T 2 | 2940 | 0.25 | 2938 | 2950
ω 6 | T 2 | 2918 | 0.32 | 2912 | 2904
ω 7 | T 2 | 2892 | 0.19 | 2859 | 2849
a IR spectroscopy of gas phase adamantane. b IR and Raman spectroscopy of adamantane solution.
TABLE III. Calculated vibrational frequencies (in cm −1 ) and phonon effective charges Q z (in elementary charge e) of an adamantane monolayer. We label monolayer phonon modes by ω M 1 – ω M 6.
Program of the Office of Basic Energy Sciences, Materials Sciences and Engineering Division, U.S. Department of Energy under Contract No. DE-AC02-05CH11231 (STM measurements) and by the Department of Energy Early Career Award de-sc0003949 (development of IR laser source). YS acknowledges financial support from Japan Society for the Promotion of Science. RBC acknowledges financial support from Brazilian agencies CNPq, FAPERJ, INCT - Nanomateriais de Carbono and Rede de Pesquisa e Instrumentação em Nano-Espectroscopia Óptica. SGL acknowledges support of a Simons Foundation Fellowship in Theoretical Physics.
. N D Drummond, A J Williamson, R J Needs, G Galli, 10.1103/PhysRevLett.95.096801Phys. Rev. Lett. 9596801N. D. Drummond, A. J. Williamson, R. J. Needs, and G. Galli, Phys. Rev. Lett. 95, 096801 (2005).
. W L Yang, J D Fabbri, T M Willey, J R I Lee, J E Dahl, R M K Carlson, P R Schreiner, A A Fokin, B A Tkachenko, N A Fokina, W Meevasana, N Mannella, K Tanaka, X J Zhou, T Van Buuren, M A Kelly, Z Hussain, N A Melosh, Z.-X Shen, 10.1126/science.1141811Science. 3161460W. L. Yang, J. D. Fabbri, T. M. Willey, J. R. I. Lee, J. E. Dahl, R. M. K. Carlson, P. R. Schreiner, A. A. Fokin, B. A. Tkachenko, N. A. Fokina, W. Meevasana, N. Mannella, K. Tanaka, X. J. Zhou, T. van Buuren, M. A. Kelly, Z. Hussain, N. A. Melosh, and Z.-X. Shen, Science 316, 1460 (2007).
. Y Wang, E Kioupakis, X Lu, D Wegner, R Yamachika, J E Dahl, R M K Carlson, S G Louie, M F Crommie, 10.1038/nmat2066Nat. Mater. 738Y. Wang, E. Kioupakis, X. Lu, D. Wegner, R. Yamachika, J. E. Dahl, R. M. K. Carlson, S. G. Louie, and M. F. Crommie, Nat. Mater. 7, 38 (2008).
. D F Blake, F Freund, K F M Krishnan, C J Echer, R Shipp, T E Bunch, A G Tielens, R J Lipari, C J D Hetherington, S Chang, 10.1038/332611a0Nature. 332611D. F. Blake, F. Freund, K. F. M. Krishnan, C. J. Echer, R. Shipp, T. E. Bunch, A. G. Tielens, R. J. Lipari, C. J. D. Hetherington, and S. Chang, Nature (London) 332, 611 (1988).
. O Pirali, M Vervloet, J E Dahl, R M K Carlson, A G G M Tielens, J Oomens, 10.1086/516731Astrophys. J. 661919O. Pirali, M. Vervloet, J. E. Dahl, R. M. K. Carlson, A. G. G. M. Tielens, and J. Oomens, Astrophys. J. 661, 919 (2007).
. R Bailey, 10.1016/0584-8539(71)80094-6Spectrochimica Acta Part A: Molecular Spectroscopy. 271447R. Bailey, Spectrochimica Acta Part A: Molecular Spectroscopy 27, 1447 (1971).
. T J Broxton, L W Deady, M Kendall, R D Topsom, 10.1366/000370271779951011Appl. Spectrosc. 25600T. J. Broxton, L. W. Deady, M. Kendall, and R. D. Topsom, Appl. Spectrosc. 25, 600 (1971).
. P.-J Wu, L Hsu, D A Dows, 10.1063/1.1675234J. Chem. Phys. 542714P.-J. Wu, L. Hsu, and D. A. Dows, J. Chem. Phys. 54, 2714 (1971).
. R M Corn, V L Shannon, R G Snyder, H L Strauss, 10.1063/1.447687J. Chem. Phys. 815231R. M. Corn, V. L. Shannon, R. G. Snyder, and H. L. Strauss, J. Chem. Phys. 81, 5231 (1984).
. L Bistricic, G Baranovic, K Mlinaricmajerski, 10.1016/0584-8539(95)01416-RSpectrochim. Acta, Part A. 511643L. Bistricic, G. Baranovic, and K. Mlinaricmajerski, Spectrochim. Acta, Part A 51, 1643 (1995).
. G Szasz, A Kovacs, 10.1080/00268979909482949Mol. Phys. 96161G. Szasz and A. Kovacs, Mol. Phys. 96, 161 (1999).
. J Jensen, 10.1016/j.saa.2003.09.024Spectrochim. Acta, Part A. 601895J. Jensen, Spectrochim. Acta, Part A 60, 1895 (2004).
. O Pirali, V Boudon, J Oomens, M Vervloet, 10.1063/1.3666853J. Chem. Phys. 13624310O. Pirali, V. Boudon, J. Oomens, and M. Vervloet, J. Chem. Phys. 136, 024310 (2012).
. I V Pechenezhskiy, X Hong, G D Nguyen, J E P Dahl, R M K Carlson, F Wang, M F Crommie, Phys. Rev. Lett. 111126101I. V. Pechenezhskiy, X. Hong, G. D. Nguyen, J. E. P. Dahl, R. M. K. Carlson, F. Wang, and M. F. Crommie, Phys. Rev. Lett. 111, 126101 (2013).
. T Kitagawa, Y Idomoto, H Matsubara, D Hobara, T Kakiuchi, T Okazaki, K Komatsu, J. Org. Chem. 711362T. Kitagawa, Y. Idomoto, H. Matsubara, D. Hobara, T. Kakiuchi, T. Okazaki, and K. Komatsu, J. Org. Chem. 71, 1362 (2006).
. X Hong, X Shen, M Gong, F Wang, Opt. Lett. 374982X. Hong, X. Shen, M. Gong, and F. Wang, Opt. Lett. 37, 4982 (2012).
. P Hohenberg, W Kohn, 10.1103/PhysRev.136.B864Phys. Rev. 136864P. Hohenberg and W. Kohn, Phys. Rev. 136, 864 (1964).
. W Kohn, L J Sham, 10.1103/PhysRev.140.A1133Phys. Rev. 1401133W. Kohn and L. J. Sham, Phys. Rev. 140, 1133 (1965).
. D M Ceperley, B J Alder, 10.1103/PhysRevLett.45.566Phys. Rev. Lett. 45566D. M. Ceperley and B. J. Alder, Phys. Rev. Lett. 45, 566 (1980).
. J P Perdew, A Zunger, 10.1103/PhysRevB.23.5048Phys. Rev. B. 235048J. P. Perdew and A. Zunger, Phys. Rev. B 23, 5048 (1981).
. D Vanderbilt, 10.1103/PhysRevB.41.7892Phys. Rev. B. 417892D. Vanderbilt, Phys. Rev. B 41, 7892 (1990).
. P Giannozzi, S Baroni, N Bonini, M Calandra, R Car, C Cavazzoni, D Ceresoli, G L , P. Giannozzi, S. Baroni, N. Bonini, M. Calandra, R. Car, C. Cavazzoni, D. Ceresoli, G. L.
. M Chiarotti, I Cococcioni, A Dabo, S Corso, S De Gironcoli, G Fabris, R Fratesi, U Gebauer, C Gerstmann, A Gougoussis, M Kokalj, L Lazzeri, N Martin-Samos, F Marzari, R Mauri, S Mazzarello, A Paolini, L Pasquarello, C Paulatto, S Sbraccia, G Scandolo, A P Sclauzero, A Seitsonen, P Smogunov, R M Umari, Wentzcovitch, 10.1088/0953-8984/21/39/395502J. Phys. Condens. Matter. 215502Chiarotti, M. Cococcioni, I. Dabo, A. Dal Corso, S. de Gironcoli, S. Fabris, G. Fratesi, R. Gebauer, U. Gerstmann, C. Gougoussis, A. Kokalj, M. Lazzeri, L. Martin-Samos, N. Marzari, F. Mauri, R. Mazzarello, S. Paolini, A. Pasquarello, L. Paulatto, C. Sbraccia, S. Scan- dolo, G. Sclauzero, A. P. Seitsonen, A. Smogunov, P. Umari, and R. M. Wentzcovitch, J. Phys. Condens. Matter 21, 5502 (2009).
. A Kokalj, Comput. Mater. Sci. 28155A. Kokalj, Comput. Mater. Sci. 28, 155 (2003).
. M L Cohen, M Schlüter, J R Chelikowsky, S G Louie, 10.1103/PhysRevB.12.5575Phys. Rev. B. 125575M. L. Cohen, M. Schlüter, J. R. Chelikowsky, and S. G. Louie, Phys. Rev. B 12, 5575 (1975).
. H Rydberg, M Dion, N Jacobson, E Schröder, P Hyldgaard, S I Simak, D C Langreth, B I Lundqvist, 10.1103/PhysRevLett.91.126402Phys. Rev. Lett. 91126402H. Rydberg, M. Dion, N. Jacobson, E. Schröder, P. Hyldgaard, S. I. Simak, D. C. Langreth, and B. I. Lundqvist, Phys. Rev. Lett. 91, 126402 (2003).
. K Lee, É D Murray, L Kong, B I Lundqvist, D C Langreth, 10.1103/PhysRevB.82.081101Phys. Rev. B. 8281101K. Lee,É. D. Murray, L. Kong, B. I. Lundqvist, and D. C. Langreth, Phys. Rev. B 82, 081101 (2010).
. V R Cooper, 10.1103/PhysRevB.81.161104Phys. Rev. B. 81161104V. R. Cooper, Phys. Rev. B 81, 161104 (2010).
. I Hamada, M Tsukada, 10.1103/PhysRevB.83.245437Phys. Rev. B. 83245437I. Hamada and M. Tsukada, Phys. Rev. B 83, 245437 (2011).
. G Li, I Tamblyn, V R Cooper, H.-J Gao, J B Neaton, 10.1103/PhysRevB.85.121409Phys. Rev. B. 85121409G. Li, I. Tamblyn, V. R. Cooper, H.-J. Gao, and J. B. Neaton, Phys. Rev. B 85, 121409 (2012).
. S Baroni, S De Gironcoli, A Corso, P Giannozzi, 10.1103/RevModPhys.73.515Rev. Mod. Phys. 73515S. Baroni, S. de Gironcoli, A. dal Corso, and P. Giannozzi, Rev. Mod. Phys. 73, 515 (2001).
. D Porezag, M R Pederson, 10.1103/PhysRevB.54.7830Phys. Rev. B. 547830D. Porezag and M. R. Pederson, Phys. Rev. B 54, 7830 (1996).
. R G Greenler, 10.1063/1.1726462J. Chem. Phys. 44310R. G. Greenler, J. Chem. Phys. 44, 310 (1966).
. R Hexter, M G Albrecht, 10.1016/0584-8539(79)80143-9Spectrochim. Acta, Part A. 35233R. Hexter and M. G. Albrecht, Spectrochim. Acta, Part A 35, 233 (1979).
. N Sheppard, J Erkelens, 10.1366/0003702844555133Appl. Spectrosc. 38471N. Sheppard and J. Erkelens, Appl. Spectrosc. 38, 471 (1984).
|
[] |
[
"Universal Algebras of Hurwitz Numbers",
"Universal Algebras of Hurwitz Numbers"
] |
[
"A Mironov ",
"A Morozov ",
"S Natanzon "
] |
[] |
[] |
Infinite-dimensional universal Cardy-Frobenius algebra is constructed, which unifies all particular algebras of closed and open Hurwitz numbers and is closely related to the algebra of differential operators, familiar from the theory of Generalized Kontsevich Model.
| null |
[
"https://arxiv.org/pdf/0909.1164v2.pdf"
] | 115,162,331 |
0909.1164
|
2ffbb59ce371305dfcceff23d5f0c556d6c5d6d3
|
Universal Algebras of Hurwitz Numbers
24 Nov 2009
A Mironov
A Morozov
S Natanzon
Universal Algebras of Hurwitz Numbers
24 Nov 2009
Infinite-dimensional universal Cardy-Frobenius algebra is constructed, which unifies all particular algebras of closed and open Hurwitz numbers and is closely related to the algebra of differential operators, familiar from the theory of Generalized Kontsevich Model.
1. Introduction. Classical Hurwitz numbers of complex algebraic curves generate a commutative Frobenius algebra A m , which is naturally isomorphic to the center of the group algebra of symmetric group S m [1,2]. A natural extension is provided by Hurwitz numbers of seamed surfaces or foams [3,4]. These numbers determine a non-commutative Frobenius algebra B m and a homomorphism φ m : A m → B m . This set of data forms a Cardy-Frobenius algebra, which describes Klein topological field theories [5,6]. In the present paper the infinite-dimensional algebras A, B are described which unify all the Hurwitz numbers algebras. The construction is based on representation of the group S ∞ in the algebra M of formal differential operators, made from the matrix elements or directly from gl(∞) generators, which has its own value. M is actually the regular representation of the universal enveloping algebra of gl(∞) and our B is a subset in M, obtained by taking a kind of operator "traces". The homomorphism φ : A → M coincides with the representation of "cut-andjoin" operators constructed in [2], which naturally appear in the theory of Kontsevich integrals [7]- [12] and form an associative algebra, isomorphic to the algebra of S ∞ characters introduced in [2]. The simplest operators from B (in a different form) appeared in [13]. It would be also interesting to find a place for the algebras from [14] in this context.
2. Operators.
Let $\mathbb{N}$ denote the set of natural numbers (positive integers). Let $D_{ab}$ with $a, b \in \mathbb{N}$ be $gl(\infty)$ generators. They satisfy the commutation relation $[D_{ab}, D_{cd}] = \delta_{bc} D_{ad} - \delta_{ad} D_{cb}$ and can be conveniently represented in the regular representation by the differential operators
$$D_{ab} = \sum_{e=1}^{N} X_{ae} \frac{\partial}{\partial X_{be}},$$
and we denote $\tilde{N} = \{1, \ldots, N\} \subset \mathbb{N}$. Introduce "balanced" operators
$$V_{a_1 \ldots a_m | b_1 \ldots b_m} = \ : D_{a_1 b_1} \cdots D_{a_m b_m} : \ = \sum_{(e_1 \ldots e_m) \in \tilde{N}^m} X_{a_1 e_1} \cdots X_{a_m e_m} \frac{\partial}{\partial X_{b_1 e_1}} \cdots \frac{\partial}{\partial X_{b_m e_m}}$$
which form a basis in the universal enveloping algebra $Ugl(\infty)$. The second part of the formula is the explicit definition of the normal ordering in the first part. "Balanced" means that the number of X's is the same as the number of "momenta" ∂/∂X. The algebra M consists of linear combinations of such balanced operators. The algebras A and B are formed by summation over free indices $a_1, \ldots, b_m$ in two different ways.
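As a concrete illustration of these definitions, the following sketch (ours, not from the paper) realizes the operators $D_{ab}$ on polynomials in a finite N × N matrix of variables and checks the gl commutation relation symbolically; the finite truncation N = 3 and the test polynomial are arbitrary choices made only for the example.

```python
import sympy as sp

# Realize D_ab = sum_e X_ae d/dX_be on polynomials in the matrix entries X_ae
# and verify [D_ab, D_cd] = delta_bc D_ad - delta_ad D_cb on a test polynomial.
N = 3
X = sp.Matrix(N, N, lambda a, e: sp.Symbol(f"x_{a}{e}"))

def D(a, b, f):
    """Apply D_ab to the polynomial f."""
    return sum(X[a, e] * sp.diff(f, X[b, e]) for e in range(N))

f = (X[0, 0] * X[1, 2] + X[2, 1]**2)**2          # arbitrary test polynomial

def commutator(a, b, c, d, f):
    return sp.expand(D(a, b, D(c, d, f)) - D(c, d, D(a, b, f)))

for (a, b, c, d) in [(0, 1, 1, 2), (0, 1, 2, 0), (1, 1, 1, 2)]:
    lhs = commutator(a, b, c, d, f)
    rhs = sp.expand((D(a, d, f) if b == c else 0) - (D(c, b, f) if a == d else 0))
    assert sp.simplify(lhs - rhs) == 0
print("gl(N) commutation relation verified on a test polynomial.")
```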
Two-fold graphs and algebra B.
A graph (V a , E, V B ) is called two-fold, if its vertices are divided into two sets V a and V b , and all edges from E have one end in V a and another in V b . A homeomorphism of graphs ϕ :
(V a , E, V b ) → (V ′ a , E ′ , V ′ b ) is called isomorphism if ϕ(V a ) = (V ′ a ) and ϕ(V b ) = (V ′ b ). Denote through [(V a , E, V b )] the isomorphism class of (V a , E, V b ). Let Γ = [(V a , E, V b )]. Associate with every edge E i ∈ E a pair of numbers (a i , b i ) so that a i = a j , iff E i and E j have a common vertex in V a , while b i = b j iff E i and E j have a common vertex in V b .
We call the corresponding operator V a 1 ... am|b 1 ... bm with m = #(E) compatible with Γ. Then, as a straightforward generalization of the above definition of W [σ], denote through V(Γ) ⊂ M a sum over {a 1 , . . . , b m } of all operators, compatible with Γ, with certain normalization factor c(Γ|N). Denote through B the associative but non-commutative algebra formed by all the operators V(Γ).
Let B m be the set of isomorphism classes of the two-fold graphs with m edges. The associated vector space B m has a natural structure of Frobenius algebra, see s.2.3 of [4]. The product of classes
Γ 1 = [(V 1 a , E 1 , V 1 b )] and Γ 2 = [(V 2 a , E 2 , V 2 b )
] is a linear combination of classes Γ, consisting of graphs of the form (V 1 a , E, V 2 b ) obtained by identification of vertices of the same valence from V 1 b and V 2 a and gluing together the attached edges from E 1 and E 2 . The structure constants in Γ 1 Γ 2 = Γ C Γ Γ 1 Γ 2 Γ take graphs automorphisms into account. As generalization of a similar statement [2] for A we have:
Theorem. The structure constants of the algebra B,
$$V(\Gamma_1)\, V(\Gamma_2) = \sum_{\Gamma \in B} C^{V(\Gamma)}_{V(\Gamma_1) V(\Gamma_2)}\, V(\Gamma),$$
contain the structure constants of all $B_m$ in the following sense:
$$\lim_{N \to \infty} C^{V(\Gamma)}_{V(\Gamma_1) V(\Gamma_2)} = C^{\Gamma}_{\Gamma_1 \Gamma_2}, \quad \text{provided } |\Gamma_1| = |\Gamma_2| = |\Gamma| = m.$$
The structure constants of A are independent of N.
3. Algebra A. The group $S_\infty$ is formed by the permutations of $\mathbb{N}$ involving only finite sets of numbers. Define a representation $\phi : S_\infty \to M$, mapping $\sigma \in S_m$ into the sum
$$W[\Delta_\sigma] = \sum_{(a_1 \ldots a_m) \in \tilde{N}^m} V_{a_1 \ldots a_m | a_{\sigma(1)} \ldots a_{\sigma(m)}}.$$
This operator depends only on the conjugation class of σ (Young diagram $\Delta_\sigma$) and it is the cut-and-join operator constructed in [2]. These operators form an associative commutative algebra $A \subset M$.
1 P.N.Lebedev Physical Institute and ITEP; [email protected]
2 ITEP; [email protected]
5. Acknowledgment. Our work is partly supported by Russian Federal Nuclear Energy Agency, by RFBR grants 07-02-00878 (A.Mir.), 07-02-00645 (A.Mor.), 07-01-00593 (S.N.), by joint grants 09-02-90493-Ukr, 09-02-93105-CNRSL, 09-01-92440-CE, 09-02-91005-ANF and by Russian President's Grants of Support for the Scientific Schools NSh-3035.2008.2 (A.M.'s) and NSh-709.2008.1 (S.N.)
Mirror symmetry and elliptic curves. R Dijkgraaf, Prog.in Math. 129The moduli spaces of curvesR.Dijkgraaf, Mirror symmetry and elliptic curves, The moduli spaces of curves, Prog.in Math., 129 (1995) 149-163
A Mironov, A Morozov, S Natanzon, arXiv:0904.4227Complete set of cut-and-join operators in Hurwitz-Kontsevich theory. A.Mironov, A.Morozov and S.Natanzon, Complete set of cut-and-join operators in Hurwitz-Kontsevich theory, arXiv:0904.4227
Algebra of Hurwitz numbers for seamed surfaces. A Alexeevski, S Natanzon, Russian Math.Surveys. 614A.Alexeevski and S.Natanzon, Algebra of Hurwitz numbers for seamed surfaces, Russian Math.Surveys 61 [4] (2006) 767-769
Algebra of two-fold graphs and Hurwitz numbers for seamed surfaces, 72. A Alexeevski, S Natanzon, A.Alexeevski and S.Natanzon, Algebra of two-fold graphs and Hurwitz numbers for seamed surfaces, 72 [4] (2008) 3-24
Noncommutative two-dimensional topological field theories and Hurwitz numbers for real algebraic curves. A Alexeevski, S Natanzon, math.GT/0202164Selecta Math., New ser. 123A.Alexeevski and S.Natanzon, Noncommutative two-dimensional topological field theories and Hurwitz numbers for real algebraic curves, Selecta Math., New ser., 12 [3] (2006) 307-377, math.GT/0202164
Hurwitz numbers for regular coverings of surfaces by seamed surfaces and Cardy-Frobenius algebras of finite groups. A Alexeevski, S Natanzon, Amer.Math.Soc.Transl. 2242A.Alexeevski and S.Natanzon, Hurwitz numbers for regular coverings of surfaces by seamed surfaces and Cardy- Frobenius algebras of finite groups, Amer.Math.Soc.Transl. 224 [2] (2008) 1-25
Intersection theory on the moduli space of curves and the Airy function. M Kontsevich, Comm.Math.Phys. 147M.Kontsevich, Intersection theory on the moduli space of curves and the Airy function, Comm.Math.Phys. 147 (1992) 1-23
Towards unified theory of 2d gravity. S Kharchev, A Marshakov, A Mironov, A Morozov, A Zabrodin, hep-th/9201013Nucl. Phys. 380S.Kharchev, A.Marshakov, A.Mironov, A.Morozov and A.Zabrodin, Towards unified theory of 2d gravity, Nucl. Phys. B380 (1992) 181-240, hep-th/9201013
1-55, hep-th/9303139; Matrix Models as Integrable Systems. A Morozov, hep-th/9502091Phys.Usp. 37Integrability and Matrix ModelsA.Morozov, Integrability and Matrix Models, Phys.Usp. 37 (1994) 1-55, hep-th/9303139; Matrix Models as Inte- grable Systems, hep-th/9502091
2d gravity and matrix models. I. 2d gravity. A Mironov, hep-th/9312212Int.J.Mod.Phys. 9A.Mironov, 2d gravity and matrix models. I. 2d gravity, Int.J.Mod.Phys. A9 (1994) 4355, hep-th/9312212
A Alexandrov, A Mironov, A Morozov, P Putrov, arXiv:0811.2825Partition Functions of Matrix Models as the First Special Functions of String Theory. II. Kontsevich Model. A.Alexandrov, A.Mironov, A.Morozov and P.Putrov, Partition Functions of Matrix Models as the First Special Functions of String Theory. II. Kontsevich Model, arXiv:0811.2825
Generation of Matrix Models by W-operators. A Morozov, Sh, Shakirov, arXiv:0902.2627JHEP. 090464A.Morozov and Sh.Shakirov, Generation of Matrix Models by W-operators, JHEP 0904:064, 2009, arXiv:0902.2627
Disk single Hurwitz numbers, to appear in Funk. S Natanzon, arXiv:0804.0242An. and Apl. S.Natanzon, Disk single Hurwitz numbers, to appear in Funk. An. and Apl., arXiv:0804.0242)
S Loktev, S Natanzon, arXiv:0910.3813Generalized Topological Field Theories from Group Representations. S.Loktev and S.Natanzon, Generalized Topological Field Theories from Group Representations, arXiv:0910.3813
|
[] |
[
"On Market Design and Latency Arbitrage 1",
"On Market Design and Latency Arbitrage 1",
"On Market Design and Latency Arbitrage 1",
"On Market Design and Latency Arbitrage 1"
] |
[
"Wolfgang Kuhle [email protected] \nMax Planck Institute for Social Law and Social Policy\nZhejiang University\nHangzhou, MunichChina, Germany\n",
"Mea \nVSE\nPragueCzech Republic\n",
"Wolfgang Kuhle [email protected] \nMax Planck Institute for Social Law and Social Policy\nZhejiang University\nHangzhou, MunichChina, Germany\n",
"Mea \nVSE\nPragueCzech Republic\n"
] |
[
"Max Planck Institute for Social Law and Social Policy\nZhejiang University\nHangzhou, MunichChina, Germany",
"VSE\nPragueCzech Republic",
"Max Planck Institute for Social Law and Social Policy\nZhejiang University\nHangzhou, MunichChina, Germany",
"VSE\nPragueCzech Republic"
] |
[] |
We argue that contemporary stock market designs are, due to traders' inability to fully express their preferences over the execution times of their orders, prone to latency arbitrage. In turn, we propose a new order type which allows traders to specify the time at which their orders are executed after reaching the exchange. Using this order type, traders can synchronize order executions across different exchanges, such that high-frequency traders, even if they operate at the speed of light, can no-longer engage in latency arbitrage.
|
10.2139/ssrn.3997709
|
[
"https://arxiv.org/pdf/2202.00127v1.pdf"
] | 246,442,245 |
2202.00127
|
3c6235473cda9229452e7e467f9b0499309e0858
|
On Market Design and Latency Arbitrage 1
26 Dec 2021
Wolfgang Kuhle [email protected]
Max Planck Institute for Social Law and Social Policy
Zhejiang University
Hangzhou, MunichChina, Germany
Mea
VSE
PragueCzech Republic
On Market Design and Latency Arbitrage 1
26 Dec 2021Market DesignHigh-frequency TradingLatency ArbitrageLaw of one Price JEL: D47
We argue that contemporary stock market designs are, due to traders' inability to fully express their preferences over the execution times of their orders, prone to latency arbitrage. In turn, we propose a new order type which allows traders to specify the time at which their orders are executed after reaching the exchange. Using this order type, traders can synchronize order executions across different exchanges, such that high-frequency traders, even if they operate at the speed of light, can no-longer engage in latency arbitrage.
Introduction
Investors, who minimize the cost at which they acquire a given number of shares, have an incentive to break-up large orders into several smaller orders, which are then placed on different exchanges.
Moreover, in order to avoid that a high frequency trader (HFT) can front-run some of their orders, investors have an incentive to place orders such that they are executed simultaneously across different exchanges. Such simultaneous executions are, however, difficult to implement in a world with random latencies. 2 1 I thank Eric Budish and Al Roth for bringing the topics of market design and high-frequency trading to my attention. First draft August 2019.
2 That is, suppose an investor sends two buy orders to two different exchanges, and one of these orders, e.g. Order 1, reaches Exchange 1 some time before Order 2 reaches Exchange 2. This scenario allows a HFT, who detects the early execution of Order 1 on Exchange 1, to quickly buy on Exchange 2. In turn, the HFT can sell at a profit when the investor's Order 2 reaches Exchange 2. High frequency traders operate dedicated glass-fiber networks for this purpose. Such networks allow for (one way) latencies of roughly 4 milliseconds (ms) between Chicago and New York. At the same time, an investor, e.g. from Albany, faces a distribution of latencies: Albany-New York (µ = 51ms, σ = 28ms) and Albany-Chicago (µ = 103ms, σ = 25.7ms). That is, all orders sent from Albany to New York and to Chicago, which do not arrive within 4 ms of one-another, are subject to latency arbitrage. Using data from the NYSE and the CME, Budish et al. (2015) show that such arbitrage opportunities are roughly worth 75
In the present paper, we study latency arbitrage in a model where investors/traders buy and sell one homogenous asset on two geographically distinct exchanges. Trading is complicated by randomly varying latencies, and by the presence of high-frequency traders (HFTs), who enjoy lower latencies than all other market participants. Based on this model, we propose a new order type, which allows investors to specify the time at which their orders are executed after reaching the exchange. That is, we propose a market design, where traders can, in addition to choosing when to send orders, choose the time at which an order is executed after it reaches the exchange.
In turn, we show that traders can use this order type to better synchronize order executions across exchanges, such that HFTs can no-longer engage in latency arbitrage.
Related Literature: Budish et al. (2015) have recently argued that the competition for lower latencies among HFTs is "a symptom of a flawed market design." In turn, to reduce the rents collected by HFTs, Budish et al. (2015), p. 1549, propose that exchanges should switch from "continuoustime trading" to "discrete-time trading." That is, Budish et al. (2015) argue that exchanges should only execute orders at prescribed, discrete, points in time. Put differently, Budish et al. (2015) propose to restrict traders' choices regarding the execution times of their trades. In the present paper we show that giving traders additional, rather than fewer, choice variables may also resolve the problem of latency arbitrage. Taking this view, "discrete-time trading" may be an unnecessary constraint on the market place. Budish et al. (2015) observe that the price of the SPY in Chicago is not perfectly correlated with the price of the SPY on the NYSE. Put yet differently, Budish et al. (2015) show that the law of one price does not hold at very short time horizons. This phenomenon is (naturally) even more pronounced, e.g. Epps (1979), in older data sets.
have placed an order in a manner which creates an arbitrage opportunity. In turn, we show that creating such arbitrage opportunities is (i) costly for the trader and (ii) can be avoided if the trader uses the order type proposed here. That is, the arbitrage opportunities, upon which Budish et al.
(2015) build their argument for slowing markets, are no-longer present. Budish et al. (2015), and Aquilina et al. (2020) review 4 several alternative proposals aimed at reducing latency arbitrage. Many of these proposals suggest to Tobin-tax financial transactions, or to tax high frequency trading, or to tax low latency infrastructure. Other proposals argue for reductions in the speed at which orders are executed, or for reductions in the frequency with which markets open, or limits to the speed with which market participants can place/cancel orders. Yet different proposals suggest to introduce additional noise in the placement times of orders, to dilute the speed advantage of HFTs. Another branch of models suggest that fast traders should compete in a "fast market," and that slow traders should trade in a "slow market." These proposals have in common that they place additional restrictions on markets and market participants. The present paper thus offers an alternative perspective: we argue that latency arbitrage can be addressed by removing, rather than adding, to the restrictions that market participants face.
Organization: Section 2 studies a deterministic benchmark. Section 3 introduces the latency friction, and shows that contemporary market designs, where orders are executed as soon as they reach the exchange, are prone to latency arbitrage; even if traders strategically delay the sending of orders. We also note that strategic order delay generates price distributions, which are in line with the empirical observations in Budish et al. (2015). Section 4 proposes an order type, which helps traders to synchronize order executions across exchanges. Using recent latency data, Section 4.1 illustrates the practical effectiveness of the order type that we propose. Section 5 concludes.
Deterministic Benchmark
One asset is traded on two exchanges m = L, S. Each exchange has its own limit order book/excess demand function for the asset. The number/density of shares f (P ), which are on offer at each price P , differs across exchanges. To distinguish between the two exchanges, we assume that there is a large exchange L, which is more liquid than the smaller exchange S in the sense that f L (P ) > f S (P )∀P . 5 We also assume that, unless a large order is placed in a manner that brings 4 See also Stiglitz (2014) and Linton and Mahmoodzadeh (2017) for recent reviews on high-frequency trading, and Roth and Xing (1994); Roth and Ockenfels (2002) for a broader market design perspective on the optimal time that markets open and close. 5 The CME-Group (2016), p.3, estimates that its market for the SPY future is 7 times more liquid than the NYSE's market for the SPY ETF. The CME-Group (2016), p.3, also estimates that buying 100 Million worth of the S&P prices into temporary disequilibrium, the market satisfies the law of one price P L = P S = P 0 . 6 A trader, who buys all shares offered for prices less or equal P * m on exchange m, receives a quantity X m :
$$X_m = \int_{P_0}^{P_m^*} f_m(P)\, dP, \qquad m = L, S. \qquad (1)$$
The cost of buying a bundle of stocks X m in market m is thus:
$$E_m = \int_{P_0}^{P_m^*} P\, f_m(P)\, dP, \qquad m = L, S. \qquad (2)$$
To minimize the cost of acquiring a given bundle of stocks X, the trader buys shares on both exchanges:
$$\min_{P_L^*,\, P_S^*} \ \int_{P_0}^{P_L^*} P\, f_L(P)\, dP + \int_{P_0}^{P_S^*} P\, f_S(P)\, dP \quad \text{s.t.} \quad X_L + X_S = X. \qquad (3)$$
The first-order conditions to problem (3) imply that:
$$P_L^* = P_S^* = P^*. \qquad (4)$$
Hence, we have:
Lemma 1. Large traders split orders between both marketplaces such that the law of one price is not violated.
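A minimal numerical illustration of problem (3) and Lemma 1, assuming hypothetical constant order-book densities (the parameter values below are illustrative, not calibrated to any exchange):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Sketch of problem (3) with hypothetical constant book densities
# f_L(P) = 35 and f_S(P) = 5 shares per unit price above P0 (L is more liquid).
P0, X_total = 100.0, 400.0
fL, fS = 35.0, 5.0

def cost(P_star, f):                            # E_m = integral_{P0}^{P*} P f dP
    return f * (P_star**2 - P0**2) / 2.0

def total_cost(x_on_S):                         # split X_total = x_on_S + x_on_L
    x_on_L = X_total - x_on_S
    PS = P0 + x_on_S / fS                       # marginal price reached on S
    PL = P0 + x_on_L / fL                       # marginal price reached on L
    return cost(PS, fS) + cost(PL, fL)

res = minimize_scalar(total_cost, bounds=(0.0, X_total), method="bounded")
x_S = res.x
P_S, P_L = P0 + x_S / fS, P0 + (X_total - x_S) / fL
print(f"optimal split: X_S = {x_S:.1f}, X_L = {X_total - x_S:.1f}")
print(f"marginal prices P_S* = {P_S:.3f}, P_L* = {P_L:.3f}  (equal, as in Eq. (4))")
```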
Random Latency
Suppose now that the trader communicates his orders via a telecommunication network with random latency. That is, orders to exchanges L and S may be delayed such that one order is executed earlier than the other. A high-frequency trader (HFT) can exploit this. That is, once he observes that, e.g., the price on the small exchange increases to P * S , he knows from equation (4) that there is an order X * L , P * L on its way to exchange L. He thus quickly buys the quantity X * L on the large exchange, in order to sell at a higher price once the trader's delayed order arrives at exchange L. 7
This yields a rent for the HFT:
$$P_L^* X_L - \int_{P_0}^{P_L^*} P\, f_L(P)\, dP > 0, \qquad (5)$$
500 costs 1.25 basis points (BP) on the CME while the cost is 2 BP if the same amount of the SPY ETF is bought on the NYSE. 6 Appendix A presents such a market place, consisting of two local markets/exchanges, each with a distinct (excess) demand function/limit-order book. 7 If the HFT's order arrives after the order of the trader, the HFT gets no fill and cancels the order. That is, the HFT acts as a pure arbitrageur in our model. Using our assumptions on market liquidity, namely f S (P ) < f L (P ), we can rank these outcomes:
Lemma 2. E sim < E L < E S .
8 Note that, even if the trader knew that his order is front run buy the HFT with probability one, he would still buy from the HFT: executing the whole order on just one exchange would increase the (short-run) price on that exchange beyond the price P * that he is paying when he buys from the HFT. Note also that a HFT, who acts as a pure arbitrageur, carries no inventory. That is, he will not buy more than what the investor is "willing" to buy from him.
Proof.
$$E_{sim} = \int_{P_0}^{P^*} P\, f_L(P)\, dP + \int_{P_0}^{P^*} P\, f_S(P)\, dP < \int_{P_0}^{P^*} P\, f_L(P)\, dP + P^* X_S = E_L \qquad (6)$$
$$E_L = \int_{P_0}^{P^*} P\, f_L(P)\, dP + P^* X_S < P^* X_L + \int_{P_0}^{P^*} P\, f_S(P)\, dP = E_S \qquad (7)$$
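The ranking of Lemma 2 can also be checked numerically for the same kind of hypothetical constant-density books; all parameter values below are again illustrative assumptions.

```python
# Sketch of Lemma 2 with hypothetical constant book densities fL > fS: when one
# leg is front-run, that leg is bought at the flat price P* instead of walking the book.
P0, P_star = 100.0, 110.0
fL, fS = 35.0, 5.0                               # shares per unit price (L is more liquid)
XL, XS = fL * (P_star - P0), fS * (P_star - P0)  # quantities bought on each exchange

def book_cost(f):                                # integral_{P0}^{P*} P f dP
    return f * (P_star**2 - P0**2) / 2.0

E_sim = book_cost(fL) + book_cost(fS)            # both legs executed simultaneously
E_L = book_cost(fL) + P_star * XS                # trade revealed on L, S-leg front-run
E_S = P_star * XL + book_cost(fS)                # trade revealed on S, L-leg front-run
assert E_sim < E_L < E_S                         # the ranking of Lemma 2
print(E_sim, E_L, E_S)
print("(E_S - E_sim)/(E_L - E_sim) =", (E_S - E_sim) / (E_L - E_sim))
```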
Optimal Order Delay
In this section we study how investors can cope with random latencies under the contemporary market design, where orders are executed as soon as they arrive at the exchange. More precisely, we show that investors have an incentive to strategically delay orders such that trades are revealed more often on the liquid exchange. Put differently, the contemporary market design forces traders to tradeoff early execution on one exchange against early execution on the other exchange.
Simultaneous execution of orders, however, is very difficult to implement. In turn, with these observations in place, we present a simple solution to the problem of latency arbitrage in Section 4.
We denote order delays to the small exchange, relative to the order to the large exchange, by δ ∈ ℝ. 10 To minimize the expected cost of purchasing a given quantity of the asset, the trader chooses δ such that:
$$\min_{\delta}\ \pi_{sim}(\delta, H)\, E_{sim} + \pi_L(\delta, H)\, E_L + \pi_S(\delta, H)\, E_S, \qquad \pi_{sim} + \pi_L + \pi_S = 1. \qquad (8)$$
9 CME-Group (2016), p. 3, discuss that the Chicago market for the S&P 500 is significantly more liquid than that of the NYSE. 10 That is, δ = 10 means that the order to exchange S is delayed by 10 milliseconds (ms), and δ = −20 means that the order to the large exchange is sent 20 ms after the order to exchange S was sent.
Where π sim , π L , π S are the probabilities with which orders are executed simultaneously, or revealed on the large exchange or on the small exchange, respectively. To evaluate (8), we first work with general probabilities. In turn, we assume that latencies are normally distributed, and solve for traders' optimal order delay explicitly.
Regarding probabilities we assume that $\pi_{L,\delta} := \partial \pi_L/\partial \delta \ge 0$ and $\pi_{S,\delta} := \partial \pi_S/\partial \delta \le 0$. Since $\pi_{sim} + \pi_L + \pi_S = 1$, we have $\pi_{sim,\delta} := \partial \pi_{sim}/\partial \delta \lessgtr 0$. The first order condition to problem (8) is:
π sim,δ (E sim − E L ) = π S,δ (E L − E S ),(9)
respectively
π L,δ (E L − E sim ) + π S,δ (E S − E sim ) = 0.(10)
We thus have 11
Lemma 3. At an interior solution δ * , we have π sim,δ < 0 and |π L,δ | > |π S,δ |.
Proof. π sim,δ < 0 follows from (9) and Lemma 2. |π L,δ | > |π S,δ | follows from (10) and Lemma 2.
Lemma 3 indicates that traders do not maximize the probability of simultaneous execution.
Instead, they lean towards execution on the large exchange as illustrated in Diagram 2. In Appendix B we assume that latencies are normally distributed, and solve explicitly for δ * .
Diagram 2: Delaying Orders. Traders have an incentive to delay orders to the small exchange.
The optimal delay δ * does not maximize the probability of simultaneous execution.
11 If latency has compact support, there exists an interior optimal delay δ * as long as H > 0.
Synchronized Order Placement
Suppose now that orders can be accompanied by a time identifier, which specifies the exact time T at which the order is added to the exchange's order-book/executed. That is, orders are sent out to the exchanges at time t = 0, with an identifier T m > 0, m = L, S, indicating the time at which the exchange adds the respective order to its order book, i.e. the time when the trade is executed.
That is, a trade is implemented via the following time line:
1. Orders are sent to exchanges. In addition to price and quantity, these orders also specify the exact time of execution/addition to the limit order book.
2. Once orders arrive at the respective exchanges, they are not executed/placed until the specified placement time is reached. Exchanges are not allowed to publish the receipt of these orders until the placement time has been reached.
3. If an order arrives at the exchange after the desired placement time, it is placed immediately.
Lemma 4. Under the new order type, traders can choose placement times such that simultaneous execution is ensured.
Proof. We note that:
$$\pi_{sim} \ge P(l_S \le T+H)\, P(l_L \le T+H) + P(|l_S - l_L| \le H)\bigl(1 - P(l_S \le T+H)\, P(l_L \le T+H)\bigr) \ge P(l_S \le T+H)\, P(l_L \le T+H) \qquad (11)$$
$$\pi_{sim} = 1 - \pi_L - \pi_S \qquad (12)$$
Equations (11) with (12), and the fact that π L , π S ≥ 0 and lim T →∞ P (l S ≤ T +H)P (l L ≤ T +H) = 1, imply: lim T →∞ π sim = 1, lim T →∞ π L = 0, and lim T →∞ π S = 0.
That is, simultaneous execution can be ensured via the extended order type, rather than by delaying messages to trade off the probability of early execution on one exchange against the probability of early execution on the other. Put differently, increases in placement time T simultaneously reduce the probabilities π L and π S with which latency arbitrage occurs, and unambiguously increase the probability of simultaneous order placement. 12
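A short sketch of the lower bound (11), using the normal latency distributions of the Albany calibration in Section 4.1; the specific values of T are arbitrary and the printed numbers are only lower bounds on π_sim under these assumptions.

```python
import numpy as np
from scipy.stats import norm

# Lower bound (11) on the probability of simultaneous placement as a function
# of the execution-time identifier T (all times in ms, independent normal latencies).
mu_S, sigma_S = 51.0, 28.0          # Albany -> NY
mu_L, sigma_L = 103.0, 25.7         # Albany -> Chicago
H = 4.0                             # HFT one-way Chicago <-> NY time

def pi_sim_lower_bound(T):
    p_both_in_time = norm.cdf(T + H, mu_S, sigma_S) * norm.cdf(T + H, mu_L, sigma_L)
    p_close = norm.cdf(H, mu_S - mu_L, np.hypot(sigma_S, sigma_L)) \
              - norm.cdf(-H, mu_S - mu_L, np.hypot(sigma_S, sigma_L))
    return p_both_in_time + p_close * (1.0 - p_both_in_time)

for T in [0, 50, 100, 150, 200]:
    print(f"T = {T:3d} ms  ->  pi_sim >= {pi_sim_lower_bound(T):.4f}")
```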
Calibration
To illustrate the advantage of the order type proposed here, let us consider the case of an Albany (New York State) based investor, who trades the S&P 500 in New York City and Chicago:
1. The HFT's one-way Chicago to New York time is roughly H = 4 ms.
2. Latencies 13 are distributed: Albany-NY (µ S = 51, σ S = 28) and Albany-Chicago (µ L = 103, σ L = 25.7).
3. Relative excess cost (E S − E sim )/(E L − E sim ): CME-Group (2016) estimate the price impact of placing a 100 million SPY order as 1.25 BP. Placing the same order on the NYSE has an estimated impact of 2 BP. Using the linear model in Appendix A, we thus have (E S − E sim )/(E L − E sim ) = 2/1.25 = 1.6.
13 We use latency data provided by wondernetwork (2021), 27.05.2021, 9:22 GMT.
In Appendix C, we use these data to compute execution probabilities for (i) the myopic scenario,
where orders are simply sent simultaneously to the exchanges (ii) the model of Section 3.1, where traders strategically delay the sending of orders and (iii) for the model of Section 4, where traders can specify the execution times of their orders. First, we find that only 96% of trades are subject to latency arbitrage, when orders are simply sent simultaneously by investors. Second, for the model of optimal order delay of Section 3.1 and Appendix B, we find that a large majority, i.e. roughly 98% of trades, are first revealed in Chicago. This finding is in line with the empirical evidence provided in Budish et al. (2015). 14 Moreover, the incentive to skew early executions to the large exchange is so strong that the probability of simultaneous execution is only 1%, i.e. even lower than the 4% of simultaneous executions that we observed in the scenario where orders were not strategically delayed. Finally, once traders can use the order type proposed in Section 4, they can, e.g. set T=150 ms, to ensure that over 99% of trades are executed simultaneously. Put differently,
given that the mean latency from Albany to Chicago is 103 ms, a mean delay of 0.047 seconds ensures that latency arbitrage is no-longer possible.
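The following sketch recomputes the order-delay scenarios of Appendix C from the calibration above; the rounding may differ slightly from the approximate figures quoted in the text, and the excess-cost ratio 1.6 is taken from Appendix A.

```python
import numpy as np
from scipy.stats import norm

# Albany calibration (times in ms) and the two send-time strategies of Appendix C.
mu_S, sigma_S, mu_L, sigma_L, H = 51.0, 28.0, 103.0, 25.7, 4.0
ratio = 1.6                                     # (E_S - E_sim)/(E_L - E_sim)
sd = np.hypot(sigma_S, sigma_L)                 # std. dev. of l_S - l_L

def probs(delta):
    gamma = mu_S - mu_L + delta                 # E[l_S - l_L] given delay delta
    pi_L = 1.0 - norm.cdf((-gamma + H) / sd)    # trade revealed first on L, Eq. (18)
    pi_S = norm.cdf((-gamma - H) / sd)          # trade revealed first on S, Eq. (19)
    return pi_S, pi_L, 1.0 - pi_S - pi_L

delta_star = mu_L - mu_S + (sigma_S**2 + sigma_L**2) / (2.0 * H) * np.log(ratio)
print("no delay      (pi_S, pi_L, pi_sim):", np.round(probs(0.0), 3))
print("optimal delay (pi_S, pi_L, pi_sim):", np.round(probs(delta_star), 3))
```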
Conclusion
We propose an order type, which allows traders to specify the time at which their orders are executed after reaching the exchange. Using this order type, traders can synchronize order executions across exchanges such that HFTs can no-longer engage in latency arbitrage. Put differently, the order type proposed here allows large traders to place orders such that the law of one price holds, even at "high-frequency time horizons."
Earlier proposals in the literature, aimed at reducing latency arbitrage, require taxes on financial transactions, or place restrictions on the speed at which trades are executed, orders placed, or prices quoted. The present paper thus offers an alternative perspective, where a relaxation, rather than a tightening, of constraints helps traders, and thus the entire market, to avoid the cost of latency arbitrage.
A Linear Model
We study a linear demand system
$$P_S = a - b X_S, \quad a, b > 0 \qquad (13)$$
$$P_L = c - d X_L, \quad c, d > 0 \qquad (14)$$
$$X_S + X_L = \bar{X} \qquad (15)$$
$$P_S = P_L = P_0 , \qquad (16)$$
which rewrites as:
X S = a + dX − c b + d X L = −a + bX + c b + d P S = a − b a + dX − c b + d P L = c − d −a + bX + c b + d .
If a trader buys a number of sharesX, we obtain new (long-run) prices and quantities
X S = a + d(X −X) − c b + d , X L = −a + b(X −X) + c b + d P S = a − b a + d(X −X) − c b + d P L = c − d −a + b(X −X) + c b + d .
Strategy of HFT and the Limit Order Book For the current demand/supply system we have:
$$\frac{dX_S}{dP_S} = -\frac{1}{b}, \qquad \frac{dX_L}{dP_L} = -\frac{1}{d}$$
the cost of purchasing a quantity of sharesX is thus:
$$\bar{X} = -\int_{P_0}^{P^*} \frac{dX_S}{dP_S}\, dP_S - \int_{P_0}^{P^*} \frac{dX_L}{dP_L}\, dP_L = (P^* - P_0)\Bigl(\frac{1}{b} + \frac{1}{d}\Bigr)$$
Expenditures, in the case of simultaneous execution, are:
$$E_{sim} = -\int_{P_0}^{P^*} P_S \frac{dX_S}{dP_S}\, dP_S - \int_{P_0}^{P^*} P_L \frac{dX_L}{dP_L}\, dP_L = \frac{1}{2}\bigl(P^{*2} - P_0^2\bigr)\Bigl(\frac{1}{b} + \frac{1}{d}\Bigr)$$
Expenditures, in the case where the trade is revealed on exchange L, are:
$$E_L = -P^* \int_{P_0}^{P^*} \frac{dX_S}{dP_S}\, dP_S - \int_{P_0}^{P^*} P_L \frac{dX_L}{dP_L}\, dP_L = P^{*2}\frac{1}{b} - P^* P_0 \frac{1}{b} + \frac{1}{2}\bigl(P^{*2} - P_0^2\bigr)\frac{1}{d}$$
Expenditures, in the case where the trade is revealed on exchange S, are:
$$E_S = -\int_{P_0}^{P^*} P_S \frac{dX_S}{dP_S}\, dP_S - P^* \int_{P_0}^{P^*} \frac{dX_L}{dP_L}\, dP_L = P^{*2}\frac{1}{d} - P^* P_0 \frac{1}{d} + \frac{1}{2}\bigl(P^{*2} - P_0^2\bigr)\frac{1}{b}$$
Hence we have
$$E_L - E_{sim} = \frac{1}{2b}(P^* - P_0)^2 > 0, \qquad E_S - E_{sim} = \frac{1}{2d}(P^* - P_0)^2 > 0, \qquad \frac{E_S - E_{sim}}{E_L - E_{sim}} = \frac{b}{d}$$
Taking into account the CME-Group (2016), p.3, estimate that a purchase worth 100 Million in the SPY increases prices by ∆P L = 1.25 BP and ∆P S = 2 BP respectively. Moreover, for demands (13) and (14) we have d = ∆P L ∆X L as well as b = ∆P S ∆X S . Hence, we can recall (17) to obtain
$$\frac{E_S - E_{sim}}{E_L - E_{sim}} = \frac{b}{d} \approx \frac{2}{1.25} = 1.6.$$
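The algebra of Appendix A can be verified symbolically; the sketch below (ours, not part of the paper) encodes the book densities 1/b and 1/d implied by the linear demands and recovers the excess-cost expressions and their ratio b/d.

```python
import sympy as sp

# Symbolic check of Appendix A: excess costs of being front-run are quadratic
# in (P* - P0) and their ratio equals b/d.
P, P0, Ps, b, d = sp.symbols('P P_0 P_star b d', positive=True)

# order-book densities implied by the linear demands (13)-(14): |dX/dP| = 1/b, 1/d
E_sim = sp.integrate(P / b, (P, P0, Ps)) + sp.integrate(P / d, (P, P0, Ps))
E_L   = sp.integrate(P / d, (P, P0, Ps)) + Ps * (Ps - P0) / b    # S-leg bought flat at P*
E_S   = sp.integrate(P / b, (P, P0, Ps)) + Ps * (Ps - P0) / d    # L-leg bought flat at P*

print(sp.simplify(E_L - E_sim))                    # equals (P* - P0)**2 / (2*b)
print(sp.simplify(E_S - E_sim))                    # equals (P* - P0)**2 / (2*d)
print(sp.simplify((E_S - E_sim) / (E_L - E_sim)))  # equals b/d
```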
B Normally Distributed Latency
Delivery times for normally distributed network noise are:
$$l_S = \mu_S + \delta + \sigma_S\, \xi, \qquad \xi \sim N(0, 1) \qquad (17)$$
$$l_L = \mu_L + \sigma_L\, \varepsilon, \qquad \varepsilon \sim N(0, 1)$$
The HFT's order delivery time is H > 0. To work with these latencies, we define x := l S − l L and γ := E[l S − l L ] = µ S − µ L + δ. Moreover, we assume that latencies are uncorrelated, such that α := 1/Var(l S − l L ) = 1/(σ S ² + σ L ²). We denote the cumulative standard normal distribution function by Φ(·) and its derivative by φ(·). The probabilities of early revelation on exchanges L, S, as well as the probability of simultaneous execution, are thus:
$$\pi_L = P(l_S - l_L > H) = 1 - \Phi\bigl(\sqrt{\alpha}\,(-\gamma + H)\bigr) \qquad (18)$$
$$\pi_S = P(l_S - l_L < -H) = \Phi\bigl(\sqrt{\alpha}\,(-\gamma - H)\bigr) \qquad (19)$$
$$\pi_{sim} = P(|l_S - l_L| < H) = 1 - \pi_L - \pi_S \qquad (20)$$
Given (18) and (19), the first order condition for optimal order delay (10) can be rewritten as:
$$\phi\bigl(\sqrt{\alpha}\,(-\gamma + H)\bigr)(E_L - E_{sim}) = \phi\bigl(\sqrt{\alpha}\,(-\gamma - H)\bigr)(E_S - E_{sim}). \qquad (21)$$
Recalling $\phi(z) = \frac{1}{\sqrt{2\pi}}\, e^{-z^2/2}$, we solve (21):
Lemma 5. Orders to the small exchange are delayed such that $\delta^* = \mu_L - \mu_S + \frac{\sigma_S^2 + \sigma_L^2}{2H}\ln\!\Bigl(\frac{E_S - E_{sim}}{E_L - E_{sim}}\Bigr)$, and $\gamma^*(\delta^*) = \frac{\sigma_S^2 + \sigma_L^2}{2H}\ln\!\Bigl(\frac{E_S - E_{sim}}{E_L - E_{sim}}\Bigr) > 0$, and $\pi_L(\delta^*) > \pi_S(\delta^*)$.
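As a check of Lemma 5, the following sketch (ours) solves the first-order condition (21) numerically for the Albany calibration and compares the root with the closed-form delay; the excess-cost ratio 1.6 is taken from Appendix A.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

# Solve the FOC (21) for the delay delta and compare with Lemma 5 (times in ms).
mu_S, sigma_S, mu_L, sigma_L, H = 51.0, 28.0, 103.0, 25.7, 4.0
ratio = 1.6                                   # (E_S - E_sim)/(E_L - E_sim)
var = sigma_S**2 + sigma_L**2                 # 1/alpha
sd = np.sqrt(var)

def foc(delta):
    gamma = mu_S - mu_L + delta
    # phi(sqrt(alpha)(-gamma+H))*(E_L - E_sim) - phi(sqrt(alpha)(-gamma-H))*(E_S - E_sim),
    # divided through by (E_L - E_sim)
    return norm.pdf((-gamma + H) / sd) - norm.pdf((-gamma - H) / sd) * ratio

delta_numeric = brentq(foc, -500.0, 500.0)
delta_closed = mu_L - mu_S + var / (2.0 * H) * np.log(ratio)
print(delta_numeric, delta_closed)            # both approx. 136.9 ms
```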
C Early Execution Probabilities
Orders without strategic delay: without delay, i.e. δ = 0 such that γ = µ S − µ L + δ = −52, we have π S ≈ Φ(48/38) ≈ 0.89, π L = 1 − Φ(−56/38) ≈ 0.07 and π sim ≈ 0.04. Orders with strategic delay: recalling Appendix B, we have optimal delay δ * = µ L − µ S + ((σ S ² + σ L ²)/(2H)) ln((E S − E sim )/(E L − E sim )), and γ * (δ * ) = ((σ S ² + σ L ²)/(2H)) ln((E S − E sim )/(E L − E sim )) ≈ (1444/8) ln(1.6) ≈ 84.8. Recalling that α = 1/1444 (so that 1/√α ≈ 38), substitution into (18)-(20) yields π S ≈ Φ(−88.8/38) ≈ 0.01, π L ≈ 1 − Φ(−80.8/38) ≈ 0.98 and thus π sim ≈ 0.01. Orders with strategic placement time: recalling that π sim ≥ P(l S ≤ T + H)P(l L ≤ T + H) + P(|l S − l L | ≤ H)(1 − P(l S ≤ T + H)P(l L ≤ T + H)) = Φ(103/5.2)Φ(51/5) + 0.04(1 − 0.99) ≈ 0.99, choosing, e.g., T = 150 ms yields π sim ≈ 0.99, i.e. effectively ensures simultaneous execution of orders.
the three possible outcomes faced by an investor I. Top left: the large exchange L reveals the trade to the HFT. For this case, we denote the investor's total expenditures by E L . Top right, the small exchange S reveals the trade. For this case, we denote the investor's total expenditures by E S . Bottom left, orders are executed simultaneously. In this case, the investor pays E sim .
The observation that traders delay orders to the small exchange, to increase the probability with which trades are revealed early on the large exchange, is in line with the empirical evidence inBudish et al. (2015), p. 1569, who find that "[t]he majority (88.56 percent) of arbitrage opportunities in our data set are initiated by a price change in ES [Chicago], with the remaining 11.44 percent initiated by a price change in SPY[NYSE]." 9 Moreover, they remark that this "is consistent with the practitioner perception that the ES [Chicago] market is the center for price discovery in the S&P 500 index." UnlikeBudish et al. (2015), however, we do not treat the early executions in Chicago as an exogenous empirical fact. Early executions in Chicago are an endogenous feature of our model.
3 illustrates how choosing execution time T increases the probability of simultaneous order execution. The shaded area on the left indicates the probability of simultaneous order placement when traders can specify execution time T , conditional on a latency realizationl S . The shaded area on the right indicates the probability of simultaneous order placement when traders cannot specify execution time T , conditional on a latency realizationl S .
3 :
3Probability of Synchronized Order Placement: the shaded areas illustrate the probability of simultaneous order placement, conditional on a latencyl S . The probability of synchronized order placement (on the left), where traders can choose the execution time T is much larger than under the current market design (on the right), where orders are executed immediately after reaching the exchange.
L.
2H ln( E S −E sim E L −E sim ) ≈ 1444 8 ln(1.6) ≈ 84.8. Recalling that α = Substitution into (18)-(20) yields π S ≈ Φ( −88.8 38 ) ≈ 0.01, π L ≈ 1 − Φ( −80.8 38 ) ≈ 0.98 and thus π sim ≈ 0.01. Orders with strategic placement time: recalling that π sim ≥ P (l S ≤ T + H)P (l L ≤ T + H) + P (|l S − l L | ≤ H)(1 − P (l S ≤ T + H)P (l L ≤ T + H)) = Φ( 103 5.2 )Φ( 51 5 ) + 0.04(1 − 0.99) ≈ 0.99. Hence choosing, e.g. T = 150 ms, yields π sim ≈ 0.99, i.e. effectively ensures simultaneous execution of orders.
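The early-execution probabilities computed in Appendix C can be reproduced in a few lines (a sketch; H = 4 ms and the combined standard deviation of 38 ms are the values implied by the numbers above):

```python
import numpy as np
from scipy.stats import norm

H = 4.0          # HFT delivery time in ms (implied by the numbers above)
sd = 38.0        # sqrt(sigma_S^2 + sigma_L^2)
gamma0 = -52.0   # mu_S - mu_L with delta = 0

def probs(gamma):
    pi_L = 1 - norm.cdf((-gamma + H) / sd)
    pi_S = norm.cdf((-gamma - H) / sd)
    return pi_L, pi_S, 1 - pi_L - pi_S

print(probs(gamma0))                         # ~ (0.07, 0.89, 0.04): no delay
gamma_star = sd**2 / (2 * H) * np.log(1.6)   # optimal gamma from Lemma 5
print(probs(gamma_star))                     # ~ (0.98, 0.01, 0.01): with delay
```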
Budish et al. (2015), p. 1548, take the observation that correlations between the prices of homogeneous assets, which trade on two different exchanges, break down at "high-frequency time horizons" as an exogenous, empirical fact. 3 In turn, Budish et al. (2015), p. 1552, argue that "[t]his correlation breakdown in turn leads to obvious mechanical arbitrage opportunities, available to whoever is fastest. For instance, at 1:51:39.590 PM, after the price of the ES [Chicago] has just jumped roughly 2.5 index points, the arbitrage opportunity is to buy SPY [NYSE] and sell ES [Chicago]." In the present paper, on the contrary, we start with a model where the breakdown in correlations results endogenously from the placement of large orders under random latency. That is, the observation "[a]fter the price of the ES [Chicago] has just jumped roughly 2.5 index points, the arbitrage opportunity is to buy SPY [NYSE] and sell ES [Chicago]," has the following interpretation in our model: a trader, large enough to move the market by 2.5 index points, must
million USD annually for the trade in the SPY ETF alone.
3 Put differently,
Using the global ping statistics of wondernetwork (2021), accessed 27.05.2021 at 9:22 GMT, we note that the present order type can ensure simultaneous order placement even for very bad connections. For example, the mean latencies from Kampala to Manhattan and Chicago are both roughly 440 ms. The highest observed latency from Kampala to Manhattan (latencies to, e.g., servers in New Jersey are similar) is 640 ms. The highest latency from Kampala to Chicago was 671 ms. Hence, a trader from Kampala can ensure simultaneous execution of his orders by setting T = 671 ms, i.e. the largest latency observed on either route.

That is, there would be a maximum time span of ±0.2 ms with which orders would be placed on two separate exchanges once the order type proposed here is used. This error is an order of magnitude smaller than the time frames that any HFT could exploit (footnote 15).

Technical Feasibility: the order type proposed here requires that both exchanges use reasonably precise clocks. Regarding time measurement, we note that recent MIFID II regulation, European-Commission (2016), requires that all financial markets transactions within the EU which are related to high-frequency trading are recorded with a precision of at least 100 microseconds, i.e. 0.1 ms.

14 Budish et al. (2015), p. 1569, who find that "[t]he majority (88.56 percent) of arbitrage opportunities in our data set are initiated by a price change in ES [Chicago], with the remaining 11.44 percent initiated by a price change in SPY [NYSE]."

15 Sending a message from Chicago to New York takes 3.66 ms if it travels at the speed of light. That is, possible time measurement errors ξ within ±0.1 ms, which would change execution times from exactly T to T ± ξ, would not change our analysis. Indeed, time frames of 0.1 ms are still small in the context of European markets, where geographic distances are smaller. That is, it takes light roughly 1 ms to travel from London to Frankfurt.
Quantifying the high-frequency trading "arms race. M Aquilina, E Budish, P Neill, A simple new methodology and estimates. Workingpaper. Aquilina, M., Budish, E., and O'Neill, P. (2020). Quantifying the high-frequency trading "arms race": A simple new methodology and estimates. Workingpaper, pages 1-94.
The high-frequency trading arms race: Frequent batch auctions as a market design response. E Budish, P Crampton, J Shim, Quarterly Journal of Economics. 1304Budish, E., Crampton, P., and Shim, J. (2015). The high-frequency trading arms race: Frequent batch auctions as a market design response. Quarterly Journal of Economics, 130(4):1547-1621.
The big picture: A cost comparison of futures and etfs. Cme-Group, CME-Group (2016). The big picture: A cost comparison of futures and etfs.
Comovements in stock prices in the very short run. T Epps, Journal of the American Statistical Association. 74366Epps, T. (1979). Comovements in stock prices in the very short run. Journal of the American Statistical Association, 74(366):291-298.
Supplementing directive 2014/65/eu of the european parliament and of the council with regard to regulatory technical standards for the level of accuracy of business clocks. European-Commission, European-Commission (2016). Supplementing directive 2014/65/eu of the european parlia- ment and of the council with regard to regulatory technical standards for the level of ac- curacy of business clocks. https : //eur − lex.europa.eu/legal − content/EN/T XT /?uri = CELEX%3A32017R0574.
Implications of high-frequency trading for security markets. O Linton, S Mahmoodzadeh, Annual Review of Economics. Linton, O. and Mahmoodzadeh, S. (2017). Implications of high-frequency trading for security markets. Annual Review of Economics, pages 237-259.
Last-minute bidding and the rules for ending second-price auctions: Evidence from ebay and amazon auctions on the internet. A E Roth, A Ockenfels, American Economic Review. 92Roth, A. E. and Ockenfels, A. (2002). Last-minute bidding and the rules for ending second-price auctions: Evidence from ebay and amazon auctions on the internet. American Economic Review, 92:1093-1103.
Jumping the gun: Imperfections and institutions related to the timing of market transactions. A E Roth, X Xing, American Economic Review. 84Roth, A. E. and Xing, X. (1994). Jumping the gun: Imperfections and institutions related to the timing of market transactions. American Economic Review, 84:992-1044.
Tapping the brakes: Are less active markets safer and better for the economy. J E Stiglitz, WorkingpaperStiglitz, J. E. (2014). Tapping the brakes: Are less active markets safer and better for the economy. Workingpaper, pages 1-19.
Global ping statistics. wondernetwork (2021). Global ping statistics. https://wondernetwork.com/pings/.
|
[] |
[
"Gravitational Radiation by Cosmic Strings in a Junction",
"Gravitational Radiation by Cosmic Strings in a Junction"
] |
[
"R Brandenberger ",
"H Firouzjahi ",
"J Karouby ",
"S Khosravi ",
"\nPhysics Department\nInstitute of High Energy Physics\nMcGill University\nH3A 2T8MontrealCanada\n",
"\nSchool of Physics\nInstitute for Research in Fundamental Sciences (IPM)\nPhysics Department\nChinese Academy of Sciences\nP.O. Box 918-4100049Beijing, TehranP.R. China, Iran\n",
"\nPhysics Department\nFaculty of Science\nMcGill University\nH3A 2T8MontrealCanada\n",
"\nSchool of Astronomy\nInstitute for Research in Fundamental Sciences(IPM)\nTarbiat Mo'alem University\nTehran, TehranIran, Iran\n"
] |
[
"Physics Department\nInstitute of High Energy Physics\nMcGill University\nH3A 2T8MontrealCanada",
"School of Physics\nInstitute for Research in Fundamental Sciences (IPM)\nPhysics Department\nChinese Academy of Sciences\nP.O. Box 918-4100049Beijing, TehranP.R. China, Iran",
"Physics Department\nFaculty of Science\nMcGill University\nH3A 2T8MontrealCanada",
"School of Astronomy\nInstitute for Research in Fundamental Sciences(IPM)\nTarbiat Mo'alem University\nTehran, TehranIran, Iran"
] |
[] |
The formalism for computing the gravitational power radiation from excitations on cosmic strings forming a junction is presented and applied to the simple case of co-planar strings at a junction when the excitations are generated along one string leg. The effects of polarization of the excitations and of the back-reaction of the gravitational radiation on the small scale structure of the strings are studied.
|
10.1088/1475-7516/2009/01/008
|
[
"https://arxiv.org/pdf/0810.4521v2.pdf"
] | 10,419,910 |
0810.4521
|
73d21bff05a28550d4063a61153f77a41e532883
|
Gravitational Radiation by Cosmic Strings in a Junction
26 Jan 2009
R Brandenberger
H Firouzjahi
J Karouby
S Khosravi
Physics Department
Institute of High Energy Physics
McGill University
H3A 2T8MontrealCanada
School of Physics
Institute for Research in Fundamental Sciences (IPM)
Physics Department
Chinese Academy of Sciences
P.O. Box 918-4100049Beijing, TehranP.R. China, Iran
Physics Department
Faculty of Science
McGill University
H3A 2T8MontrealCanada
School of Astronomy
Institute for Research in Fundamental Sciences(IPM)
Tarbiat Mo'alem University
Tehran, TehranIran, Iran
Gravitational Radiation by Cosmic Strings in a Junction
26 Jan 2009. Gravity Waves, Cosmic Strings. PACS numbers: * Electronic address: rhb@physics.mcgill.ca † Electronic address: firouz@ipm.ir ‡ Electronic address: karoubyj@physics.mcgill.ca § Electronic address: khosravi@ipm.ir
The formalism for computing the gravitational power radiation from excitations on cosmic strings forming a junction is presented and applied to the simple case of co-planar strings at a junction when the excitations are generated along one string leg. The effects of polarization of the excitations and of the back-reaction of the gravitational radiation on the small scale structure of the strings are studied.
I. INTRODUCTION
In models of brane inflation cosmic strings are produced (for a review see [1]). This has led to a revival of interest in cosmic strings (see e.g. [2,3,4]). Cosmic strings forming in the context of brane models can take the form of Fundamental strings (F-strings), D1branes (D-strings) or their bound states, ((p,q) strings). A (p,q) string is a bound state of p F-strings and q D-strings. Networks of stringy cosmic strings which can involve strings with different values of p and q have features unlike those of simple gauge theory strings.
Unlike U(1) gauge theory cosmic strings which inter-commute when they intersect, in the case of cosmic (p,q) strings there are conservation laws which prevent the inter-commutation of strings with different values of p and q. Instead, a string junction can be formed. For example, a p string and a q string can join at a junction to form a (p,q) string. The construction of cosmic strings with junctions and its cosmological implications were studied in [5,6,7,8,9,10,11,12,13,14,15,16].
Gravitational wave (GW) emission from loops and cusps of cosmic strings has been studied (for a comprehensive review see [17,18] and for more recent analyses see [19,20]). A straight infinite string does not emit GW. This is because to emit GW, as we shall explicitly see in the next section, both left-movers and right-movers should be present on the string world sheet. In a network of cosmic strings, it is quite natural to expect that wiggles of different wavelengths are generated on the world sheet of an infinite string. These wiggles, for example, are left over from times when the correlation length of the string network was much smaller, or are remnants of string inter-commutations which took place in the past.
These wiggles cause the GW emission from long strings and can smooth out the wiggles of the string world sheet. GW emission from wiggles on a straight string were studied in [21,22,23]. In particular, in [23] left-moving and right-moving wave-trains of different wavelengths and amplitudes on an infinite string were considered. It was shown that when the wavelengths and the amplitudes of the wave-trains are comparable, the GW emission is mainly from lower harmonics and is proportional to the frequency of the wave-trains.
This indicates that excitations of higher frequency die out faster than excitations of lower frequency. On the other hand, when the wavelengths and amplitudes of the wave-trains are very different, the GW emission is exponentially suppressed.
As mentioned above, the formation of junctions is a generic feature of networks of cosmic superstrings. With this motivation, in this paper we consider gravitational radiation from strings at a junction. As we shall see, the presence of the junction leads to mixing of left-moving and right-moving excitations on the string, which is the necessary criterion for the emission of GW. In Section 2, we present the setup of our study. In Section 3 we study three examples.
The first example is GW emission from a semi-infinite string attached to a rigid wall. The second example corresponds to GW emission from a stationary junction. The third example concerns GW emission from a non-stationary junction. As we shall see, the expressions for the gravitational wave power radiated has a similar form in all three examples. We discuss our results and summarize our conclusions in Section 4.
II. THE SETUP
Our setup consists of semi-infinite strings forming a stationary junction. The formalism in this section is valid for any number of semi-infinite strings meeting at a junction. However, to be specific, in our study we shall focus on the simple example where three semi-infinite strings form a stationary junction.
The world-sheet of each string is described by a temporal coordinate τ and a string length parameter σ. The induced metric γ i ab on each string is given by
\gamma^i_{ab} = g_{\mu\nu}\, \partial_a X^\mu_i\, \partial_b X^\nu_i. \qquad (1)
Here and in the following, we reserve {a, b} = {τ, σ} for the string world-sheet indices while Greek indices represent the four-dimensional space-time coordinates. Furthermore, X^μ_i stands for the position of the i-th string in four space-time dimensions. We impose the conformal temporal gauge on the string world-sheet, for which X^0_i = t = τ and γ^i_{0σ} = 0. This is equivalent to
\dot{x}_i \cdot x'_i = 0, \qquad \dot{x}_i^2 + x_i'^2 = 1. \qquad (2)
Here an overdot and a prime denote derivatives with respect to t and σ, respectively, while
x i represent the spatial components of the i-th string.
For the components of the induced metric on the string world sheet we obtain
\gamma^i_{00} = 1 - \dot{x}_i^2, \qquad \gamma^i_{\sigma\sigma} = -x_i'^2 = -\gamma^i_{00}. \qquad (3)
We start with the following action
S = -\sum_i \mu_i \int dt\, d\sigma\, \sqrt{-|\gamma_i|}\;\theta(s_i(t) - \sigma) \qquad (4)
where |γ i | is the determinant of the world sheet metric of the i-th string. We are using the convention that the position of the junction on the i-th string is given by s i (t). It is assumed that σ is increasing towards the junction. We can impose a lower cutoff on σ, which would correspond to the physical length of the string under consideration. The equation of motion for s i (t) and the conditions for junction formation have been studied in [6,7,8].
The energy-momentum tensor for the action given by Eq. (4) is obtained by varying the action with respect to the background metric g µν , with the result
\delta_{g_{\mu\nu}} S = -\frac{1}{2} \sum_i \mu_i \int dt\, d\sigma\, \sqrt{-|\gamma_i|}\, \gamma^{ab}\, \partial_a X^\mu_i\, \partial_b X^\nu_i\, \theta(s_i(t) - \sigma)\, \delta g_{\mu\nu} \equiv -\frac{1}{2} \int d^4x\, T^{\mu\nu}\, \delta g_{\mu\nu} \qquad (5)
which gives
T^{\mu\nu}(x) = \sum_i \mu_i \int dt\, d\sigma\, \sqrt{-|\gamma_i|}\, \gamma^{ab}\, X^\mu_{i,a} X^\nu_{i,b}\, \theta(s_i(t) - \sigma)\, \delta^{(4)}(x - X_i) = \sum_i \mu_i \int dt\, d\sigma\, (\dot{X}^\mu_i \dot{X}^\nu_i - X'^\mu_i X'^\nu_i)\, \theta(s_i(t) - \sigma)\, \delta^{(4)}(x - X_i). \qquad (6)
Having obtained the energy-momentum tensor, we can use the standard formalism for calculating GW emission from a source [24]. The derivation in [24] is for a source which is localized in space. To justify the application of the formalism to the case of a long string, we can imagine considering first short wave-trains on the string, in which case the formalism of [24] applies as it was initially derived, and then taking the limit in which the length of the wave-trains increases. This limit does not lead to any problems when applying the formalism. According to this formalism, the power emitted in direction k per solid angle Ω, integrating over the frequencies ω of the emitted waves, is given by
\frac{dE}{d\Omega} = 2G \int_0^\infty d\omega\, \omega^2 \left[ T^{\lambda\nu *}(k)\, T_{\lambda\nu}(k) - \frac{1}{2}\left| T^\lambda_{\ \lambda}(k) \right|^2 \right], \qquad (7)
where G is Newton's gravitational constant and T λν (k) is the Fourier transform of T λν (t, x)
T^{\mu\nu}(k) = \frac{1}{2\pi} \int d^4x\, T^{\mu\nu}(x)\, e^{ik\cdot x}. \qquad (8)
In conformal temporal gauge the solution of the string equations of motion
\ddot{X}^\mu - X''^\mu = 0 \qquad (9)
can be represented by the combination of left-moving and right-moving modes:
X^\mu_i = \frac{1}{2}\left( a^\mu_i(v) + b^\mu_i(u) \right), \qquad a'^2_i = b'^2_i = 0, \qquad (10)
where v = σ + t and u = σ − t are the light-cone coordinates.
Since we need the components of the energy-momentum tensor in Fourier space, it is useful to replace the θ function by its Fourier representation which is
\theta(x) = \frac{1}{2\pi i} \int_{-\infty}^{\infty} d\ell\, \frac{e^{i\ell x}}{\ell - i\varepsilon}, \qquad \varepsilon \to 0^+. \qquad (11)
Inserting this into Eq. (6), we find
T^{\mu\nu}(k) = \sum_j \frac{\mu_j}{8\pi}\, \frac{1}{2\pi i} \int_{-\infty}^{\infty} \frac{d\ell}{\ell - i\varepsilon} \int du\, dv\, \left( a'^\mu_j b'^\nu_j + a'^\nu_j b'^\mu_j \right) \times \exp\!\left[ i\ell s_j(u, v) - \frac{i\ell}{2}(u + v) + \frac{i}{2} k\cdot(a_j + b_j) \right]. \qquad (12)
In the first two examples in the following section, we consider cases when the junction remains stationary, corresponding to \dot{s}_i = 0. In this case, one obtains
T^{\mu\nu}(k) = \sum_j \frac{\mu_j}{8\pi}\, \frac{1}{2\pi i} \int_{-\infty}^{\infty} \frac{d\ell}{\ell - i\varepsilon} \left[ A^\mu_j(k, \ell)\, B^\nu_j(k, \ell) + A^\nu_j(k, \ell)\, B^\mu_j(k, \ell) \right] \qquad (13)
where
A^\mu_j(k, \ell) \equiv \int_{-L/2}^{L/2} dv\, a'^\mu_j(v)\, \exp\!\left[ i k\cdot a_j(v)/2 - i\ell v/2 \right], \qquad B^\mu_j(k, \ell) \equiv \int_{-L/2}^{L/2} du\, b'^\mu_j(u)\, \exp\!\left[ i k\cdot b_j(u)/2 - i\ell u/2 \right], \qquad (14)
where L is the physical length of the string being considered (since the GW emission comes from regions where the wave trains are non-vanishing, effectively L can be taken as the length on the string which corresponds to the region where the wave-trains are localized), and we assumed the mid point of the string is at world sheet coordinates u = v = 0.
To calculate A^μ_i(k, ℓ) and B^μ_i(k, ℓ) we follow the formalism of [23]. We assume that on each string there are left-moving and right-moving wave-trains of lengths L_i = N^i_a λ^i_a and L_i = N^i_b λ^i_b for integers N^i_a and N^i_b, where λ^i_{a(b)} = 2π/κ^i_{a(b)} are the wavelengths of the left(right)-moving wave-trains on each string.
We are interested in GW emission from the excitations of the strings and neglect the contributions of the straight parts of the strings to T µν . To establish our notation, the contributions to a quantity Q from the string excitations are denoted by δQ. For example, a µ i → a µ i + δa µ i and so on. Discarding the contributions from the straight parts of the strings (which do not contribute to gravitational radiation), one obtains for the fluctuating part of
T^{\mu\nu}:
\delta T^{\mu\nu}(k) = \sum_j \frac{\mu_j}{8\pi}\, \frac{1}{2\pi i} \int_{-\infty}^{\infty} \frac{d\ell}{\ell - i\varepsilon} \left( \delta A^\mu_j\, \delta B^\nu_j + \delta A^\nu_j\, \delta B^\mu_j \right), \qquad (15)
where
\delta A^\mu_j(k, \ell) = \int_{-L/2}^{L/2} dv\, e^{i v \hat{K}^j_+/2} \left[ \delta a'^\mu_j + \frac{i}{2}\, a'^\mu_j\, k\cdot\delta a_j \right] \qquad (16)
and δB^μ(k, ℓ) is given by a similar expression. Here
\hat{K}^j_+ \equiv K^j_+ - \ell, \qquad K^j_+ \equiv k\cdot a'_j, \qquad (17)
with \hat{K}_- and K_- defined similarly for the right-movers.
For the case where \dot{s}_i \neq 0, such as in the third example in the next section, we obtain
\delta T^{\mu\nu}(k) = \sum_j \frac{\mu_j}{8\pi}\, \frac{1}{2\pi i} \int_{-\infty}^{\infty} \frac{d\ell}{\ell - i\varepsilon} \Big\{ \left( \delta A^\mu_j\, \delta B^\nu_j + \delta A^\nu_j\, \delta B^\mu_j \right) + i\ell \int du\, dv\, s_j\, e^{iK^j_+ v/2}\, e^{iK^j_- u/2} \Big[ \left( a'^\mu_j\, \delta b'^\nu_j + b'^\nu_j\, \delta a'^\mu_j + \mu \leftrightarrow \nu \right) + \frac{i}{2}\, k\cdot(\delta a_j + \delta b_j) \left( a'^\mu_j b'^\nu_j + a'^\nu_j b'^\mu_j \right) \Big] - \frac{\ell^2}{2} \int du\, dv\, s^2_j\, e^{iK^j_+ v/2}\, e^{iK^j_- u/2} \left( a'^\mu_j b'^\nu_j + a'^\nu_j b'^\mu_j \right) \Big\} \qquad (18)
III. EXAMPLES
In this section we employ the formalism presented in the last section to calculate GW emission for different examples.
A. A semi-infinite string attached to the wall
The first example we would like to consider is gravitational radiation from a semi-infinite string attached to a rigid wall. An incoming perturbation is coming from infinity, hits the wall and gets reflected. This creates wave-trains of both left-mover and right-mover on the string. This problem is in spirit very similar to the problem of two left-moving and right-moving wave-trains propagating on an infinite string studied by Siemens and Olum [23].
The non-fluctuating string configuration is given by
a'^\mu = (1, \mathbf{e}), \qquad b'^\mu = (-1, \mathbf{e}) \qquad (19)
where the unit vector e represents the orientation of the string. We could simply take the vector e to be along the z axis. However, in order to establish a formalism which can also be applied to the next examples involving strings oriented in different directions, we keep the vector e unspecified. The perturbations on the string are given by
\delta a'^\mu = \epsilon_a\, f\, \cos(\kappa_a v), \qquad \delta b'^\mu = \epsilon_b\, f\, \cos(\kappa_b u), \qquad (20)
where ε_{a(b)} are small numbers controlling the amplitude of the perturbations, κ_{a(b)} are the frequencies of the left(right)-moving perturbations and f is a unit vector indicating the polarization of the perturbations, with e·f = 0. In this example we know that ε_a = ε_b and κ_a = κ_b; however, in order to keep the formalism general we have not made these identifications. Calculating δA^μ, one obtains
\delta \vec{A} = \delta A_0 \left( \vec{e} - \frac{\hat{K}_+}{k\cdot f}\, \vec{f} \right), \qquad \delta A_0 = -4\, (-1)^{N_a}\, \epsilon_a\, (k\cdot f)\, \frac{\sin(L\hat{K}_+/4)}{\hat{K}_+^2 - 4\kappa_a^2} \qquad (21)
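The closed form (21) is easily checked against a direct numerical evaluation of the integral in Eq. (16); the sketch below compares the coefficient of the polarization vector f obtained both ways (the wave number, wave-train length, value of K̂_+ and k·f are arbitrary assumptions used only for the comparison):

```python
import numpy as np
from scipy.integrate import quad

# Assumed parameters for the comparison.
kappa_a, N_a, eps_a = 1.0, 6, 0.01
L = N_a * 2 * np.pi / kappa_a      # wave-train length, L = N_a * lambda_a
Khat = 0.37                        # value of K_+ - ell at which we compare (assumed)
kf = 0.8                           # k.f; drops out of the physical result (assumed)

# Closed form, Eq. (21).
dA0 = -4 * (-1)**N_a * eps_a * kf * np.sin(L * Khat / 4) / (Khat**2 - 4 * kappa_a**2)

# Direct integral of the f-component of Eq. (16): the delta a' term; the imaginary
# part of the integrand is odd in v and drops out over the symmetric interval.
f_comp = quad(lambda v: eps_a * np.cos(kappa_a * v) * np.cos(Khat * v / 2), -L/2, L/2)[0]
print(f_comp, -dA0 * Khat / kf)    # the two numbers should agree
```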
with a similar expression for δB^μ, with \hat{K}_+ replaced by \hat{K}_-.
To calculate δT µν (k), we need to plug δA µ (k, ℓ) and δB µ (k, ℓ) into Eq. (15) and integrate over ℓ. There are five poles at
\ell_1 = i\varepsilon, \qquad \ell_{2,3} = K_+ \pm 2\kappa_a, \qquad \ell_{4,5} = K_- \pm 2\kappa_a. \qquad (22)
One can easily check that only the residue at ℓ 1 gives a non-zero contribution to δT µν (k) and the other residues vanish. For example, calculating the residue at ℓ = ℓ 2 , one obtains that δT µν (k) ∝ sin(Lκ a /2) = sin(N a π) = 0.
Calculating the residue at ℓ = ℓ 1 , one obtains
\delta T^{\mu\nu}(k) = \frac{\mu_1}{8\pi} \left[ \delta A^\mu(k, 0)\, \delta B^\nu(k, 0) + \delta A^\nu(k, 0)\, \delta B^\mu(k, 0) \right]. \qquad (23)
Noting that δA µ (k, 0) and δB µ (k, 0) are real, one obtains
\frac{dE}{d\Omega\, d\omega} = \frac{G\mu_1^2}{16\pi^2}\, \omega^2\, \delta A(k, 0)^2\, \delta B(k, 0)^2 = \frac{16\, G\mu_1^2}{\pi^2}\, \omega^2\, \epsilon_a^2 \epsilon_b^2\, K_+^2 K_-^2\, \frac{\sin^2(LK_+/4)}{(K_+^2 - 4\kappa_a^2)^2}\, \frac{\sin^2(LK_-/4)}{(K_-^2 - 4\kappa_b^2)^2} \qquad (24)
One interesting result which emerges from the above is that the combination k.f drops from the numerator and denominator of the above expression. This indicates that the GW power emission is independent of the polarization of the incoming waves. This feature will also show up in next example.
To exploit the symmetry of the problem, now we assume that the string is oriented along the z axis, so k.e = ω cos θ, where θ is defined as the angle between the vector k and the orientation of string. On the other hand, calculating K + and K − , we get
K_+ = \omega(1 - \cos\theta), \qquad K_- = -\omega(1 + \cos\theta). \qquad (25)
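A small sketch that evaluates the angular distribution implied by Eqs. (24)-(25) at the dominant harmonic ω = 2κ_a (the amplitudes, frequencies, number of oscillations in the wave-train and overall prefactor are assumed, illustrative values):

```python
import numpy as np

# Assumed, illustrative parameters; the prefactor G*mu_1^2 is set to 1.
eps_a = eps_b = 0.01
kappa_a = kappa_b = 1.0
N = 8
L = N * 2 * np.pi / kappa_a
G_mu1_sq = 1.0

def dE_dOmega_domega(omega, theta):
    Kp = omega * (1 - np.cos(theta))          # Eq. (25)
    Km = -omega * (1 + np.cos(theta))
    shape = (np.sin(L * Kp / 4) / (Kp**2 - 4 * kappa_a**2))**2 \
          * (np.sin(L * Km / 4) / (Km**2 - 4 * kappa_b**2))**2
    return 16 * G_mu1_sq / np.pi**2 * omega**2 * eps_a**2 * eps_b**2 \
        * Kp**2 * Km**2 * shape               # Eq. (24)

omega = 2 * kappa_a                           # dominant harmonic (n = -m = 1)
for theta in np.linspace(0.1, np.pi - 0.1, 8):
    print(theta, dE_dOmega_domega(omega, theta))
```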
As in [23], changing the coordinates from (ω, cos θ) to (K_+, K_−), and noting that 2ω = K_+ − K_− and dω d(cos θ) = dK_+ dK_−/2ω, one obtains
\frac{dE}{d\phi} = \frac{4\, G\mu_1^2}{\pi^2}\, \epsilon_a^2 \epsilon_b^2 \int dK_+\, dK_-\, (K_+ - K_-)\, K_+^2 K_-^2\, \frac{\sin^2(LK_+/4)}{(K_+^2 - 4\kappa_a^2)^2}\, \frac{\sin^2(LK_-/4)}{(K_-^2 - 4\kappa_b^2)^2}. \qquad (26)
Here φ is defined as the azimuthal angle around the string.
The above integral can be performed using the following approximation, valid for large N_a [23]:
\frac{\sin^2(N_a x_a/2)}{\sin^2(x_a/2)} = 2\kappa_a N_a \sum_{n=-\infty}^{\infty} \delta(K_+ - 2n\kappa_a) \qquad (27)
where x a ≡ λ a K + /2. A similar identity also holds for right-movers with K + → K − , N a → N b and n → m. Using this identity, Eq. (26) yields
\frac{dE}{d\phi} = \frac{2G\mu_1^2}{\pi^2}\, \epsilon_a^2 \epsilon_b^2\, \frac{N_a N_b}{\kappa_a \kappa_b} \sum_{m,n} \frac{n^2 m^2\, (n\kappa_a - m\kappa_b)\, \sin^2(n\pi)\, \sin^2(m\pi)}{(n^2 - 1)^2 (m^2 - 1)^2}. \qquad (28)
Knowing that K + ≥ 0, K − ≤ 0, one can see that only n = −m = 1 contribute in the summation in Eq. (28) and one obtains
\frac{dE}{d\phi} = \frac{G\mu_1^2}{8}\, \frac{N_a N_b}{\pi^2}\, \epsilon_a^2 \epsilon_b^2\, \frac{\kappa_a + \kappa_b}{\kappa_a \kappa_b}. \qquad (29)
We are interested in the power radiated per unit of length, dP/dl, which is obtained by dividing the above expression by the world-sheet volume of the string, L a L b /2. After integrating over the angle φ and noting that ǫ a = ǫ b and κ a = κ b , we obtain
\frac{dP}{dl} = G\mu_1^2\, \frac{\pi}{16}\, \epsilon_a^4\, \kappa_a. \qquad (30)
Note that our result is identical to the power radiated per unit of length obtained in [23] for an infinite string. This is not surprising since our semi-infinite string locally looks identical to an infinite string. The only difference is at the junction with the wall -but since that point is not moving it does not contribute to the power of gravitational radiation. Thus, we expect that our result for a semi-infinite string agrees with that of [23] for an infinite string (there is a factor of 1/2 mistake in the original version of Eq. (72) of [23] which is corrected in the printed version. Taking that into account, our result here agrees with their Eq. (36)).
As in [23] the power radiation is dominated by lower harmonics. Also for n = −m = 1 we note that ω = 2κ a , so the frequency of the radiation is twice of the frequency of the incoming wave.
B. Strings at a junction
In this section we consider the problem of GW emission from strings at a junction. Three semi-infinite strings form a stationary junction. There is an incoming right-moving excitation on one string, say String 1. After the wave hits the junction, part of it is transferred to Strings 2 and 3, while part of the incoming wave is reflected along String 1 (see [7] for the details of the dynamics). Depending on the polarization of the incoming wave, the junction may stay stationary, corresponding to δs_i(t) = 0, or the junction may dislocate along the strings, corresponding to δs_i(t) ≠ 0.
To be specific, suppose that the strings are in the x − y plane and their orientations are
given by e i = (cos θ i , sin θ i , 0) where θ i is the angle of the i-th string with the x-axis. This
gives
a'^\mu_i \equiv (1, \mathbf{a}_i) = (1, \mathbf{e}_i), \qquad b'^\mu_i \equiv (-1, \mathbf{b}_i) = (-1, \mathbf{e}_i), \qquad (31)
where a_i (b_i) indicates the spatial part of a^μ_i (b^μ_i). The relations Σ_i μ_i cos θ_i = Σ_i μ_i sin θ_i = 0 must also be satisfied if the junction is to be stationary (this is due to the force balance condition).
Now suppose there is a small incoming excitation on one string, say String 1, with
\delta b'_1(u) = \epsilon\, f_1\, \cos(\kappa u), \qquad (32)
and δb ′ 2 = δb ′ 3 = 0. Here ǫ ≪ 1 is a dimensionless parameter controlling the amplitude of the perturbation.
The case \dot{s}_i(t) = 0
The simplest case is when f 1 = (0, 0, 1). This has the advantage that the junction does not dislocate on the strings: δṡ i = 0 [7] and we can use the formalism developed in the previous section. Following [7], one finds
\delta a'_1 = \frac{\nu_1}{\mu}\, \epsilon\, f_1 \cos(\kappa v), \qquad \delta a'_2 = \delta a'_3 = \frac{2\mu_1}{\mu}\, \epsilon\, f_1 \cos(\kappa v), \qquad (33)
where
\mu \equiv \mu_1 + \mu_2 + \mu_3, \qquad \nu_1 \equiv \mu_2 + \mu_3 - \mu_1. \qquad (34)
With this initial condition, one sees that only δB µ 1 is non-zero. Furthermore, δT µν (k) is as given in Eq. (23), with δA µ 1 given as in Eq. (21). Thus, dE/dΩ dω has the same form as Eq. (24). We note that String 1 is in x − y plane. But we can label the coordinate (or perform a coordinate transformation) such that String 1 is along the zdirection. To calculate the power radiated, we can simply use Eq. (29) with the identification ǫ b = ǫ and ǫ a = ǫν 1 /µ, and as before, we note that φ is defined as the azimuthal angle around String 1.
The power radiated per unit of length, using Eq. (29), therefore is
\frac{dP}{dl} = G\mu_1^2\, \frac{\pi}{16}\, \frac{\nu_1^2}{\mu^2}\, \epsilon^4\, \kappa. \qquad (35)
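For orientation, the junction suppresses the single-string result (30) by the factor (ν_1/μ)²; a minimal sketch tabulating this factor for a few assumed tension ratios of the three legs:

```python
# Suppression of the radiated power by the junction, (nu_1/mu)^2, for a few
# assumed tension combinations (mu_1, mu_2, mu_3) of the three string legs.
cases = [(1.0, 1.0, 1.0), (1.0, 2.0, 3.0), (1.0, 0.6, 0.6), (2.0, 1.0, 1.0)]
for mu1, mu2, mu3 in cases:
    mu = mu1 + mu2 + mu3
    nu1 = mu2 + mu3 - mu1
    print(mu1, mu2, mu3, (nu1 / mu) ** 2)
```

For three equal tensions the factor is 1/9, while for ν_1 → 0 the reflected wave (and hence the radiated power in this channel) vanishes.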
One may wonder why this result has the same form as that in the previous example where a semi-infinite strings was attached to a rigid wall. The reason is that here the junction plays the role of the rigid wall. Indeed, the fact that the junction remains stationary makes this analogy more manifest. The effect of Strings 2 and 3 is to let parts of incoming waves be transferred to them. This has the effect that ǫ a = ǫ b .
The case \dot{s}_i(t) \neq 0
Now we consider the case when the polarization of the incoming perturbation is in the xy-plane. Then s i will also oscillate and the junction does not stay at a fixed position on each string [7]. As in [7] we assume
f_i = (-\sin\theta_i,\ \cos\theta_i,\ 0) \qquad (36)
and e i .f i = 0 for each string. As before, the incoming perturbation is along String 1 and is given by
\delta b'_1(u) = \epsilon\, f_1\, \cos(\kappa u). \qquad (37)
Following [7] , one obtains
\delta a'_1 = -\frac{\nu_1}{\mu}\, \epsilon\, f_1 \cos(\kappa v), \qquad \delta a'_{2,3} = \frac{2\mu_1}{\mu}\, \epsilon\, f_{2,3} \cos(\kappa v), \qquad (38)
and
\delta s_1 = \frac{(\mu_2 - \mu_3)\nu_1}{\kappa\Delta}\, \epsilon \sin(\kappa t), \qquad \delta s_2 = -\delta s_3 = -\frac{\mu_1 \nu_1}{\kappa\Delta}\, \epsilon \sin(\kappa t) \qquad (39)
with ν 1 as given in Eq. (34) and ∆ = √ µν 1 ν 2 ν 3 , where ν 2,3 are defined like ν 1 with the appropriate permutations.
With these forms of δs i , one can show (see the Appendix) that both integrals containing the linear and the quadratic powers of s i (t) in δT µν (k) in Eq. (18) vanish.
One sees that δT µν is of the same form as Eq. (23) with δA µ 1 given as in Eq. (21) . Like in previous example, the effect of the junction is to make ǫ a = ǫ b . On the other hand, since the amplitude of δa µ 1 is the same as in the previous example where the polarization was along the z axis, we find that the power radiated is the same as before, given by Eq. (35).
The fact that the power of gravitational radiation when the polarization f 1 is coplanar with the strings is the same as when the polarization is perpendicular to the plane of the strings may seem surprising. However, as mentioned in [7], one can consider the excitations on the strings as the propagation of massless particles. Using conservation of energy, one can check that the transmission and reflection indices for both polarizations are the same.
IV. DISCUSSION
In this paper, gravitational wave (GW) emission from strings at a stationary junction has been studied. We considered the simple case when three co-planar semi-infinite strings form a stationary junction. A purely left-moving wave, excited on one string, travels towards the junction. Part of it is reflected from the junction while the rest is transferred to other strings.
The role of the junction therefore is to mix the left-moving and right-moving excitations which are necessary for GW emission.
We found that power of gravitational radiation is independent of the polarization of the incoming wave. Furthermore, its magnitude is proportional to the frequency of the incoming wave. This means that excitations of higher frequencies (shorter wavelengths) die out faster than excitations with lower frequencies (longer wavelengths).
In [23,25,26] the gravitational back-reaction effects on the small scale structure present on a long string are studied. Here we shall briefly apply their formalism to our case. An excitation of the form Eq. (32) leads to a change δµ in the mass per unit length of the string. This change takes the form
\delta\mu \sim \mu_1\, \epsilon^2. \qquad (40)
The energy loss via gravitational radiation given by Eq. (30) or Eq. (35) leads to a decrease of this contribution:
\frac{d}{dt}(\delta\mu) = -\frac{dP}{dz} = -\frac{G\mu_1}{8\lambda}\, \pi^2\, \frac{\nu_1^2}{\mu^2}\, \epsilon^2\, \delta\mu. \qquad (41)
This differential equation has the solution
\delta\mu \sim \exp(-t/\tau), \qquad (42)
where
\tau = \frac{8\lambda}{\pi^2 G\mu_1}\, \frac{\mu^2}{\epsilon^2 \nu_1^2}. \qquad (43)
Excitations which survive until the present time t_0 are characterized by τ > t_0. Taking ε ∼ 1, the minimum wavelength of excitations that can survive is thus approximately given by
\lambda_{min} \sim G\mu_1\, \frac{\nu_1^2}{\mu^2}\, t_0, \qquad (44)
while on smaller scales the wiggles are exponentially suppressed.
In this analysis we have considered a monochromatic wave of the form of Eq. (32). In [25] (see also [26]) the estimation of back-reaction was generalized to the case when higher harmonics of the initial Fourier modes on the long string are present and when not all the modes interact with all of the other modes. In this case, it was shown that the minimum wavelength is given by
\lambda_{min} = (G\mu_1)^n\, t_0 \qquad (45)
where n = 3/2, 5/2 for radiation and matter dominated eras, respectively.
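A minimal numerical sketch of the decay estimate (42)-(44) (the values of Gμ_1, ε, the tension ratio and t_0 are assumptions chosen only for illustration; units are c = 1 with lengths in light-seconds):

```python
import numpy as np

# Assumed, illustrative parameters (c = 1, lengths in light-seconds).
G_mu1 = 1e-7                  # dimensionless tension G*mu_1
eps = 1.0                     # order-one amplitude, as in the text
nu1_over_mu = 1.0 / 3.0       # three equal tensions -> nu_1/mu = 1/3
t0 = 1.4e10 * 3.15e7          # age of the universe in seconds

# From Eq. (43), tau / lambda = 8 / (pi^2 * G*mu_1 * eps^2 * (nu_1/mu)^2).
tau_over_lambda = 8.0 / (np.pi**2 * G_mu1 * eps**2 * nu1_over_mu**2)

# Eq. (44): wiggles survive if tau > t0, i.e. lambda > lambda_min ~ t0 / (tau/lambda).
lambda_min = t0 / tau_over_lambda
print(tau_over_lambda, lambda_min, "light-seconds")
```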
In this work we have considered GW emission from three co-planar strings forming a junction and assuming that excitations are originally generated on one string leg. It would be interesting to generalize this exercise to more realistic cases of an arbitrary number
of strings in a junction when incoming waves of arbitrary frequencies and amplitudes are excited on each string. It would be interesting to see if the results of [23] hold, where it was shown that GW emission from left-moving and right-moving wave-trains on an infinite string is zero if the wave-trains have significantly different frequencies and amplitudes.

V. ACKNOWLEDGMENTS

We would like to thank E. Copeland, T. Kibble, X. Siemens and D. Steer for useful comments. At McGill, this research has been supported by NSERC under the Discovery Grant program. R.B. is also supported by the Canada Research Chairs program. R.B. wishes to thank the Theory Division of the Institute of High Energy Physics in Beijing for their wonderful hospitality during the final stages of this project.

Appendix: Higher powers of s_i(t) in δT^{μν}(k)

In this Appendix we show that the integrals containing the powers of s_i(t) in δT^{μν}(k) in Eq. (18) vanish. To see this, consider the term quadratic in s:
\int \frac{d\ell}{\ell - i\varepsilon} \int du\, dv\; e^{i\hat{K}_+ v/2}\, e^{i\hat{K}_- u/2}\, \left( 1 - \cos\kappa u \cos\kappa v - \sin\kappa u \sin\kappa v \right). \qquad (47)
The ℓ integral of the first term in the bracket in Eq. (47) has three poles, at ℓ = iε, \hat{K}_+ = 0 and \hat{K}_- = 0. However, the residues at all three poles vanish. Performing the integrals for the other two terms in the bracket in Eq. (47), one can check that the residues at the poles vanish as well. In conclusion, the integral in Eq. (47), and correspondingly the integral in Eq. (18) containing terms of second power in s_i(t), vanish. Following the same strategy, one can check that the integral in Eq. (18) containing terms of linear power in s_i(t) also vanishes.
Brane inflation: String theory viewed from the cosmos. S H Henry Tye, arXiv:hep-th/0610221Lect. Notes Phys. 737949S. H. Henry Tye, "Brane inflation: String theory viewed from the cosmos," Lect. Notes Phys. 737, 949 (2008) [arXiv:hep-th/0610221].
Cosmic strings reborn?. T W B Kibble, astro-ph/0410073T. W. B. Kibble, "Cosmic strings reborn?," astro-ph/0410073.
Fundamental cosmic strings. A C Davis, T W B Kibble, arXiv:hep-th/0505050Contemp. Phys. 46313A. C. Davis and T. W. B. Kibble, "Fundamental cosmic strings," Contemp. Phys. 46, 313 (2005) [arXiv:hep-th/0505050].
Cosmic Superstrings. M Sakellariadou, arXiv:0802.3379hep-thM. Sakellariadou, "Cosmic Superstrings," arXiv:0802.3379 [hep-th].
Cosmic superstring gravitational lensing phenomena: Predictions for networks of (p,q) strings. B Shlaer, M Wyman, arXiv:hep-th/0509177Phys. Rev. D. 72123504B. Shlaer and M. Wyman, "Cosmic superstring gravitational lensing phenomena: Predictions for networks of (p,q) strings," Phys. Rev. D 72, 123504 (2005) [arXiv:hep-th/0509177].
Collisions of strings with Y junctions. E J Copeland, T W B Kibble, D A Steer, arXiv:hep-th/0601153Phys. Rev. Lett. 9721602E. J. Copeland, T. W. B. Kibble and D. A. Steer, "Collisions of strings with Y junctions," Phys. Rev. Lett. 97, 021602 (2006) [arXiv:hep-th/0601153].
Constraints on string networks with junctions. E J Copeland, T W B Kibble, D A Steer, arXiv:hep-th/0611243Phys. Rev. D. 7565024E. J. Copeland, T. W. B. Kibble and D. A. Steer, "Constraints on string networks with junctions," Phys. Rev. D 75, 065024 (2007) [arXiv:hep-th/0611243].
On the Collision of Cosmic Superstrings. E J Copeland, H Firouzjahi, T W B Kibble, D A Steer, arXiv:0712.0808Phys. Rev. D. 7763521hep-thE. J. Copeland, H. Firouzjahi, T. W. B. Kibble and D. A. Steer, "On the Collision of Cosmic Superstrings," Phys. Rev. D 77, 063521 (2008) [arXiv:0712.0808 [hep-th]].
Lensing and CMB Anisotropies by Cosmic Strings at a Junction. R Brandenberger, H Firouzjahi, J Karouby, arXiv:0710.1636Phys. Rev. D. 7783502hep-thR. Brandenberger, H. Firouzjahi and J. Karouby, "Lensing and CMB Anisotropies by Cosmic Strings at a Junction," Phys. Rev. D 77, 083502 (2008) [arXiv:0710.1636 [hep-th]].
Velocity-Dependent Models for Non-Abelian/Entangled String Networks. A Avgoustidis, E P S Shellard, arXiv:0705.3395astro-phA. Avgoustidis and E. P. S. Shellard, "Velocity-Dependent Models for Non-Abelian/Entangled String Networks," arXiv:0705.3395 [astro-ph].
Numerical experiments with p F-and q Dstrings: the formation of (p,q) bound states. A Rajantie, M Sakellariadou, H Stoica, arXiv:0706.3662JCAP. 071121A. Rajantie, M. Sakellariadou and H. Stoica, "Numerical experiments with p F-and q D- strings: the formation of (p,q) bound states," JCAP 0711, 021 (2007) [arXiv:0706.3662].
Evolution of cosmic superstring networks: a numerical simulation. J Urrestilla, A Vilenkin, arXiv:0712.1146JHEP. 080237hep-thJ. Urrestilla and A. Vilenkin, "Evolution of cosmic superstring networks: a numerical simula- tion," JHEP 0802, 037 (2008) [arXiv:0712.1146 [hep-th]].
A C Davis, W Nelson, S Rajamanoharan, M Sakellariadou, arXiv:0809.2263Cusps on cosmic superstrings with junctions. hep-thA. C. Davis, W. Nelson, S. Rajamanoharan and M. Sakellariadou, "Cusps on cosmic super- strings with junctions," arXiv:0809.2263 [hep-th].
Cosmic Necklaces from String Theory. L Leblond, M Wyman, arXiv:astro-ph/0701427Phys. Rev. D. 75123522L. Leblond and M. Wyman, "Cosmic Necklaces from String Theory," Phys. Rev. D 75, 123522 (2007) [arXiv:astro-ph/0701427].
Lumps in the throat. K Dasgupta, H Firouzjahi, R Gwyn, arXiv:hep-th/0702193JHEP. 070493K. Dasgupta, H. Firouzjahi and R. Gwyn, "Lumps in the throat," JHEP 0704, 093 (2007) [arXiv:hep-th/0702193].
Exact gravitational lensing by cosmic strings with junctions. T Suyama, arXiv:0807.4355Phys. Rev. D. 7843532astro-phT. Suyama, "Exact gravitational lensing by cosmic strings with junctions," Phys. Rev. D 78, 043532 (2008) [arXiv:0807.4355 [astro-ph]].
Cosmic strings. M B Hindmarsh, T W B Kibble, arXiv:hep-ph/9411342Rept. Prog. Phys. 58477M. B. Hindmarsh and T. W. B. Kibble, "Cosmic strings," Rept. Prog. Phys. 58, 477 (1995) [arXiv:hep-ph/9411342].
Cosmic Strings and Other Topological Defects. A. Vilenkin and E. P. S. ShellardCambridge University Press[18] "Cosmic Strings and Other Topological Defects," A. Vilenkin and E. P. S. Shellard, Cambridge University Press, 1994.
Gravitational radiation from cosmic (super)strings: Bursts, stochastic background, and observational windows. T Damour, A Vilenkin, arXiv:hep-th/0410222Phys. Rev. D. 7163510T. Damour and A. Vilenkin, "Gravitational radiation from cosmic (super)strings: Bursts, stochastic background, and observational windows," Phys. Rev. D 71, 063510 (2005) [arXiv:hep-th/0410222].
Gravitational wave stochastic background from cosmic (super)strings. X Siemens, V Mandic, J Creighton, arXiv:astro-ph/0610920Phys. Rev. Lett. 98111101X. Siemens, V. Mandic and J. Creighton, "Gravitational wave stochastic background from cosmic (super)strings," Phys. Rev. Lett. 98, 111101 (2007) [arXiv:astro-ph/0610920];
Gravitational wave bursts from cosmic (super)strings: Quantitative analysis and constraints. X Siemens, J Creighton, I Maor, S Ray Majumder, K Cannon, J Read, arXiv:gr-qc/0603115Phys. Rev. D. 73105001X. Siemens, J. Creighton, I. Maor, S. Ray Majumder, K. Cannon and J. Read, "Gravitational wave bursts from cosmic (super)strings: Quantitative analysis and constraints," Phys. Rev. D 73, 105001 (2006) [arXiv:gr-qc/0603115].
Gravitational waves emitted from infinite strings. M Sakellariadou, Phys. Rev. D. 42354Erratum-ibid. D 43, 4150 (1991)M. Sakellariadou, "Gravitational waves emitted from infinite strings," Phys. Rev. D 42, 354 (1990) [Erratum-ibid. D 43, 4150 (1991)].
Gravitational radiation from kinky infinite strings. M Hindmarsh, Phys. Lett. B. 25128M. Hindmarsh, "Gravitational radiation from kinky infinite strings," Phys. Lett. B 251, 28 (1990).
Gravitational radiation and the small-scale structure of cosmic strings. X Siemens, K D Olum, arXiv:gr-qc/0104085Nucl. Phys. B. 611367Erratum-ibid. BX. Siemens and K. D. Olum, "Gravitational radiation and the small-scale structure of cosmic strings," Nucl. Phys. B 611, 125 (2001) [Erratum-ibid. B 645, 367 (2002)] [arXiv:gr-qc/0104085].
Gravitation and cosmology: principles and applications of the general theory of relativity. S Weinberg, WileyNew YorkS. Weinberg, "Gravitation and cosmology: principles and applications of the general theory of relativity", Wiley, New York, 1972.
On the size of the smallest scales in cosmic string networks. X Siemens, K D Olum, A Vilenkin, arXiv:gr-qc/0203006Phys. Rev. D. 6643501X. Siemens, K. D. Olum and A. Vilenkin, "On the size of the smallest scales in cosmic string networks," Phys. Rev. D 66, 043501 (2002) [arXiv:gr-qc/0203006].
Cosmic string structure at the gravitational radiation scale. J Polchinski, J V Rocha, arXiv:gr-qc/0702055Phys. Rev. D. 75123503J. Polchinski and J. V. Rocha, "Cosmic string structure at the gravitational radiation scale," Phys. Rev. D 75, 123503 (2007) [arXiv:gr-qc/0702055].
|
[] |
[] |
[
"John Bechhoefer \nDepartment of Physics\nSimon Fraser University\nV5A 1S6BurnabyB.CCanada\n",
"Brandon Marshall \nDepartment of Physics\nSimon Fraser University\nV5A 1S6BurnabyB.CCanada\n"
] |
[
"Department of Physics\nSimon Fraser University\nV5A 1S6BurnabyB.CCanada",
"Department of Physics\nSimon Fraser University\nV5A 1S6BurnabyB.CCanada"
] |
[] |
DNA replication in Xenopus laevis is extremely reliable, failing to complete before cell division no more than once in 10,000 times; yet replication origins sites are located and initiated stochastically. Using a model based on 1d theories of nucleation and growth and using concepts from extreme-value statistics, we derive the distribution of replication times given a particular initiation function. We show that the experimentally observed initiation strategy for Xenopus laevis meets the reliability constraint and is close to the one that requires the fewest resources of a cell.
|
10.1103/physrevlett.98.098105
|
[
"https://arxiv.org/pdf/q-bio/0611016v1.pdf"
] | 18,707,571 |
q-bio/0611016
|
632aacc51024c3f429f7a00e7f57e1697c0577a6
|
4 Nov 2006
John Bechhoefer
Department of Physics
Simon Fraser University
V5A 1S6BurnabyB.CCanada
Brandon Marshall
Department of Physics
Simon Fraser University
V5A 1S6BurnabyB.CCanada
4 Nov 2006(Dated: November 20, 2013)How Xenopus laevis replicates DNA reliably even though its origins of replication are located and initiated stochastically
DNA replication in Xenopus laevis is extremely reliable, failing to complete before cell division no more than once in 10,000 times; yet replication origins sites are located and initiated stochastically. Using a model based on 1d theories of nucleation and growth and using concepts from extreme-value statistics, we derive the distribution of replication times given a particular initiation function. We show that the experimentally observed initiation strategy for Xenopus laevis meets the reliability constraint and is close to the one that requires the fewest resources of a cell.
PACS numbers: 87. 15.Aa, 87.14.Gg, 87.17.Ee, 87.15.Ya DNA replication is one of the defining processes of living systems, and evolution has accordingly selected for highly reliable replication mechanisms. The South African clawed frog Xenopus laevis is an organism often used to study replication in eukaryotes [1]. The replication of its embryonic cells is particularly interesting, as it corresponds to a "stochastic limit," where the placement and initiation of the sites where DNA replication begins ("replication origins") show significant stochasticity [2]. As with humans, the Xenopus genome contains approximately three billion bases [3]. Just after fertilization, cells divide for twelve generations with an abbreviated cell cycle that is as short as 25 min. (at 20 • C). The cell cycle is divided into an "S phase" of about 20 min., when DNA is replicated, and a mitosis phase of about 5 min., when chromosomes separate and the cell divides [2]. In order to replicate so many bases in so little time, the cell initiates DNA replication at many [∼ O(10 5 )] origins. For these embryonic cells, in contrast to the situation for fully developed somatic cells, there is no sequence dependence to the location of replication origins [2]. In addition, each origin initiates stochastically, with no pre-determined time of initiation. The stochasticity in the location and initiation of replication origins leads to a potential difficulty: the typical time for replication is about 20 min., but the maximum allowable time is only 25 min. In particular, embryonic cells lack the efficient checkpoint mechanisms [4] that somatic cells have to pause the cell cycle to allow for unusually slow replication. The cell must replicate by the time it divides, or die. But empirically, such a "mitotic catastrophe" [5] is rare, 10 4 replications [6]. How can one reconcile the variations in S-phase duration due to the stochastic placement and initiation of origins with the high reliability of replication?
In the biological literature the above is known as the "random-completion problem" [3] and has been an unsettled question for over twenty years [4,7,8]. In its simplest form, randomly placed origins imply an exponential distribution of origin separations and, hence, a small number of very large gaps that take a long time to replicate. Two approaches to a solution have been advanced. The first notes evidence that the spacing of origins is not completely random and that any regularity in the spacing of origins will tend to suppress large gaps [3,9]. However, in isolation, such a scenario is fragile: if a single origin fails to initiate, it will create a much larger gap than exists usually. The second approach draws on a recent experimental result that origins initiate throughout S phase and, indeed, that the rate of initiation of origins, I(t) (initiations per time per length of unreplicated genome), increases significantly as S phase proceeds [10,11,12]. Intuitively, initiating origins throughout S phase allows the cell to "fill in gaps" and avoid unusually long delays.
In this Letter, we first calculate, following theories of nucleation and growth in one dimension [13,14], the distribution of replication times ρ_rep(t) given an initiation function I(t) and a constant "fork velocity" v describing the symmetric growth of replication domains. We find that an increasing I(t) can insure replication at the required level of reliability, even in the worst case of completely random origin spacing. We then show that the specific I(t) observed in in vitro experiments is close to an optimal I(t) that minimizes the amount of cellular replication machinery (polymerases, helicases, etc.) that a cell is required to supply. Our derivation of ρ_rep uses a model inspired by the Kolmogorov-Johnson-Mehl-Avrami theory of crystallization kinetics [15], which is a stochastic model with three elements: nucleation (initiation) of ordered (replicated) domains; symmetric growth of these domains; and coalescence of domains that grow into each other. (See Fig. 1.) Using such a model, we showed that the fraction f of DNA replicated on an infinite domain at a time t after the start of S phase is given by [16]
f(t) = 1 - e^{-2vh(t)}, \qquad (1)
where h(t) = \int_0^t g(t')\,dt', g(t) = \int_0^t I(t')\,dt', and I(t) is the initiation function (≥ 0). Here, v is the fork velocity, and f(t) typically has a sigmoidal shape. Equation 1 predicts that it will take infinite time to replicate all the DNA (f = 1); but obviously, the replication time should be finite on a finite-length genome. Because the location and time of initiation of origins is stochastic, the time to finish replication will also be a stochastic process.
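For a power-law initiation function I(t) = I_n t^n the integrals in Eq. (1) are elementary, g(t) = I_n t^{n+1}/(n+1) and h(t) = I_n t^{n+2}/[(n+1)(n+2)]; a short sketch evaluating the replicated fraction (the amplitude, exponent and fork velocity below are assumed, illustrative values rather than the fitted experimental ones):

```python
import numpy as np

def f_replicated(t, I_n, n, v):
    """Replicated fraction from Eq. (1) for I(t) = I_n * t**n."""
    h = I_n * t**(n + 2) / ((n + 1) * (n + 2))
    return 1.0 - np.exp(-2.0 * v * h)

v = 0.6        # fork velocity, kb/min (assumed)
n = 2          # quadratic initiation
I_n = 1e-5     # initiations / (kb * min^(n+1)) (assumed amplitude)
for t in (10, 20, 30, 38, 45):
    print(t, f_replicated(t, I_n, n, v))
```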
In order to calculate the distribution of replication times ρ_rep(t), we first note that, except for edge effects, there is a one-to-one mapping from replication origins to coalescences of replication domains. (See Fig. 1.) Because the evolution of domains is deterministic once the origin has initiated, one can derive the distribution of coalescence times ρ_c(t) from the initiation function I(t). In [16], we derived the density of non-replicated domains ("holes") of size x at time t to be n_h(x, t) = g^2(t)\, \exp[-g(t)x - 2vh(t)]. Since a coalescence event is equivalent to a hole of zero size (x = 0), we can write the normalized distribution ρ_c(t) as
\rho_c(t) = \frac{2vL}{N_o}\, g^2(t)\, e^{-2vh(t)}, \qquad (2)
where N o is the total number of origins along a genome of length L initiated throughout S phase. As Fig. 1 shows, the time to complete replication corresponds to the last coalescence event. Since there are N o coalescences, the problem of determining the typical time of the last coalescence is equivalent to asking, "Drawing N o coalescences from a distribution ρ c (t), what is the largest time one expects to occur?" Such questions are the subject of the field of extreme-value statistics [17,18], where an analog to the central-limit theorem holds: given a parent distribution whose maximum value is unbounded and whose tail decays asymptotically at least as fast as an exponential (conditions satisfied here), the maximum value drawn in N o trials will, for N o large, tend to a Gumbel distribution,
ρ_G(τ) = (1/β) exp[−τ − exp(−τ)], where the scaled time τ = (t − t*)/β, with t* the mode of the distribution and β its width [17]. An elementary calculation [17] shows that for Eq. 2, the width β is given by [2vg(t*)]^{-1} and the mode t* by
F_c(t^*) = 1 - 1/N_o, \qquad (3)
where F_c(t) = \int_0^t \rho_c(t')\,dt' is the cumulative probability distribution function (CDF) of the probability distribution function (PDF) ρ_c(t). From Eq. 2, the CDF is, asymptotically for large t, given by
F_c(t) = 1 - \frac{L\, g(t)\, e^{-2vh(t)}}{N_o}. \qquad (4)
Equation 4 is derived by integrating ρ_c(t) by parts and dropping sub-dominant terms and, with Eq. 3, leads to a transcendental equation for the magnitude of I(t).
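Concretely, combining Eqs. (3) and (4) for a power-law I(t) gives the condition L g(t*) e^{-2vh(t*)} = 1, which fixes the amplitude I_n once t*, v and L are chosen; a sketch using a root finder (the genome length, fork speed and target mode are assumed round numbers):

```python
import numpy as np
from scipy.optimize import brentq

# Assumed numbers: genome length in kb, fork speed in kb/min, target mode t*.
L_genome = 3.0e6
v = 0.6
t_star = 38.0
n = 2                                                     # I(t) = I_n * t**n

A = L_genome * t_star**(n + 1) / (n + 1)                  # L * g(t*) / I_n
B = 2.0 * v * t_star**(n + 2) / ((n + 1) * (n + 2))       # 2 v h(t*) / I_n

def mode_condition(I_n):
    # Mode of the completion-time distribution sits at t* when L*g*exp(-2vh) = 1.
    return I_n * A * np.exp(-I_n * B) - 1.0

# Two roots exist; the physical one is the larger, where the exponential tail
# behind Eq. (4) dominates.  Bracket it between the maximum (at I_n = 1/B) and 1.
I_n = brentq(mode_condition, 1.0 / B, 1.0)
print(I_n)
```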
In Fig. 2, we show the results of Monte-Carlo simulations of the replication-time distribution for various I(t) functions. In all cases, we adjusted the amplitude of I(t) so that the mode of ρ rep (t) is at t * = 38 min., which corresponds to the mode deduced from the I(t) measured in the in vitro experiments. (For the in vivo experiments, t * ∼ 20 min. [4].) The solid lines are fits to a Gumbel distribution. The parameters deduced (the βs) are consistent with the values predicted in the paragraph above. The striking implication of Fig. 2 is that one can vary the width of the replication-time distribution ρ c by choosing an initiation function I(t) that increases throughout S phase. Initiating all the origins at the beginning of S phase [I(t) = I δ δ(t)] leads to the broadest possible distribution. Exploring power-law initiation functions I(t) = I n t n (with I n fixed by the t * constraint), we see that as one progresses from constant (n = 0) to linear (n = 1) to quadratic (n = 2) initiation functions, the width of ρ c is progressively reduced. The replication-time distribution can also be calculated using the experimental I(t) [12] (not shown). The experimental I(t) is close to a quadratic curve and its distribution is indistinguishable from the n = 2 case.
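A compact version of such a Monte Carlo is sketched below. It is a simplified, lattice-based implementation of the nucleation-growth-coalescence model (assumptions: a finite 1d segment, discrete time steps, initiation of unreplicated sites at rate I(t) per length per time, and forks advancing one lattice site per step); it returns the completion time of a single trial:

```python
import numpy as np

rng = np.random.default_rng(0)

def replication_time(L=2000.0, v=0.6, dt=0.05, I=lambda t: 1e-5 * t**2):
    """One stochastic replication trial on a segment of length L (kb).

    Sites of size v*dt are either replicated (True) or not; at every step each
    unreplicated site may fire as an origin with probability I(t)*dx*dt, and
    existing replicated domains grow by one site at each end (fork speed v).
    """
    dx = v * dt                          # lattice spacing: forks move one site per step
    n_sites = int(L / dx)
    replicated = np.zeros(n_sites, dtype=bool)
    t = 0.0
    while not replicated.all():
        t += dt
        grow = np.zeros_like(replicated)         # fork propagation from neighbours
        grow[1:] |= replicated[:-1]
        grow[:-1] |= replicated[1:]
        fire = rng.random(n_sites) < I(t) * dx * dt   # stochastic initiation
        replicated |= grow | fire
    return t

times = [replication_time() for _ in range(20)]
print(np.mean(times), np.std(times))
```

Repeating the trial many times and histogramming the completion times gives an empirical estimate of ρ_rep(t) that can be fit to the Gumbel form.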
It would thus appear that the cell can have arbitrarily reliable replication (an arbitrarily narrow distribution ρ rep ) simply by arranging for its initiation curve to increase fast enough. In fact, the situation is more subtle.
Even when all origins are initiated at the beginning of S phase, it is possible to replicate with arbitrary reliability simply by having enough origins. While it is true that there will be a few unusually long gaps that will set the replication time, these gaps may be reduced arbitrarily if one starts with enough replication origins. We thus propose an alternate way of viewing the random-completion problem: Instead of fixing the number of origins and looking at the replication times for different strategies, we fix a time t * * at which either a cell has finished replication or it dies. Since evolution selects on the basis of mortality, the replication parameters (I(t), v, the number of potential origins, etc.) should be a consequence of this selection, and not vice versa. Choosing t * * to be the cellcycle time (25) min. and allowing a failure rate of 10 −4 , we calculate, for various forms of I(t), the replication parameters required to meet the reliability constraint. (Our results depend only logarithmically on the failure rate.)
In order to compare with experiment, we must confront a further problem. While the in vivo replication time is estimated to be 20 min., the in vitro experiments require nearly twice this time to replicate. We must thus make additional assumptions to translate the in vitro experimental results to the in vivo situation. In fact, we can do this with one simple assumption. In earlier studies, it was assumed that the replication fork velocity v is constant throughout S phase. The original analysis of the in vitro Xenopus data thus estimated an average fork velocity of 0.6 kb/min. More recent work [19] has shown that the fork velocity starts at 1.1 kb/min. at the beginning of S phase and then decreases monotonically to 0.3 kb/min. at the end of S phase. We speculate that the longer time for the in vitro S phase is caused by this reduction in fork velocity -perhaps because some protein concentrations are not kept constant. With this single modificationv = 1.1 rather than 0.6 kb/min. -we shall find results consistent with the in vivo observations. In Fig. 3, we show results of simulations that constrain the replications to finish by t * * = 25 min., allowing a failure rate of 10 −4 . We see that it is indeed possible to find amplitudes for I(t) that satisfy the reliability constraint.
While it is always possible to choose an amplitude (e.g., I δ or I n ) to satisfy the reliability constraint, each choice will have definite implications for the amount of cell resources that are required for its implementation. One may then ask whether there is a "best" strategy for initiating origins (while satisfying the reliability constraint). If so, how close is the experimental I(t) to the optimum?
To answer such questions, one must first define a measure for cell resources. We have considered two possibilities among many that can be imagined: the number of origins initiated throughout S phase and the maximum number of replication forks required. The first choice would be relevant if the origin-initiation proteins were limited. The second would be relevant if the number of polymerases (or other parts of the replication machinery) that needed to be active at one time limited the rate of replication [20]. We find qualitatively the same results in both cases [21]. Intuitively, there should be an optimum for the consumption of resources. Within the fork-density scenario, initiating all origins at the beginning leads to a high initial fork density. Holding off initiating until later in S phase helps by allowing the machinery of replication forks to be repeatedly reused. If the cell waits too long to begin replication, then it is essentially shortening S phase, which requires many origins (and forks). Thus, one expects an optimum. We have explored this by calculating the maximum number of forks, n max , required in several cases. First, we calculated it for delta-function initiation (n max = I delta ). Next, we numerically calculate n max for the power-law case. Finally, we use the calculus of variations to calculate the optimal I(t), denoted I opt (t) that minimizes the maximum number of required forks, subject to the reliability constraint. To calculate I opt , we note that the number of replication forks is given by [16]. One can extract the maximum fork density using a technique familiar from control theory (H ∞ metric) [22]. We thus write
n(t) = \dot{f}/v = 2g(t)\, e^{-2vh(t)},
n_{max}[I(t)] = \lim_{p\to\infty} \left[ \int_0^\infty \left( 2g(t)\, e^{-2vh(t)} \right)^p dt \right]^{1/p}. \qquad (5)
The associated Euler-Lagrange equation turns out to be independent of the exponent p. We find
\ddot{h}(t) = 2v\, \dot{h}^2(t), \qquad (6)
where we recall thatḧ(t) = I(t) andḣ(t) = g(t). Solving Eq. 6 subject to the boundary condition h(0) = 0 gives
I_{opt}(t) = \frac{1}{2vt^*}\left[ \delta(t) + \frac{1}{t^*}\, \frac{1}{(1 - t/t^*)^2} \right]. \qquad (7)
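A sketch comparing the maximum fork density n(t) = 2g(t)e^{-2vh(t)} for power-law initiation with the constant optimal density 1/(vt*) of Eq. (7). The amplitudes I_n below are fixed by the simple mode condition used earlier rather than by the paper's failure-rate constraint, so the numbers only illustrate the evaluation and are not a reproduction of Fig. 4:

```python
import numpy as np

# Assumed parameters; amplitudes I_n come from the mode-condition sketch above
# (a simplification of the paper's reliability constraint), rounded.
v, t_star = 0.6, 38.0

def n_max_powerlaw(n, I_n):
    t = np.linspace(1e-3, t_star, 20000)
    g = I_n * t**(n + 1) / (n + 1)
    h = I_n * t**(n + 2) / ((n + 1) * (n + 2))
    return np.max(2.0 * g * np.exp(-2.0 * v * h))   # fork density n(t) = 2 g e^{-2vh}

print("optimal (Eq. 7):", 1.0 / (v * t_star))
for n, I_n in [(0, 1.7e-2), (1, 1.4e-3), (2, 7.3e-5)]:
    print("n =", n, " n_max =", n_max_powerlaw(n, I_n))
```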
Equation 7 implies that the fork density n = 1/vt * is constant throughout S phase. In Fig. 4, we summarize the results of these investigations. The dashed line at the top gives the fork density required to make the delta-function I(t) meet the reliability constraint. The solid curve represents the fork density required for power-law initiations. As we anticipated, the curve has a minimum (between n = 1 and 2). The fine-dashed line, which lies close to the minimum value of the power-law case, is the experimental maximum fork density [12]. Finally, the broad-dashed line gives the optimal fork density (1/vt * ).
Although the optimal fork density is lower than that observed, it clearly does not represent a physiologically possible case. It is unrealistic to expect the perfect coordination implied by the delta function at the beginning of S phase. More serious, at the end of S phase, Eq. 7 implies that the rate of initiation diverges, along with the total number of activated origins. Still, we note that the qualitative shape of the curve shares the quadratically increasing form of the experimental result. More generally, it would be surprising if the initiation program were identical to the optimum (even if one were to limit the space of functions to those that are physiologically achievable). We note that the minimum is clearly broad: there is little difference in required fork density between a linear and a quadratic I(t). The main point is that there are some strategies -most notably the initiation of all origins at the beginning of S phase -that are clearly bad, and these differ from the observed I(t).
In conclusion, we have calculated the distribution of replication times ρ rep for the stochastic limit of replication, where origins are placed randomly and initiate stochastically at a rate I(t). Choosing an I(t) that increases with time narrows ρ rep and increases the reliability of replication. Using the known mortality rates and length of the cell cycle, we gave a quantitative interpretation to the random-completion problem and showed that one can meet the reliability constraint using an arbitrary I(t). Different I(t) functions demand different resources from the cell. Measuring this resource use by the maximum required fork density, we show that the ex-perimentally observed form of I(t) is close to optimum. In the future, it would be interesting to consider the effects of any regularity in origin spacing. While we have shown that reliable replication may be achieved even in the worst case of random spacing of origins, there is evidence for some regularity. It would also be interesting to measure the replication-time distribution directly. While determining the time at which the last base (of three billion) replicates is unrealistic, one might be able to determine when a given fraction (e.g., 90 or 95%) of origins have replicated. It is straightforward to generalize the methods presented here to determine the distribution of times required to reach a given replication fraction.
FIG. 1: Schematic of DNA replication model. Space-time diagram showing multiple origins (filled circles), each expanding symmetrically at constant velocity. Domains coalesce when they meet (open circles).

FIG. 2: Replication-time distribution function, fixing the mode to be t* = 38 min. Markers are results from Monte Carlo simulations (3000 trials per simulation); solid lines are fits to the Gumbel distribution.

FIG. 3: Replication-time distribution function, fixing the mortality rate at t** = 25 min. to be 10^−4. (The area to the right of the dashed line of each probability distribution function is 10^−4.) Markers are results from Monte Carlo simulations (20000 trials per simulation); solid lines are fits to the Gumbel distribution.

FIG. 4: Maximum required fork density, for different replication schemes.
We thank O. Hyrien, N. Rhind, R. Harland, J. Herrick, and S. Jun for helpful discussions. This work was supported by NSERC (Canada) and a visiting professorship at the Univ. de Rennes, 1 (France). JB thanks Z. Gueroui for the invitation to Rennes.
J. J. Blow, EMBO J. 20, 3293 (2001).
O. Hyrien and M. Méchali, EMBO J. 12, 4511 (1993).
J. J. Blow, P. J. Gillespie, D. Francis, and D. A. Jackson, J. Cell Biol. 152, 15 (2001).
O. Hyrien, K. Marheineke, and A. Goldar, BioEssays 25, 116 (2003).
T. A. Prokhorova, K. Mowrer, C. H. Gilbert, and J. C. Walter, Proc. Natl. Acad. Sci. 100, 13241 (2003).
C. Hensey and J. Gautier, Dev. Biol. 203, 36 (1998).
R. A. Laskey, J. Embryol. Exp. Morphol. Suppl. 89, 285 (1985).
N. Rhind, Nat. Cell. Biol. 8(12), in press.
S. Jun, J. Herrick, A. Bensimon, and J. Bechhoefer, Cell Cycle 3, 223 (2004).
I. Lucas, M. Chevrier-Miller, J. M. Sogo, and O. Hyrien, J. Mol. Biol. 296, 769 (2000).
J. Herrick, P. Stanislawski, O. Hyrien, and A. Bensimon, J. Mol. Biol. 300, 1133 (2000).
J. Herrick, S. Jun, J. Bechhoefer, and A. Bensimon, J. Mol. Biol. 320, 741 (2002).
K. Sekimoto, Physica 125A, 261 (1984); 135A, 328 (1986); Int. J. Mod. Phys. B 5, 1843 (1991).
E. Ben-Naim and P. L. Krapivsky, Phys. Rev. E 54, 3562 (1996).
J. W. Christian, The Theory of Phase Transformations in Metals and Alloys, Part I: Equilibrium and General Kinetic Theory, 3rd ed. (Pergamon Press, Oxford, 2002).
S. Jun, H. Zhang, and J. Bechhoefer, Phys. Rev. E 71, 011908 (2005).
E. J. Gumbel, Statistics of Extremes (Columbia Univ. Press, NY, 1958).
S. Kotz and S. Nadarajah, Extreme Value Distributions (Imperial College Press, London, 2000).
K. Marheineke and O. Hyrien, J. Biol. Chem. 279, 28071 (2004).
N. Rhind, private communication.
B. Marshall and J. Bechhoefer, unpublished.
S. Skogestad and I. Postlethwaite, Multivariable Feedback Control, 2nd ed. (J. Wiley, Chichester, 2005). See p. 60.
|
[] |
[
"Where-When-What: the General Relativity of Space-Time-Property",
"Where-When-What: the General Relativity of Space-Time-Property"
] |
[
"Robert Delbourgo [email protected] ",
"Paul D Stack [email protected] ",
"\nSchool of Mathematics and Physics\nUniversity of Tasmania\nLocked Bag 37\n",
"\nGPO Hobart\n7001, 7001TasmaniaAUSTRALIA\n"
] |
[
"School of Mathematics and Physics\nUniversity of Tasmania\nLocked Bag 37",
"GPO Hobart\n7001, 7001TasmaniaAUSTRALIA"
] |
[] |
We develop the general relativity of extended spacetime-property for describing events including their properties. The anticommuting nature of property coordinates, augmenting space-time (x, t), allows for the natural emergence of generations and for the simple incorporation of gauge fields in the spacetime-property sector. With one electric property this results in a geometrical unification of gravity and electromagnetism, leading to a Maxwell-Einstein Lagrangian plus a cosmological term. Addition of one neutrinic and three chromic properties should lead to unification of gravity with electroweak and strong interactions.
|
10.1142/s0217751x14500237
|
[
"https://arxiv.org/pdf/1401.1238v1.pdf"
] | 119,106,410 |
1401.1238
|
41423294e7d5a7906c68c1d496afd634dcd28eea
|
Where-When-What: the General Relativity of Space-Time-Property
6 Jan 2014 January 8, 2014
Robert Delbourgo [email protected]
Paul D Stack [email protected]
School of Mathematics and Physics
University of Tasmania
Locked Bag 37
GPO Hobart
7001, 7001TasmaniaAUSTRALIA
Where-When-What: the General Relativity of Space-Time-Property
6 Jan 2014 January 8, 20141:16 WSPC/INSTRUCTION FILE EventsGR International Journal of Modern Physics A c World Scientific Publishing Company
We develop the general relativity of extended spacetime-property for describing events including their properties. The anticommuting nature of property coordinates, augmenting space-time (x, t), allows for the natural emergence of generations and for the simple incorporation of gauge fields in the spacetime-property sector. With one electric property this results in a geometrical unification of gravity and electromagnetism, leading to a Maxwell-Einstein Lagrangian plus a cosmological term. Addition of one neutrinic and three chromic properties should lead to unification of gravity with electroweak and strong interactions.
A full description of events
In a static universe nothing happens. All systems continue inertially, never change and never interact; that sort of universe is the ultimate non-event. Even speaking of an 'observer' is a contradiction in terms, because every observer would be incommunicado and unaware of anything and everything. So, more to the point, a static universe is purely hypothetical and logically inconceivable. On the contrary the real universe is in a state of flux. It is punctuated by a succession of events from which we gain the notion of space and time, as the scenario unfolds. Historically the spacetime arena has held centre stage ever since the ideas of relativity took hold about a hundred years ago and we are accustomed to characterising an event by its spacetime location, even to the extent of describing events geometrically. Indeed the geometry of curved spacetime has revolutionised our ideas about gravity ever since Einstein's development of general relativity.
When an event occurs, it amounts to a change in circumstances, whereby an object alters its motion and possibly character as it engages with others. (It is the impact of these changes in spacetime which underpins the process of observation and the Heisenberg uncertainty principle.) Thus when a photon is emitted and reabsorbed by two charged objects we interpret this as a succession of two events resulting in an electric interaction between the charged objects. And since quantum field theory came into being, we recognise this through a trilinear interaction between the charged object and a photon with the conservation of total energy-momentum at the event vertex, which is ensured by taking an integral over all spacetime of the interacting fields. So much is well understood. However saying that an event has happened there and then does not fully specify the event; we must in addition specify what exactly has happened: what sort of transaction has taken place and what properties may have been exchanged. For instance when a proton emits a neutron and charged pion virtually we have to add that information and then the event becomes fully explicated. This is normally done by introducing quantum fields with particular labels and interacting in a manner that usually conserves some quantum number, such as isospin for the nuclear example.
As physicists, we are well aware that particle properties do proliferate but they can be systematised using group theory of some "internal group", resulting in certain types of representations of various Lie algebras. These can be constructed from some fundamental representation as a result of some basic dynamics connected with primitive constituents like quarks and leptons. As new particles are discovered it occasionally becomes necessary to enlarge the group to accommodate features or properties that do not fall comfortably within the conventional picture; this is how progress in quantum field theory has been achieved and it has led to its ultimate incarnation as the standard model, wherein U(1)×SU(2)×SU(3) reigns supreme. Nevertheless some issues remain unresolved, such as the generation problem, where repetitions of particle families lead to further concepts about "horizontal groups" for systematising them -although what the correct group and representations are is still unsettled. Most worrying of all is the number of parameters needed to characterise the interactions and masses of the (light) three generations in the standard model, though it must be admitted that those parameters are sufficient to describe a vast number of experimental facts. This has stimulated research into looking for some kind of grand unification for reducing the number of parameters and it has spawned a large number of interesting new concepts. Supersymmetry and superstrings, based on enlarged dimensions, are the foremost amongst these as they automatically solve the fine-tuning problem and naturally incorporate gravity, but we should not discount other ideas such as technicolour, preons, non-commuting spacetime variables, deformation groups, and so on.
Underlying all these extensions, the question arises: how to enumerate properties and characteristics of events at a fundamental particulate level. Traditionally one specifies the associated quantum fields by attaching as many labels to them as needed to describe the properties and by ensuring that the events due to their mutual interactions conserve whatever properties remain intact overall; in other words adapting to what the experiments dictate. This approach is predictive to the extent that if there are any missing components of the group representations, then they must eventually show up experimentally and conform to the overarching symmetry. And if the interactions do not preserve the expected symmetries, they are nowadays thought to be due to symmetry breaking effects coming from a yet unknown cause, possibly spontaneously generated.
In an effort to characterise the nature of an event, we have, over a number of years, suggested that it may be possible to specify attributes or 'properties' of particles by connecting them with a few anticommuting Lorentz scalar coordinates (and their conjugates or opposites). An event can thereby be fully described by a conglomerate there-then-what label; by analogy to momentum conservation, the conservation of overall property, such as charge, is guaranteed by integrating appropriately over all property space. To that end we have suggested that the spacetime manifold with coordinates x, should be enlarged to a supermanifold 1,2,3,4,5 by attaching a few complex anticommuting ζ coordinates, with fields being regarded as functions of X = (x, ζ,ζ). The anticommuting character of fermionic quantum fields is nothing new and the use of Grassmann variables has featured in the BRST quantisation of gauge theories, and more especially in the invention of supersymmetry -although the latter makes them Lorentz spinor rather than scalar. Once a property is ascribed it cannot be doubly ascribed, corresponding to the fact the square of a given Grassmann variable vanishes. But because the conjugate property is available, one can built up property scalars that may be multiplied with other properties, and so on. This is how one might conceive of families of systems with similar net attributes 6,7 . Furthermore the anticommuting character of a fixed number of properties means that we can never encounter an infinite number of states, unlike bosonic extra dimensional spaces modelled along Kaluza-Klein lines. The mathematics of graded spaces is quite well understood now and we will take full advantage of this in what follows.
Previous investigations have shown that with a minimum of five independent ζ one can readily accommodate all the known particles and their generations. We might tentatively call these properties: charge or electricity, neutrinicity and chromicity. The use of five is no accident, being based solidly on economical grand unified groups such as SU(5) and SO (10); what is more interesting is that the correct representations of those algebras come out automatically. [In our case Sp(10) is the overarching internal group.] Since addition of Grassmann variables has the mathematical effect of reducing the net 'dimension' of the space, there is the tantalising prospect of a universe with zero total dimension. We have elaborated on some consequences of such a scheme 8,9 , including the appearance of a few new particles and the possibility of reducing the number of parameters appearing in the standard model, because we have just nine Higgs fields but only one coupling constant attached to all known fermions 10 .
The purpose of the present paper is to describe the general relativity of spacetime-property along the lines originally devised by Einstein. Our aim is quite modest: we wish to see if one can unite electromagnetism with general relativity geometrically in a way which differs from Klein-Kaluza and (spinorial) supergravity in as much as the extended coordinates are scalar but anticommuting. Having established that, it means one may contemplate generalizations which include other forces, without producing infinite towers of excited states. This paper should be seen as a first step in that direction; further avenues for research are mentioned in the conclusions and include Higgs fields and possibly ghost fields associated with gauge-fixing at the quantized level. The extended metric will consist a x − x sector, a ζ − x sector and a ζ −ζ sector; the space-space sector involves the gravitational field as well as some extraζζ terms, the space-property sector brings in the gauge fields connected with property propagation, and the curved property-property sector can be the source of the cosmological term, as it happens; it is no surprise that the communicators of property or gauge fields should reside where they are, but the association of property curvature with a cosmological term is perhaps more intriguing. It is important to state that fermion fields, such as the gravitino, have no place in the extended metric as they would cause a conflict with Lorentz invariance. Because the fermionic superfield Ψ α carries spinor indices, it must be studied separately and in that regard our approach differs radically from conventional spinorial supersymmetry; we give a preliminary treatment of fermions towards the end of the article.
As we are dealing with a graded manifold or supermanifold, it is essential to make sure that the super-coordinate transformations carry the correct commutation factors. This is carefully explained in Sections 2 to 5, where our conventions are established and summarized to make the paper self-contained. A good notation is of course vital 11,12 and the ordinary general relativity convention with its placement of indices conflicts with conventional differentiation as a left operation (as a rule for the average reader); we have had to compromise on that as some traditional ideas are not easily overthrown. Asorey and Lavrov 13 have written a nice exposition of these ideas but they have instead chosen to take right derivatives, which makes for marginally simpler formulae but does not conform with traditional ideas about left differentiation, which we have religiously adhered to. In Section 3 we delineate transformation properties of supertensors and the supermetric. Then we pay attention to the definition of covariant derivatives with particular application to the super-Riemann tensor R; its supersymmetry properties are obtained and the super-Ricci and superscalar curvature are derived. The Bianchi identities follow in Section 5. For the remainder of the paper we consider the case of just one property, such as charge, and in Section 6 we write down the most general metric, including the electromagnetic field. Unsurprisingly we show that gauge transformations can be construed as supercoordinate transformations associated with phase changes on ζ. We then find the superdeterminant of this metric in Section 7, as well as the Palatini form of the superscalar curvature. In Section 8 we go on to evaluate the super-Riemann tensor components and determine the super-Ricci tensor and full supercalar curvature. This leads to the field equations and it is rather pleasing that the Einstein-Maxwell Lagrangian emerges very naturally, together with a cosmological contribution. Further, the electromagnetic stress tensor presents itself as a purely geometrical addition to the extended Einstein tensor. All such calculations are greatly assisted by an algebraic computer program for handling anticommuting variables as well as ordinary ungraded ones, which has been developed by one of us (PDS) using Mathematica. A penultimate Section 9 details the inclusion of matter fields, but our treatment of fermions there is to be regarded as preliminary at this stage. The conclusions close the paper and an Appendix collates a list of generalised Christoffel symbols, curvature components and super-vielbeins that are needed at intermediate steps in the main text.
Extended transformations and notation
The addition of extra anticommutative coordinates to space time results in a graded manifold, where the standard spacetime is even and the property sector is odd. The notation used in this work will be to define uppercase Roman indices (M , N , L, etc) to run over all the dimensions of spacetime-property and hence have mixed grading. Lower case roman indices (m, n, l, etc) will correspond to even graded spacetime, and Greek characters (µ, ν, λ, etc) will correspond to the odd graded property sector. The grading of an index given by [M ] a , is [m] = 0 and [µ] = 1. Later on we reserve early letters of the alphabet (a,α, etc.) to signify flat or tangent space.
Our starting point is the transformation properties of contravariant and covariant vectors; from these we can build up how a general tensor should transform. We will make use of Einstein summation convention but it has to be done carefully. We pick a convention of always summing a contravariant index followed immediately by a covariant index (up then down). This results in contravariant and covariant vectors transforming as follows:
V'^M = V^N ∂X'^M/∂X^N ,    V'_M = (∂X^N/∂X'^M) V_N .   (1)
The scalar V M V M then correctly transforms into itself:
V'^M V'_M = V^N (∂X'^M/∂X^N)(∂X^L/∂X'^M) V_L = V^N δ_N{}^L V_L = V^N V_N ,   (2)
since the (left) chain rule given by
(∂X'^M/∂X^N)(∂X^L/∂X'^M) = δ_N{}^L .   (3)
From (1) one can build up the transformation properties of any tensor, by taking it to behave like a corresponding product of vectors. For example a rank two covariant tensor T MN has to transform like V M V N .
V'_M V'_N = (∂X^R/∂X'^M) V_R (∂X^S/∂X'^N) V_S = (−1)^{[R]([S]+[N])} (∂X^R/∂X'^M)(∂X^S/∂X'^N) V_R V_S .   (4)
In these manipulations we have adhered to the traditional convention of writing derivatives on the left, so the the sign factor arising in (4) is due to permuting V R through the partial derivative. In this way we find how T MN transforms:
T'_{MN} = (−1)^{[R]([S]+[N])} (∂X^R/∂X'^M)(∂X^S/∂X'^N) T_{RS} .   (5)
Thus in (5) we do not have an immediate, direct up-down summation; the sign factor is introduced to compensate for this. In this manner it is not hard to derive sign factors for any sort of tensor.
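As a worked illustration of this bookkeeping (an example we add here, not one spelled out in the text), consider a mixed tensor T^M{}_N modelled on the product U^M V_N; only the graded permutation rule and the same left-derivative convention are used, so the placement of the derivatives is an assumption carried over from Eq. (4):

U'^M V'_N = ( U^R ∂X'^M/∂X^R ) ( (∂X^S/∂X'^N) V_S )
          = (−1)^{[R]([M]+[R]) + [R]([S]+[N])} (∂X'^M/∂X^R)(∂X^S/∂X'^N) U^R V_S ,

so that, using [R][R] = [R], a mixed tensor picks up the overall factor (−1)^{[R]([M]+[S]+[N]+1)} once both derivatives are pulled to the left of the components.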
On metrics and supertensors
The metric supertensor G MN is chosen to be graded symmetric,
G_{MN} = (−1)^{[M][N]} G_{NM} because it is associated with the generalised spacetime-property separation ds² = dX^N dX^M G_{MN} ,
which is overall bosonic. As with standard general relativity the metric can be used to raise and lower indices; however the direct up-down summation rule must be strictly obeyed. This means that for vectors b :
V^M G_{MN} = V_N ,    G^{MN} V_N = V^M .   (6)
If this order is not followed then the resulting vector (or tensor) will not transform correctly according to the rules in Section 2. When raising or lowering indices of a supertensor an adjoining up-down summation is sometimes impossible; in that event a sign factor like in (4) must again be included to compensate for this, using the same argument. For illustration, consider a tensor T MN whose second index N we wish to raise to get T M N . To work out the sign factor required, look at a product of vectors instead, say T MN = U M V N ; then
T^M{}_N = U^M V_N = U^M G_{NL} V^L = (−1)^{[M]([N]+[L])} G_{NL} T^{ML} .   (7)
The sign factor ensures that (−1)^{[M]([N]+[L])} G_{NL} T^{ML} behaves like T^M{}_N .
This procedure extends to any tensor.
The inverse metric multiplies the covariant metric as follows:
G^{MN} G_{NL} = δ^M{}_L = (−1)^{[M]} δ_L{}^M ,   (8)
(−1)^{[N]} G_{MN} G^{NL} = δ_M{}^L .   (9)
These equations are consistent with each other, (the metric and its inverse being graded symmetric) and with the transformation properties of the singlet δ M L .They are also consistent with Asorey and Lavrov 13 . In particular notice from (8) that the trace operation introduces a negative sign where the fermionic sector is concerned, as is well-known.
Covariant derivatives and the Riemann supertensor
The connection coefficients of standard general relativity in the case of zero torsion are defined to be :
Γ_{mn}{}^k = (g_{lm,n} + g_{ln,m} − g_{mn,l}) g^{lk}/2 .   (10)
We take this as the starting point for our covariant derivative, extending it to a graded manifold by allowing for sign factors.
Γ_{MN}{}^K = [ (−1)^{X_{LMN}} G_{LM,N} + (−1)^{Y_{LNM}} G_{LN,M} − (−1)^{Z_{MNL}} G_{MN,L} ] G^{LK}/2 ,   (11)
where X LMN , Y LN M and Z MN L need to be determined so as to guarantee that the covariant derivative of a covariant vector transforms correctly as a rank 2 covariant tensor. We write the covariant derivative in semicolon notation as
A_{M;N} = (−1)^{W_{MN}} A_{M,N} − Γ_{MN}{}^K A_K ,   (12)
where again W MN is a sign factor to be found. Expanding this out and finding the conditions on the signs so that all second derivatives cancel and the remaining terms transform as a rank 2 covariant tensor we arrive at
A_{M;N} = (−1)^{[M][N]} A_{M,N} − Γ_{MN}{}^K A_K ,   (13)
with
Γ_{MN}{}^K = [ (−1)^{[M][N]+[L]} G_{ML,N} + (−1)^{[L]} G_{NL,M} − (−1)^{[L][M]+[L][N]+[L]} G_{MN,L} ] G^{LK}/2 .   (14)
In a similar manner one may establish that
A^M{}_{;N} = (−1)^{[M][N]} (A^M{}_{,N} + A^L Γ_{LN}{}^M)   (15)
and
T_{LM;N} = (−1)^{[N]([L]+[M])} [ T_{LM,N} − Γ_{NL}{}^K T_{KM} − (−1)^{[L]([M]+[K])} Γ_{NM}{}^K T_{LK} ] .   (16)
The curious factors of (−1)^{[M][N]}, etc., arise from the mismatch between left derivatives clashing with the convention of placing a subscript such as ,M on the right, and we are stuck with this inappropriateness. Anyhow, with these constructions of the covariant derivative, it is pleasing to check that G_{MN;L} vanishes, as expected. And from equations such as (13), (15), (16), one may deduce the rules for covariant derivatives of any supertensor.^c The Riemann curvature tensor arises in the normal way (with suitable sign factors):

(−1)^{[J]} A_J R^J{}_{KLM} = A_{K;L;M} − (−1)^{[L][M]} A_{K;M;L}   (17)

c In this context, the rule for covariant differentiation of a product is: (A^K B^L C^M ..)_{;N} = A^K{}_{;N} (−1)^{[N]([L]+[M]+..)} B^L C^M .. + A^K (−1)^{[N]([M]+..)} B^L{}_{;N} C^M .. + A^K B^L (−1)^{[N](..)} C^M{}_{;N} .. + ... etc.
Carrying out the algebraic manipulations, we obtain

R^J{}_{KLM} = (−1)^{[J]([K]+[L]+[M])} [ (−1)^{[K][L]} Γ_{KM}{}^J{}_{,L} − (−1)^{[K][M]+[L][M]} Γ_{KL}{}^J{}_{,M} + (−1)^{[L][M]} Γ_{KM}{}^R Γ_{RL}{}^J − Γ_{KL}{}^R Γ_{RM}{}^J ] .   (18)
This is the graded version of the standard Riemann curvature tensor. It only remains to work out the fully covariant Riemann curvature tensor if only to check its graded symmetry properties. Thus we lower with the metric,
R_{JKLM} = (−1)^{([I]+[J])([K]+[L]+[M])} R^I{}_{KLM} G_{IJ} ,   (19)
resulting in
R_{JKLM} = (−1)^{[J]([K]+[L]+[M])} [ (−1)^{[K][L]} Γ_{KM}{}^I{}_{,L} − (−1)^{[K][M]+[L][M]} Γ_{KL}{}^I{}_{,M} + (−1)^{[L][M]} Γ_{KM}{}^R Γ_{RL}{}^I − Γ_{KL}{}^R Γ_{RM}{}^I ] G_{IJ} .   (20)
It is then not too hard to discover the expected graded symmetry relations,
R_{KJLM} = −(−1)^{[J][K]} R_{JKLM} ,   (21)
R_{JKML} = −(−1)^{[L][M]} R_{JKLM} ,   (22)
R_{LMJK} = (−1)^{([J]+[K])([L]+[M])} R_{JKLM} .   (23)
Bianchi identities
The Riemann curvature tensor also satisfies the Bianchi identities. the first cyclic identity is readily established from (20)-(23):
(−1)^{[K][M]} R_{JKLM} + (−1)^{[M][L]} R_{JMKL} + (−1)^{[L][K]} R_{JLMK} = 0 .   (24)
The second (differential) Bianchi identity, involving the covariant derivative of the curvature tensor, is most easily uncovered by proceeding to a "local frame" wherein the Christoffel symbol (but not its derivative) vanishes; in that case the tensor reduces to R_{JKLM;N} = (−1)^{[N]([J]+[K]+[L]+[M])} R_{JKLM,N}. With this simplification there emerges the identity

(−1)^{[L][N]} R_{JKLM;N} + (−1)^{[N][M]} R_{JKNL;M} + (−1)^{[M][L]} R_{JKMN;L} = 0 .   (25)

To get the contracted version of the second Bianchi identity, involving the Ricci tensor, we look at

G^{LJ} [ (−1)^{[L][N]} R_{JKLM;N} + (−1)^{[N][M]} R_{JKNL;M} + (−1)^{[M][L]} R_{JKMN;L} ] = 0 .   (26)

This results in

R_{KM;N} − (−1)^{[N][M]} R_{KN;M} + (−1)^{[M][L]+[L][N]+[K][L]} G^{LJ} R_{JKMN;L} = 0 ,   (27)

wherein the Ricci tensor has the graded symmetry R_{KM} = (−1)^{[M][K]} R_{MK}. One last contraction with G^{MK} gives

R_{;N} = 2(−1)^{[M][N]} R^M{}_{N;M} ,   (28)
which can be written in the form G^M{}_{N;M} = 0, where

G^M{}_N = R^M{}_N − δ^M{}_N R/2   (29)
is the graded version of the Einstein tensor. Having established all the necessary equations with the requisite sign factors, we are in a position to tackle a simple but important case, featuring one property, namely charge, and ensuing electromagnetism.
Gauge changes as property transformations
To begin tackling the case of one property, an ansatz for the metric has to be made which incorporates the property coordinates. With everything flat the metric distance in the manifold X = (x, ζ,ζ) is given by
ds² = dX^A dX^B η_{BA} = dx^a dx^b η_{ba} + dζ dζ̄ η_{ζ̄ζ} + dζ̄ dζ η_{ζζ̄} ,
where η_{ζζ̄} = −η_{ζ̄ζ} = ℓ²/2 and η_{ba} is Minkowskian. Notice that we are obliged to introduce a fundamental length ℓ so as to ensure that the separation has the correct physical dimensions of length² because the ζ are being taken as dimensionless.^d This should be construed as the tangent space. We easily spot that it is invariant under Lorentz transformations and global phase transformations on ζ. However it is not invariant under local x-dependent phase transformations and we are obliged to introduce the gauge field to correct for this, as we shall soon show.
To proceed to curved space we follow the standard method of invoking the tetrad formalism, but generalised to a graded space. (The metric is of course a product of appropriate frame vectors E, which provide the curvature.) We have also been guided by the Kaluza-Klein metric: the standard general relativistic metric is to be contained in the spacetime-spacetime sector and gauge fields must reside in the spacetime-property sector, but we have allowed for U(1) invariant property curvature coefficients, denoted by c i . The results below are not as adhoc as they may seem; for now we are just interested in seeing how far one may mimic Klein-Kaluza by using an anticommuting extension to space-time rather than a commuting one. [The various terms which arise in the metric below do not allow for fermion contributions as they would carry a Lorentz spinor index and would conflict with Lorentz invariance.] We envisage that the c i are expectation values of chargeless Higgs or dilaton fields which ought to be considered in the most general situation, left for future research.
The frame vectors E M A that curve the space are stated in Appendix 2 and provide the cure for local phase invariance. They generate the metric in the usual manner 14,15 via
G_{MN} = (−1)^{[B]+[B][N]} E_M{}^A η_{AB} E_N{}^B .   (30)
d We have chosen to use the complexζ, ζ description rather than the real coordinates ξ, η (where ζ = ξ + iη) because it lends itself more easily to group analysis when one enlarges the number of property coordinates.
The entries are tightly constrained by the fact that G mn and G ζζ have to be bosonic while G mζ has to be fermionic in a commutational sense. Further they only admit expansions up toζζ; that is why the electromagnetic field A m multiplied by ζ appears in G mζ ; in principle one could also include in that sector an anticommuting ghost field C m timesζζ, as one encounters in quantum gravity, but at a semiclassical level we are ignoring this aspect of the problem. Putting this all together results in the following metric
G_{MN} =
( G_{mn}   G_{mζ}   G_{mζ̄} )
( G_{ζn}   0        G_{ζζ̄}  )
( G_{ζ̄n}   G_{ζ̄ζ}   0       )   (31)

where

G_{mn} = g_{mn}(1 + 2c_1 ζ̄ζ) + e²ℓ² A_m A_n ζ̄ζ ,
G_{mζ} = G_{ζm} = −ieℓ² A_m ζ̄/2 ,    G_{mζ̄} = G_{ζ̄m} = −ieℓ² A_m ζ/2 ,
G_{ζζ̄} = −G_{ζ̄ζ} = ℓ²(1 + 2c_2 ζ̄ζ)/2 .   (32)
A couple of general observations: the charge coupling e accompanies the e.m. potential A and the constants c i are allowed in the frame vectors to provide phase invariant property curvature rather like mass enters the Schwarzschild metric; the space-property metric is guaranteed to be anticommuting through the factor ζ. This inverse metric stays graded symmetric, G MN = (−1) [M][N ] G N M , and transforms correctly as a rank 2 covariant tensor. It can be derived from (8) or (9). Its elements are
G^{mn} = g^{mn}(1 − 2c_1 ζ̄ζ) ,    G^{mζ} = G^{ζm} = ieA^m ζ ,    G^{mζ̄} = G^{ζ̄m} = −ieA^m ζ̄ ,
G^{ζζ̄} = −G^{ζ̄ζ} = 2(1 − 2c_2 ζ̄ζ)/ℓ² − e² A^m A_m ζ̄ζ .   (33)
Now suppose that we make a spacetime dependent U(1) phase transformation in the property sector:
x' = x ;    ζ' = e^{iθ(x)} ζ ;    ζ̄' = e^{−iθ(x)} ζ̄ .   (34)
Then from the general transformation rules such as (5) and its contravariant counterpart we readily find that
eA'_m = eA_m + ∂_m θ ,   (35)
which shows the field A m acts as a gauge field under variations in charge phase. This can be checked for all components of the metric G MN from the transformation rule (5). On the other hand G mn remains unaffected and thus is gauge-invariant in the sense of (34) and (35). The same comments apply to R mn and R mn ; the former varies with gauge but the latter does not.
Metric superdeterminant and Palatini form
To produce the field equations we require the superdeterminant or Berezinian of the metric, which is given by Berezin and DeWitt 1 to be
sdet(X) = det(A − BD^{−1}C) det(D)^{−1} ,   (36)
for a graded matrix of the form:
X = ( A  B ; C  D ) .   (37)
Given our metric (32) this turns out to be:
sdet(G_{MN}) = (4/ℓ⁴) det(g_{mn}) [1 + (8c_1 − 4c_2) ζ̄ζ]   (38)
or for short,
√(−G..) = (2/ℓ²) √(−g..) [1 + (4c_1 − 2c_2) ζ̄ζ] .   (39)
The absence of the gauge potential should be noted.
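As a small self-contained check of the block formula (36) (an illustration added here, not taken from the paper), the following sketch evaluates the Berezinian of an ordinary numerical block matrix; genuinely Grassmann-valued entries such as those of (32) cannot be represented this way, so only the matrix algebra of Eq. (36) is exercised.

import numpy as np

def berezinian(A, B, C, D):
    """sdet of the block matrix [[A, B], [C, D]] via Eq. (36): det(A - B D^{-1} C) / det(D)."""
    D_inv = np.linalg.inv(D)
    return np.linalg.det(A - B @ D_inv @ C) / np.linalg.det(D)

# Illustrative blocks with an invertible lower-right (odd-odd) block D.
A = np.array([[2.0, 0.0], [0.0, 3.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[0.0, 1.0]])
D = np.array([[4.0]])

print(berezinian(A, B, C, D))   # with B = C = 0 this reduces to det(A)/det(D)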
While on the subject of the super-determinant we note that in general, (√G..)_{,M} = √G.. (−1)^{[N]} Γ_{MN}{}^N, and not just for our particular G. As a direct consequence, [√G.. A^M]_{;M} = [√G.. (−1)^{[M]} A^M]_{,M}, which impacts on the graded Gauss' theorem. Further, using the derivative identity,

G^{LK}{}_{,M} = −(−1)^{[M][L]} G^{LN} Γ_{NM}{}^K − (−1)^{[K]([N]+[L])} G^{KN} Γ_{NM}{}^L ,   (40)

we can establish a useful lemma:

[√G.. G^{MK}]_{,L} = (−1)^{[N]} √G.. Γ_{LN}{}^N G^{MK} − √G.. [ (−1)^{[L][M]} G^{MN} Γ_{NL}{}^K + (−1)^{[K]([L]+[M])} G^{KN} Γ_{NL}{}^M ]   (41)

so that [√G.. G^{LK}]_{,L} = −√G.. G^{LM} Γ_{ML}{}^K quite simply. Then because, under an integral sign, the total derivative terms [(−1)^{[L]} √G.. G^{MK} Γ_{KM}{}^L]_{,L} and [(−1)^{[L]} √G.. G^{MK} Γ_{KL}{}^L]_{,M} both effectively give zero, one can show that

√G.. (−1)^{[L]} G^{MK} [ (−1)^{[L]([M]+[K])} (Γ_{KM}{}^L)_{,L} − (Γ_{KL}{}^L)_{,M} ]
= 2(−1)^{[L]} √G.. G^{MK} [ Γ_{KL}{}^N Γ_{NM}{}^L − Γ_{KM}{}^N Γ_{NL}{}^L ] .   (42)
This means that the sum of the first two (double) derivative terms in √G.. R is exactly double the sum of the last two terms, apart from a sign change; in other words, the scalar curvature can be reduced to Palatini form, even in the graded case:
√G.. R → (−1)^{[L]} √G.. G^{MK} [ (−1)^{[L][M]} Γ_{KL}{}^N Γ_{NM}{}^L − Γ_{KM}{}^N Γ_{NL}{}^L ] .   (44)
This can help to simplify some of the calculations and it also endorses the correctness of all our graded sign factors.
The Ricci Tensor and Superscalar Curvature
From Eqn (14) and the metric given in (32) one may calculate the Christoffel symbols, Γ MN K . A list of these can be found in Appendix 1. Using these connections in (20) one may determine the fully covariant Riemann curvature tensor, R JKLM . This can be a painful process and is where an algebraic computer program developed by one of us (PDS) comes in handy, for it minimises the possibility of errors. Even after making use of it and the symmetry properties of R there are a large number of components. We have not bothered to list them as they are so numerous and not particularly enlightening. However the contracted Ricci tensor,
R_{KM} = (−1)^{[K][L]} G^{LJ} R_{JKLM} ,   (45)
has fewer entries so we have provided a list of them e and their contravariant counterparts,
R^{JL} = (−1)^{[M]} G^{JK} R_{KM} G^{ML} ,   (46)
in Appendix 2; the latter are gauge invariant. Finally the Ricci superscalar can be found by contraction with the metric,
R = G^{MK} R_{KM} .   (47)
In a frame that is locally flat in spacetime, the spacetime component of the contravariant Ricci tensor reduces to
R^{mn} = 4g^{mn} c_1 [1 + (2c_2 − 6c_1) ζ̄ζ]/ℓ² − e²ℓ² F^{ml} F^n{}_l ζ̄ζ/2 ,   (48)
and the curvature superscalar collapses to
R = 8[4c_1 − 3c_2 + c_1(8c_2 − 10c_1) ζ̄ζ]/ℓ² − e²ℓ² F^{nl} F_{nl} ζ̄ζ/4 .   (49)
Both expressions (48) and (49) are gauge independent. By making use of them and the superdeterminant (39) we may evaluate firstly the total Lagrangian density for electromagnetic property,
L = ∫ dζ̄ dζ √(−G..) R ∝ −(1/4) F^{mn} F_{mn} + 48(c_1 − c_2)²/(e²ℓ⁴) ,
and secondly the Einstein tensor in flat spacetime:
∫ dζ̄ dζ √(−G..) (R^{km} − R G^{km}/2) ∝ 48c_2(c_1 − c_2) g^{km}/(e²ℓ⁴) − (F^{kl} F^m{}_l − F^{ln} F_{ln} g^{km}/4) .
The familiar expression for the electromagnetic stress tensor, namely T km ≡ F k l F lm + F nl F nl g km /4 emerges naturally and becomes part of the geometry. But we also recognize a cosmological constant term that is largely determined by the e We are reasonably certain that those expressions, though complicated, are correct because we have checked that the differential Bianchi identity (28) is obeyed and this is a highly nontrivial test.
magnitude (c 2 − c 1 )/ℓ 4 . [As an aside, we have verified that (48) and (49), remain true in a general frame, not necessarily locally flat.]
Including gravity by curving spacetime means including the standard gravitational curvature R and will render (48) and (49) generally covariant. (One has to be careful here to track factors of ζ̄ζ, as one will be integrating over property.) It is straightforward to see that the gravitational part of the superscalar R^{(g)} is R(1 − 2c_1 ζ̄ζ), while the super-Ricci tensor R^{(g)km} contains R^{km}(1 − 4c_1 ζ̄ζ). In consequence we may evaluate the full gravitational-electromagnetic Lagrangian through the property integral:
L = ∫ dζ̄ dζ √(−G..) R = 2e² √(−g..) [ 2(c_1 − c_2) R/(e²ℓ²) − F^{mn} F_{mn}/4 + 48(c_1 − c_2)²/(e²ℓ⁴) ] ,   (50)

wherein we recognize 16πG_N ≡ κ² = e²ℓ²/2(c_1 − c_2), Λ = 12(c_2 − c_1)/ℓ² .
To verify that the entire setup is consistent and free of error we may determine the gravitational variation δG_{MN}, which equals δg_{mn}(1 + 2c_1 ζ̄ζ). Hence the gravitational field equation is obtained through
0 = ∫ dζ̄ dζ √(−G..) (1 + 2c_1 ζ̄ζ)(R^{km} − G^{km} R/2) = √(−g..) [ (4(c_1 − c_2)/ℓ²)(R^{km} − g^{km} R/2) − T^{km} − (48(c_1 − c_2)²/ℓ⁴) g^{km} ] .   (51)
This is just what we would have obtained from (50). In any case we see that the universal coupling of gravity to stress tensors T has a factor 8πG N ≡ κ 2 /2 = e 2 ℓ 2 /4(c 1 − c 2 ) > 0. The result is to make the cosmological term go negative and, what is probably worse, it has a value which is inordinately larger than the tiny experimental value found by analyses of supernovae! (All cosmological terms derived from particle physics, except for exactly zero, share the same problem). Numerically speaking, κ ≃ 5.8 × 10 −19 (GeV) −1 means ℓ ∼ 10 −18 (GeV) −1 is Planckian in scale. Of course the magnitude of the miniscule cosmological constant Λ ∼ 4 × 10 −84 (GeV) 2 is at variance with Planckian expectations by the usual factor of 10 −120 , which is probably the most mysterious natural ratio. So far as our scheme is concerned, we are disappointed but not particularly troubled by the wrong sign of Λ because it can readily be reversed by extra property curvature coefficients when we enlarge the number of properties (as we have checked when enlarging the number of properties to at least two). The magnitude of Λ is quite another matter because it will require some extraordinary fine-tuning, even after fixing the sign.
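A rough back-of-the-envelope check of these numbers can be scripted as follows (our own estimate; we assume |c_1 − c_2| of order one and e² = 4πα with α ≈ 1/137, neither value being fixed uniquely by the discussion above):

import math

kappa = 5.8e-19                 # GeV^-1, from 16*pi*G_N = kappa^2
alpha = 1.0 / 137.0
e2 = 4.0 * math.pi * alpha
c_diff = 1.0                    # assumed |c_1 - c_2| ~ O(1)

# kappa^2 = e^2 l^2 / (2 (c_1 - c_2))  =>  l = kappa * sqrt(2 c_diff / e^2)
l = kappa * math.sqrt(2.0 * c_diff / e2)
print("l ~ %.1e GeV^-1 (Planckian scale)" % l)

# |Lambda| = 12 |c_2 - c_1| / l^2, compared with the observed ~ 4e-84 GeV^2
lam = 12.0 * c_diff / l**2
print("|Lambda| ~ %.1e GeV^2, i.e. ~1e%d times the observed value"
      % (lam, round(math.log10(lam / 4e-84))))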
Inclusion of matter fields
The conventional results which we obtained for electromagnetism plus gravity, through the property of electricity, merely confirm the fact that our scheme is perfectly viable and offers a novel perspective on nature. We anticipate that when one incorporates other properties, like chromicity and neutrinicity, then the usual picture of QCD plus gravity plus electroweak theory will emerge. For now we wish to exhibit some preliminary research concerning inclusion of matter fields, despite being limited to the single property of electric charge.
Scalar Field
Adhering to the tenets of the spin-statistics connection, we begin by assuming that a superscalar field Φ(X) is overall Bose and can be expanded into even powers of ζ̄ζ; thus it has the general form Φ(x, ζ, ζ̄) = U(x) + V(x) ζ̄ζ. Note that we could have included in Φ two anticommuting scalar ghost fields in the combination ζ̄C + C̄ζ; such ghost fields have a place in quantum theory but, with their incorrect spin-statistics, cannot be regarded as physical asymptotic states. The same comment applies to the spacetime-property sector where we could have included vector ghost fields of the type C_m, C̄_m multiplying (1 + c′ ζ̄ζ). We have ignored these extras as we are only dealing with semiclassical e.m./gravity for the purpose of the present investigation, but they are sure to come into their own when quantization of the scheme is undertaken. By imposing self-duality so U(x) = V(x) = ϕ(x), Φ may be reduced to the form^{6,10}
Φ(X) = ϕ(x)(1 + ζ̄ζ)/2 .   (52)
Necessarily ϕ carries zero charge and is a far cry from a Higgs field. (In fact to obtain the correct quantum numbers of the Higgs field it is imperative to attach three chromic properties to charge.) As we will be coupling this field to the supermetric, which brings in the superdeterminant 2 √ −G../ℓ 2 , we shall introduce an extra factor of ℓ 2 to eliminate this scale and we will also ignore c i curvature in what follows except from what the mixed x − ζ sector produces.
A mass term in the Lagrangian of µ 2 ϕ 2 /2 will arise through the property integral
(ℓ²/2) ∫ dζ̄ dζ √(−G..) µ² Φ² = ∫ dζ̄ dζ √(−g..) (1 + 2ζ̄ζ) µ² ϕ²/4 .   (53)
The kinetic term is more interesting, because it adds to the mass owing to the property components in Φ; specifically this can be attributed to the ζ,ζ derivatives, which add a piece to the mass. Thus we consider
(ℓ²/2) ∫ dζ̄ dζ √(−G..) G^{MN} ∂_N Φ ∂_M Φ .   (54)
Upon inserting the metric from (32), we find that the contributions from the gauge field cancel out, as they must, and we are left with
∫ dζ̄ dζ √(−g..) [ (1 + 2ζ̄ζ) g^{mn} ∂_n ϕ ∂_m ϕ/4 + ζ̄ζ ϕ²/ℓ² ] .   (55)
The only feasible way then to cancel off mass in order to obtain a massless scalar field in this scheme is to match the ϕ 2 /ℓ 2 from the property kinetic energy to the previously constructed mass term (53).
Spinor Field
In seeking a generalisation of the Dirac equation to incorporate the graded derivative, we need to bear in mind that the electromagnetic potential is embedded in the spacetime-property frame vector E A M ; therefore we first need to determine the inverse vielbein, obtained via the condition E M A E A N = δ M N . The components are listed in the second appendix, where it will be seen that the vector potential is held in the space-property sector via E a ζ and E aζ . Since the Dirac operator has a natural extension from iγ a e a m ∂ m to iΓ A E A M ∂ M , it is vital to include the graded derivative ∂/∂ζ at the very least.
Just as Dirac was obliged to enlarge spinors from two to four components in order to go from non-relativistic electrons to relativistic ones, so too we are forced to extend the space in order to deal with the property derivatives. (There may be other ways to attain that goal.) Since the Dirac operator will act on a spinorial superfield, we have been led to consider an extended field Ψ(X) = θζψ(x) and the representation:
Γ^a = γ^a ,    ℓΓ^ζ = 2i ∂/∂θ ,    ℓΓ^ζ̄ = 2iθ ,   (56)
wherein θ is another complex scalar a-number and we eventually have to integrate over θ andθ. (The Γ ζ , Γζ act like fermionic annihilators). The action of the extended Dirac operator then yields
iΓ^A E_A{}^M ∂_M Ψ = [ iγ^a e_a{}^m ∂_m + eγ^a A_a ζ̄ ∂/∂ζ + (2/ℓ)(1 − f ζ̄ζ) ∂²/∂θ∂ζ ] θζψ = θζ γ^a e_a{}^m (i∂_m + eA_m) ψ − (2/ℓ)(1 − f ζ̄ζ) ψ .   (57)
This means that when we include the adjoint Ψ̄ ≡ −ψ̄ ζ̄ θ̄ and integrate over the subsidiary θ, ζ, we end up with the normal gauge invariant spinorial Lagrangian density:
L = ∫ (dζ̄ dζ)(dθ̄ dθ) Ψ̄(X) [ iΓ^A E_A{}^M ∂_M − M ] Ψ(X) = ψ̄(x) [ γ^a e_a{}^m (i∂_m + eA_m) − M ] ψ(x) .   (58)
Very likely there exists a more elegant way of reaching (58) but however this is done the coupling of the charged fermion to the spacetime-property vielbein, which contains A, is critical. The representation (56) will surely need revisiting in order to encompass chirality if the basic fermions are taken as left-handed, especially if we attach charge conjugate left-handed pieces, in order to encompass all spin states.
Conclusions
The framework underlying our research was inspired by supersymmetry, but instead of using auxiliary spinor coordinates we have made them scalar and connected them with something tangible, namely property or attribute. This point is important because all physical events are described by changes in momentum and/or property.
From this perspective, systematisation of property, with the natural occurrence of generations, becomes a guiding principle. An obvious criticism of the approach is that the superstructure has only led to the standard Einstein-Maxwell Lagrangian, which is hardly an earth-shattering conclusion! True, but by geometrizing spacetimeproperty we have succeeded in reinterpreting gauge fields as the messengers of property in a larger graded curved space, besides offering a new viewpoint on the nature of events; as a bonus we see that curvature in property space can act as a source of a cosmological constant -this with just one property coordinate -even if its value is ridiculous. Furthermore addition of further ζ coordinates offers a natural path to group theory classification f without entraining infinite towers of states as one gets with bosonic extensions to spacetime. We foresee no intrinsic difficulty in extending the work to QCD or to electroweak theory, though the algebraic manipulations will perforce be more intricate. Going all the way, we anticipate an extension of our calculations to five property coordinates (ζ 0 to ζ 4 ) and the distillation of the final group to that of the standard model will mean that property curvature coefficients c col , c ew are to be associated with colour and electroweak invariants,ζ i ζ i and (ζ 0 ζ 0 +ζ 4 ζ 4 ) respectively -perhaps engendered by expectation values of chargeless Higgs or dilaton fields. For the future these are just the most prominent issues that come to mind with correction of the Λ sign foremost among them and the quantization of the scheme in the present framework as the next step. There are surely several research avenues g to explore in the present picture, many more than we have envisaged.
In the end, this geometrical approach of uniting gravity with other natural forces through a larger graded space-time-property manifold may turn out to be quite misguided. That would be disappointing as it is hard to imagine what other original way one could unify the gravitational field with other fields. As a fallback position we could be ultraprudent by abandoning the unification goal: just introduce gauge fields in the time-honoured way, ensuring that differentiation is gauge covariant under property transformations, by replacing ordinary derivatives ∂ with D = ∂ + ieA.T , where the generators T are represented by property rotations acting on matter superfields; e.g. the charge operator by (ζ ∂/∂ζ −ζ ∂/∂ζ). That would be a backward step and even then, we might be confronted by insurmountable obstacles. However, for now, the geometric scheme seems flexible enough and accords well with our general understanding of fundamental particle physics and its content as well as f To stress this point we mention that we have succeeded in obtaining the combined Yang-Mills-Gravity Lagrangian for two property coordinates. The details are much more intricate than the one ζ case considered in this paper and will be submitted for publication separately. g Looking beyond the horizon of this paper, since the gravitational coupling to any stress tensor is universal and since we have married couplings with fields it means that these interaction couplings will need to be universal too. At low energies QCD and QED coupling constants e and g are widely different so, unless the curvature coefficients ce and cw are taken to be different which is entirely possible, we envisage a scenario where these couplings are running and unified at a GUT-like scale ℓ with 1/α ≃ 40; they only look different when we run down from e 2 (ℓ 2 ) = g 2 (ℓ 2 ) to electroweak/strong interaction scales.
gravitation. Should our framework fall by the wayside, there is no other recourse than to persist with variants of grand theories which are currently on the market, in the hope that experiment will, at sufficiently high energy, substantiate one of them. Failing that, we trust that one day somebody will conceive a radically new description of events that will lead to new insights with testable predictions.
Acknowledgments
We would like to thank Dr Peter Jarvis for much helpful advice and for his knowledgeable comments about supergroups, their representations and their dimensions.
a Asorey and Lavrov use the notation ǫ_M instead of [M].
b Note that in ref. 7 the G with raised indices differs from the present G by a factor (−1)^{[N]}.
Appendix A. Christoffel symbols, Vielbeins and Curvature components

Appendix A.1. The graded Christoffel connections

From definition (14) and the metric elements (32) one may derive the following components of the Christoffel symbols:

Γ_{ζζ}{}^l = Γ_{ζζ}{}^ζ = Γ_{ζζ}{}^ζ̄ = Γ_{ζ̄ζ̄}{}^l = Γ_{ζ̄ζ̄}{}^ζ = Γ_{ζ̄ζ̄}{}^ζ̄ = 0 .

Above, Γ^{(g)} signifies the purely gravitational connection and F_{mn} ≡ A_{n,m} − A_{m,n} is the standard Maxwell tensor. These connections are essential in determining the full super-Riemann components.

Appendix A.2. Ricci Tensor Components

A concise list of the Ricci supertensor components (contravariant and covariant) is as follows, where we neglect spacetime curvature. Other components are derivable from symmetry properties of R_{MN}.

Appendix A.3. Vielbeins

From the frame vectors, namely E_M{}^A, whose components are

E_m{}^a = (1 + c_1 ζ̄ζ) e_m{}^a ,  E_m{}^ζ = −ie ζ̄ A_m ,  E_m{}^ζ̄ = −ie A_m ζ ,
E_ζ{}^a = 0 ,  E_ζ{}^ζ = 0 ,  E_ζ{}^ζ̄ = (1 + c_2 ζ̄ζ) ,
E_ζ̄{}^a = 0 ,  E_ζ̄{}^ζ = −(1 + c_2 ζ̄ζ) ,  E_ζ̄{}^ζ̄ = 0 ,

we may derive the super-vielbeins E_A{}^N, obtained via E_M{}^A E_A{}^N = δ_M{}^N. In this way we arrive at the set:

E_a{}^m = e_a{}^m (1 − c_1 ζ̄ζ) ,  E_a{}^ζ = ie A_a ζ ,  E_a{}^ζ̄ = −ie ζ̄ A_a ,
E_ζ{}^m = 0 ,  E_ζ{}^ζ = 0 ,  E_ζ{}^ζ̄ = −(1 − c_2 ζ̄ζ) ,
E_ζ̄{}^m = 0 ,  E_ζ̄{}^ζ = (1 − c_2 ζ̄ζ) ,  E_ζ̄{}^ζ̄ = 0 .

These expressions are required in Section 9 B. As a useful check on their correctness we may ascertain that G^{MN} = (−1)^{[A][M]} η^{AB} E_B{}^M E_A{}^N emerges properly. For example we directly arrive at

G^{ζζ̄} = η^{ab} E_b{}^ζ E_a{}^ζ̄ − 2E_ζ̄{}^ζ E_ζ{}^ζ̄/ℓ² + 2E_ζ{}^ζ E_ζ̄{}^ζ̄/ℓ² = η^{mn}(ieA_n ζ)(−ie ζ̄ A_m) + 2(1 − 2c_2 ζ̄ζ)/ℓ² = 2(1 − 2c_2 ζ̄ζ)/ℓ² − e² A^m A_m ζ̄ζ .
F. A. Berezin, The Method of Second Quantization (Academic Press, Boston, 1966);
B. DeWitt, Supermanifolds (Cambridge University Press, Cambridge, 1984).
F. A. Berezin, Introduction to Superanalysis (D. Reidel Pub. Co., Dordrecht, 1987).
A. Rogers, Supermanifolds: Theory and Applications (World Scientific, Singapore, 2007).
E. Witten, Notes on Supermanifolds and Integration, arXiv:1209.2199v2 (2012).
R. Delbourgo, P. D. Jarvis and R. Warner, J. Math. Phys. 34, 3616 (1993).
R. Delbourgo and R. B. Zhang, Phys. Rev. D38, 2490 (1988).
R. Delbourgo, J. Phys. A39, 5175 (2006); ibid., 14735 (2006).
R. Delbourgo, Int. J. Mod. Phys. 22A, 29 (2007).
R. Delbourgo, arXiv:1202.4216 (2012).
R. Delbourgo, P. D. Jarvis and G. Thompson, Phys. Lett. B109, 25 (1982).
R. Delbourgo, P. D. Jarvis and G. Thompson, Phys. Rev. D26, 775 (1982).
M. Asorey and P. M. Lavrov, J. Math. Phys. 50, 013530 (2009).
S. Carroll, Spacetime and Geometry (Addison Wesley, San Francisco, 2004).
S. Weinberg, Gravitation and Cosmology (J. Wiley & Sons, New York, 1972).
|
[] |
[
"Information Sources and Needs in the Obesity and Diabetes Twi er Discourse",
"Information Sources and Needs in the Obesity and Diabetes Twi er Discourse"
] |
[
"Yelena Mejova [email protected] \nQatar Computing Research Institute Hamad Bin Khalifa University\nDohaQatar\n"
] |
[
"Qatar Computing Research Institute Hamad Bin Khalifa University\nDohaQatar"
] |
[] |
Obesity and diabetes epidemics are affecting about a third and a tenth of the US population, respectively, capturing the attention of the nation and its institutions. Social media provides an open forum for communication between individuals and health organizations, a forum which is easily joined by parties seeking to gain profit from it. In this paper we examine 1.5 million tweets mentioning obesity and diabetes in order to assess (1) the quality of information circulating in this conversation, as well as (2) the behavior and information needs of the users engaged in it. The analysis of top cited domains shows a strong presence of health information sources which are not affiliated with a governmental or academic institution at 41% in obesity and 50% in diabetes samples, and that tweets containing these domains are retweeted more than those containing domains of reputable sources. On the user side, we estimate over a quarter of non-informational obesity discourse to contain fat-shaming, a practice of humiliating and criticizing overweight individuals, with some self-directed toward the writers themselves. We also find a great diversity in questions asked in these datasets, spanning the definition of obesity as a disease, social norms, and governmental policies. Our results indicate a need for addressing the quality control of health information on social media, as well as a need to engage in a topically diverse, psychologically charged discourse around these diseases.
|
10.1145/3194658.3194664
|
[
"https://arxiv.org/pdf/1804.02850v1.pdf"
] | 4,698,665 |
1804.02850
|
a69d69df8fdb3d2b4c9d9d16a3db4b27f374de30
|
Information Sources and Needs in the Obesity and Diabetes Twitter Discourse
April 23-26. 2018
Yelena Mejova [email protected]
Qatar Computing Research Institute Hamad Bin Khalifa University
DohaQatar
Information Sources and Needs in the Obesity and Diabetes Twitter Discourse
Lyon, FranceApril 23-26. 201810.1145/3194658.3194664ACM Reference Format: Yelena Mejova. 2018. Information Sources and Needs in the Obesity and Diabetes Twitter Discourse. In DH'18: 2018 International Digital Health Conference, April 23-26, 2018, Lyon, France. ACM, New York, NY, USA, 9 pages. https:// ACM ISBN 978-1-4503-6493-5/18/04. . . $15.00CCS CONCEPTS • Networks → Online social networks• Applied computing → Consumer healthHealth informaticsKEYWORDS Information need, Misinformation, Social media, Twitter, Obesity, Diabetes
Obesity and diabetes epidemics are affecting about a third and a tenth of the US population, respectively, capturing the attention of the nation and its institutions. Social media provides an open forum for communication between individuals and health organizations, a forum which is easily joined by parties seeking to gain profit from it. In this paper we examine 1.5 million tweets mentioning obesity and diabetes in order to assess (1) the quality of information circulating in this conversation, as well as (2) the behavior and information needs of the users engaged in it. The analysis of top cited domains shows a strong presence of health information sources which are not affiliated with a governmental or academic institution at 41% in obesity and 50% in diabetes samples, and that tweets containing these domains are retweeted more than those containing domains of reputable sources. On the user side, we estimate over a quarter of non-informational obesity discourse to contain fat-shaming, a practice of humiliating and criticizing overweight individuals, with some self-directed toward the writers themselves. We also find a great diversity in questions asked in these datasets, spanning the definition of obesity as a disease, social norms, and governmental policies. Our results indicate a need for addressing the quality control of health information on social media, as well as a need to engage in a topically diverse, psychologically charged discourse around these diseases.
INTRODUCTION
In the US, a majority of adults now look online for health information, according to Pew Research Center 1 . The quality of the information they may find, however, has been questioned throughout the past two decades [5,6,15,46]. The responsibility of evaluating online health information then falls on the shoulders of internet users, presenting a danger of misinformed decisions about health and medical treatments [12].
With the rise of social networking, health information is increasingly shared peer to peer. Major health organizations began to utilize Twitter and Facebook for communicating with the public. US Centers for Disease Control and Prevention (CDC) use their Facebook page to promote health and inform the public about emerging pandemics [45], whereas American Heart Association, American Cancer Society, and American Diabetes Association keep their Twitter followers updated on organizational news and instruct in personal health [35]. But besides these large governmental organizations promoting a healthy lifestyle, content aggregators, bots and any party with or without medical qualifications may post health-related information on social media. For instance, Facebook posts with misleading information on Zika virus were some of the most popular in the summer of 2016, with hundreds of thousands of views [42]. Either rumor-mongering, seeking clicks, or spam, such messages aim to penetrate health discussions and the social media communities around them to the potential detriment of the understanding and eventual health of users coming across this information. Thus far, few attempts have been made to assess the quality and sources of health information circulating on social media, with studies focusing on particular accounts [24,45] or events [7,9,42].
On the other hand, social media provides a unique opportunity to observe peoples' knowledge of and attitudes toward health issues outside of the conventional top-down institutionalized channels. For instance, it is possible to observe communities of anorexic users promoting the disease on image sharing sites like Flickr, and to measure the effect of possible interventions [55]. The attitude toward food and the perception of its desirability or healthiness can be tracked through the social interactions on Instagram [32,33]. The spread of anti-vaccination opinions can be tracked on Twitter [13] and internet search engines [54]. Insights obtained from these sources have widespread implications from pharmacovigilance, to the design of health intervention campaigns, to public health policy.
Public perception is especially critical to the ongoing epidemics of obesity and diabetes, as these "lifestyle" diseases are connected to everyday activities, as well as psychological stressors (here we refer more to Diabetes Type II, not the largely juvenile Type I).
An astounding 39.8% of adults in the United States were obese in 2015-2016 [20], with an estimated 30.3 million people having diabetes [8]. Linked to daily diet and exercise, a change in lifestyle helps manage these conditions. According to the "Transtheoretical Model" of behavior change [37], before a change in behavior can be made, a stage of awareness of the health consequences must first be achieved. Thus, gauging the awareness of and attitude toward the problems of obesity and diabetes is the first step to designing effective policies for behavior change. It is especially necessary, as a powerful stigma of obesity (sometimes expressed as "fat shaming") is prevalent in the Western world [38], to the point that one survey reported that 24% of women and 17% of men said they would give up three or more years of their lives to be the weight they want [17]. Such an atmosphere may further hurt the chances of individuals to lose weight, as social support and ongoing internal motivation are important factors in successful weight loss and maintenance [14].
Thus, in this paper we take a two-pronged approach to gauging the nature of discourse on obesity and diabetes:
(1) we evaluate the quality of information sources, their credentials, popularity, and the potential for them to spread through the social network; (2) we gauge the attitudes of the participants in the conversation, including their (a) propensity for fat shaming, (b) blaming obese and diabetic people for their conditions, (c) exposing personal information, and (d) information seeking.
This study is among the first of its kind to juxtapose the available information on medical conditions, in this case obesity and diabetes, with the information needs and attitudes of social media users interested in them. The mixed-methods approach applied to nearly six months of Twitter stream data comprising 1.5 million tweets includes quantitative analyses, network analysis, grounded annotation, and crowdsourcing, exemplifying the multidisciplinary content analysis indicative of the emerging fields of computational social sciences.
RELATED WORK
Social media has recently been acknowledged by the public health community to be a valuable resource for disease outbreak detection, tracking behavioral risks to communicable and non-communicable diseases, and for health-care agencies and governments to share data with the public [26]. Non-profit organizations, for instance, use Twitter for informing the public, building a community, and encouraging individual action [30]. However, the free nature of social media (and the internet in general) allows for information sources not affiliated with governmental agencies, raising concerns over the provenance and quality of the health and medical information they make available to the public. A recent review on "Web 2.0" urges that the community "must not easily dismiss concerns about reliability as outdated", and address the issues of authorship, information quality, anonymity and privacy [2].
Biased messages, rumors, and misinformation have been gaining attention in the literature. An analysis of anti-vaccination websites has revealed pervasive misinformation [27]. Moreover, Dunn et al. [13] find that prior exposure to opinions rejecting the safety or value of HPV vaccines is associated with an increased relative risk of posting similar opinions on Twitter. Recently, a tool combining crowdsourcing and text classification has been proposed to track misinformation on Twitter on the topic of Zika [18]. Another tool was proposed for crawling medical content from websites, blogs, and social media using sentiment and credibility scoring [1]. Yet another tool ranks information sources by "reputation scores" based on the retweeting mechanism [53]. More generic algorithms comparing information to known sources such as Wikipedia have also been proposed [11]. Yet in the medical and health domains, it is still unclear what portion of overall social media content comes from reputable sources, what other sources seek to engage with health-oriented communities, and what success these sources have in propagating their material through the social network. Our work contributes to the understanding of the quality of discourse around obesity and diabetes by evaluating the most cited resources in the relevant Twitter streams.
The breadth and reach of social media have not only allowed for wider dissemination of health information; its network features allow individuals to join communities, seek information, and express themselves on an unprecedented scale. Surveys find that consumers use social media primarily to see what others say about a medication or treatment and to find out about other peoples' experiences [41]. When they find this information, another study indicates that it largely changes what they think about the topic [43]. Moreover, the diabetes patients participating in [43] indicated willingness to discuss personal health information on online social networking sites. Another study of diabetes communities on Facebook showed that "patients with diabetes, family members, and their friends use Facebook to share personal clinical information, to request disease-specific guidance and feedback, and to receive emotional support" [19], with 27% of the posts featuring some form of promotional activity for "natural" products. Thus, social media is a marketplace for health information, where both supply and demand are captured in the same medium. In this study, we aim to examine the information needs revealed in Twitter posts concerning obesity and diabetes, as well as behaviors related to revealing one's own private health information.
In 2013, the American Medical Association (AMA) recognized obesity as a complex, chronic disease [36], prompting a debate about the effects of such a decision on weight discrimination, with some claiming it "provides ammunition on 'war on obesity'" [21] while others expressed concerns that it is a sign of "abdication of personal responsibility" [47]. Public perception of obesity on social media also prompts debate. A survey in 2015 showed respondents agreeing less that people are "personally responsible for becoming obese", but more that "the cause of obesity is beyond the control" of a person who has it [28]. Further, a study of a variety of social media including Twitter, blogs, Facebook, and forums showed widespread negative sentiment, derogatory language, and misogynist terms, with 92% of the Twitter stream having the word "fat" [10]. Another, more recent study coded the uses of the word "fat" on Twitter, finding 56.57% to be negative and 32% neutral [31]. Thus, we ask what proportion of the obesity discourse contains such messages, whether they similarly affect the diabetes community, and whether institutional messaging addresses this phenomenon in its communication.
We begin by collecting two datasets, Obesity (tweets mentioning "obese" or "obesity") and Diabetes ("diabetes" or "diabetic"), collected using the Twitter Streaming API 2. Both datasets span the period of July 19, 2017 to December 31, 2017, nearly half a year, comprising 1,055,196 tweets from 505,897 users in the Obesity dataset and 2,889,764 tweets from 996,486 users in the Diabetes dataset. Note the conservative selection of keywords for the purposes of this analysis. Whereas other keywords may capture more of the discussion on related matters, such as "fat", "insulin", "exercise", etc., we aim to capture only the discussion on these health conditions. It is notable that whereas obesity is a more prevalent condition (for instance, in the US more than one-third (36.5%) of adults have obesity [34] compared to 9.4% having diabetes [8]), the Twitter stream shows nearly 3 times as many messages on the latter compared to the former, indicating either heightened interest in and/or more aggressive campaigning on the topic.
In order to understand the major sources of information within these two streams, we examine the URLs present in the tweets of each dataset. Utilizing the Twitter API's "expanded url" field, we extract the original domains associated with the short URLs present in the tweet text. Note that even then, some of the original URLs users type in may be shortened by some service. In total, we extract 17,511 and 43,230 domains from the Obesity and Diabetes collections, respectively. For each collection, we then use a Grounded Theoretic approach to examine the first 100 domains, iteratively coding each until a set of common codes is established. Table 1 shows the most common codes in the two collections. Despite using expanded URLs, 13 of the top domains in each collection were shortening services. Additionally, social media managers and aggregators comprised a similar portion of content. Strikingly, a minority of the domains dealt with health: 17 and 29 in Obesity and Diabetes, respectively. Among these, the quality of the information varied greatly. In particular, we define a source as "unverified health" if it (1) publishes health-related information, but (2) has no about page describing its credentials. The second point is important, as the reader may make assumptions about the credentials of the source in the absence of any statement. This also differentiates these sources from domains we dub "health aggregators", websites publishing articles on topics of health which clearly state their affiliation (often a publishing company). Alternatively, if the affiliation is that of a governmental or academic institution such as the National Institutes of Health (NIH) or the American Diabetes Association, the domain is coded as "verified health". Note that some major websites, such as the Diabetes community site www.diabetes.co.uk, may be popular but are not associated with a governmental agency. Overall, out of the health-related domains, 23% met the criteria for "verified" for both datasets, while 41% and 50% were "unverified" in the Obesity and Diabetes sets, respectively. In terms of comparative volume, unverified domains had 2.47 times more content in the Obesity stream than verified ones, and similarly 2.72 in Diabetes.
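The domain extraction step can be sketched in a few lines. The snippet below is a minimal illustration rather than the authors' pipeline; it assumes tweets are available as dicts in the Twitter API v1.1 JSON layout, with URL entities under entities.urls, and it simply tallies the hosts of the expanded URLs before manual coding.

```python
from collections import Counter
from urllib.parse import urlparse

def tweet_domains(tweet):
    """Yield the host of every expanded URL attached to a tweet dict
    (Twitter API v1.1 layout: URL entities under tweet["entities"]["urls"])."""
    for url_entity in tweet.get("entities", {}).get("urls", []):
        expanded = url_entity.get("expanded_url")
        if not expanded:
            continue
        host = urlparse(expanded).netloc.lower()
        if host:
            yield host

def top_domains(tweets, k=100):
    """Tally domain occurrences over a tweet collection and return the k most common."""
    counts = Counter()
    for tweet in tweets:
        counts.update(tweet_domains(tweet))
    return counts.most_common(k)
```

The resulting ranked list is what would then be coded manually, as described above.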
As retweeting behavior is commonly used to gauge "virality" of content [23], we examine the number of retweets the content containing verified and unverified domains received in our datasets.
2 https://developer.twitter.com/en/docs/tweets/filter-realtime/overview

We begin by selecting users who are likely to be real individuals, not bots or organizations, via a two-step process. First, we filter the accounts by the number of followers and followees (maximum figures over the period of time the data was collected), selecting users having both numbers between 10 and 1000, as it has been shown that 89% of users following spam accounts have 10 or fewer followers [48]. Secondly, we use a name dictionary to identify the users having identifiable first names. These names were collected from the United States Social Security registry of baby names from the years 1880-2016 3. Such users comprised 36.5% of users captured in the Obesity and 37.2% of the Diabetes datasets. Although it is possible some bots remain in this sample, upon manual inspection the resulting accounts looked, overall, likely to be used by real people.
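A minimal sketch of this two-step filter might look as follows. The field names (name, followers_count, friends_count) follow Twitter API conventions, and the SSA file format is assumed to be the standard comma-separated name lists; treat the details as illustrative rather than the exact code used.

```python
def load_first_names(path):
    """Build a set of lowercase first names from an SSA baby-name file,
    whose lines look like "Mary,F,7065"."""
    names = set()
    with open(path, encoding="utf-8") as f:
        for line in f:
            name = line.split(",")[0].strip().lower()
            if name:
                names.add(name)
    return names

def is_likely_person(user, first_names, lo=10, hi=1000):
    """Heuristic filter for accounts likely to belong to real individuals:
    10-1000 followers and followees, plus a recognizable first name."""
    followers = user.get("followers_count", 0)
    followees = user.get("friends_count", 0)
    if not (lo <= followers <= hi and lo <= followees <= hi):
        return False
    display_name = (user.get("name") or "").strip()
    if not display_name:
        return False
    return display_name.split()[0].lower() in first_names
```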
Next, we gather statistics on the number of retweets a piece of content gathered. To make sure to catch all retweets, even if the Twitter API does not identify them as such in metadata, we clean the text of the tweet of special characters, user mentions, and URLs, as well as the "RT" identifier for "retweet", to find the basic linguistic content of the tweet. Figures 1(a,b) show the kernel density (a "violin") plot of the distribution of retweets per unique piece of content for those having verified domains associated with health agencies and institutions, and all other health domains, as posted by users in the above selection. For both topics, verified domains were less likely to produce viral content, with a mean number of retweets of 1.9 for verified compared to 3.4 for other health content in the Obesity dataset (difference significant at p-value < 0.01 using Welch's two-sample t-test) and 2.8 versus 3.8 in Diabetes (although not significant at the same level). We hypothesize that the smaller difference in the Diabetes data is due to high-quality websites unaffiliated with governmental agencies, such as www.webmd.com and www.diabetes.co.uk. When selecting only those websites coded as "unverified", the disparity grows to 7.1 retweets for Obesity and 9.6 for Diabetes content, indicating that such websites produce even more viral content. To further understand the impact of these information sources in the community, we create a co-citation network. In this network, an edge exists between two domains if the same user has posted URLs containing them, in the same or in different posts. Due to sparsity limitations, we return to the full list of users in this experiment, which will also allow us to capture the co-posting behavior of the domains and their Twitter accounts. Specifically, for each domain pair, we enumerate the users who at some point posted URLs from both of the domains, which usually happens in different posts, resulting in a set of users who were interested in the content of each domain at some point during the collection period. However, this connection needs to be understood in the context of the posting frequency of each domain, with the most popular ones (such as twitter.com) potentially dominating all others. Thus, we use the Jaccard similarity coefficient, which has the following form:
Jaccard(domain_1, domain_2) = |U_domain_1 ∩ U_domain_2| / |U_domain_1 ∪ U_domain_2|
where U_domain is the set of users who posted at least one URL of that domain. The network is then expressed in GraphML format 4 and plotted using Gephi 5. Figures 2(a,b) show the resulting domain co-citation networks. In both, the size of the node (domain) and its label are scaled by the number of tweets the domain appears in. The nodes are positioned using the ForceAtlas 2 force-directed algorithm, such that nodes most strongly linked appear closer together while those most weakly linked appear on the periphery of the graph. Finally, the nodes are colored by the type of domain: light blue for social media, blue for news, red for unverified health, and green for verified health and health aggregators.
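The network construction can be reproduced approximately with the sketch below. The use of networkx is our assumption (the text only states that the network was written to GraphML and laid out in Gephi), and domain_users is a hypothetical mapping from each domain to the set of user ids that posted at least one of its URLs.

```python
from itertools import combinations
import networkx as nx

def cocitation_graph(domain_users, min_jaccard=0.0):
    """Build a domain co-citation graph whose edge weights are the Jaccard
    coefficients of the posting-user sets of the two domains."""
    g = nx.Graph()
    for domain, users in domain_users.items():
        g.add_node(domain, num_users=len(users))
    for d1, d2 in combinations(domain_users, 2):
        union = len(domain_users[d1] | domain_users[d2])
        if union == 0:
            continue
        jaccard = len(domain_users[d1] & domain_users[d2]) / union
        if jaccard > min_jaccard:
            g.add_edge(d1, d2, weight=jaccard)
    return g

# The graph can then be exported for layout and coloring in Gephi:
# nx.write_graphml(cocitation_graph(domain_users), "cocitation.graphml")
```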
In both graphs, we see the green and blue dominating the center of the conversation: the verified health institutions, health aggregators, and news. Social media is closely tied to this central cluster, but usually appears in the periphery. Notably, in both cases twitter.com appears in the corner despite its large size, indicating that topically the content this domain provides is not central to the conversation. As Twitter provides its own shortener for the posted URLs, this finding supports that it is a topically diverse domain. Notably, the unverified health domains appear in the periphery, but not too far from the center in the Obesity network, and much more on the periphery in Diabetes. In the midst of the red nodes we find other domains, such as www.thebingbing.com and www.grandesmedios.com. This emphasizes the difficulty of evaluating the quality of content from online sources. It is possible that, despite more clear disclaimers and documentation, some domains may still be positioned in the same space as more questionable ones. Thus, using this technique, we may find candidates for further investigation into sources of health information which could have questionable provenance.
Finally, we examine the content shared by these top domains, mainly those with verified and unverified sources. For each domain, obesity being recognized as a chronic disease by the American Medical Association (AMA) [29], it is perceived much more as a personal failing than diabetes. Finally, in both sets the words "obese" and "diabetic" have been used as a joke or hyperbole, for instance "I got diabetes just watching this". As we note in the Discussion section, these terms are sometimes used even as adjectives to describe food and other activities.
Next, we examine the nature of the questions present in each dataset, each having just under 10% of its documents identified by the crowdworkers as containing a question: 94 in the Obesity and 82 in the Diabetes one. Unlike the personal information, the questions proved to be much more diverse. Figures 5(a,b) show the main codes present in these documents, with those having only one occurrence in the text listed below the graph. Notably, a lively discussion was taking place in the Obesity data on whether obesity is a disease, with 5 posts questioning whether social "acceptance" of obesity is desirable. A similar, smaller discussion took place in the Diabetes data, focusing on the perception of diabetics. The breadth of questions, ranging from economic policies and education, to the quality of available information, to the medical and psychological management of these conditions, exemplifies the diversity of information needs of these communities.
Finally, as both conditions are known to be connected to dietary behavior [4,39], we ask: what kinds of foods are associated with obesity and diabetes in the social media chatter? To find out, we apply a longest-string-matching algorithm to the text of all of the tweets (not only the subset above) to match words and phrases to a dictionary of foods along with their nutritional values. We borrow the recipe collection from [40], which contains nutrition information for both recipes (18,651) and their ingredients (1,499 unique entries). To supplement this lexicon, we take the foods listed in Lexicocalorimeter [3] and search Google for nutritional information, scraping the returned pages, resulting in 1,464 entries. The final merged list of foods contains 21,163 entries, each annotated with calorie content and other nutritional values.
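A simple greedy version of the longest-string-matching step might look like the following. The lexicon structure (a dict from food name to a nutrition record) is our assumption, and real tweets would need tokenization and normalization beyond the whitespace split used here.

```python
def build_food_index(food_lexicon):
    """Index multi-word food names, longest first, for greedy matching.
    `food_lexicon` maps a (possibly multi-word) food name to its nutrition
    record, e.g. {"green tea": {"kcal_per_g": 0.01}, ...}."""
    entries = [(name.lower().split(), rec) for name, rec in food_lexicon.items()]
    entries.sort(key=lambda e: len(e[0]), reverse=True)
    return entries

def match_foods(text, food_index):
    """Greedy longest-match of food names against a tweet's whitespace tokens."""
    tokens = text.lower().split()
    matches, i = [], 0
    while i < len(tokens):
        for name_tokens, rec in food_index:
            n = len(name_tokens)
            if tokens[i:i + n] == name_tokens:
                matches.append((" ".join(name_tokens), rec))
                i += n
                break
        else:  # no food name starts at token i
            i += 1
    return matches
```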
The 30 most popular foods matched to the two datasets are shown in Table 2. We classify the foods in this table by caloric density, as defined by the British Nutrition Foundation 7. It uses the ratio of kcal per gram of food and the following ranges: very low (less than 0.6 kcal/g), low (0.6 to 1.5), medium (1.5 to 4) and high (more than 4). The variability of the mentioned foods reveals a dichotomy between two themes: associating heavy foods with obesity and diabetes (in fact, using the word "diabetes" instead of "sweet"), and proposing healthier alternatives. In the minds of these Twitter users, sugar and fat seem to be most associated with both conditions, although at least in popular culture the main culprit is still undecided 8. Also note the prominence of tea, which is the most popular remedy advertised in both datasets, especially green tea and other "weight-loss" teas (some of which may actually be effective [25]). Finally, excluded from this list is "liver", which matched a food in our dictionary but, upon closer inspection, was actually referred to in the context of a human organ. Thus, we emphasize the diversity of the discourse captured in this data: even the supposed foods found therein may refer to widely differing content, including commentary on one's own state of health and behavior, questions and advice on healthy living, and commercial advertisement of goods and services.
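The energy-density bands quoted above translate directly into a small classifier. The handling of values exactly at the boundaries (whether 1.5 or 4 kcal/g fall into the lower or upper band) is not specified in the text, so the choice below is arbitrary.

```python
def energy_density_band(kcal_per_gram):
    """Classify a food by energy density using the British Nutrition
    Foundation bands quoted above (kcal per gram of food)."""
    if kcal_per_gram < 0.6:
        return "very low"
    if kcal_per_gram <= 1.5:
        return "low"
    if kcal_per_gram <= 4.0:
        return "medium"
    return "high"

# e.g. a food at 5.0 kcal/g falls in the "high" band:
assert energy_density_band(5.0) == "high"
```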
DISCUSSION & LIMITATIONS
Analyzing nearly 6 months of the Twitter stream about obesity and diabetes, this study explores the sources of information dominating this discourse, as well as the posting behavior of ordinary users. We find that, besides the large news websites and social media aggregators, a substantial amount of content is posted by what we dubbed "unverified" health sources, those not affiliated with any established governmental or medical organization. The situation is particularly dire in the Diabetes stream, where 50% of health-related domains are of this nature. Further, we find that their content both has greater volume (at roughly 2.5 times that coming from the verified sources) and tends to be retweeted more. This is especially concerning in the case of the Diabetes stream, as a substantial portion of this material claims to cure the disease. As mentioned in the Related Work section, several automated and semi-automated methods have been proposed to detect misinformation [1,11,18,53], but the latest efforts in algorithmic correction of such material have shown mixed results, such as Facebook's attempt to demote stories flagged as "fake" [44], while others like Twitter simply have no built-in algorithmic response, although third-party tools are becoming more available [50,51]. However, there are some signs that social correction may be effective in reducing misconceptions, as long as an additional reputable source of information is presented [52].
Besides misinformation, our analyses find a worrying amount of fat-shaming messages, including self-hate, making up 27.6% of the non-URL messages mentioning obesity. Not only has social media been shown to negatively affect body satisfaction [16], such speech may exacerbate the already largely critical body attitudes of overweight individuals, and especially young women [17]. The availability of unverified sources advertising potentially harmful weight loss programs and products compounds the danger for those [49]. It is concerning, then, that the topic of body image does not figure in the Twitter content posted by verified domains, even though some agencies are aware of social media health debates. For example, the US Centers for Disease Control and Prevention published a study on the #thinspo and #fitspo movements on Twitter, finding that #thinspo content (which often contained images of extremely thin women) had higher rates of liking and retweeting [22].
Although the rate of fat-shaming was much lower for diabetes data (at 5.9%), we found many misuses of the word "diabetes" to mean excessively sweet or unhealthy:
"@user Can I have one?....I'm in the mood for some diabetes :)" However we also nd some pushback on the practice: "@user Love you guys but could you please stop referring to "getting diabetes" when eating sweet foods? My son is Type1D. " This attitude may be linked to an undercurrent of statements that people are responsible for their conditions (17.8% for obesity and 14.2% for diabetes non-URL streams). In fact, whether obesity is a disease is one of the most frequently asked questions, according to our sample, despite the American Medical Association (AMA) recognizing obesity as a disease in 2013 [36]. More clear messaging may be necessary to explain the complex nature of obesity, with its psychological as well as physical aspects, to the public.
Although our data collection spans nearly half a year and contains every tweet posted in that time mentioning diabetes or obesity, this sample most certainly does not cover the entirety of conversations on these topics (especially when they are not referred to explicitly). For instance, it is likely that not all references to "fat" in Table 2 refer to food, but rather to people's weight. A major limitation of content analysis is, thus, the initial selection of the keywords to be considered. However, despite the sizable volume of social media posting, the majority of conversation happens offline, or in private forums and communities. For instance, one of the most popular weight loss apps, LoseIt 9, had over 30 million users as of 2017, who are encouraged to network and communicate. Further, there may be ongoing campaigns outside Twitter by governmental health agencies or other more local organizations. Thus, the results of this study need to be taken in the broader context of health communication.

9 http://www.loseit.com/about/
CONCLUSION
This study provides an analysis of 1.5 million tweets posted in the latter half of 2017 mentioning obesity and diabetes. We examine the most cited domains, paying special attention to whether they are associated with a known governmental or health agency. Worryingly, we find a substantial volume from unverified sources, especially in the Diabetes dataset. We also find that this content tends to be retweeted more than that coming from verified sources. A complementary analysis of tweets not sharing a URL shows a strong presence of fat shaming in the Obesity stream and the sharing of personal information in the Diabetes one. The mismatch between the institutional messaging and the questions we have encountered in the data points to a need for better discussion of the nature of obesity and diabetes as diseases, confronting fat-shaming, and providing information other than prevalence statistics and latest medical news.
Figure 1: Violin plots of the distributions of retweets containing domains from verified versus other sources, as posted by users likely to be real people, in the two datasets.
Figure 5: Coded topics of questions in the 94 and 82 documents identified as asking a question in the Obesity and Diabetes datasets, respectively.
Table 1: Domain codes for the top 100 domains found in each collection, with accompanying examples, such as social media (youtu.be, www.facebook.com, lnkd.in, www.instagram.com, cards.twitter.com), health aggregator (www.medicalnewstoday.com, www.medscape.com, www.diabetes.co.uk), SM manager (dlvr.it, buff.ly, paper.li, naver.me, socl.club, po.st), news aggregator (shareblue.com, okz.me, www.sciencedaily.com), and verified health (www.ncbi.nlm.nih.gov, www.diabetes.org, care.diabetesjournals.org, www.idf.org, www2.jdrf.org).
Table 2: Top 30 most frequent foods in each dataset along with their energy density in kcal/gram. Foods with high energy density are in bold italic and medium in bold. Artificial sweeteners range widely in caloric value, thus "-" is used.
3 https://www.ssa.gov/oact/babynames/limits.html
4 http://graphml.graphdrawing.org/
5 https://gephi.org/
7 https://www.nutrition.org.uk/healthyliving/fuller/what-is-energy-density.html
8 https://www.newyorker.com/magazine/2017/04/03/is-fat-killing-you-or-is-sugar
Crawling credible online medical sentiments for social intelligence. Ahmed Abbasi, Tianjun Fu, Daniel Zeng, Donald Adjeroh, 2013 International Conference on. IEEE. In Social Computing (SocialComAhmed Abbasi, Tianjun Fu, Daniel Zeng, and Donald Adjeroh. 2013. Crawling credible online medical sentiments for social intelligence. In Social Computing (SocialCom), 2013 International Conference on. IEEE, 254-263.
Revisiting the online health information reliability debate in the wake of "web 2.0": an inter-disciplinary literature and website review. A Samantha, Adams, International journal of medical informatics. 79Samantha A Adams. 2010. Revisiting the online health information reliability debate in the wake of "web 2.0": an inter-disciplinary literature and website review. International journal of medical informatics 79, 6 (2010), 391-400.
The Lexicocalorimeter: Gauging public health through caloric input and output on social media. Jake Ryland Sharon E Alajajian, Andrew J Williams, Reagan, Stephen C Alajajian, R Morgan, Lewis Frank, Jacob Mitchell, Lahne, M Christopher, Peter Sheridan Danforth, Dodds, PloS one. 12168893Sharon E Alajajian, Jake Ryland Williams, Andrew J Reagan, Stephen C Alajajian, Morgan R Frank, Lewis Mitchell, Jacob Lahne, Christopher M Danforth, and Peter Sheridan Dodds. 2017. The Lexicocalorimeter: Gauging public health through caloric input and output on social media. PloS one 12, 2 (2017), e0168893.
Designed for disease: the link between local food environments and obesity and diabetes. H Susan, Allison L Babey, Theresa A Diamant, Stefan Hastert, Harvey, UCLA Center for Health Policy ResearchSusan H Babey, Allison L Diamant, Theresa A Hastert, Stefan Harvey, et al. 2008. Designed for disease: the link between local food environments and obesity and diabetes. UCLA Center for Health Policy Research (2008).
K Gretchen, Berland, N Marc, Elliott, S Leo, Je Morales, Richard L Rey I Algazy, Kravitz, S Michael, David E Broder, Jorge A Kanouse, Juan-Antonio Muñoz, Marielena Puyol, Lara, Health information on the Internet: accessibility, quality, and readability in English and Spanish. 285Gretchen K Berland, Marc N Elliott, Leo S Morales, Jeffrey I Algazy, Richard L Kravitz, Michael S Broder, David E Kanouse, Jorge A Muñoz, Juan-Antonio Puyol, Marielena Lara, et al. 2001. Health information on the Internet: accessibility, quality, and readability in English and Spanish. Jama 285, 20 (2001), 2612-2621.
Commonly cited website quality criteria are not effective at identifying inaccurate online information about breast cancer. V Elmer, Bernstam, F Muhammad, Smitha Walji, Deepak Sagaram, Sagaram, W Craig, Funda Johnson, Meric-Bernstam, Cancer. 112Elmer V Bernstam, Muhammad F Walji, Smitha Sagaram, Deepak Sagaram, Craig W Johnson, and Funda Meric-Bernstam. 2008. Commonly cited website quality criteria are not effective at identifying inaccurate online information about breast cancer. Cancer 112, 6 (2008), 1206-1213.
When vaccines go viral: an analysis of HPV vaccine coverage on YouTube. Rowena Briones, Xiaoli Nan, Kelly Madden, Leah Waks, Health communication. 27Rowena Briones, Xiaoli Nan, Kelly Madden, and Leah Waks. 2012. When vaccines go viral: an analysis of HPV vaccine coverage on YouTube. Health communication 27, 5 (2012), 478-485.
National Center for Chronic Disease Prevention and Health Promotion. Cdc, National Diabetes Statistics Report. CDC. 2017. National Diabetes Statistics Report, 2017. National Center for Chronic Disease Prevention and Health Promotion (2017).
Pandemics in the age of Twitter: content analysis of Tweets during the 2009 H1N1 outbreak. Cynthia Chew, Gunther Eysenbach, PloS one. 514118Cynthia Chew and Gunther Eysenbach. 2010. Pandemics in the age of Twitter: content analysis of Tweets during the 2009 H1N1 outbreak. PloS one 5, 11 (2010), e14118.
Obesity in social media: a mixed methods analysis. Wen-Ying Sylvia Chou, Abby Prestin, Stephen Kunath, Translational behavioral medicine. 4Wen-ying Sylvia Chou, Abby Prestin, and Stephen Kunath. 2014. Obesity in social media: a mixed methods analysis. Translational behavioral medicine 4, 3 (2014), 314-323.
Computational fact checking from knowledge networks. Giovanni Luca Ciampaglia, Prashant Shiralkar, M Luis, Johan Rocha, Filippo Bollen, Alessandro Menczer, Flammini, PloS one. 10128193Giovanni Luca Ciampaglia, Prashant Shiralkar, Luis M Rocha, Johan Bollen, Filippo Menczer, and Alessandro Flammini. 2015. Computational fact checking from knowledge networks. PloS one 10, 6 (2015), e0128193.
Consumer health information seeking on the Internet: the state of the art. J W Rebecca, Katie M Cline, Haynes, Health education research. 16Rebecca JW Cline and Katie M Haynes. 2001. Consumer health information seeking on the Internet: the state of the art. Health education research 16, 6 (2001), 671-692.
Associations between exposure to and expression of negative opinions about human papillomavirus vaccines on social media: an observational study. Julie Adam G Dunn, Xujuan Leask, Zhou, D Kenneth, Enrico Mandl, Coiera, Journal of medical Internet research. 176Adam G Dunn, Julie Leask, Xujuan Zhou, Kenneth D Mandl, and Enrico Coiera. 2015. Associations between exposure to and expression of negative opinions about human papillomavirus vaccines on social media: an observational study. Journal of medical Internet research 17, 6 (2015).
Who succeeds in maintaining weight loss? A conceptual review of factors associated with weight loss maintenance and weight regain. Kristina Elfhag, Stephan Rössner, Obesity reviews. 6Kristina Elfhag and Stephan Rössner. 2005. Who succeeds in maintaining weight loss? A conceptual review of factors associated with weight loss maintenance and weight regain. Obesity reviews 6, 1 (2005), 67-85.
Empirical studies assessing the quality of health information for consumers on the world wide web: a systematic review. Gunther Eysenbach, John Powell, Oliver Kuss, Eun-Ryoung Sa, Jama. 287Gunther Eysenbach, John Powell, Oliver Kuss, and Eun-Ryoung Sa. 2002. Empirical studies assessing the quality of health information for consumers on the world wide web: a systematic review. Jama 287, 20 (2002), 2691-2700.
Social comparisons on social media: The impact of Facebook on young women's body image concerns and mood. Jasmine Fardouly, C Phillippa, Lenny R Diedrichs, Emma Vartanian, Halliwell, Body Image. 13Jasmine Fardouly, Phillippa C Diedrichs, Lenny R Vartanian, and Emma Halliwell. 2015. Social comparisons on social media: The impact of Facebook on young women's body image concerns and mood. Body Image 13 (2015), 38-45.
The 1997 body image survey results. M David, Garner, Psychology today. 30David M Garner. 1997. The 1997 body image survey results. Psychology today 30, 1 (1997), 30-44.
Catching Zika fever: Application of crowdsourcing and machine learning for tracking health misinformation on Twitter. Amira Ghenai, Yelena Mejova, International Conference on Healthcare Informatics (ICHI. Amira Ghenai and Yelena Mejova. 2017. Catching Zika fever: Application of crowdsourcing and machine learning for tracking health misinformation on Twitter. International Conference on Healthcare Informatics (ICHI) (2017).
Online social networking by patients with diabetes: a qualitative evaluation of communication with Facebook. Jeremy A Greene, K Niteesh, Elaine Choudhry, William H Kilabuk, Shrank, Journal of general internal medicine. 26Jeremy A Greene, Niteesh K Choudhry, Elaine Kilabuk, and William H Shrank. 2011. Online social networking by patients with diabetes: a qualitative evaluation of communication with Facebook. Journal of general internal medicine 26, 3 (2011), 287-292.
Craig Hales, Margaret Carroll, Cheryl D Fryar, Cynthia L Ogden, Prevalence of Obesity Among Adults and Youth: United States. Craig Hales, Margaret Carroll, Cheryl D. Fryar, and Cynthia L. Ogden. 2016. Prevalence of Obesity Among Adults and Youth: United States, 2015-2016. Centers for Disease Control and Prevention. National Center for Health Statistics (2016).
Explode and die! A fat woman's perspective on prenatal care and the fat panic epidemic. Jennifer Hansen, Narrative inquiry in bioethics. 4Jennifer Hansen. 2014. Explode and die! A fat woman's perspective on prenatal care and the fat panic epidemic. Narrative inquiry in bioethics 4, 2 (2014), 99-101.
Peer Reviewed: Messengers and Messages for Tweets That Used #thinspo and #fitspo Hashtags in 2016. K Jenine, Alexis Harris, Vera Duncan, Nora Men, Melissa J Shevick, Patricia A Krauss, Cavazos-Rehg, Preventing chronic disease. 15Jenine K Harris, Alexis Duncan, Vera Men, Nora Shevick, Melissa J Krauss, and Patricia A Cavazos-Rehg. 2018. Peer Reviewed: Messengers and Messages for Tweets That Used #thinspo and #fitspo Hashtags in 2016. Preventing chronic disease 15 (2018).
Analyzing and predicting viral tweets. Maximilian Jenders, Gjergji Kasneci, Felix Naumann, Proceedings of the 22nd International Conference on World Wide Web. the 22nd International Conference on World Wide WebACMMaximilian Jenders, Gjergji Kasneci, and Felix Naumann. 2013. Analyzing and predicting viral tweets. In Proceedings of the 22nd International Conference on World Wide Web. ACM, 657-664.
Measuring health information dissemination and identifying target interest communities on Twitter: methods development and case study of the @SafetyMD network. Venk Kandadai, Haodong Yang, Ling Jiang, C Christopher, Linda Yang, Flaura Koplin Fleisher, Winston, JMIR research protocols. 5Venk Kandadai, Haodong Yang, Ling Jiang, Christopher C Yang, Linda Fleisher, and Flaura Koplin Winston. 2016. Measuring health information dissemination and identifying target interest communities on Twitter: methods development and case study of the @SafetyMD network. JMIR research protocols 5, 2 (2016).
Yung-Hsi Kao, Hsin-Huei Chang, Meng-Jung Lee, Chia-Lin Chen, Tea, obesity, and diabetes. Molecular nutrition & food research. 50Yung-Hsi Kao, Hsin-Huei Chang, Meng-Jung Lee, and Chia-Lin Chen. 2006. Tea, obesity, and diabetes. Molecular nutrition & food research 50, 2 (2006), 188-210.
Social media in public health. Hend Taha A Kass-Hout, Alhinnawi, British Medical Bulletin. 108Taha A Kass-Hout and Hend Alhinnawi. 2013. Social media in public health. British Medical Bulletin 108, 1 (2013), 5-24.
A postmodern Pandora's box: anti-vaccination misinformation on the Internet. Anna Kata , Vaccine. 28Anna Kata. 2010. A postmodern Pandora's box: anti-vaccination misinformation on the Internet. Vaccine 28, 7 (2010), 1709-1716.
Indications of Increasing Social Rejection Related to Weight Bias. T Kyle, Thomas, Joseph Ivanescu, Rebecca M Nadglowski, Puhld, ObesityWeek. Los. 2T Kyle, D Thomas, A Ivanescu, Joseph Nadglowski, and Rebecca M Puhld. 2015. Indications of Increasing Social Rejection Related to Weight Bias. ObesityWeek. Los Angeles (CA), November 2 (2015).
Regarding obesity as a disease: evolving policies and their implications. Emily J Theodore K Kyle, David B Dhurandhar, Allison, Endocrinology and Metabolism Clinics. 45Theodore K Kyle, Emily J Dhurandhar, and David B Allison. 2016. Regarding obesity as a disease: evolving policies and their implications. Endocrinology and Metabolism Clinics 45, 3 (2016), 511-520.
Information, community, and action: How nonprofit organizations use social media. Kristen Lovejoy, D Gregory, Saxton, Journal of Computer-Mediated Communication. 17Kristen Lovejoy and Gregory D Saxton. 2012. Information, community, and action: How nonprofit organizations use social media. Journal of Computer-Mediated Communication 17, 3 (2012), 337-353.
Does this Tweet make me look fat? A content analysis of weight stigma on Twitter. A Janet, Elizabeth W Lydecker, Allison A Cotter, Courtney Palmberg, Melissa Simpson, Kelly Kwitowski, Suzanne E White, Mazzeo, Eating and Weight Disorders-Studies on Anorexia, Bulimia and Obesity. 21Janet A Lydecker, Elizabeth W Cotter, Allison A Palmberg, Courtney Simpson, Melissa Kwitowski, Kelly White, and Suzanne E Mazzeo. 2016. Does this Tweet make me look fat? A content analysis of weight stigma on Twitter. Eating and Weight Disorders-Studies on Anorexia, Bulimia and Obesity 21, 2 (2016), 229-235.
Fetishizing Food in Digital Age: #foodporn Around the World. Yelena Mejova, Sofiane Abbar, Hamed Haddadi, ICWSM. Yelena Mejova, Sofiane Abbar, and Hamed Haddadi. 2016. Fetishizing Food in Digital Age: #foodporn Around the World. In ICWSM. 250-258.
#foodporn: Obesity patterns in culinary interactions. Yelena Mejova, Hamed Haddadi, Anastasios Noulas, Ingmar Weber, Proceedings of the 5th International Conference on Digital Health. the 5th International Conference on Digital HealthACMYelena Mejova, Hamed Haddadi, Anastasios Noulas, and Ingmar Weber. 2015. #foodporn: Obesity patterns in culinary interactions. In Proceedings of the 5th International Conference on Digital Health 2015. ACM, 51-58.
Margaret D Cynthia L Ogden, Cheryl D Carroll, Katherine M Fryar, Flegal, Prevalence of Obesity Among Adults and Youth: United States. 219Cynthia L Ogden, Margaret D. Carroll, Cheryl D. Fryar, and Katherine M. Flegal. 2015. Prevalence of Obesity Among Adults and Youth: United States, 2011-2014. NCHS Data Brief 219 (2015).
Tweeting as health communication: health organizations' use of twitter for health promotion and public engagement. Hyojung Park, H Bryan, Myoung-Gi Reber, Chon, Journal of health communication. 21Hyojung Park, Bryan H Reber, and Myoung-Gi Chon. 2016. Tweeting as health communication: health organizations' use of twitter for health promotion and public engagement. Journal of health communication 21, 2 (2016), 188-198.
Andrew Pollack, 2013. A.M.A. Recognizes Obesity as a Disease. The New York Times. Andrew Pollack. 2013. A.M.A. Recognizes Obesity as a Disease. The New York Times (2013).
The Transtheoretical Approach. Handbook of psychotherapy integration. O James, Carlo C Prochaska, Diclemente, James O. Prochaska and Carlo C. DiClemente. 2005. The Transtheoretical Approach. Handbook of psychotherapy integration (2005).
Psychosocial origins of obesity stigma: toward changing a powerful and pervasive bias. M Rebecca, Kelly D Puhl, Brownell, Obesity reviews. 4Rebecca M Puhl and Kelly D Brownell. 2003. Psychosocial origins of obesity stigma: toward changing a powerful and pervasive bias. Obesity reviews 4, 4 (2003), 213-227.
Dietary approaches to the treatment of obesity. J Barbara, Elizabeth A Rolls, Bell, Medical Clinics of North America. 84Barbara J Rolls and Elizabeth A Bell. 2000. Dietary approaches to the treatment of obesity. Medical Clinics of North America 84, 2 (2000), 401-418.
Learning Cross-modal Embeddings for Cooking Recipes and Food Images. Amaia Salvador, Nicholas Hynes, Yusuf Aytar, Javier Marin, Ferda Ofli, Ingmar Weber, Antonio Torralba, Training. 720Amaia Salvador, Nicholas Hynes, Yusuf Aytar, Javier Marin, Ferda Ofli, Ingmar Weber, and Antonio Torralba. 2017. Learning Cross-modal Embeddings for Cooking Recipes and Food Images. Training 720 (2017), 619-508.
The wisdom of patients: Health care meets online social media. Jane Sarasohn-Kahn, Jane Sarasohn-Kahn. 2008. The wisdom of patients: Health care meets online social media. (2008).
Zika virus pandemic-analysis of Facebook as a social media health information platform. Megha Sharma, Kapil Yadav, Nitika Yadav, Keith C Ferdinand, American journal of infection control. 45Megha Sharma, Kapil Yadav, Nitika Yadav, and Keith C Ferdinand. 2017. Zika virus pandemic-analysis of Facebook as a social media health information platform. American journal of infection control 45, 3 (2017), 301-302.
Health information seeking and social media use on the Internet among people with diabetes. J Ryan, Constance M Johnson Shaw, Online journal of public health informatics. 31Ryan J Shaw and Constance M Johnson. 2011. Health information seeking and social media use on the Internet among people with diabetes. Online journal of public health informatics 3, 1 (2011).
In firing human editors, Facebook has lost the fight against fake news. The Guardian. Olivia Solon, Olivia Solon. 2016. In firing human editors, Facebook has lost the fight against fake news. The Guardian (2016). https://www.theguardian.com/technology/2016/aug/29/facebook-trending-news-editors-fake-news-stories
Health risk information engagement and amplification on social media: News about an emerging pandemic on Facebook. A Yulia, Strekalova, Health Education & Behavior. 44Yulia A Strekalova. 2017. Health risk information engagement and amplification on social media: News about an emerging pandemic on Facebook. Health Education & Behavior 44, 2 (2017), 332-339.
A model for online consumer health information quality. Besiki Stvilia, Lorri Mon, Yong Jeong Yi, Journal of the Association for Information Science and Technology. 60Besiki Stvilia, Lorri Mon, and Yong Jeong Yi. 2009. A model for online consumer health information quality. Journal of the Association for Information Science and Technology 60, 9 (2009), 1781-1791.
Obesity is not a Disease. Michael Tanner, National Review. Michael Tanner. 2013. Obesity is not a Disease. National Review (July 2013). http://www.nationalreview.com/article/352626/obesity-not-disease-michael-tanner
Suspended accounts in retrospect: an analysis of twitter spam. Kurt Thomas, Chris Grier, Dawn Song, Vern Paxson. Kurt Thomas, Chris Grier, Dawn Song, and Vern Paxson. 2011. Suspended accounts in retrospect: an analysis of twitter spam. In Proceedings of the 2011 ACM SIGCOMM conference on Internet measurement conference. ACM, 243-258.
J , Kevin Thomson, Lauren Schaefer, Body Image, Obesity, and Eating Disorders. Eating Disorders and Obesity: A Comprehensive Handbook. 140J. Kevin Thomson and Lauren Schaefer. 2017. Body Image, Obesity, and Eating Disorders. Eating Disorders and Obesity: A Comprehensive Handbook (2017), 140.
Algorithms are screwing us over with fake news but could also fix the problem. Anne-Marie Tomchak, Anne-Marie Tomchak. 2017. Algorithms are screwing us over with fake news but could also fix the problem. Mashable (2017). https://mashable.com/2017/10/05/artificial-intelligence-algorithm-neva-labs/#iQuXMaJJeaqU
Amar Toor, Reuters built an algorithm to flag and verify breaking news on Twitter. The Verge. Amar Toor. 2016. Reuters built an algorithm to flag and verify breaking news on Twitter. The Verge (2016). https://www.theverge.com/2016/12/1/13804542/reuters-algorithm-breaking-news-twitter
2017. I do not believe you: how providing a source corrects health misperceptions across social media platforms. Information. K Emily, Leticia Vraga, Bode, Communication & SocietyEmily K Vraga and Leticia Bode. 2017. I do not believe you: how providing a source corrects health misperceptions across social media platforms. Information, Communication & Society (2017), 1-17.
Measuring the reputation in user-generated-content systems based on health information. Leila Weitzel, José Palazzo M De Oliveira, Paulo Quaresma, Procedia Computer Science. 29Leila Weitzel, José Palazzo M de Oliveira, and Paulo Quaresma. 2014. Measuring the reputation in user-generated-content systems based on health information. Procedia Computer Science 29 (2014), 364-378.
Information is in the eye of the beholder: Seeking information on the MMR vaccine through an Internet search engine. Elad Yom, - Tov, Luis Fernandez-Luque, AMIA Annual Symposium Proceedings. Elad Yom-Tov and Luis Fernandez-Luque. 2014. Information is in the eye of the beholder: Seeking information on the MMR vaccine through an Internet search engine. In AMIA Annual Symposium Proceedings, Vol. 2014. American Medical Informatics Association, 1238.
Pro-anorexia and pro-recovery photo sharing: a tale of two warring tribes. Elad Yom-Tov, Luis Fernandez-Luque, Ingmar Weber, Steven P Crain, Journal of medical Internet research. 146Elad Yom-Tov, Luis Fernandez-Luque, Ingmar Weber, and Steven P Crain. 2012. Pro-anorexia and pro-recovery photo sharing: a tale of two warring tribes. Journal of medical Internet research 14, 6 (2012).
|
[] |
[
"Learning Open Information Extraction of Implicit Relations from Reading Comprehension Datasets",
"Learning Open Information Extraction of Implicit Relations from Reading Comprehension Datasets"
] |
[
"Jacob Beckerman \nThe Wharton School\nUniversity of Pennsylvania\nWestern Engineering University of Western Ontario\n\n",
"Theodore Christakis \nThe Wharton School\nUniversity of Pennsylvania\nWestern Engineering University of Western Ontario\n\n"
] |
[
"The Wharton School\nUniversity of Pennsylvania\nWestern Engineering University of Western Ontario\n",
"The Wharton School\nUniversity of Pennsylvania\nWestern Engineering University of Western Ontario\n"
] |
[] |
The relationship between two entities in a sentence is often implied by word order and common sense, rather than an explicit predicate. For example, it is evident that "Fed chair Powell indicates rate hike" implies (Powell, is a, Fed chair) and (Powell, works for, Fed). These tuples are just as significant as the explicit-predicate tuple (Powell, indicates, rate hike), but have much lower recall under traditional Open Information Extraction (OpenIE) systems. Implicit tuples are our term for this type of extraction where the relation is not present in the input sentence. There is very little OpenIE training data available relative to other NLP tasks and none focused on implicit relations. We develop an open source, parse-based tool for converting large reading comprehension datasets to OpenIE datasets and release a dataset 35x larger than previously available by sentence count. A baseline neural model trained on this data outperforms previous methods on the implicit extraction task.
| null |
[
"https://arxiv.org/pdf/1905.07471v1.pdf"
] | 159,041,780 |
1905.07471
|
9ade0ccc5ef96a111ab308485df3338970654f8d
|
Learning Open Information Extraction of Implicit Relations from Reading Comprehension Datasets
Jacob Beckerman
The Wharton School
University of Pennsylvania
Western Engineering University of Western Ontario
Theodore Christakis
The Wharton School
University of Pennsylvania
Western Engineering University of Western Ontario
Learning Open Information Extraction of Implicit Relations from Reading Comprehension Datasets
The relationship between two entities in a sentence is often implied by word order and common sense, rather than an explicit predicate. For example, it is evident that "Fed chair Powell indicates rate hike" implies (Powell, is a, Fed chair) and (Powell, works for, Fed). These tuples are just as significant as the explicit-predicate tuple (Powell, indicates, rate hike), but have much lower recall under traditional Open Information Extraction (OpenIE) systems. Implicit tuples are our term for this type of extraction where the relation is not present in the input sentence. There is very little OpenIE training data available relative to other NLP tasks and none focused on implicit relations. We develop an open source, parse-based tool for converting large reading comprehension datasets to OpenIE datasets and release a dataset 35x larger than previously available by sentence count. A baseline neural model trained on this data outperforms previous methods on the implicit extraction task.
Introduction
Open Information Extraction (OpenIE) is the NLP task of generating (subject, relation, object) tuples from unstructured text, e.g. "Fed chair Powell indicates rate hike" outputs (Powell, indicates, rate hike). The modifier open is used to contrast with IE research in which the relation belongs to a fixed set. OpenIE has been shown to be useful for several downstream applications such as knowledge base construction (Wities et al., 2017), textual entailment (Berant et al., 2011), and other natural language understanding tasks (Stanovsky et al., 2015). In our previous example an extraction was missing: (Powell, works for, Fed). Implicit extractions are our term for this type of tuple where the relation ("works for" in this example) is not contained in the input sentence. In both colloquial and formal language, many relations are evident without being explicitly stated. However, despite their pervasiveness, there has not been prior work targeted at implicit predicates in the general case. Implicit information extractors for some specific implicit relations such as noun-mediated relations, numerical relations, and others (Pal and Mausam, 2016; Saha et al., 2017; Saha and Mausam, 2018) have been researched. While specific extractors are important, there is a multiplicity of implicit relation types and it would be intractable to categorize and design extractors for each one.
Past general OpenIE systems have been plagued by low recall on implicit relations (Stanovsky et al., 2018). In OpenIE's original application, web-scale knowledge base construction, this low recall is tolerable because facts are often restated in many ways (Banko et al., 2007). However, in downstream NLU applications an implied relationship may be significant and only stated once (Stanovsky et al., 2015).
The contribution of this work is twofold. In Section 4, we introduce our parse-based conversion tool and convert two large reading comprehension datasets into implicit OpenIE datasets. In Section 5 and 6, we train a simple neural model on this data and compare to previous systems on precision-recall curves using a new gold test set for implicit tuples.
Problem Statement
We suggest that OpenIE research focus on producing implicit relations where the predicate is not contained in the input span. Formally, we define implicit tuples as (subject, relation, object) tuples that:
1. Have a subject and object word or phrase contained in the input sentence.
2. Have a relation token(s) entailed by word order of the sentence but not contained in it. These "implicit" or "common sense" tuples reproduce the relation explicitly, which may be important for downstream NLU applications using OpenIE as an intermediate schema. For example, in Figure 1, the input sentence tells us that the Norsemen swore fealty to Charles III under "their leader Rollo". From this our model outputs (The Norse leader, was, Rollo) despite the relation never being contained in the input sentence. Our definition of implicit tuples corresponds to the "frequently occurring recall errors" identified in previous OpenIE systems (Stanovsky et al., 2018): noun-mediated, sentence-level inference, long sentence, nominalization, noisy informal, and PP-attachment. We use the term implicit tuple to collectively refer to all of these situations where the predicate is absent or very obfuscated.
Related Work
Traditional Methods
Due to space constraints, see Niklaus et al. (2018) for a survey of non-neural methods. Of these, several works have focused on pattern-based implicit information extractors for noun-mediated relations, numerical relations, and others (Pal and Mausam, 2016; Saha et al., 2017; Saha and Mausam, 2018). In this work we compare to OpenIE-4 1, ClausIE (Corro and Gemulla, 2013), ReVerb (Fader et al., 2011), OLLIE (Mausam et al., 2012), Stanford OpenIE (Angeli et al., 2015), and PropS.

Neural Network Methods

Stanovsky et al. (2018) frame OpenIE as a BIO-tagging problem and train an LSTM to tag an input sentence. Tuples can be derived from the tagger, input, and BIO CFG parser. This method outperforms traditional systems, though the tagging scheme inherently constrains the relations to be part of the input sentence, prohibiting implicit relation extraction. Cui et al. (2018) bootstrap (sentence, tuple) pairs from OpenIE-4 and train a standard seq2seq with attention model using OpenNMT-py (Klein et al., 2017). The system is inhibited by its synthetic training data, which is bootstrapped from a rule-based system.
Dataset Conversion Methods
Due to the lack of large datasets for OpenIE, previous works have focused on generating datasets from other tasks. These have included QA-SRL datasets and QAMR datasets (Stanovsky et al., 2018). These methods are limited by the size of the source training data, which are an order of magnitude smaller than existing reading comprehension datasets.

1 https://github.com/knowitall/openie
Dataset Conversion Method
Span-based Question-Answer datasets are a type of reading comprehension dataset where each entry consists of a short passage, a question about the passage, and an answer contained in the passage. The datasets used in this work are the Stanford Question Answering Dataset (SQuADv1.1) (Rajpurkar et al., 2016) and NewsQA (Trischler et al., 2017). These QA datasets were built to require reasoning beyond simple pattern recognition, which is exactly what we desire for implicit OpenIE. Our goal is to convert the QA schema to OpenIE, as was successfully done for NLI (Demszky et al., 2018). The repository of software and converted datasets is available at http://toAppear.
QA Pairs to OpenIE Tuples
We started by examining SQuAD and noticing that each answer, A, corresponds to either the subject, relation, or object in an implicit extraction. The corresponding question, Q, contains the other two parts, i.e. either the (1) subject and relation, (2) subject and object, or (3) relation and object. Which two pieces the question contains depends on the type of question. For example, "who was... factoid" type questions contain the relation ("was") and object (the factoid), which means that the answer is the subject. In Figure 1, "Who was Rollo" is recognized as a "who was" question and caught by the whoParse() parser. Similarly, a question in the form of "When did person do action" expresses a subject and a relation, with the answer containing the object. For example, "When did Einstein emigrate to the US" with the answer 1933 would convert to (Einstein, when did emigrate to the US, 1933). In cases like these the relation might not be grammatically ideal, but nevertheless captures the meaning of the input sentence.
In order to identify generic patterns, we build our parse-based tool on top of a dependency parser (Honnibal and Johnson, 2015). It uses fifteen rules, with the proper rule being identified and run based on the question type. The rule then uses its pre-specified pattern to parse the input QA pair and output a tuple. These fifteen rules are certainly not exhaustive, but cover around eighty percent of the inputs. The tool ignores questions greater than 60 characters and complex questions it cannot parse, leaving a dataset smaller than the original (see Table 1).
Each rule is on average forty lines of code that traverses a dependency parse tree according to its pre-specified pattern, extracting the matching spans at each step. A master function parse() determines which rule to apply based on the presence of an nsubj and the type of question (who/what/etc.). Most questions contain an nsubj, which makes the parse task easier, as this will also be the subject of the tuple. We allow the master parse() method to try multiple rules. It first tries very specific rules (e.g. a parser for how questions where no subject is identified), then falls back to more generic rules. If no output is returned after all the methods are tried, we throw the QA pair out. Otherwise, we find the appropriate sentence in the passage based on the index.
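To make this dispatch concrete, the following is a minimal sketch of how such a converter could be organized on top of the spaCy dependency parser. The whoParse() name comes from the description above; whenParse(), the specific dependency patterns, and the prefix-based dispatch are illustrative assumptions rather than the authors' actual implementation.

import spacy

nlp = spacy.load("en_core_web_sm")  # dependency parser (Honnibal and Johnson, 2015)

def whoParse(q_doc, answer):
    # "Who was X": relation = root verb, object = X, subject = the answer.
    root = next((t for t in q_doc if t.dep_ == "ROOT"), None)
    if root is None:
        return None
    obj = " ".join(t.text for t in q_doc if t.i > root.i and not t.is_punct)
    return (answer, root.text, obj)

def whenParse(q_doc, answer):
    # "When did S do A": subject and relation come from the question, object = answer.
    subj = next((t for t in q_doc if t.dep_ == "nsubj"), None)
    if subj is None:
        return None
    rel = " ".join(t.text for t in q_doc if t.dep_ != "nsubj" and not t.is_punct)
    return (subj.text, rel, answer)

RULES = [("who", whoParse), ("when", whenParse)]  # the real tool uses ~15 such rules

def parse(question, answer):
    if len(question) > 60:
        return None               # skip overly long questions
    q_doc = nlp(question)
    key = q_doc[0].text.lower()
    for prefix, rule in RULES:    # specific rules first, then more generic ones
        if key.startswith(prefix):
            result = rule(q_doc, answer)
            if result is not None:
                return result
    return None                   # unparseable QA pairs are thrown out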
Sentence Alignment
Following QA to tuple conversion, the tuple must be aligned with a sentence in the input passage. We segment the passage into sentences using periods as delimiters. The sentence containing the answer is taken as the input sentence for the tuple. Outputted sentences predominantly align with their tuple, but some exhibit partial misalignment in the case of some multi-sentence reasoning questions. 13.6% of questions require multi-sentence reasoning, so this is an upper bound on the number of partially misaligned tuples/sentences (Rajpurkar et al., 2016). While there may be heuristics that can be used to check alignment, we didn't find a significant number of these misalignments and so left them in the corpus. Figure 1 demonstrates the conversion process.
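A minimal sketch of this alignment step, assuming the answer's character offset into the passage is available (as in SQuAD), could look like the following; the offset bookkeeping is an illustrative choice, not the authors' exact code.

def align_sentence(passage, answer_start):
    # Return the period-delimited sentence that contains the answer offset.
    start = 0
    for sent in passage.split("."):
        end = start + len(sent) + 1          # +1 for the consumed period
        if start <= answer_start < end:
            return sent.strip() + "."
        start = end
    return None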
Tuple Examination
Examining a random subset of one hundred generated tuples in the combined dataset, we find 12 noun-mediated, 33 sentence-level inference, 11 long sentence, 7 nominalization, 0 noisy informal, 3 pp-attachment, 24 explicit, and 10 partially misaligned. With 66% implicit relations, this dataset shows promise in improving OpenIE's recall on implicit relations.
Our model
Our implicit OpenIE extractor is implemented as a sequence-to-sequence model with attention (Bahdanau et al., 2014). We use a 2-layer LSTM encoder/decoder (Hochreiter and Schmidhuber, 1997) with 500 parameters, general attention, an SGD optimizer with an adaptive learning rate, and 0.33 dropout. The training objective is to maximize the likelihood of the output tuple given the input sentence. In the case of a sentence having multiple extractions, it appears in the dataset once for each output tuple. At test time, beam search is used for decoding to produce the top-10 outputs and an associated log likelihood value for each tuple (used to generate the precision-recall curves in Section 7).
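As a rough illustration of this architecture (not the authors' exact OpenNMT-py configuration), a 2-layer LSTM encoder/decoder with Luong-style "general" attention can be sketched in PyTorch as follows; the hidden size, vocabulary size, PAD index, and learning-rate handling are assumptions layered on top of the description above.

import torch
import torch.nn as nn

class Seq2SeqAttn(nn.Module):
    """Minimal 2-layer LSTM encoder/decoder with 'general' attention."""
    def __init__(self, vocab_size, emb=500, hid=500, layers=2, dropout=0.33):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.encoder = nn.LSTM(emb, hid, layers, dropout=dropout, batch_first=True)
        self.decoder = nn.LSTM(emb, hid, layers, dropout=dropout, batch_first=True)
        self.attn = nn.Linear(hid, hid, bias=False)   # score = h_dec^T W h_enc
        self.combine = nn.Linear(2 * hid, hid)
        self.out = nn.Linear(hid, vocab_size)

    def forward(self, src, tgt_in):
        enc_out, state = self.encoder(self.embed(src))          # (B, S, H)
        dec_out, _ = self.decoder(self.embed(tgt_in), state)    # (B, T, H), teacher forcing
        scores = dec_out @ self.attn(enc_out).transpose(1, 2)   # (B, T, S)
        ctx = torch.softmax(scores, dim=-1) @ enc_out            # (B, T, H)
        mixed = torch.tanh(self.combine(torch.cat([dec_out, ctx], dim=-1)))
        return self.out(mixed)                                   # logits over the vocabulary

# Training maximizes the likelihood of the gold tuple given the input sentence.
model = Seq2SeqAttn(vocab_size=50000)                     # vocabulary size is an assumption
criterion = nn.CrossEntropyLoss(ignore_index=0)           # assume PAD has id 0
optimizer = torch.optim.SGD(model.parameters(), lr=1.0)   # decay lr when validation stalls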
Evaluation
We make use of the evaluation tool developed by Stanovsky and Dagan (2016) to test the precision and recall of our model against previous methods. We make two changes to the tool as described below.
Creating a Gold Dataset
The test corpus contained no implicit data, so we re-annotate 300 tuples from the CoNLL-2009 English training data to use as gold data. Both authors worked on different sentence sets and then pruned the other's set to ensure only implicit relations remained. We note that this is a different dataset than our training data, so it should be a good test of generalizability; the training data consists of Wikipedia and news articles, while the test data resembles corporate press release headlines.
Matching function for implicit tuples
We implement a new matching function (i.e. the function that decides if a generated tuple matches a gold tuple). The included matching functions used BoW overlap or BLEU, which aren't appropriate for implicit relations; our goal is to assess whether the meaning of the predicted tuple matches the gold, not only the tokens. For example, if the gold relation is "is employed by" we want to accept "works for". Thus, we instead compute the cosine similarity of the subject, relation, and object embeddings to our gold tuple. All three must be above a threshold to evaluate as a match. The sequence embeddings are computed by taking the average of the GloVe embeddings of each word (i.e. a BoW embedding) (Pennington et al., 2014).
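The following is a small sketch of such an embedding-based matcher; the GloVe file parsing, the 0.6 threshold, and the handling of out-of-vocabulary words are illustrative assumptions, not the exact values used in the evaluation.

import numpy as np

def load_glove(path):
    # Parse a GloVe text file into a {word: vector} dict.
    vecs = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vecs[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vecs

def bow_embed(phrase, vecs, dim=300):
    # Average the GloVe vectors of the words in a phrase (BoW embedding).
    words = [vecs[w] for w in phrase.lower().split() if w in vecs]
    return np.mean(words, axis=0) if words else np.zeros(dim, dtype=np.float32)

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

def tuples_match(pred, gold, vecs, threshold=0.6):
    # Predicted (subj, rel, obj) matches gold iff all three parts are similar enough.
    return all(
        cosine(bow_embed(p, vecs), bow_embed(g, vecs)) >= threshold
        for p, g in zip(pred, gold)
    )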
Results
The results on our implicit corpus are shown in Figure 2 (our method in blue). For continuity with prior work, we also compare our model on the original corpus, using our new matching function, in Figure 3. Our model outperforms at every point in the implicit-tuples PR curve, accomplishing our goal of increasing recall on implicit relations. Our system performs poorly on explicit tuples, as we would expect considering our training data. We tried creating a multi-task model, but found that the model learned to produce either implicit or explicit tuples, not both. Creating a multi-task network would be ideal, though it is sufficient for production systems to use both systems in tandem.
Conclusion
We created a large training corpus for implicit OpenIE extractors based on SQuAD and NewsQA, trained a baseline on this dataset, and presented promising results on implicit extraction. We see this as part of a larger body of work in text-representation schemes which aim to represent meaning in a more structured form than free text. Implicit information extraction goes further than traditional OpenIE to elicit relations not contained in the original free text. This allows maximally-shortened tuples where common sense relations are made explicit. Our model should improve further as more QA datasets are released and converted to OpenIE data using our conversion tool.
Figure 1: Tuple conversion and alignment process flow.
Figure 2: PR curve on our implicit tuples dataset.
Figure 3: PR curve on the explicit tuples dataset.
Source Data | Sentences | Train Tuples | Validation Tuples
NewsQA      | 50880     | 56646        | -
SQuAD       | 38773     | 51949        | -
Total       | 89653     | 107595       | 1000

Table 1: Dataset statistics.
Gabor Angeli, Melvin Jose Johnson Premkumar, and Christopher D. Manning. 2015. Leveraging linguistic structure for open domain information extraction. In ACL.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473.
Michele Banko, Michael J. Cafarella, Stephen Soderland, Matthew G. Broadhead, and Oren Etzioni. 2007. Open information extraction from the web. In IJCAI.
Jonathan Berant, Ido Dagan, and Jacob Goldberger. 2011. Global learning of typed entailment rules. In ACL.
Luciano Del Corro and Rainer Gemulla. 2013. ClausIE: clause-based open information extraction. In WWW.
Lei Cui, Furu Wei, and Ming Zhou. 2018. Neural open information extraction. In ACL.
Dorottya Demszky, Kelvin Guu, and Percy Liang. 2018. Transforming question answering datasets into natural language inference datasets. CoRR, abs/1809.02922.
Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information extraction. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP '11), Edinburgh, Scotland, UK.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9:1735-1780.
Matthew Honnibal and Mark Johnson. 2015. An improved non-monotonic transition system for dependency parsing. In EMNLP.
Guillaume Klein, Yoon Kim, Yuntian Deng, Josep Maria Crego, Jean Senellart, and Alexander M. Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. In ACL.
Mausam, Michael Schmitz, Robert Bart, Stephen Soderland, and Oren Etzioni. 2012. Open language learning for information extraction. In Proceedings of the Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL).
Christina Niklaus, Matthias Cetto, André Freitas, and Siegfried Handschuh. 2018. A survey on open information extraction. In COLING.
Harinder Pal and Mausam. 2016. Demonyms and compound relational nouns in nominal open IE. In AKBC@NAACL-HLT.
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In EMNLP.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In EMNLP.
Swarnadeep Saha and Mausam. 2018. Open information extraction from conjunctive sentences. In COLING.
Swarnadeep Saha, Harinder Pal, and Mausam. 2017. Bootstrapping for numerical open IE. In ACL.
Gabriel Stanovsky and Ido Dagan. 2016. Creating a large benchmark for open information extraction. In EMNLP.
Gabriel Stanovsky, Ido Dagan, and Mausam. 2015. Open IE as an intermediate structure for semantic tasks. In ACL.
Gabriel Stanovsky, Jessica Ficler, Ido Dagan, and Yoav Goldberg. 2016. Getting more out of syntax with PropS. CoRR, abs/1603.01648.
Gabriel Stanovsky, Julian Michael, Luke S. Zettlemoyer, and Ido Dagan. 2018. Supervised open information extraction. In NAACL-HLT.
Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2017. NewsQA: A machine comprehension dataset. In Rep4NLP@ACL.
|
[
"https://github.com/knowitall/openie"
] |
[
"COMPARE TRIANGULAR BASES OF ACYCLIC QUANTUM CLUSTER ALGEBRAS",
"COMPARE TRIANGULAR BASES OF ACYCLIC QUANTUM CLUSTER ALGEBRAS"
] |
[
"Fan Qin "
] |
[] |
[] |
Given a quantum cluster algebra, we show that its triangular bases defined by Berenstein and Zelevinsky and those defined by the author are the same for the seeds associated with acyclic quivers. This result implies that the Berenstein-Zelevinsky's basis contains all the quantum cluster monomials.We also give an easy proof that the two bases are the same for the seeds associated with bipartite skew-symmetrizable matrices.
|
10.1090/tran/7610
|
[
"https://arxiv.org/pdf/1606.05604v2.pdf"
] | 54,033,548 |
1606.05604
|
17a778a64e6d2c4aef811f251b038534e68a763a
|
COMPARE TRIANGULAR BASES OF ACYCLIC QUANTUM CLUSTER ALGEBRAS
15 Dec 2016
Fan Qin
COMPARE TRIANGULAR BASES OF ACYCLIC QUANTUM CLUSTER ALGEBRAS
15 Dec 2016arXiv:1606.05604v2 [math.QA]
Given a quantum cluster algebra, we show that its triangular bases defined by Berenstein and Zelevinsky and those defined by the author are the same for the seeds associated with acyclic quivers. This result implies that the Berenstein-Zelevinsky's basis contains all the quantum cluster monomials.We also give an easy proof that the two bases are the same for the seeds associated with bipartite skew-symmetrizable matrices.
1. Introduction 1.1. Cluster algebras. In [FZ02], Fomin and Zelevinsky invented cluster algebras as a combinatorial approach to dual canonical bases of quantum groups (discovered by Lusztig [Lus90] and Kashiwara [Kas90] independently). The quantum cluster algebras were later introduced in [BZ05]. These algebras possess many seeds, which are constructed recursively by an algorithm called mutation. Every seed consists of some skew-symmetrizable matrix and a collection of generators called (quantum) cluster variables. We might view these seeds as analog of local charts of algebraic varieties 1 .
There have been many attempts to construct "good" bases of cluster algebras, cf. [GLS11, GLS12, GLS13] [KKKO15]. In view of the original motivation of Fomin and Zelevinsky, a good basis should contain all the quantum cluster monomials (monomials of quantum cluster variables belonging to the same seed).
1.2. Berenstein-Zelevinsky's triangular basis approach. In [BZ14], Berenstein and Zelevinsky proposed the following new approach to good bases of quantum cluster algebras:
• Inspired by the Kazhdan-Lusztig theory, construct a triangular basis C t in each seed t such that it contains all the quantum cluster monomials in that seed. More precisely, first construct a basis consisting of some ordered products of quantum cluster variables; then Lusztig's lemma [BZ14, Theorem 1.1] guarantees a unique new basis whose transition matrix from the old one is unitriangular, whence the name triangular basis.
• Prove that these triangular bases give rise to a common basis for all seeds.
If this approach works, then we have a common triangular basis containing the quantum cluster monomials in all seeds. However, Berenstein-Zelevinsky's construction only works for those special seeds of acyclic type, cf. Section 2.3 for the definition. They arrived at a common basis for the acyclic seeds, which we call the BZ-basis and denote by C.
On the other hand, it is known that the quantum cluster algebras associated with acyclic quiver and z-coefficient pattern are isomorphic to some quantum unipotent subgroups and, consequently, inherit the dual canonical bases, cf. [GLS13] [KQ14]. In [KQ14], Kimura and the author showed that, for such quantum cluster algebras, the dual canonical bases contain all the quantum cluster monomials. It is natural to propose the following conjecture.
Conjecture 1.2.1. For quantum cluster algebras associated with an acyclic quiver and z-coefficient pattern, its dual canonical basis agrees with Berenstein-Zelevinsky's triangular basis C.
The verification of this conjecture would imply the desired property that Berenstein-Zelevinsky's triangular basis contains all quantum cluster monomials.
1 In fact, we have a family of varieties called cluster varieties, whose local charts are tori, local coordinate functions are cluster variables, and transition maps are determined by the matrices in the seeds, cf. [FG09].
Different triangular bases in monoidal categorification.
Inspired by this new approach of Berenstein-Zelevinsky, in [Qin15], in order to prove monoidal categorification conjectures of quantum cluster algebras, the author introduced very different triangular bases for injective-reachable quantum cluster algebras. For every seed t, we can define such a triangular basis L t , cf. Section 2.2.
There are two crucial differences between the common triangular basis L in [Qin15] and the basis C of Berenstein-Zelevinsky:
(1) The basis is unique but its existence cannot be guaranteed, because Lusztig's lemma does not apply.
(2) The expectation from Fock-Goncharov basis conjecture is included in the definition and plays an important role.
1.4. Results. We have two very different constructions of triangular bases. It is desirable to compare these bases, which are both defined for acyclic seeds. The main result of this paper claims that they are the same for quantum cluster algebras arising from acyclic skew-symmetric matrices (or, equivalently, from acyclic quivers).
Theorem 1.4.1 (Main result). Let A be a quantum cluster algebra that has a seed t with an acyclic skew-symmetric matrix B(t). Then, in this seed, its triangular basis L t in [Qin15] agrees with Berenstein-Zelevinsky's triangular basis C.
Notice that, for the quantum cluster algebra arising from an acyclic quiver and z-coefficient pattern, its common triangular basis in [Qin15] is the dual canonical basis. Therefore, our main result Theorem 1.4.1 implies Conjecture 1.2.1.
Our proof is based on ideas and techniques developed by the author in [Qin15], in particular, the maximal degree tracking and the composition of unitriangular transitions. The triangular bases treated in this paper are much easier than those in [Qin15] and our paper does not depend on the long proof there. In particular, we give a self-contained proof that the triangular bases L t in different acyclic seeds t are the same, cf. Theorem 3.1.4.
We could further propose the following natural conjecture.
Conjecture 1.4.2. The triangular basis L t agrees with Berenstein-Zelevinsky's triangular basis C in the seeds associated with acyclic skew-symmetrizable matrices.
In a previous private communication with Zelevinsky, the author pointed out that for bipartite orientation, this conjecture is true. The details will be given in the appendix, cf. Theorem 3.3.4.
Acknowledgments
The author thanks Andrei Zelevinsky and Kyungyong Lee for conversations on acyclic cluster algebras. He thanks Yoshiyuki Kimura, Qiaoling Wei and Changjian Fu for remarks.
Preliminaries
2.1. Quantum cluster algebras. We recall the definition of quantum cluster algebras from [BZ05] and follow the conventions in [Qin15]. Let [x]_+ denote max(x, 0). Let B be an m × n integer matrix with n ≤ m. Its n × n upper submatrix B is called the principal part. Assume that B is of rank n and that its principal part B is skew-symmetrizable (namely, there exists a diagonal matrix with strictly positive integer diagonal entries such that its product with B is skew-symmetric). We can choose an m × m skew-symmetric integer matrix Λ such that B^T Λ = (D 0) for some diagonal matrix D with strictly positive integer diagonal entries. Such a pair (B, Λ) is called a compatible pair.
A quantum seed t (or seed for simplicity) consists of a compatible pair (B(t), Λ(t)) and a collection of indeterminates X_i(t), 1 ≤ i ≤ m, called X-variables. Let {e_i} denote the natural basis of Z^m and X(t)^{e_i} = X_i(t). We define the corresponding quantum torus T(t) to be the Laurent polynomial ring Z[q^{±1/2}][X(t)^g]_{g ∈ Z^m} with the usual addition +, the usual multiplication ·, and the twisted product
X(t)^g * X(t)^h = q^{(1/2) Λ(t)(g,h)} X(t)^{g+h},
where Λ(t)( , ) denotes the bilinear form on Z^m such that Λ(t)(e_i, e_j) = Λ(t)_{ij}.
T(t) admits a bar-involution ( ¯ ), which is Z-linear and sends q^s X(t)^g to q^{−s} X(t)^g.
Notice that all Laurent monomials in T (t) commute with each other up to a q-power, which is called q-commute.
Let b_{ij} denote the (i, j)-entry of B(t). We define the Y-variables to be the following Laurent monomials:
Y_k(t) = X(t)^{ Σ_{1≤i≤m} [b_{ik}]_+ e_i − Σ_{1≤j≤m} [−b_{jk}]_+ e_j }.
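As a quick sanity check of this formula, one can work it out for the rank-2 Kronecker-type matrix that appears later in Example 3.3.5; the LaTeX below is only a worked instance of the definition, not new material.

% Worked instance of the Y-variable formula for m = n = 2 and
% \widetilde{B} = B = \begin{pmatrix} 0 & -2 \\ 2 & 0 \end{pmatrix}:
% column 1 of B is (0, 2)^T and column 2 is (-2, 0)^T, so
\[
Y_1(t) = X(t)^{[b_{21}]_+ e_2} = X(t)^{2e_2},
\qquad
Y_2(t) = X(t)^{-[-b_{12}]_+ e_1} = X(t)^{-2e_1},
\]
% in agreement with the expressions computed in Example 3.3.5.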
For any direction 1 ≤ k ≤ n, the following operation (called the mutation µ k ) gives us a new seed t ′ = µ k t = ((X i (t ′ )) 1≤i≤m , B(t ′ ), Λ(t ′ )):
• X_i(t′) = X_i(t) if i ≠ k,
• X_k(t′) = X(t)^{−e_k + Σ_i [b_{ik}]_+ e_i} + X(t)^{−e_k + Σ_j [−b_{jk}]_+ e_j},
• B(t′) = (b′_{ij}) is determined by B(t) = (b_{ij}):
  b′_{ik} = −b_{ki},
  b′_{ij} = b_{ij} + [b_{ik}]_+ [b_{kj}]_+ − [−b_{ik}]_+ [−b_{kj}]_+ for i, j ≠ k,
• Λ(t′) is skew-symmetric and satisfies
  Λ(t′)_{ij} = Λ(t)_{ij} for i, j ≠ k,
  Λ(t′)_{ik} = Λ(t)(e_i, −e_k + Σ_j [−b_{jk}]_+ e_j) for i ≠ k.
The quantum torus T (t ′ ) for the new seed t ′ is defined similarly. Notice that, by [BZ05, Proposition 6.2], any
Z ∈ T (t) ∩ T (t ′ ) is bar- invariant in T (t) if and only if it is bar-invariant in T (t ′ ).
We define a quantum cluster algebra A as the following:
• Choose an initial seed t 0 = ((X 1 , · · · , X m ), B, Λ).
• All the seeds t are obtained from t 0 by iterated mutations at directions 1 ≤ k ≤ n.
• A = Z[q ± 1 2 ][X −1 n+1 , · · · , X −1 m ][X i (t)] t,1≤i≤m
. The X-variables X i (t) in the seeds are called the quantum cluster variables. We call X n+1 , . . . , X m the frozen variables or the coefficients.
The correction technique developed in [Qin14, Section 9] provides a convenient tool for studying the bases of A, cf. [Qin15, Section 5] for a summary. It tells us that most phenomenons and properties of bases keep unchanged when we change the coefficient part of the seed t, namely the lower (m − n) × n submatrix B c (t) of B(t), or when we change Λ(t).
Finally, notice that to each rank n quiver Q, we can associate an n × n skew-symmetric matrix B such that its entry b ij is given by the difference of the number of arrows from i to j with that of j to i. All skew-symmetric matrices arise in this way. So, if the matrix B(t) of a seed t is skew-symmetric, we say t is skew-symmetric or t arises from a quiver; if B(t) is skew-symmetrizable, we say t is skew-symmetrizable.
2.2. Triangular basis. Choose any seed t. We recall the following notions introduced in [Qin15, Section 3.1] Definition 2.2.1 (Pointed elements and normalization). A Laurent polynomial Z in the quantum torus T (t) is said to be pointed if it takes the form
Z = X(t) g · (1 + 0 =v∈N n c v Y (t) v ), (2.1)
for some coefficients c v ∈ Z[q ± 1 2 ]. In this case, Z is said to be pointed at degree g, and we denote deg t Z = g.
If Z = q s X(t) g (1 + 0 =v∈N n c v Y (t) v ) for some s ∈ Z 2 , we use [Z] t to denote the pointed element q −s Z and call it the normalization of Z in T (t).
Notice that all the quantum cluster variables are pointed. In order to say that a pointed element has a unique maximal degree, we need to introduce the following partial order.
Definition 2.2.2 (Degree lattice and dominance order). We call Z m the degree lattice and denote it by D(t). Its dominance order ≺ t is defined to be the partial order such that g ′ ≺ t g if and only if
g′ = g + deg_t Y(t)^v for some 0 ≠ v ∈ N^n. We might omit the symbol t in X_i(t), I_k(t), ≺_t, deg_t or [ ]_t for simplicity. Lemma 2.2.3 ([Qin15][Lemma 3.1.2]). For any g′ ⪯_t g in Z^m, there exist finitely many g″ ∈ Z^m such that g′ ⪯_t g″ ⪯_t g. Assume that, in T(t), we have (possibly infinitely many) elements L_j pointed at different degrees. Let us denote L_j = Σ_{g ∈ Z^m} c_{g;j} X^g, where c_{g;j} ∈ Z[q^{±1/2}]. A linear combination Σ_j a_j L_j with a_j ∈ Z[q^{±1/2}]
is well defined and contained in T (t) if j a j c g;j is a finite sum for all g ∈ Z m and vanishes except for finitely many g.
Assume that Z be a Laurent polynomial in T (t) such that it is a well defined linear combination of L j :
Z = j a j L j , a j ∈ Z[q ± 1 2 ]. (2.2) We say that this decomposition ≺ t -triangular if there exists a unique ≺ t -maximal element deg t L 0 in {deg t L j }. It is further called ≺ t -unitriangular if a 0 = 1, or (≺ t , m)-triangular if a j ∈ m = q − 1 2 Z[q − 1 2 ] for j = 0. A set {Z} is said to be (≺ t , m)-unitriangular to {L j } if all its elements Z has such property. Lemma 2.2.4 ([Qin15][Lemma 3.1.9]). If the decomposition (2.2) is ≺ t -triangular, then it is the unique ≺ t -triangular decomposition of Z in {L j }.
Proof. Thanks to Lemma 2.2.3, we can recursively determine all the coefficients a j of L j in (2.2), starting from the higher ≺ t -order Laurent degrees, cf. [Qin15][Remark 3.1.8].
The following lemma will be useful. It allows us to switch to the desired dominance order. (ii) If, further, all but one coefficients in (2.2) belong to m, then
(2.2) is (≺ t , m)-unitriangular.
Proof. (i) We recall the proof in [Qin15][Lemma 3.1.9]. Compare maximal degrees of both hand sides of a finite decomposition, we obtain that the finite set {deg L j } contains a unique maximal element deg L 0 for some L 0 such that deg L 0 = deg Z. So this decomposition is ≺ ttriangular. Finally, a 0 = 1 because Z has coefficient 1 in its leading degree.
(ii) By (i), Z admits a ≺ t -unitriangular decomposition. The hypothesis in (ii) simply tells us that the coefficients other than the leading coefficient (equals 1) belong to m.
For any 1 ≤ k ≤ n, let I k (t) denote 2 the unique quantum cluster variable (if it exists) such that pr n deg t I k (t) = −e k , where pr n is the projection of Z m onto the first n-components. The quantum cluster algebra A is said to be injective reachable if I k (t) exists for any 1 ≤ k ≤ n. This property is independent of the choice of the seed t by [Pla11] [GHKK14]. In this case, the quantum cluster variables I k (t), 1 ≤ k ≤ n, q-commute with each other because they belong to the same seed (denoted by t[1] in [Qin15]).
Remark 2.2.6. In the convention of Section 2.3, if B(t) is acyclic, we can obtain the quantum cluster variables I k , ∀1 ≤ k ≤ n, by applying the sequence of mutations on each vertex 1, · · · , n such that the their order increases with respect to ⊳. In particular, the corresponding cluster algebra is injective reachable. See Example 3.3.5 for an explicit calculation.
Definition 2.2.7 (Triangular basis [Qin15, Definition 6.1.1]). The triangular basis L t for the seed t is defined to be the basis of the quantum cluster algebra A such that • (triangularity) For any X i (t) and S ∈ L t , we have
• The quantum cluster monomials [ 1≤i≤m X i (t) u i ] t ,[ 1≤k≤n I k (t) v k ] t belong to L t , ∀u i , v k ∈ N. • (bar-invariance)[X i (t) * S] t = b + c b ′ · b ′ , where deg t b ′ ≺ t deg t b = deg t X i (t) + deg t S and the coefficients c b ′ ∈ m = q − 1 2 Z[q − 1 2 ]
. It is easy to show that if L t exists, then it is unique by the triangularity and bar-invariance, cf [Qin15, Lemma 6.2.6(i)]. In order to study L t , [Qin15] introduced the injective pointed set I t in the seed t:
I t = {I t (f, u, v)|f ∈ Z [n+1,m] , u, v ∈ N [1,n] , u k v k = 0∀k ∈ [1, n]} I t (f, u, v) = [ n+1≤i≤m X i (t) f i * 1≤k≤m X k (t) u k * 1≤k≤m I k (t) v k ] t
This is a linearly independent family of pointed elements contained in A. By the triangularity of L t , the set of pointed elements I t is (≺ t , m)unitriangular to L t . It follows that L t is also (≺ t , m)-unitriangular to
I t , cf. [Qin15, Lemma(inverse transition)]. Example 2.2.8 (Type A 3 ). Consider the matrix B = 0 −1 0 1 0 −1 0 1 0 −1 1 0 0 −1 1 0 0 −1 ,
which is the matrix of the ice quiver in Figure 2.1.
In the convention of [KQ14], its principal part is an acyclic type A 3 quiver and coefficient part the z-pattern. There is a natural matrix Λ such that ( B, Λ) is compatible. The corresponding quantum cluster algebra A is isomorphic to the quantum unipotent subgroup A q (n(c 2 )) localized at the coefficients X 4 , X 5 , X 6 , where the Coxeter word c = s 3 s 2 s 1 (read from right to left).
The quantum cluster variables I 1 , I 2 , I 3 are obtained from consecutive mutations at 1, 2, 3. Our pointed element I(f, u, v)
I(f, u, v) = [X_4^{f_4} * X_5^{f_5} * X_6^{f_6} * X_1^{u_1} * X_2^{u_2} * X_3^{u_3} * I_1^{v_1} * I_2^{v_2} * I_3^{v_3}]
is a localized dual PBW basis element (rescaled by a q-power), and the triangular basis is the localized (rescaled) dual canonical basis, cf. [KQ14].
Lemma 2.2.9 (Substitution [Qin15][Lemma 6.4.4]). If a pointed ele- ment Z is (≺ t , m)-unitriangular to L t , so does [ n+1≤i≤m X f i * X u * Z * I v ] for any f ∈ Z [n+1,m] ,u, v ∈ N n .
Proof. Z is (≺ t , m)-unitriangular to I t and admits a (≺ t , m)-unitriangular decomposition Z = s a s I t (f (s) , u (s) , v (s) ). 2.3. Berenstein-Zelevinsky's triangular basis. Work in some chosen seed t, whose symbol we often omit. Assume that its principal part B = B(t) is acyclic, namely, there exists an order ⊳ on the vertex {1, . . . , n} such that b ij ≤ 0 whenever i ⊳ j. In this case, t is called an acyclic seed. If i ⊳ j, we say i is ⊳-inferior than j, and also denote j ⊲ i.
A vertex j ∈ [1, n] is said to be a source point in t if j is ⊳-maximal, namely, j ⊲ k for all 1 ≤ k ≤ n. Similarly, it is called a sink point in t if j is ⊳-minimal, namely, j ⊳ k for all 1 ≤ k ≤ n.
For any 1 ≤ k ≤ n, let b k = Be k denote the k-th column of B. Let S k = S k (t) denote 3 the quantum cluster variable X k (µ k t). Notice that
S k = X −e k +[−b k ] + · (1 + Y k ) and we have deg S k = −e k + [−b k ] + , where [−b k ] + denote ([−b jk ] + ) 1≤j≤m .
For any a ∈ Z^m, Berenstein and Zelevinsky defined the standard monomials
E_a = [ ∏_{n<j≤m} X_j^{a_j} * ∏_{1≤k≤n} X_k^{[a_k]_+} * ∏^⊳_{1≤k≤n} S_k^{[−a_k]_+} ]
, where the last factor is the product with increasing ⊳ order, cf. 3 We use the symbol S k because this cluster variable corresponds to the k-th simple S k in an associated quiver with potential.
We call C t the BZ-basis for simplicity. Applying the bar involution, we obtain that C t is (≺ BZ , m)-triangular to {E a }, where
E a = [ ⊲ 1≤k≤n S [−a k ] + k * 1≤k≤n X [a k ] + k * n<j≤m X a j ]
where the first factor is the product with decreasing ⊳ order.
Example 2.3.2. Let us continue Example 2.2.8. The standard monomials, after the bar involution, gives us
E a = [S [−a 3 ] + 3 * S [−a 2 ] + 2 * S [−a 1 ] + 1 * X [a 1 ] + 1 * X [a 2 ] + 2 * X [a 3 ] + 3 * * X a 4
4 * X a 5 5 * X a 6 6 ]. Notice that X 4 , X 5 , X 6 q-commute with all the factors. . The Berenstein-Zelevinsky's triangular basis C t is independent of the acyclic seed t chosen, which we denote by C.
Compare triangular bases
3.1. Basic results. Let us choose and work with any seed t whose matrix B(t) is acyclic.
Lemma 3.1.1. For any acyclic seed t, each C a is (≺ t , m)-unitriangular to {E a }.
Proof. Each C a is a finite linear combination of {E a } with one term of coefficient 1 and others of coefficients in m. This decomposition is ≺ t -triangular by Lemma 2.2.5.
Lemma 3.1.2. If n is a source point, then E a remains pointed in t ′ = µ n t.
Proof. It might be possible to deduce this result from the existence of common Berenstein-Zelevinsky triangular bases in t and t ′ . Let us give an alternative elementary verification.
In order to show that the q-normalization factor producing by the factors of E a remains unchanged in T (t ′ ), it suffices to show that, for any 1 ≤ i, j ≤ m, 1 ≤ l < k ≤ n, i = k, we have
Λ(t)(deg t X i , deg t X j ) = Λ(t ′ )(deg t ′ X i , deg t ′ X j ) (3.1) Λ(t)(deg t X i , deg t S k ) = Λ(t ′ )(deg t ′ X i , deg t ′ S k ) (3.2) Λ(t)(deg t S l , deg t S k ) = Λ(t ′ )(deg t ′ S l , deg t ′ S k ). (3.3) Notice that we have deg t S l = −e l + s [−b sl ] + e s ,
where all e s appearing have s = n. Therefore, we deduce that deg t ′ S l = deg t S l , ∀l < n, by the tropical transformation of g-vectors, cf. [Qin15, Section 3.2] [FG09] [FZ07,(7.18)]. The first two equations simply follows from the mutation rule from Λ(t) to Λ(t ′ ). It remains to check (3.3). By using (3.2), we obtain
Λ(t)(deg t S l , deg t S k ) = Λ(t)(− deg t X l + s [−b sl ] + deg t X s , deg t S k ) = −Λ(t)(deg t X l , deg t S k ) + s [b −sl ] + Λ(t)(deg t X s , deg t S k ) = −Λ(t ′ )(deg t ′ X l , deg t ′ S k ) + s [b −sl ] + Λ(t ′ )(deg t ′ X s , deg t ′ S k ) = Λ(t ′ )(deg t ′ S l , deg t ′ S k ).
The following statement is the main result of [KQ14] accompanied with the coefficient correction technique in [Qin14].
Theorem 3.1.3 ([KQ14][Qin14]). If the principal part B(t) of a seed t is acyclic and skew-symmetric, then the triangular basis L t for t exists. Moreover, it contains all the quantum cluster monomials.
Proof. When we choose the special coefficient pattern B c (t) to be zpattern as in [KQ14], the quantum cluster algebra is isomorphic to a subalgebra of a quantized enveloping algebra [GLS13]. Under this identification, X i (t), I k (t) are the factors of the dual PBW basis element, and the triangular basis L t is just the restriction of the dual canonical basis on this subalgebra (and localized at the coefficients (X n+1 , · · · , X m )). By [KQ14], this basis contains all the quantum cluster monomials.
By the correction technique in [Qin14], we can change the coefficient pattern B c (t) and Λ(t) while keeping the claim true.
The following statement is implied by the general result in [Qin15, Theorem 9.4.1]. We sketch a much easier proof for this special case.
Theorem 3.1.4. Let t and t ′ be two seeds such that t ′ = µ k t for some 1 ≤ k ≤ n and B(t), B(t ′ ) are acyclic and skew-symmetric. Then the quantum cluster algebra has a basis L which is the triangular basis for both t and t ′ .
Proof. Because t and t ′ are acyclic, by Theorem 3.1.3, we know that the triangular bases L t and L t ′ for t and t ′ exist. Moreover, the quantum cluster monomials X ′d
k = X k (t ′ ) d , I ′d k = I k (t ′ ) d belong to L t ,
where d ∈ N. Therefore, X ′d k and I ′d k have (≺ t , m)-unitriangular decomposition in the injective pointed set I t . These are the only new factors of elements in I t ′ which are not factors of elements in I t .
Easy calculation shows that elements in I t ′ remain pointed in T (t), cf. [Qin15, Lemma 5.3.2]. Substituting their new factors X ′d k and I ′d k by the decomposition in I t , we deduce that I t ′ is (≺ t , m)-unitriangular to I t by Lemma 2.2.9. Also, notice that L t ′ is (≺ t ′ , m)-unitriangular to I t ′ and I t is (≺ t , m)unitriangular to L t . Composing these three transitions, we obtain that any S ′ ∈ L t ′ is a finite combination of elements S, S i in L t :
S ′ = S + i a i S i ,
with coefficient a i ∈ m. Now by the bar-invariance of L t and L t ′ , we must have a i = 0 and S ′ = S. It follows that the two triangular bases L t and L t ′ are the same.
3.2.
Proof of the main result. For any chosen 1 ≤ j ≤ n, let t[j −1 ] denote the seed obtained from t by deleting the j-th column in the matrix B(t). This operation is called freezing the vertex j. We have the corresponding quantum cluster algebra
A(t[j −1 ]). Observe that the normalization [ ] t[j −1 ] = [ ] t because Λ(t[j −1 ]) = Λ(t) by construction.
Moreover, the partial order ≺ t[n −1 ] implies ≺ t by definition. We can define similarly, for f ∈ Z {j}∪[n+1,m] , u, v ∈ N [1,n]−{j} , where u k v k = 0 for any k:
I t[j −1 ] (f, u, v) = [ n+1≤i≤m X f i i * X f j j * 1≤k≤n,k =j X u k k * 1≤k≤n,k =j I k (t[j −1 ]) v k ] t[j −1 ] .
We want to compare this new injective pointed set I t[j −1 ] with the old one I t . One has to pay attention to the possible localization at X j in the seed t[j −1 ].
Assume the vertex n to be ⊳-maximal, namely, a source point, then I k (t[n −1 ]) = I k (t) for all 1 ≤ k < n, cf. Remark 2.2.6, and, moreover,
(deg Y i ) n = b ni ≥ 0 ∀1 ≤ i ≤ n.
It follows that the Laurent monomials of I k (t), ∀k = n, have non-negative degrees in X n .
Notice that, for a source point n, if f n ≥ 0, then
I t[n −1 ] (f, u, v) ∈ I t .
Lemma 3.2.1. Assume that n is a source point and a pointed element Z ∈ A(t[n −1 ]) has a finite combination of
Z = s a s I t[n −1 ] (f (s) , u (s) , v (s) ).
If (deg Z) n ≥ 0, then we have f (s) n ≥ 0 whenever a s = 0. Consequently, all I t[n −1 ] (f (s) , u (s) , v (s) ) appearing in the combination are contained in I t .
Proof. Recall that I t[n −1 ] is a linearly independent family of pointed elements with distinguished leading degrees. By Lemma 2.2.5(i), the given decomposition of Z is ≺ t -unitriangular with a unique leading term I t[n −1 ] (f (0) , u (0) , v (0) ) whose leading degree equals deg Z. So the leading degrees of all I t[n −1 ] (f (s) , u (s) , v (s) ) appearing are ≺ t -inferior or equal to deg Z. Since (deg Z) n ≥ 0 and (deg Y i ) n ≥ 0, ∀1 ≤ i ≤ n, they are all non-negative in the n-th components.
Notice that pr n deg I k (t) = −e k by definition and, in particular, the leading degree deg I k (t), ∀k < n, vanishes in the n-th components. Proof of Theorem 1.4.1. We prove the claim by induction on the rank n of B(t). The cases n = 0 are trivial.
Up to relabeling vertices, let us assume that n is a source point in t. Denote t ′ = µ n t.
It suffices to show that every E a , a ∈ Z m , is (≺ t , m)-triangular to L t . If so, combined with Lemma 3.1.1, we obtain that every bar-invariant element C a is (≺ t , m)-triangular to L t and, consequently, must belong to L t . It follows that the two bases L t and C must agree.
(i) Assume a n ≥ 0. Consider the seed t[n −1 ] obtained by freezing the vertex n in t. It is acyclic whose matrix B(t[n −1 ]) has rank n − 1. By induction hypothesis, its triangular basis L t[n −1 ] agrees with its BZbasis C t[n −1 ] . Notice that the corresponding standard monomial E a is also a standard monomials for seed t[n −1 ]. Therefore, E a admits a finite decomposition in C t[n −1 ] = L t[n −1 ] with one term of coefficient 1 and other terms of coefficient in m. Recall that L t[n −1 ] is ≺ t[n −1 ]unitriangular to I t[n −1 ] . Composing these two transitions, we see that E a has a finite decomposition in I t[n −1 ] with one term of coefficient 1 and others of coefficient in m. Further notice that (deg E a ) n ≥ 0, by Lemma 3.2.1, the decomposition terms appearing belong to I t . By Lemma 2.2.5, E a is (≺ t , m)-unitriangular to I t , and consequently (≺ t , m)-unitriangular to L t .
(ii) When a n < 0, let us rewrite E a as [S [−an] + n * E a n ] t , where a n denote the vector obtained from a by setting the n-th component to 0. Notice that E a is also pointed in t ′ by Lemma 3.1.2, namely, E a = [S [−an] + n * E a n ] t ′ . For the seed t ′ , we freeze the vertex n and repeat the argument in (i), it follows that E a n is (≺ t ′ , m)-unitriangular to the triangular basis L t ′ of the seed t ′ . Notice that S n is the n-th cluster variable in the seed t ′ . By Lemma 2.2.9, we obtain that E a is (≺ t ′ , m)unitriangular to the triangular basis L t ′ of the seed t ′ . Because L t = L t ′ by Theorem 3.1.4, E a is (≺ t , m)-unitriangular to L t by Lemma 2.2.5.
3.3.
Bipartite skew-symmetrizable case. We say the seed t has a bipartite orientation (we say t is bipartite for short), if we have {1, · · · , n} = V 0 ⊔ V 1 , such that all the vertices in V 0 are source points and those in V 1 are sink points.
Assume that t is bipartite. Let we denote by t ′ the seed obtained from t by mutating at all the vertices in V 1 , namely,
µ V 1 = k∈V 1 µ k t ′ = µ V 1 t.
Notice that the mutations µ k , k ∈ V 1 , commute with each other. The following lemma follows from the definitions of the corresponding cluster variables, cf. Figure 3.1 for identification of cluster variables, where i ∈ V 0 , j ∈ V 1 , the graph are constructed via the knitting algorithm, cf. [Kel08].
Lemma 3.3.1. We have, for any 1 ≤ i, j ≤ n, It follows from Lemma 3.3.1 that those S(t ′ ), i ∈ V 0 q-commute with each other, and S j (t ′ ), j ∈ V 1 , q-commute with each other.
X i (t ′ ) = X i (t), i ∈ V 0 , (3.4) X j (t ′ ) = I j (t), j ∈ V 1 , (3.5) S i (t ′ ) = I i (t), i ∈ V 0 , (3.6) S j (t ′ ) = X j (t), j ∈ V 1 . (3.7) X i (t) X j (t) I j (t) I i (t) X i (t ′ ) S j (t ′ ) X j (t ′ ) S i (t ′ )
Notice that t ′ is still bipartite with the vertices in V 0 being sink points and the vertices in V 1 being source points. (1) For any 1 ≤ k = j ≤ n, such that j ∈ V 1 , X k (t) and I j (t) q-commute.
(2) For any 1 ≤ i = k ≤ n, such that i ∈ V 0 , X i (t) and I k (t) q-commute Proof.
(1) X k (t) and I j (t) are quantum cluster variables in the same seed µ j t.
(2) By (1), it remains to check the case i, k ∈ V 0 . Notice that V 0 consist of sink points in t ′ = µ V 1 t. X i (t) and I k (t) are quantum cluster variables in the same seed µ k t ′ .
Lemma 3.3.3. The pointed element E a defined in t ′ remains pointed in t = µ V 1 t ′ .
Proof. The vertices in V 1 are source points in t ′ which are not connected by arrows. We simply repeat the proof of Lemma 3.1.2.
Theorem 3.3.4. For bipartite t, the Berenstein-Zelevinsky's triangular basis C is also the triangular basis L t .
Proof. Notice that, in the seed t ′ , the vertices in V 1 are source points and ⊳-superior than those in V 0 . Using Lemma 3.3.2(ii), we have, for any a ∈ Z m ,
E a = [ j∈V 1 S j (t ′ ) [−a j ] + * i∈V 0 S i (t ′ ) [−a i ] + * j∈V 1 X j (t ′ ) [a j ] + * i∈V 0 X i (t ′ ) [a i ] + * n+1≤j≤m X j (t ′ ) a j ] t ′ = [ j∈V 1 X j (t) [−a j ] + * i∈V 0 I i (t) [−a i ] + * j∈V 1 I j (t) [a j ] + * i∈V 0 X i (t) [a i ] + * n+1≤j≤m X j (t ′ ) a j ] t ′ . = [ j∈V 1 X j (t) [−a j ] + * i∈V 0 X i (t) [a i ] + * i∈V 0 I i (t) [−a i ] + * j∈V 1 I j (t) [a j ] + * n+1≤j≤m X j (t ′ ) a j ] t ′ (3.8)
By Lemma 3.3.3, E a remains to be pointed in t. Then (3.8) tells us that it belongs to the injective pointed set I t . All elements of I t take this form. So we see the BZ-basis C verifies the conditions (i)(ii)(iv) in Definition 2.2.7. A closer examination tells us that the condition (iii) in Definition 2.2.7 is also verified by the basis C, cf. [BZ14]. So C is the triangular basis L t for the seed t. Its seed t ′ = µ V 1 t has the matrices B = 0 −2 2 0 and Λ = 0 −1 1 0 . The vertex 2 is the source point in t ′ . It is easy to compute that S 1 (t ′ ) = X(t ′ ) −e 1 + X(t ′ ) −e 1 +2e 2 S 2 (t ′ ) = X(t ′ ) −e 2 +2e 1 + X(t ′ ) −e 2 Y 1 (t ′ ) = X(t ′ ) 2e 2 Y 2 (t ′ ) = X(t ′ ) −2e 1 .
By [BZ14, (6.4)] [DX12], we have the following bar-invariant pointed element X δ in the BZ-basis C, given by X δ = q 1 2 S 1 (t ′ ) * S 2 (t ′ ) − q 3 2 X 2 (t ′ ) * X 1 (t ′ ) = X(t ′ ) e 1 −e 2 · (1 + Y 2 (t ′ ) + Y 1 (t ′ )Y 2 (t ′ )) = X(t ′ ) e 1 −e 2 + X(t ′ ) −e 1 −e 2 + X(t ′ ) e 2 −e 1 .
Taking the bar-involution, we obtain
X δ = q − 1 2 S 2 (t ′ ) * S 1 (t ′ ) − q − 3 2 X 1 (t ′ ) * X 2 (t ′ ) = [S 2 (t ′ ) * S 1 (t ′ )] t ′ − q −2 [X 1 (t ′ ) * X 2 (t ′ )] t ′ .
We have S 2 (t ′ ) = X 2 (t) S 1 (t ′ ) = I 1 (t) = X(t) −e 1 (1 + Y 1 (t) + (q + q −1 )Y 1 (t)Y 2 (t) + Y 1 (t)Y 2 (t) 2 ) X 2 (t ′ ) = I 2 (t) = X(t) −e 2 (1 + Y 2 (t)) X 1 (t ′ ) = X 1 (t)
Then X δ can be rewritten as X δ = [X 2 (t) * I 1 (t)] t − q −2 [X 1 (t) * I 2 (t)] t = X(t) −e 1 +e 2 (1 + Y 1 (t) + (1 + q −2 )Y 1 (t)Y 2 (t) + q −2 Y 1 (t)Y 2 (t) 2 ) − q −2 X e 1 −e 2 (1 + Y 2 (t)) = X(t) −e 1 +e 2 (1 + Y 1 (t) + Y 1 (t)Y 2 (t)) = X(t) −e 1 +e 2 + X(t) −e 1 −e 2 + X(t) e 1 −e 2 .
Notice that the normalization factors do not change:
Λ(t)(deg t X 2 (t), deg t I 1 (t)) = Λ(t)(e 2 , −e 1 ) = 1 = Λ(t ′ )(deg t ′ S 2 (t ′ ), deg t ′ S 1 (t ′ )) Λ(t)(deg t X 1 (t), deg t I 2 (t)) = Λ(t)(e 1 , −e 2 ) = −1 = Λ(t ′ )(deg t ′ X 1 (t ′ ), deg t X 2 (t ′ )).
Therefore, the pointed element X δ is (≺ t , m)-unitriangular to the injective pointed set I t , and consequently (≺ t , m)-unitriangular to the triangular basis L t . It follows from its bar-invariance that X δ belongs to the triangular basis L t .
Lemma 2.2.5 ([Qin15][Lemma 3.1.9]). (i) If (2.2) is a finite decomposition of a pointed element Z, then it is ≺ t -unitriangular.
The basis elements are invariant under the bar involution in T (t). • (parametrization) The basis elements are pointed, and we have the bijection deg t : L t ≃ D(t) = Z m .
Figure 2.1. Acyclic A3 quiver with z-pattern.
Replace Z by this decomposition in [ ∏_{n+1≤i≤m} X^{f_i} * X^u * Z * I^v ]; the result is (≺_t, m)-unitriangular to L_t, by the triangularity of L_t and comparison of q-powers (cf. [Qin15, Lemma 6.2.4]).
[BZ14, (1.17) (1.22) Remark 1.3]. Define r(a) = Σ_{1≤k≤n} [−a_k]_+. Define the partial order a ≺_BZ a′ if and only if r(a) < r(a′).
The Berenstein-Zelevinsky acyclic triangular basis for the seed t is defined to be the basis C t = {C a } of A such that each C a is bar-invariant and (≺_BZ, m)-triangular to the basis {E a }.
It follows that deg I t[n −1 ] (f (s) , u (s) , v (s) ) has non-negative n-th component if and only if f (s) n ≥ 0. The claim follows.
Figure 3.1. Part of knitting graphs for the seeds t and t′.
.
Example 3.3.5 (Kronecker quiver type). Let us look at the quantum cluster algebra with the seed t given by B We have the set of source points V 0 = {1} and the set of sink points V 1 = {2}.
We use the notation I k because this cluster variable corresponds to the k-th indecomposable injective module of a quiver with potential[DWZ08,DWZ10].
Arkady Berenstein and Andrei Zelevinsky. Quantum cluster algebras. Adv. Math., 195(2):405-455, 2005, math/0404446v2.
Arkady Berenstein and Andrei Zelevinsky. Triangular bases in quantum cluster algebras. 2012, 1206.3586.
Arkady Berenstein and Andrei Zelevinsky. Triangular bases in quantum cluster algebras. International Mathematics Research Notices, 2014(6):1651-1688, 2014, 1206.3586.
Harm Derksen, Jerzy Weyman, and Andrei Zelevinsky. Quivers with potentials and their representations I: Mutations. Selecta Mathematica, 14:59-119, 2008, 0704.0649v4.
Harm Derksen, Jerzy Weyman, and Andrei Zelevinsky. Quivers with potentials and their representations II: Applications to cluster algebras. J. Amer. Math. Soc., 23(3):749-790, 2010, 0904.0676v2.
Ming Ding and Fan Xu. Bases of the quantum cluster algebra of the Kronecker quiver. Acta Mathematica Sinica, English Series, 28(6):1169-1178, 2012.
V. V. Fock and A. B. Goncharov. Cluster ensembles, quantization and the dilogarithm. Ann. Sci. École Norm. Sup. (4), 42(6):865-930, 2009, math.AG/0311245.
Sergey Fomin and Andrei Zelevinsky. Cluster algebras I: foundations. Journal of the American Mathematical Society, 15(2):497-529, 2002.
Sergey Fomin and Andrei Zelevinsky. Cluster algebras IV: Coefficients. Compositio Mathematica, 143:112-164, 2007, math/0602259v3.
Mark Gross, Paul Hacking, Sean Keel, and Maxim Kontsevich. Canonical bases for cluster algebras. 2014, 1411.1394.
Christof Geiß, Bernard Leclerc, and Jan Schröer. Kac-Moody groups and cluster algebras. Advances in Mathematics, 228(1):329-433, 2011, 1001.3545v2.
Christof Geiß, Bernard Leclerc, and Jan Schröer. Generic bases for cluster algebras and the Chamber Ansatz. J. Amer. Math. Soc., 25(1):21-76, 2012, 1004.2781v3.
Christof Geiß, Bernard Leclerc, and Jan Schröer. Cluster structures on quantum coordinate rings. Selecta Mathematica, 19(2):337-397, 2013, 1104.0531.
David Hernandez and Bernard Leclerc. Cluster algebras and quantum affine algebras. Duke Math. J., 154(2):265-341, 2010, 0903.1452.
Masaki Kashiwara. Bases cristallines. C. R. Acad. Sci. Paris Sér. I Math., 311(6):277-280, 1990.
Bernhard Keller. Cluster algebras, quiver representations and triangulated categories. 2008, 0807.1960v11.
S.-J. Kang, M. Kashiwara, M. Kim, and S.-j. Oh. Monoidal categorification of cluster algebras II. ArXiv e-prints, 2015, 1502.06714.
Yoshiyuki Kimura and Fan Qin. Graded quiver varieties, quantum cluster algebras and dual canonical basis. Advances in Mathematics, 262:261-312, 2014, 1205.2066.
Kyungyong Lee, Li Li, Dylan Rupel, and Andrei Zelevinsky. Greedy bases in rank 2 quantum cluster algebras. Proceedings of the National Academy of Sciences, 111(27):9712-9716, 2014.
Kyungyong Lee, Li Li, and Andrei Zelevinsky. Greedy elements in rank 2 cluster algebras. Selecta Mathematica, 20(1):57-82, 2014.
G. Lusztig. Canonical bases arising from quantized enveloping algebras. J. Amer. Math. Soc., 3(2):447-498, 1990.
Gregg Musiker, Ralf Schiffler, and Lauren Williams. Bases for cluster algebras from surfaces. Compositio Mathematica, 149(02):217-263, 2013.
Hiraku Nakajima. Quiver varieties and cluster algebras. Kyoto J. Math., 51(1):71-126, 2011, 0905.0002v5.
Pierre-Guy Plamondon. Cluster characters for cluster categories with infinite-dimensional morphism spaces. Adv. in Math., 227(1):1-39, 2011, 1002.4956v2.
Fan Qin. t-analog of q-characters, bases of quantum cluster algebras, and a correction technique. International Mathematics Research Notices, 2014(22):6175-6232, 2014, 1207.6604.
Fan Qin. Triangular bases in quantum cluster algebras and monoidal categorification conjectures. 2015, 1501.04085.
Dylan Paul Thurston. Positive basis for surface skein algebras. Proceedings of the National Academy of Sciences, 111(27):9725-9732, 2014. E-mail address: [email protected]
|
[] |
[
"A NEW AUTOMORPHISM OF X 0 (108)",
"A NEW AUTOMORPHISM OF X 0 (108)"
] |
[
"Michael Harrison "
] |
[] |
[] |
Let X0(N ) denote the modular curve classifying elliptic curves with a cyclic Nisogeny, A0(N ) its group of algebraic autmorphisms and B0(N ) the subgroup of automorphisms coming from matrices acting on the upper halfplane. In a well-known paper, Kenku and Momose showed that A0(N ) and B0(N ) are equal (all automorphisms come from matrix action) when X0(N ) has genus ≥ 2, except for N = 37 and 63.However, there is a mistake in their analysis of the N = 108 case. In the style of Kenku and Momose, we show that B0(108) is of index 2 in A0(108) and construct an explicit new automorphism of order 2 on a canonical model of X0(108).
| null |
[
"https://arxiv.org/pdf/1108.5595v3.pdf"
] | 117,178,679 |
1108.5595
|
1572c01d52168ab257c47b148fea862ddaede928
|
A NEW AUTOMORPHISM OF X 0 (108)
Aug 2011
Michael Harrison
A NEW AUTOMORPHISM OF X 0 (108)
Aug 2011
Let X0(N ) denote the modular curve classifying elliptic curves with a cyclic Nisogeny, A0(N ) its group of algebraic autmorphisms and B0(N ) the subgroup of automorphisms coming from matrices acting on the upper halfplane. In a well-known paper, Kenku and Momose showed that A0(N ) and B0(N ) are equal (all automorphisms come from matrix action) when X0(N ) has genus ≥ 2, except for N = 37 and 63.However, there is a mistake in their analysis of the N = 108 case. In the style of Kenku and Momose, we show that B0(108) is of index 2 in A0(108) and construct an explicit new automorphism of order 2 on a canonical model of X0(108).
Introduction
For a positive integer N, the modular curve X_0(N) parametrises elliptic curves with a cyclic N-isogeny. Over C, it is isomorphic, as a Riemann surface, to the quotient of the extended upper half-plane ({τ ∈ C : ℑ(τ) > 0} ∪ Q ∪ {i∞}) by the subgroup Γ_0(N) of SL_2(Z) consisting of determinant 1, integral matrices (a b; c d) with c ≡ 0 mod N, acting as τ → (aτ + b)/(cτ + d) ([Shi71] or [Miy89]).
X 0 (N ) has a natural structure of an algebraic curve over Q, defined in [Shi71,Ch. 7] or more technically, as the generic fibre of the compactification of a modular scheme over Z [KM85].
The normaliser N m(N ) of Γ 0 (N ) in SL 2 (R) acts on the extended upper half-plane and leads to a finite subgroup B 0 (N ) of A 0 (N ) def = Aut C (X 0 (N )) isomorphic to N m(N )/Γ 0 (N ). The group-theoretic structure of B 0 (N ) is given in [Bar08]. The natural question, when the genus g N of X 0 (N ) ≥ 2 and so A 0 (N ) is finite, is whether B 0 (N ) is all of A 0 (N ). For N = 37 this was famously known not to be the case: X 0 (37) is of genus 2 and is thus hyperelliptic, but the only non-trivial element of B 0 (37) is not a hyperelliptic involution. Ogg showed that this is the only case with A 0 (N ) larger than B 0 (N ) when N is squarefree [Ogg77].
Using deeper properties of the minimal models of X 0 (N ) and its Jacobian J 0 (N ) and some further techniques, Kenku and Momose extended the analysis to all N (with g N ≥ 2) in [KM88], claiming that A 0 (N ) = B 0 (N ) except for N = 37 and possibly N = 63, and that the index of B 0 (N ) in A 0 (N ) is 1 or 2 in the latter case. Subsequently, Elkies showed that N = 63 is indeed an exceptional case and gave an elegant construction of an additional automorphism [Elk90]. Kenku and Momose eliminate almost all N by a combination of arguments that lead to only seven values (including 63 and 108) for which case-by-case detailed analysis is required.
For N = 108, a hypothetical automorphism u not in B 0 (N ) is considered and is used to construct a non-trivial automorphism γ in B 0 (N ) with various properties. J 0 (N ) decomposes (up to isogeny) into 10 elliptic curve factors and it is claimed that γ must act upon a particular one E as ±1. From this, further analysis leads to a contradiction. However, E has j-invariant 0 and it isn't clear from the construction why γ (which has order 2 or 3) could not act as a 3rd root of unity on E. I tried to derive a different contradiction assuming this, but everything seemed consistent if γ was assumed to have order 3 and act on each of the 6 CM components of J 0 (N ) as an appropriate 3rd root of unity.
B_0(108) has order 108 by [Bar08]. Kenku and Momose show that all automorphisms of X_0(108) are defined over the field they denote by k′(108), which is the ring class field mod 6 of k(108) = Q(√−3): explicitly, k′(108) is Q(√−3, 2^{1/3}). The primes p splitting completely in k′(108) are 31, 43, 109, . . ..
Out of interest, I decided to compute the automorphisms over the finite fields F p for the first few split primes using Magma [BCP97]. Equations for a canonical model of X 0 (108) are found directly and reduced mod p using the modular form machinery. The full set of automorphisms is then returned very quickly by the built-in functions provided by Florian Hess. To my surprise, in each case the number of automorphisms was 216, twice the order of B 0 (108)! A slightly longer Magma computation over F 31 returned an abstract group G representing the automorphism group and it was readily verified by further Magma function calls that G did contain a subgroup H of index 2 with the structure of B 0 (108) as described by Bars. Finally, having gleaned this information from mod 31 computer computations, it remains to construct a new u in characteristic 0. It is possible to just mechanically run the Magma routines again but working over k ′ (108) for the automorphism group computations is much slower and besides, as an important special case, it is desirable to provide a construction with some level of transparent mathematical detail rather than just the output of a generic computer program.
In the next section, I review Kenku and Momose's analysis of the N = 108 case and show that, after removing the error, it can be adapted to prove that B 0 (108) is of index 1 or 2 in A 0 (108) and also give the isomorphism type of A 0 (108) in the index 2 case.
In the third section, I give my construction of a new automorphism u of order 2, using detailed modular information about X 0 (108) and its Jacobian. Specifically, I use the explicit action of standard generators of B 0 (108) on a natural modular form basis for the differentials of X 0 (108) along with the commutator relations within A 0 (108) for u to find a fairly simple matrix, involving 2 undetermined parameters a and b, giving the action of u on the differentials. To proceed further, I used computer computations to first determine the relations for the canonical model of X 0 (108) w.r.t. the differential basis and then to solve for a and b. The latter involves computing a Gröbner basis for the zero-dimensional ideal in a and b that comes from substituting the matrix for u into the canonical relations. The resulting automorphism on the canonical model is defined over k ′ (108) but not k(108) (the field of definition of B 0 (108)), as it should be. Subsequent to my discovery and semi-computerised construction of a new automorphism, Elkies learned of the error through Mark Watkins. Using a neat function-theoretic argument, similar in some respects to the N = 63 case, he was able to derive a particularly simple geometric model of X 0 (108) as the intersection of two cubics in P 3 as well as explicitly writing down all automorphisms without the need for computer computations. Elkies construction gives an independent verification of the corrected result for N = 108 and will be published by him elsewhere.
Finally, I believe that there are no other exceptional N . [KM88] is a very nice paper but it does seem to contain a number of mistakes, most of which have no bearing on the final result. In particular
• The last two numbers in the statement of Lemma 1.6 should be 2^3 · 3^2 and 2^2 · 3^3.
• A number of expressions in the proof of Lemma 1.6 are incorrect. In particular, most of the expressions in the table for µ(D, p) in the different cases are wrong.
• N = 216 = 2^3 · 3^3, listed in Corollary 1.11 and treated as a special case in the rest of the paper along with the other 3 values, is not actually a special case! This is clearly harmless.
• In Lemma 2.15, p^2 − 1 should be p^2 + 1.
• In the first part of the main theorem, 2.17, the list of N for various l ≤ 11 that cannot be eliminated by lemmas 2.14 and 2.15 contains a number of cases that can be easily eliminated by Corollary 2.11. However, there are a number of unlisted cases that cannot be eliminated by any of these results. If I have worked it out correctly, I think that these are (I'm ignoring the obvious typos in the lists here - e.g. the second N listed for l = 11 should have 2^2 rather than 2^3 as a factor) 2^2 · 3^2 · 7 for l = 5, 2^4 · 11 for l = 3, and 3^3 · 5 and 3^2 · 19 for l = 2. The first two can be eliminated using Lemma 2.16 as is done for the other listed values of N . For the other two (l = 2) cases, a slightly improved version of lemma 2.15 can be applied over p = 2 with D = D 2 that takes into account multiplicities in both the zeroes and poles of D. This gives an upper bound of 26 for #X 0 (N )(F 4 ), which is less than the actual values (30 and 28 respectively) for the two cases. The special analyses for the final six cases (ignoring N = 37 and 63) in the proof of Theorem 2.17 all seem OK to me except when N = 108.
Automorphisms of X 0 (108): Generalities
We reconsider the analysis of the X 0 (108) case as given on pages 72 and 73 of [KM88] and show that the correct conclusion is that B 0 (108) is of index 1 or 2 in A 0 (108) rather than that A 0 (108) is necessarily equal to B 0 (108).
Notation:
X 0 (108), J = J 0 (108), A 0 (108), B 0 (108), k(108) and k ′ (108) are as described in the introduction. σ denotes one of the two generators of G(k ′ (108)/k(108)). w n will denote the Atkin-Lehner involution on X 0 (108) for n|108, (n, 108/n) = 1 (see, e.g., [Miy89] or [Bar08]). Explicitly, we could take matrix representatives mod R * Γ 0 (108) for the actions of w 4 , w 27 and w 108 on the extended upper half-plane as w 4 = (28 1; 108 4), w 27 = (27 −7; 108 −27), w 108 = (0 −1; 108 0); w 1 is trivial. Up to scalars, the matrix for w 27 is an involution and the matrix for w 108 is the product of those for w 27 and w 4 .
For v|6, S v will denote the element of B 0 (108) represented by the matrix (1 0; 1/v 1).
B 0 (108) is generated by w 4 , w 27 , S 2 and S 3 . Its group structure is described fully later in the next section. We note here that the first three generators have order 2 and the last has order 3. The subgroup S of B 0 (108) commuting with w 4 , w 27 is
⟨w 4 ⟩ × ⟨w 27 ⟩ × ⟨τ 3 ⟩
where τ 3 is the element of order 3 in the centre of B 0 (108) defined by τ 3 := S 3 w 27 S 3 w 27 (S 3 and w 27 S 3 w 27 commute).
The argument on page 73 of [KM88] considers a hypothetical automorphism u in A 0 (108) not lying in B 0 (108). It is shown that u is defined over k ′ (108) but not over k(108) and the non-trivial automorphism γ is defined as u σ u −1 . Note that all cusps are defined over k(108), that B 0 (108) acts transitively on the cusps and that, by their Corollary 2.3, any automorphism is determined by its images of ∞ and any other cusp. This shows, in particular, that all elements of B 0 (108) are defined over k(108). γ is shown to lie in B 0 (108).
Let f 27 , f 36 and f 108 denote the primitive cusp forms associated to the unique isogeny classes of elliptic curves with conductors 27, 36 and 108 respectively. These curves all have complex multiplication by orders of k(108). Kenku and Momose consider the decomposition up to isogeny of J into the product
J H × J C 1 × J C 2 where J H is the part without CM, J C 1 is associated to the eigenforms {f 36 (z), f 36 (3z), f 108 (z)} and J C 2 is associated to the eigenforms {f 27 (z), f 27 (2z), f 27 (4z)}.
They show that γ acts trivially on the J H factor and that its order d and the genus g Y of the quotient X 0 (108)/⟨γ⟩ satisfy (i) d = 2, g Y = 4, 5 or (ii) d = 3, g Y = 4. It is also shown that γ commutes with w 4 and w 27 and so lies in S.
E is the new elliptic curve factor of J C 1 corresponding to f 108 . The error comes with the line "Then γ acts on E under ±1". This eliminates case (ii) above and leads to a contradiction on the existence of u. However, there is the possibility that (*) γ acts on (the optimal quotient isogeny class of) E by a non-trivial 3rd root of unity and case (ii) occurs.
We see in the next section that this actually can occur when we explicitly construct such a u. To have order 3 and lie in S, γ must equal τ 3 or τ −1 3 . That (*) holds for such γ follows from the determination of the action of the generators of B 0 (108) on a nice basis for the cusp forms given in the construction. This shows that τ 3 fixes the non-CM forms defining the J H factor and multiplies each of the six CM Hecke eigenforms given above by some non-trivial 3rd root of unity as required.
So for any automorphism u, u σ u −1 is trivial or equal to τ 3 or τ −1 3 . As Kenku and Momose show that all automorphisms defined over k(108) are in B 0 (108), this implies that B 0 (108) is of index at most three in A 0 (108).
However, this can be improved by considering the action on the reduction mod 31 of X 0 (108) and arguing as Kenku and Momose do to show that γ is defined over k(108). All automorphisms are defined over F 31 as 31 splits in k ′ (108). If u and v are two automorphisms not in B 0 (108), then exactly the same argument near the top of page 73 applied to u and u σ can be applied to u and v to show that vu −1 lies in B 0 (108). This shows that B 0 (108) is of index at most 2 in A 0 (108). Note that the sentence on page 73 starting "Applying lemma 2.16 to p = 7 . . ." should contain p = 31 rather than p = 7 and there should be a comment that lemma 2.16 is being applied here with any pair of cusps replacing 0 and ∞, which is permissible as the same proof works. So, replacing σ by σ −1 if necessary, we have that (remembering that τ 3 is in the centre of B 0 (108)) (+) B 0 (108) is of index 1 or 2 in A 0 (108) and any automorphism u / ∈ B 0 (108) satisfies u σ u −1 = τ 3 . Now, we assume that A 0 (108) is bigger than B 0 (108) and show that its group structure can then be determined from the above information and the abstract group structure of B 0 (108).
We denote a cyclic group of order n by C n . [Bar08] gives the structure of B 0 (108). Abstractly, it is the direct product D 6 × (C 3 ≀ C 2 ) where the first factor is the dihedral group of order 6 and the second is the order 18 wreath product (the semidirect product of C 3 × C 3 by C 2 , the generator of C 2 swapping the two C 3 factors). The D 6 factor is generated by S 2 and w 4 , which both have order 2. The wreath product is generated by S 3 (order 3) and w 27 (order 2), so that S 3 and w 27 S 3 w 27 are two commuting elements of order 3 generating the order 9 subgroup. The centre of B 0 (108) is of order 3, generated by τ 3 = S 3 w 27 S 3 w 27 .
The automorphism group of B 0 (108) is easy to determine from the decomposition of B 0 (108) as D 6 × D 6 × C 3 . The outer automorphism group is C 2 × C 2 . This can also be easily checked in Magma, for example. Now if u is not in B 0 (108), u 2 is in B 0 (108) and so is fixed by σ. Then, (+) above shows that uτ 3 u −1 = τ 3 ^{-1}, so that u acts by conjugation on B 0 (108) (which is normal in A 0 (108), having index 2) as an outer automorphism, since τ 3 is central in B 0 (108). Also, we can assume u has 2-power order and, as the kernel of the map of B 0 (108) to its inner automorphism group is of order 3, A 0 (108) is then determined up to isomorphism if we can determine the image of u in the outer automorphism group H of B 0 (108). H has 3 non-trivial elements giving extensions of B 0 (108) of degree 2. But the condition that u doesn't centralise τ 3 excludes one of these elements. Another element would lead to a u of order 2 commuting with w 4 , w 27 and S 2 . From the explicit action of S 2 on weight 2 forms (see next section) we see that u would have to preserve the (w 27 − 1)J C 1 (= E) and (w 4 − 1)J C 2 elliptic curve factors of the Jacobian, so act as ±1 on E. γ would then act trivially on E and the argument of Kenku and Momose would properly lead to a contradiction. Thus, there is only one possibility for u in H and one possible group structure for A 0 (108). Explicitly, we find

Lemma 2.1. If A 0 (108) is larger than B 0 (108), then it contains B 0 (108) as a subgroup of index 2 and is generated by B 0 (108) and an element u of order 2 that acts on B 0 (108) by conjugation as follows:
u w 4 u = w 27
u w 27 u = w 4
u S 2 u = S 3 w 27 S 3 ^{-1} = S 3 ^{-1} τ 3 ^{-1} w 27
u S 3 u = S 2 w 4 τ 3
For an appropriate choice of σ, u σ u −1 = τ 3 .
Construction of a new automorphism
The notation introduced at the start of the last section is still in force.
Conventions:
If u is an automorphism of X 0 (108), then we also think of it as an automorphism of J by the "Albanese" action: a degree zero divisor Σ_i a_i P_i → Σ_i a_i u(P_i ). If X 0 (108) is embedded in J in the usual way by i : P → (P ) − (∞) then the actions are compatible up to translation by (u(∞)) − (∞). As global differentials on J are translation invariant, this means that the pullback action u * on global differentials of J or X 0 (108) is the same if we identify global differentials of J and X 0 (108) by the pullback i * .
When we identify the weight 2 cusp form f (z) of Γ 0 (108) with the complex differential (1/2πi)f (z)dz on X 0 (108), if u ∈ B 0 (108) then u * f is f | 2 u in the notation of Sec. 2.1 [Miy89], identifying u with the 2x2 matrix representing it. As we only deal with weight 2 forms we omit the subscript 2.
If we say that u is represented by matrix M w.r.t. a basis f 1 , . . . , f n of cusp forms/differentials, we mean that u * f_i = Σ_j M_{ji} f_j . So if u and v are represented by M and N , then uv is represented by M N . J 0 (108) decomposes up to isogeny into a product of 10 elliptic curves defined over Q, as partially described in [KM88]. We work with a natural basis for the weight 2 cusp forms of level 108 coming from multiples of the primitive forms f 27 , f 36 and f 108 , which generate the CM part as in the last section, and the two primitive level 54 forms f^(1)_54 , f^(2)_54 and their multiples by 2, which generate a 4-dimensional non-CM complement. All the forms have rational q-expansions. We give the initial q-expansions of the primitive forms, the isogeny classes of elliptic curves over Q that they correspond to and the eigenvalues for the Atkin-Lehner involutions of the base level (which we refer to as W n to differentiate from the w n involutions for level 108).

Conductor 27

f 27 = q − 2q^4 − q^7 + 5q^{13} + 4q^{16} − 7q^{19} + O(q^{25}), W 27 = −1, E 27 : y^2 + y = x^3 ≃ y^2 = x^3 + 16. X 0 (27) is of genus 1 and this is a well-known case (see [Lig75]). f 27 is {η(3z)η(9z)}^2 where η is the Dedekind eta function (§4.4 [Miy89]).
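As a quick machine check of the data above (an added illustration, not from the original computation), one can expand the eta product {η(3z)η(9z)}^2 = q ∏_{n≥1} (1 − q^{3n})^2 (1 − q^{9n})^2 and compare with the stated q-expansion of f 27 .

```python
# Expand q * prod (1 - q^(3n))^2 (1 - q^(9n))^2 modulo q^25 and list the nonzero coefficients.
PREC = 25

def mul(a, b):
    c = [0] * PREC
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < PREC:
                    c[i + j] += ai * bj
    return c

f = [0] * PREC
f[1] = 1                       # the leading factor q
for step in (3, 9):            # the two eta-product factors, each squared
    for n in range(step, PREC, step):
        factor = [0] * PREC
        factor[0], factor[n] = 1, -1
        f = mul(f, mul(factor, factor))

print({n: c for n, c in enumerate(f) if c})
# {1: 1, 4: -2, 7: -1, 13: 5, 16: 4, 19: -7}, matching q - 2q^4 - q^7 + 5q^13 + 4q^16 - 7q^19 + O(q^25)
```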
Conductor 36
f 36 = q − 4q^7 + 2q^{13} + 8q^{19} + O(q^{25}), W 9 = 1, W 4 = −1, E 36 : y^2 = x^3 + 1. X 0 (36) is of genus 1 and this is a well-known case (see [Lig75]). f 36 is η(6z)^4 .
Conductor 108
f 108 = q + 5q^7 − 7q^{13} − q^{19} + O(q^{25}), w 27 = 1, w 4 = −1, E 108 : y^2 = x^3 + 4. f 108 and the action of the Atkin-Lehner operators come from Tables 3 and 5 of [BK75] or from a modular form computer package such as William Stein's. It is easy to check that the CM elliptic curve E 108 has conductor 108 with f 108 as its associated modular form (which is of the type described in Thm 4.8.2 of [Miy89] with K = k(108)).
Conductor 54
f^(1)_54 = q − q^2 + q^4 + 3q^5 − q^7 − q^8 − 3q^{10} − 3q^{11} + O(q^{12}), w 27 = −1, w 2 = 1; f^(2)_54 = q + q^2 + q^4 − 3q^5 − q^7 + q^8 − 3q^{10} + 3q^{11} + O(q^{12}), w 27 = 1, w 2 = −1; E^(1)_54 : y^2 + xy = x^3 − x^2 + 12x + 8, E^(2)_54 : y^2 + xy + y = x^3 − x^2 + x − 1. The f^(i)_54 and the action of the Atkin-Lehner operators again come from Tables 3 and 5 of [BK75] or from a modular form computer package. It is easy to check that the elliptic curves given have the f^(i)_54 as their associated modular forms. In fact the E's are quadratic twists of each other by −3 and the two f^(i)_54 are twists of each other by the corresponding quadratic character.

Definition 3.1. δ n is the operator on modular forms given by the matrix (n 0; 0 1), so that if f (z) is a weight 2 form, (f |δ n )(z) is the form nf (nz).
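On q-expansions, Definition 3.1 just rescales coefficients and dilates their indices: if f = Σ a_m q^m then f |δ n = n Σ a_m q^{nm}. The following two-line helper (an added illustration, not part of the original computations) is the form in which δ 2 , δ 3 and δ 4 enter the checks below.

```python
def delta(coeffs, n):
    """q-expansion of f|delta_n = n*f(nz); coeffs is a dict {m: a_m}."""
    return {n * m: n * a for m, a in coeffs.items()}

# Example with the initial coefficients of f_27 given above:
f27 = {1: 1, 4: -2, 7: -1, 13: 5, 16: 4, 19: -7}
print(delta(f27, 4))  # {4: 4, 16: -8, 28: -4, 52: 20, 64: 16, 76: -28}
```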
Definition 3.2. e 1 , . . . , e 10 are the basis for the weight 2 cusp forms of Γ 0 (108) defined as follows: e 1 = f^(2)_54 − f . . . ; e 5 = f 27 + f 27 |δ 4 , e 6 = f 27 |δ 2 , e 7 = f 36 + f 36 |δ 3 , e 8 = f 108 , e 9 = f 27 − f 27 |δ 4 , e 10 = f 36 − f 36 |δ 3 . The standard decomposition into new and old forms (Section 4.6, [Miy89]) shows that e 1 , . . . , e 10 form a basis for the weight 2 cusp forms of Γ 0 (108). Identifying these cusp forms with differential forms on X 0 (108) and J, V := ⟨e 1 , . . . , e 4 ⟩ is the subspace corresponding to differentials of J H and W := ⟨e 5 , . . . , e 10 ⟩ the subspace corresponding to J C 1 + J C 2 . All endomorphisms of J preserve these subspaces.

Lemma 3.3. With respect to the basis e 1 , . . . , e 4 of V , w 4 , w 27 , S 2 and S 3 act by the following matrices (ζ := exp(2πi/3), √−3 := ζ − ζ^{-1}):

1 2 −1 −3 0 0 −1 1 0 0 0 0 −1 −3 0 0 −1 1 1 2 −1 0 √−3 0 0 −1 0 √−3 √−3 0 −1 0 0 √−3 0 −1

With respect to the basis e 5 , . . . , e 10 of W , w 4 , w 27 , S 2 and S 3 act by the following matrices:

w 4 :  diag(1, 1, −1, −1, −1, −1)

w 27 : diag(−1, −1, 1, 1, −1, −1)

S 2 :
[ −1/2   0    0    0   −3/2   0 ]
[   0    1    0    0     0    0 ]
[   0    0   −1    0     0    0 ]
[   0    0    0   −1     0    0 ]
[ −1/2   0    0    0    1/2   0 ]
[   0    0    0    0     0   −1 ]

S 3 :
[ ζ   0        0             0   0   0            ]
[ 0   ζ^{-1}   0             0   0   0            ]
[ 0   0       −(1/2)ζ^{-1}   0   0   −(1−ζ)/2     ]
[ 0   0        0             ζ   0   0            ]
[ 0   0        0             0   ζ   0            ]
[ 0   0       −(1−ζ)/2       0   0   −(1/2)ζ^{-1} ]

Proof.
The proof is a straightforward computation using relations between Atkin-Lehner involutions and the δ i and congruence conditions on the exponents of the non-zero terms of the q-expansions to find the S 2 and S 3 actions.
For S 2 : All forms f |δ i where i is 2 or 4 are clearly fixed by S 2 . Generally, considering q-expansions, if a form f is an eigenform of the Hecke operator T 2 at its even base level with eigenvalue e, then we see that f |S 2 = −f + ef |δ 2 . Note that f 36 |δ 3 is still an eigenvector of T 2 with eigenvalue 0. This leaves only f 27 to consider. As it is killed by T 2 , the definition of the T 2 action quickly leads to f 27 |S 2 = −(f 27 + f 27 |δ 4 ).
For S 3 : f 27 , f 36 and f 108 all have q-expansions where all non-zero terms a n q^n have n ≡ 1 mod 3. This follows from the fact that they are eigenforms for all Hecke operators and that a p = 0 if p ≡ 2 mod 3, p > 2, as the associated elliptic curves have supersingular reduction at these primes, so p|a p and |a p | < 2√p; a 2 and a 3 are clearly also zero. As f^(1)_54 and f^(2)_54 are twists of each other by a quadratic character and are killed by the T 3 operator, f . . . = Σ_{n≡2 (mod 3)} b n q^n . From these facts, the action of S 3 on the basis follows easily.

For w 4 : Simple matrix computations show that (f |δ 2 )|w 4 = f |W 2 and f |w 4 = (f |W 2 )|δ 2 for a level 54 form f . Similarly, f |w 4 = f |δ 4 , (f |δ 2 )|w 4 = f |δ 2 and (f |δ 4 )|w 4 = f for level 27 forms; f |w 4 = f |W 4 and (f |δ 3 )|w 4 = (f |W 4 )|δ 3 for level 36 forms. The full w 4 action follows.

For w 27 : Again, simple matrix computations show that f |w 27 = f |W 27 and (f |δ 2 )|w 27 = (f |W 27 )|δ 2 for level 54 forms; f |w 27 = f |W 27 and (f |δ i )|w 27 = (f |W 27 )|δ i (i = 2 or 4) for level 27 forms; f |w 27 = (f |W 9 )|δ 3 and (f |δ 3 )|w 27 = f |W 9 for level 36 forms. The full w 27 action follows.

Note: From the above lemma, we see that τ 3 acts trivially on V and multiplies each e i for i ≥ 5 by ζ or ζ^{-1} as asserted in the last section.
Using the commutator conditions for a new automorphism u of order 2 as described in Lemma 2.1, it is now easy to show that u acts on weight two forms by a matrix ±M (w.r.t. the e i basis) with M of the form
1 0 0 0 0 0 z −1 0 0 z 0 0 0 0 0 1 0 0 0 0 −za 0 0 0 0 0 0 b 0 0 (−za) −1 0 0 0 0 0 0 b −1 0 0 0 0 0 0 0 0 0 a 0 0 0 0 a −1 0
with a, b ∈ C * and z = √ −3 as defined in Lemma 3.3.
We can now complete the construction of u on a canonical model of X 0 (108) with detailed computations that can be carried out using a suitable computer algebra system. The author performed these with Magma. There are 3 steps.
(1) Compute a basis R for the degree 2 canonical relations for X 0 (108) embedded into P 9 via the differential basis corresponding to the forms e i . These relations generate the full ideal defining X 0 (108) in P 9 .
(2) Substitute the automorphism of P 9 given by M into R treating a and b as indeterminates. Clear powers of a and b from denominators. The condition that each new degree 2 form must lie in the span of R gives a number of polynomial relations on a and b that generate a zero dimensional ideal I a,b of k(108)[a, b].
(3) Compute a lex Gröbner basis of I a,b . From this, we can read off all solutions for a and b such that M gives an automorphism of X 0 (108).
For the first step we need to find a basis for the linear relations between the 55 weight 4 cusp forms e i e j , 1 ≤ i ≤ j ≤ 10. Considering it as a regular differential of degree 2 (see Section 2.3 [Miy89] - note that there are no elliptic points here), a weight 4 form for Γ 0 (108) that vanishes to order at least 2 at each cusp is zero iff it has a q-expansion Σ_{n≥2} a n q^n with a n = 0 for all n ≤ 38. So the computation reduces to finding a basis for the kernel of a 55 × 37 matrix with integer entries. In practice, it is good to work to a higher q-expansion precision than 38 and we actually did the computation with the expansions up to q^{150}. This still only took a fraction of a second in Magma. Applying an LLL-reduction to get a nice basis for the relations, the result is that the canonical model for X 0 (108) in P 9 with coordinates x i is defined by the ideal generated by the following 28 degree 2 polynomials:
x3x4 + x6x9 − x5x10,   x1x2 − x6x9 − x5x10,   x2x6 − x3x7 + x1x10,   x4x5 − x1x8 + x6x10,
x1x8 − x3x9 + x6x10,   x4x6 + x1x7 − x3x10,   x4x7 − 2x8x9 + x2x10,   x3x7 − 2x5x8 + x1x10,
x2x5 − 2x3x8 + x1x9,   x2x5 − 2x6x7 − x1x9,   x2x4 + x7x9 − 2x8x10,   x1x7 − 2x5x9 + x3x10,
x2x3 − x5x7 − 2x6x8,   x2x3 + x1x4 − 2x5x7,   x7^2 − x2x8 − x4x9 + x10^2,   x1^2 − x3^2 + 2x5x6,
3x1x5 − 2x4x8 − x2x9,   3x6^2 − x7^2 + x10^2,   3x1x3 − x7x9 − 2x8x10,   3x1x6 + x4x7 − x2x10,
3x3x6 − x2x7 + x4x10,   3x3x5 − x7^2 − x2x8 − x10^2,   3x1^2 − x4^2 − 2x9x10,   x2^2 − 3x3^2 + 2x9x10,
x2^2 + x4^2 − 4x7x8 + 2x9x10,   x2x7 − 4x8^2 + 2x9^2 + x4x10,   x4x8 + x2x9 − 2x7x10,   3x5^2 − x2x7 − x9^2 − x4x10
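The linear algebra behind step (1) can be sketched as follows (an illustration with placeholder input, not the Magma code actually used): given the q-expansions of e 1 , . . . , e 10 to sufficient precision, form the 55 products e i e j , record their coefficients from q^2 onwards as the rows of a matrix, and compute its left kernel; each kernel vector is one quadric relation among the x i .

```python
# Sketch of step (1): canonical quadric relations from q-expansion coefficients.
# `basis` is assumed to hold the q-expansions of e_1,...,e_10 as lists of length >= prec
# (placeholder data here; the original computation used precision around 150).
from itertools import combinations_with_replacement
from sympy import Matrix

def quadric_relations(basis, prec):
    pairs = list(combinations_with_replacement(range(len(basis)), 2))   # the 55 products e_i * e_j
    rows = []
    for i, j in pairs:
        prod = [sum(basis[i][k] * basis[j][n - k] for k in range(n + 1)) for n in range(prec)]
        rows.append(prod[2:])            # coefficients a_2, a_3, ... of the weight 4 product
    # Vectors c with sum_{ij} c_{ij} * (e_i e_j) = 0, i.e. the left kernel of the coefficient matrix.
    relations = Matrix(rows).T.nullspace()
    return pairs, relations              # each vector gives a relation sum c_{ij} x_i x_j = 0
```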
We are using the fact that X 0 (108) is not hyperelliptic [Ogg74]. This follows from the above anyway, since there would be 36 canonical quadric relations if it were. However, it needs to be checked that X 0 (108) is not trigonal (having a degree 3 rational function) when there would be independent degree 3 relations. For this, it is only necessary, for example, to verify that the ideal defined by the above polynomials has the right Hilbert series. This was easily verified in Magma which uses a standard Gröbner based algorithm [BS92].
For the second step, we work over K(a, b), K = Q(z), apply the substitution x i → Σ_{1≤j≤10} M_{j,i} x j to the above polynomials, and the rest is straightforward linear algebra. Applying the Gröbner basis algorithm, we find that I a,b is generated as an ideal by the two polynomials b − za^2 and a^3 + 1/2, which gives 3 possibilities for u with a any cube root of −1/2. We remark that if u is one of these automorphisms then the other two are uτ 3 and uτ 3 ^{-1} as expected. We also check that u σ u −1 = τ 3 if a σ = exp(2πi/3)a.
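The end of step (3) is easy to reproduce in any computer algebra system. The sketch below (an illustrative check with SymPy rather than the Magma computation itself) solves the two stated generators of I a,b and confirms that they give exactly three pairs (a, b), with a a cube root of −1/2 and b = za^2.

```python
from sympy import symbols, solve, sqrt, Rational, expand

a, b = symbols('a b')
z = sqrt(-3)
gens = [b - z*a**2, a**3 + Rational(1, 2)]   # the stated generators of I_{a,b}

solutions = solve(gens, [a, b], dict=True)
assert len(solutions) == 3
for s in solutions:
    assert expand(s[a]**3 + Rational(1, 2)) == 0   # a^3 = -1/2
    assert expand(s[b] - z*s[a]**2) == 0           # b = z*a^2
print(solutions)
```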
Theorem 3.4. B 0 (108) is of index two in A 0 (108), which has the structure described in Lemma 2.1. u is given explicitly on the canonical model of X 0 (108) with the above defining equations by [x 1 : x 2 : x 3 : x 4 : x 5 : x 6 : x 7 : x 8 : x 9 : x 10 ] → [x 1 : zx 3 : (1/z)x 2 : x 4 : (c/z)x 7 : (c 2 /z)x 8 : (z/c)x 5 : (z/c 2 )x 6 : −cx 10 : −(1/c)x 9 ] where c 3 = 2 and z = √ −3 with ℑ(z) > 0.
Remarks:
(1) The action of u on differentials is given by ±M where M is the matrix displayed above. M has 1 (resp. −1) as an eigenvalue of multiplicity 6 (resp. 4). Let Y = X 0 (108)/⟨u⟩. If the action were by M , then the genus of Y , g Y , would be 6. The Hurwitz formula would then give a value of −2 for the number of fixed points of u. Thus u acts on differentials by −M , g Y = 4, and u has two fixed points on X 0 (108).
(2) On our canonical model of X 0 (108), the cusp ∞ is given by the point (1 : 1 : 1 : 1 : 1 : 0 : 1 : 1 : 1 : 1) and the generators of B 0 (108) act via the matrices given in Lemma 3.3. It is then easy to compute all of the cuspidal points as the cusps form a single orbit under B 0 (108). It is then easily verified that u({cusps}) ∩ {cusps} = ∅.
[Bar08] F. Bars, The group structure of the normaliser of Γ0(N), Communications in Algebra 36 (2008), 2160-2170.
[BCP97] W. Bosma, J. Cannon, and C. Playoust, The Magma algebra system I. The user language, J. Symbolic Computation 24 (1997), 235-265.
[BK75] B.J. Birch and W. Kuyk (eds.), Modular Functions of One Variable IV, LNM 476, Springer-Verlag, 1975.
[BS92] D. Bayer and M. Stillman, Computation of Hilbert functions, J. Symbolic Computation 14 (1992), 31-50.
[Elk90] N. Elkies, The automorphism group of the modular curve X0(63), Compositio Mathematica 74 (1990), 203-208.
[KM85] N. Katz and B. Mazur, Arithmetic Moduli of Elliptic Curves, Princeton University Press, 1985.
[KM88] M.A. Kenku and F. Momose, Automorphism groups of the modular curves X0(N), Compositio Mathematica 65 (1988), 51-80.
[Lig75] G. Ligozat, Courbes modulaires de genre 1, Bull. Soc. Math. France (Suppl.), Mémoire 43 (1975).
[Miy89] T. Miyake, Modular Forms, Springer-Verlag, 1989.
[Ogg74] A. Ogg, Hyperelliptic modular curves, Bull. Soc. Math. France 102 (1974), 449-462.
[Ogg77] A. Ogg, Über die Automorphismengruppe von X0(N), Math. Ann. 228 (1977), 279-292.
[Shi71] G. Shimura, Introduction to the Arithmetic Theory of Automorphic Functions, Princeton University Press, 1971.

School of Mathematics and Statistics F07, University of Sydney, NSW 2006, Australia
|
[] |
[
"Citation Analysis May Severely Underestimate the Impact of Clinical Research as Compared to Basic Research",
"Citation Analysis May Severely Underestimate the Impact of Clinical Research as Compared to Basic Research"
] |
[
"Nees Jan Van Eck \nCentre for Science and Technology Studies\nLeiden University\nLeidenThe Netherlands\n",
"Anthony F J Van Raan \nCentre for Science and Technology Studies\nLeiden University\nLeidenThe Netherlands\n",
"Robert J M Klautz \nCentre for Science and Technology Studies\nLeiden University\nLeidenThe Netherlands\n\nDepartment of Cardiothoracic Surgery\nLeiden University Medical Center\nThe NetherlandsLeiden\n",
"Wilco C Peul \nDepartment of Neurosurgery\nLeiden University Medical Center\nLeidenThe Netherlands\n"
] |
[
"Centre for Science and Technology Studies\nLeiden University\nLeidenThe Netherlands",
"Centre for Science and Technology Studies\nLeiden University\nLeidenThe Netherlands",
"Centre for Science and Technology Studies\nLeiden University\nLeidenThe Netherlands",
"Department of Cardiothoracic Surgery\nLeiden University Medical Center\nThe NetherlandsLeiden",
"Department of Neurosurgery\nLeiden University Medical Center\nLeidenThe Netherlands"
] |
[] |
Background: Citation analysis has become an important tool for research performance assessment in the medical sciences. However, different areas of medical research may have considerably different citation practices, even within the same medical field. Because of this, it is unclear to what extent citation-based bibliometric indicators allow for valid comparisons between research units active in different areas of medical research.Methodology: A visualization methodology is introduced that reveals differences in citation practices between medical research areas. The methodology extracts terms from the titles and abstracts of a large collection of publications and uses these terms to visualize the structure of a medical field and to indicate how research areas within this field differ from each other in their average citation impact.Results: Visualizations are provided for 32 medical fields, defined based on journal subject categories in the Web of Science database. The analysis focuses on three fields: Cardiac & cardiovascular systems, Clinical neurology, and Surgery. In each of these fields, there turn out to be large differences in citation practices between research areas. Low-impact research areas tend to focus on clinical intervention research, while high-impact research areas are often more oriented on basic and diagnostic research.Conclusions: Popular bibliometric indicators, such as the h-index and the impact factor, do not correct for differences in citation practices between medical fields. These indicators therefore cannot be used to make accurate between-field comparisons. More sophisticated bibliometric indicators do correct for field differences but still fail to take into account within-field heterogeneity in citation practices. As a consequence, the citation impact of clinical intervention research may be substantially underestimated in comparison with basic and diagnostic research.
|
10.1371/journal.pone.0062395
| null | 18,658,226 |
1210.0442
|
0c123fa3ee3e4086313f90f2702a3895fcead551
|
Citation Analysis May Severely Underestimate the Impact of Clinical Research as Compared to Basic Research
Nees Jan Van Eck
Centre for Science and Technology Studies
Leiden University
LeidenThe Netherlands
Anthony F J Van Raan
Centre for Science and Technology Studies
Leiden University
LeidenThe Netherlands
Robert J M Klautz
Centre for Science and Technology Studies
Leiden University
LeidenThe Netherlands
Department of Cardiothoracic Surgery
Leiden University Medical Center
The NetherlandsLeiden
Wilco C Peul
Department of Neurosurgery
Leiden University Medical Center
LeidenThe Netherlands
Citation Analysis May Severely Underestimate the Impact of Clinical Research as Compared to Basic Research
Background: Citation analysis has become an important tool for research performance assessment in the medical sciences. However, different areas of medical research may have considerably different citation practices, even within the same medical field. Because of this, it is unclear to what extent citation-based bibliometric indicators allow for valid comparisons between research units active in different areas of medical research.Methodology: A visualization methodology is introduced that reveals differences in citation practices between medical research areas. The methodology extracts terms from the titles and abstracts of a large collection of publications and uses these terms to visualize the structure of a medical field and to indicate how research areas within this field differ from each other in their average citation impact.Results: Visualizations are provided for 32 medical fields, defined based on journal subject categories in the Web of Science database. The analysis focuses on three fields: Cardiac & cardiovascular systems, Clinical neurology, and Surgery. In each of these fields, there turn out to be large differences in citation practices between research areas. Low-impact research areas tend to focus on clinical intervention research, while high-impact research areas are often more oriented on basic and diagnostic research.Conclusions: Popular bibliometric indicators, such as the h-index and the impact factor, do not correct for differences in citation practices between medical fields. These indicators therefore cannot be used to make accurate between-field comparisons. More sophisticated bibliometric indicators do correct for field differences but still fail to take into account within-field heterogeneity in citation practices. As a consequence, the citation impact of clinical intervention research may be substantially underestimated in comparison with basic and diagnostic research.
Introduction
Citation analysis is widely used in the assessment of research performance in the medical sciences [1]. Especially the h-index [2] and the impact factor [3][4][5] are extremely popular bibliometric indicators. However, the use of these indicators for performance assessment has important limitations. In particular, both the h-index and the impact factor fail to take into account the enormous differences in citation practices between fields of science [6]. For instance, the average length of the reference list of a publication is much larger in molecular biology than in mathematics. As a consequence, publications in molecular biology on average are cited much more frequently than publications in mathematics. This difference can be more than an order of magnitude [7].
More sophisticated bibliometric indicators used by professional bibliometric centers perform a normalization to correct for differences in citation practices between fields of science [8,9]. These field-normalized indicators typically rely on a field classification system in which the boundaries of fields are explicitly defined (e.g., the journal subject categories in the Web of Science database). Unfortunately, however, practical applications of field-normalized indicators often suggest the existence of differences in citation practices not only between but also within fields of science. As shown in this paper, this phenomenon can be observed especially clearly in medical fields, in which the citation impact of clinical intervention research may be substantially underestimated in comparison with basic and diagnostic research. Within-field heterogeneity in citation practices is not corrected for by field-normalized bibliometric indicators and therefore poses a serious threat to the accuracy of these indicators.

This paper presents an empirical analysis of the above problem, with a focus on the medical sciences. An advanced visualization methodology is used to show how citation practices differ between research areas within a medical field. In particular, substantial differences are revealed between basic and diagnostic research areas on the one hand and clinical intervention research areas on the other hand. Implications of the analysis for the use of bibliometric indicators in the medical sciences are discussed.
Methodology
The analysis reported in this paper starts from the idea that drawing explicit boundaries between research areas, for instance between basic and clinical areas, is difficult and would require many arbitrary decisions, for instance regarding the treatment of multidisciplinary topics that are in between multiple areas. To avoid the difficulty of drawing explicit boundaries between research areas, the methodology adopted in this paper relies strongly on the use of visualization. The methodology uses so-called term maps [10][11][12] to visualize scientific fields. A term map is a two-dimensional representation of a field in which strongly related terms are located close to each other and less strongly related terms are located further away from each other. A term map provides an overview of the structure of a field. Different areas in a map correspond with different subfields or research areas. In the term maps presented in this paper, colors are used to indicate differences in citation practices between research areas. For each term in a map, the color of the term is determined by the average citation impact of the publications in which the term occurs. We note that the use of visualization to analyze the structure and development of scientific fields has a long history [13], but visualization approaches have not been used before to study differences in citation practices between research areas. The use of term maps, also referred to as co-word maps, has a 30-year history, with early contributions dating back to the 1980s and the beginning of the 1990s [14][15][16].
The first methodological step is the definition of scientific fields. This study uses data from the Web of Science (WoS) bibliographic database. This database has a good coverage of the medical literature [17] and is the most popular data source for professional bibliometric analyses. Because of their frequent use in field-normalized bibliometric indicators, the journal subject categories in the WoS database are employed to define fields. There are about 250 subject categories in the WoS database, covering disciplines in the sciences, the social sciences, and the arts and humanities. The analyses reported in this paper are based on all publications in a particular subject category that are classified as article or review and that were published between 2006 and 2010. For each publication, citations are counted until the end of 2011.
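In terms of data handling, this selection is a simple filter. The sketch below is illustrative only and uses hypothetical field names for the publication records; it is not the actual data-processing code.

```python
def select_publications(records, category):
    """Keep articles/reviews in the given WoS subject category published 2006-2010."""
    return [r for r in records
            if category in r['subject_categories']
            and r['doc_type'] in ('Article', 'Review')
            and 2006 <= r['pub_year'] <= 2010]
```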
Using natural language processing techniques, the titles and abstracts of the publications in a field are parsed. This yields a list of all noun phrases (i.e., sequences of nouns and adjectives) that occur in these publications. An additional algorithm [10] selects the 2000 noun phrases that can be regarded as the most characteristic terms of the field. This algorithm aims to filter out general noun phrases, like for instance result, study, patient, and clinical evidence. Filtering out these noun phrases is crucial. Due to their general meaning, these noun phrases do not relate specifically to one topic, and they therefore tend to distort the structure of a term map. Apart from excluding general noun phrases, noun phrases that occur only in a small number of publications are excluded as well. This is done in order to obtain sufficiently robust results. The minimum number of publications in which a noun phrase must occur depends on the total number of publications in a field. For the three fields discussed in the next section, thresholds between 70 and 135 publications were used.
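A stripped-down version of this selection step might look as follows (an illustrative sketch; the actual algorithm of Ref. [10] additionally scores noun phrases for how characteristic they are of specific topics, which is only hinted at here).

```python
from collections import Counter

def select_terms(pub_phrases, min_pubs, max_terms=2000):
    """pub_phrases: one set of candidate noun phrases per publication."""
    doc_freq = Counter(p for phrases in pub_phrases for p in set(phrases))
    frequent = {p: n for p, n in doc_freq.items() if n >= min_pubs}
    # Placeholder ranking by document frequency; Ref. [10] ranks by a relevance score instead.
    return [p for p, _ in sorted(frequent.items(), key=lambda kv: -kv[1])[:max_terms]]
```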
Given a selection of 2000 terms that together characterize a field, the next step is to determine the number of publications in which each pair of terms co-occurs. Two terms are said to co-occur in a publication if they both occur at least once in the title or abstract of the publication. The larger the number of publications in which two terms co-occur, the stronger the terms are considered to be related to each other. In neuroscience, for instance, Alzheimer and short-term memory may be expected to co-occur a lot, indicating a strong relation between these two terms. The matrix of term co-occurrence frequencies serves as input for the VOS mapping technique [18]. This technique determines for each term a location in a two-dimensional space. Strongly related terms tend to be located close to each other in the two-dimensional space, while terms that do not have a strong relation are located further away from each other. The VOS mapping technique is closely related to the technique of multidimensional scaling [19], but for the purpose of creating term maps the VOS mapping technique has been shown to yield more satisfactory results, as discussed in detail in Ref. [18]. It is important to note that in the interpretation of a term map only the distances between terms are relevant. A map can be freely rotated, because this does not affect the inter-term distances. This also implies that the horizontal and vertical axes have no special meaning.
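The co-occurrence counting that produces the input of the VOS mapping technique is straightforward; a minimal sketch is given below (illustrative only, with the layout itself left to the VOS algorithm of Ref. [18] or the VOSviewer software).

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(pub_terms):
    """pub_terms: one set of selected terms per publication. Returns a Counter
    mapping unordered term pairs to the number of publications containing both."""
    counts = Counter()
    for terms in pub_terms:
        for pair in combinations(sorted(terms), 2):
            counts[pair] += 1
    return counts
```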
In the final step, the color of each term is determined. First, in order to correct for the age of a publication, each publication's number of citations is divided by the average number of citations of all publications that appeared in the same year. This yields a publication's normalized citation score. A score of 1 means that the number of citations of a publication equals the average of all publications that appeared in the same field and in the same year. Next, for each of the 2000 terms, the normalized citation scores of all publications in which the term occurs (in the title or abstract) are averaged. The color of a term is determined based on the resulting average score. Colors range from blue (average score of 0) to green (average score of 1) to red (average score of 2 or higher). Hence, a blue term indicates that the publications in which a term occurs have a low average citation impact, while a red term indicates that the underlying publications have a high average citation impact. The VOSviewer software [20] (freely available at www.vosviewer.com) is used to visualize the term maps resulting from the above steps. Only a limited level of detail is offered in Figures 1, 2, and 3. To explore the term maps in full detail, the reader is invited to use the interactive versions of the maps that are available at www.neesjanvaneck.nl/basic_vs_clinical/. The webpage also provides maps of 29 other medical fields as well as of all 32 medical fields taken together.
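The scoring behind the coloring can be written down compactly. The sketch below (an illustration of the procedure described above, not the VOSviewer implementation) computes the year-normalized citation score of each publication and averages it over the publications in which a term occurs; 0 then maps to blue, 1 to green, and 2 or more to red.

```python
from collections import defaultdict

def term_colors(pubs):
    """pubs: dicts with keys 'year', 'citations' and 'terms' (a set of terms)."""
    by_year = defaultdict(list)
    for p in pubs:
        by_year[p['year']].append(p['citations'])
    year_mean = {y: sum(c) / len(c) for y, c in by_year.items()}

    term_scores = defaultdict(list)
    for p in pubs:
        score = p['citations'] / year_mean[p['year']] if year_mean[p['year']] else 0.0
        for t in p['terms']:
            term_scores[t].append(score)
    # Average normalized citation score per term, capped at 2 for the blue-green-red scale.
    return {t: min(sum(s) / len(s), 2.0) for t, s in term_scores.items()}
```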
Results
The term maps shown in Figures 1, 2, and 3 all indicate a clear distinction between different research areas. Clinical research areas tend to be located mainly in the left part of a map and basic research areas mainly in the right part, although making a perfect distinction between basic and clinical research areas is definitely not possible. The basic-clinical distinction is best visible in the Cardiac & cardiovascular systems and Clinical neurology maps (Figures 1 and 2), in which the left part consists of clinical intervention research areas (e.g., cardiac surgery and neurosurgery) while the right part includes important basic and diagnostic research areas (e.g., cardiology and neurology). The Surgery map (Figure 3) gives a somewhat different picture, probably because of the more clinical focus of surgical research. In this map, clinical research areas (e.g., orthopedic surgery, oncological surgery, and cardiac surgery) are concentrated in the left, middle, and upper parts, while research areas with a more basic focus can be found in the lower-right part.
Connections between basic research areas on the one hand and clinical research areas on the other hand are also visible in the term maps. The maps display 'bridges' that seem to represent translational research, that is, research aimed at translating basic research results into clinical practice. In the Cardiac & cardiovascular systems map (Figure 1), for instance, two bridges are visible, one in the upper part of the map and one in the lower part. In the upper part, the topic of atherosclerosis can be found, starting in the upper-right part of the map with basic research on vascular damage, continuing in the middle part with research on cholesterol and cholesterol lowering drugs, and extending in the upper-left part with interventional therapies such as coronary bypass surgery and percutaneous interventions (PCI) and its modifications (BMS and DES). In the lower part of the map, the topic of arrhythmias can be identified. It starts in the lower-right part of the map with basic research on electrophysiological phenomena, it continues in the middle part with diagnostic tools, and it ends in the lower-left part with the clinical application of ablation therapy for arrhythmias.
Looking at Figures 1, 2, and 3, a crucial observation is that the distinction between different research areas is visible not only in the structure of the maps but also in the colors of the terms. In general, in the right part of each map, in which the more basic and diagnostic research areas are located, there are many yellow, orange, and red terms, which clearly indicates an above-average citation impact. (As indicated by the color bar in the lower right in Figures 1, 2, and 3, yellow and orange correspond with a citation impact that is, respectively, about 25% and about 50% above the average of the field. Red corresponds with a citation impact that is 100% or more above average.) On the other hand, in the left part of each map, research areas can be found with mainly blue and green terms, implying a below-average citation impact. This pattern is most strongly visible in the Clinical neurology map ( Figure 2) and can also be observed in the Surgery map ( Figure 3). In the Cardiac & cardiovascular systems map ( Figure 1), a clear distinction between high-and low-impact research areas is visible as well, but it coincides only partially with the left-right distinction. We further note that within an area in a map terms are usually colored in a quite consistent way. In other words, terms tend to be surrounded mainly by other terms with a similar color. This is an important indication of the robustness of the maps.
The general picture emerging from Figures 1, 2, and 3, and supported by term maps for other medical fields provided online, is that within medical fields there is often a considerable heterogeneity in citation impact, with some research areas on average receiving two or even three times more citations per publication than other research areas. In general, low-impact research areas tend to focus on clinical research, in particular on surgical interventions. Research areas that are more oriented on basic and diagnostic research usually have an above average citation impact.
Discussion and Conclusion
The citation impact of a publication can be influenced by many factors. In the medical sciences, previous studies have for instance analyzed the effect of study design (e.g., case report, randomized controlled trial, or meta-analysis [21]), article type (i.e., brief report or full-size article [22]), and article length [23]. In this paper, the effect of differences in citation practices between medical research areas has been investigated. Different fields of science have different citation practices. In some fields, publications have much longer reference lists than in others. Also, in some fields researchers mainly refer to recent work, while in other fields it is more common to cite older work. Because of such differences between fields, publications in one field may on average receive many more citations than publications in another field. Popular bibliometric indicators, such as the h-index and the impact factor, do not correct for this. The use of these indicators to make comparisons between fields may therefore easily lead to invalid conclusions. (This is by no means the only objection one may have against these indicators. An important objection against the impact factor for instance could be that the impact of a journal as a whole may not be representative of the impact of individual publications in the journal [24]. An objection against the h-index could be that it suffers from inconsistencies in its definition [25].)
The results obtained using the visualization methodology introduced in this paper go one step further and show that even within a single field of science there can be large differences in citation practices. Similar findings have been reported in earlier studies [26][27][28], but based on smaller analyses and not within the medical domain. The present results suggest that in medical fields low-impact research areas tend to be clinically oriented, focusing mostly on surgical interventions. Basic and diagnostic research areas usually have a citation impact above the field average, although not all high-impact research areas need to have a basic focus. The coloring of the term maps indicates that two-or even threefold impact differences between research areas within a single medical field are not uncommon.
Although differences in citation impact between basic and clinical research have been mentioned in earlier studies [24], only a limited amount of empirical evidence of such differences has been collected. We are aware of only a few earlier studies in which differences in citation impact between basic and clinical research have been analyzed [29][30][31][32]. These studies are based on much smaller amounts of data than the present analysis. Contrary to the present results, in Ref. [29] it is concluded that clinical research is cited more frequently than basic research. However, the study is limited in scope. It is restricted to a single medical field, and it considers publications from only a small set of journals. (Replicating the two analyses reported in Ref. [29] confirmed their results. The first analysis is based on six cardiovascular journals, three basic ones and three clinical ones. The difference between the outcomes of this analysis and the analysis reported in the present paper appears to be related to the particular characteristics of the selected journals. The publications in these journals turn out not to be fully representative for basic and clinical publications in all cardiovascular journals. The second analysis reported in Ref. [29] is based on the distinction between basic and clinical publications within a single cardiovascular journal (Circulation). In this case, the difference with the outcomes of the analysis reported in the present paper seems to indicate that the selected journal differs from the cardiovascular field as a whole in terms of the characteristics of its basic and clinical publications.) In another relatively small study, reported in Ref. [30], no difference in citation impact between basic and clinical research is detected. This study has the limitation of being restricted to publications from only two journals. Two earlier studies [31,32] provide some evidence for a citation advantage for basic publications over clinical ones.
A number of limitations of the methodology of the present study need to be mentioned. First of all, because the visualization methodology does not draw explicit boundaries between research areas, no exact figures can be provided on citation impact differences between, for instance, basic and clinical research. On the other hand, by not drawing explicit boundaries, many arbitrary choices are avoided and more fine-grained analyses can be performed. Another methodological limitation is the ambiguity in the meaning and use of terms. Some terms may for instance be used both in basic and in clinical research. Although a term selection algorithm was employed to filter out the most ambiguous terms, some degree of ambiguity cannot be avoided when working with terms. Other limitations relate to the bibliographic database that was used. The WoS database has a good coverage of the medical literature, but to some extent the analysis might have been affected by gaps in the coverage of the literature. Also, the analysis depends strongly on the field definitions offered by the WoS database.
The results reported in this paper lead to the conclusion that one should be rather careful with citation-based comparisons between medical research areas, even if in a bibliographic database such as WoS the areas are considered to be part of the same field. Field-normalized bibliometric indicators, which are typically used by professional bibliometric centers, correct for differences in citation practices between fields, but at present they fail to correct for within-field differences. The use of bibliometric indicators, either the h-index and the impact factor or more sophisticated field-normalized indicators, may therefore lead to an underestimation of the impact of certain types of research compared with others. In particular, the impact of clinical intervention research may be underestimated, while the impact of basic and diagnostic research may be overestimated.
There is an urgent need for more accurately normalized bibliometric indicators. These indicators should correct not only for differences in citation practices between fields of science, but also for differences between research areas within the same field. Research areas could for instance be defined algorithmically based on citation patterns [33,34]. Alternatively, a normalization could be performed at the side of the citing publications by giving a lower weight to citations from publications with long reference lists and a higher weight to citations from publications that cite only a few references. A number of steps towards such citing-side normalization procedures have already been taken [35][36][37][38][39][40][41], but more research in this direction is needed.
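One simple citing-side scheme of the kind referred to here (a sketch of the general idea only, not of any particular indicator from Refs. [35-41]) weights each citation by the reciprocal of the number of references in the citing publication, so that a citation from a publication with a short reference list counts for more than one from a publication with a long reference list.

```python
def citing_side_weighted_count(cited_id, citations, reference_counts):
    """citations: iterable of (citing_id, cited_id) pairs;
    reference_counts: number of cited references per citing publication."""
    return sum(1.0 / reference_counts[src]
               for src, dst in citations
               if dst == cited_id and reference_counts.get(src))
```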
Using the presently available bibliometric indicators, one should be aware of biases caused by differences in citation practices between areas of medical research, especially between basic and clinical areas.
Figures 1, 2, and 3 show the term maps obtained for the WoS fields Cardiac & cardiovascular systems, Clinical neurology, and Surgery. These fields were selected because they match well with our areas of expertise. The maps are based on, respectively, 75,314, 105,405, and 141,155 publications from the period 2006-2010.
Figure 1. Term map of the Cardiac & cardiovascular systems field. The map shows 2000 terms extracted from titles and abstracts of publications in the WoS field Cardiac & cardiovascular systems. In general, the closer two terms are located to each other, the stronger their relation. The size and the color of a term indicate, respectively, the number of publications in which the term occurs and the average citation impact of these publications (where blue represents a low citation impact, green a normal citation impact, and red a high citation impact). Each term occurs in at least 70 publications. doi:10.1371/journal.pone.0062395.g001
Figure 2. Term map of the Clinical neurology field. The map shows 2000 terms extracted from titles and abstracts of publications in the WoS field Clinical neurology. In general, the closer two terms are located to each other, the stronger their relation. The size and the color of a term indicate, respectively, the number of publications in which the term occurs and the average citation impact of these publications (where blue represents a low citation impact, green a normal citation impact, and red a high citation impact). Each term occurs in at least 100 publications. doi:10.1371/journal.pone.0062395.g002
Figure 3. Term map of the Surgery field. The map shows 2000 terms extracted from titles and abstracts of publications in the WoS field Surgery. In general, the closer two terms are located to each other, the stronger their relation. The size and the color of a term indicate, respectively, the number of publications in which the term occurs and the average citation impact of these publications (where blue represents a low citation impact, green a normal citation impact, and red a high citation impact). Each term occurs in at least 135 publications. doi:10.1371/journal.pone.0062395.g003
Acknowledgments

We would like to thank Cathelijn Waaijer for helpful suggestions in the interpretation of the term maps.

Author Contributions

Conceived and designed the experiments: NJE LW. Performed the experiments: NJE LW. Analyzed the data: AFJR RJMK WCP. Contributed reagents/materials/analysis tools: NJE LW. Wrote the paper: NJE LW AFJR RJMK WCP.
1. Patel VM, Ashrafian H, Ahmed K, Arora S, Jiwan S, et al. (2011) How has healthcare research performance been assessed? A systematic review. Journal of the Royal Society of Medicine 104(6): 251-261.
2. Hirsch JE (2005) An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences 102(46): 16569-16572.
3. Chew M, Villanueva EV, Van der Weyden MB (2007) Life and times of the impact factor: Retrospective analysis of trends for seven medical journals (1994-2005) and their editors' views. Journal of the Royal Society of Medicine 100(3): 142-150.
4. Garfield E (1996) How can impact factors be improved? British Medical Journal 313(7054): 411-413.
5. Garfield E (2006) The history and meaning of the journal impact factor. JAMA: Journal of the American Medical Association 295(1): 90-93.
6. Radicchi F, Fortunato S, Castellano C (2008) Universality of citation distributions: Toward an objective measure of scientific impact. Proceedings of the National Academy of Sciences 105(45): 17268-17272.
7. Waltman L, Van Eck NJ, Van Leeuwen TN, Visser MS, Van Raan AFJ (2011) Towards a new crown indicator: An empirical analysis. Scientometrics 87(3): 467-481.
8. Glänzel W, Thijs B, Schubert A, Debackere K (2009) Subfield-specific normalized relative indicators and a new generation of relational charts: Methodological foundations illustrated on the assessment of institutional research performance. Scientometrics 78(1): 165-188.
9. Waltman L, Van Eck NJ, Van Leeuwen TN, Visser MS, Van Raan AFJ (2011) Towards a new crown indicator: Some theoretical considerations. Journal of Informetrics 5(1): 37-47.
10. Van Eck NJ, Waltman L (2011) Text mining and visualization using VOSviewer. ISSI Newsletter 7(3): 50-54.
11. Waaijer CJF, Van Bochove CA, Van Eck NJ (2010) Journal editorials give indication of driving science issues. Nature 463: 157.
12. Waaijer CJF, Van Bochove CA, Van Eck NJ (2011) On the map: Nature and Science editorials. Scientometrics 86(1): 99-112.
13. Börner K (2010) Atlas of science: Visualizing what we know. MIT Press.
14. Peters HPF, Van Raan AFJ (1993) Co-word-based science maps of chemical engineering. Part I: Representations by direct multidimensional scaling. Research Policy 22(1): 23-45.
15. Rip A, Courtial JP (1984) Co-word maps of biotechnology: An example of cognitive scientometrics. Scientometrics 6(6): 381-400.
16. Tijssen RJW, Van Raan AFJ (1989) Mapping co-word structures: A comparison of multidimensional scaling and LEXIMAPPE. Scientometrics 15(3-4): 283-295.
17. Moed HF (2005) Citation analysis in research evaluation. Springer.
A comparison of two techniques for bibliometric mapping: Multidimensional scaling and VOS. N J Van Eck, L Waltman, R Dekker, J Van Den Berg, Journal of the American Society for Information Science and Technology. 6112Van Eck NJ, Waltman L, Dekker R, Van den Berg J (2010) A comparison of two techniques for bibliometric mapping: Multidimensional scaling and VOS. Journal of the American Society for Information Science and Technology 61(12): 2405-2416.
I Borg, Pjf Groenen, Modern multidimensional scaling. Springer2nd ed.Borg I, Groenen PJF (2005) Modern multidimensional scaling (2nd ed.). Springer.
Software survey: VOSviewer, a computer program for bibliometric mapping. N J Van Eck, L Waltman, Scientometrics. 842Van Eck NJ, Waltman L (2010) Software survey: VOSviewer, a computer program for bibliometric mapping. Scientometrics 84(2): 523-538.
Relative citation impact of various study designs in the health sciences. N A Patsopoulos, A A Analatos, Jpa Ioannidis, JAMA: Journal of the American Medical Association. 293Patsopoulos NA, Analatos AA, Ioannidis JPA (2005) Relative citation impact of various study designs in the health sciences. JAMA: Journal of the American Medical Association 293(19): 2362-2366.
Comparison of number of citations to full original articles versus brief reports. M N Mavros, V Bardakas, P I Rafailidis, T A Sardi, E Demetriou, Scientometrics. 941Mavros MN, Bardakas V, Rafailidis PI, Sardi TA, Demetriou E, et al. (2013) Comparison of number of citations to full original articles versus brief reports. Scientometrics 94(1): 203-206.
The impact of article length on the number of future citations: A bibliometric analysis of general medicine journals. M E Falagas, A Zarkali, D E Karageorgopoulos, V Bardakas, M N Mavros, PLoS ONE. 8249476Falagas ME, Zarkali A, Karageorgopoulos DE, Bardakas V, Mavros MN (2013) The impact of article length on the number of future citations: A bibliometric analysis of general medicine journals. PLoS ONE 8(2): e49476.
Why the impact factor of journals should not be used for evaluating research. P O Seglen, British Medical Journal. 3147079Seglen PO (1997) Why the impact factor of journals should not be used for evaluating research. British Medical Journal 314(7079): 498-502.
The inconsistency of the h-index. L Waltman, N J Van Eck, Journal of the American Society for Information Science and Technology. 632Waltman L, Van Eck NJ (2012) The inconsistency of the h-index. Journal of the American Society for Information Science and Technology 63(2): 406-415.
A new reference standard for citation analysis in chemistry and related fields based on the sections of Chemical Abstracts. C Neuhaus, H-D Daniel, Scientometrics. 782Neuhaus C, Daniel H-D (2009) A new reference standard for citation analysis in chemistry and related fields based on the sections of Chemical Abstracts. Scientometrics 78(2): 219-229.
Citation rates in mathematics: A study of variation by subdiscipline. L Smolinsky, A Lercher, Scientometrics. 913Smolinsky L, Lercher A (2012) Citation rates in mathematics: A study of variation by subdiscipline. Scientometrics 91(3): 911-924.
Redefining the field of economics: Improving field normalization for the application of bibliometric techniques in the field of economics. T N Van Leeuwen, Calero Medina, C , Research Evaluation. 211Van Leeuwen TN, Calero Medina C (2012) Redefining the field of economics: Improving field normalization for the application of bibliometric techniques in the field of economics. Research Evaluation 21(1): 61-70.
Differences in citation frequency of clinical and basic science papers in cardiovascular research. T Opthof, Medical and Biological Engineering and Computing. 496Opthof T (2011) Differences in citation frequency of clinical and basic science papers in cardiovascular research. Medical and Biological Engineering and Computing 49(6): 613-621.
The classification of biomedical journals by research level. G Lewison, G Paraje, Scientometrics. 602Lewison G, Paraje G (2004) The classification of biomedical journals by research level. Scientometrics 60(2): 145-157.
The effect of funding on the outputs of biomedical research. G Lewison, G Dawson, Scientometrics. 411-2Lewison G, Dawson G (1998) The effect of funding on the outputs of biomedical research. Scientometrics 41(1-2): 17-27.
Bibliometric methods for the evaluation of arthritis research. G Lewison, M E Devey, Rheumatology. 381Lewison G, Devey ME (1999) Bibliometric methods for the evaluation of arthritis research. Rheumatology 38(1): 13-20.
Toward an objective, reliable and accurate method for measuring research leadership. R Klavans, K W Boyack, Scientometrics. 823Klavans R, Boyack KW (2010) Toward an objective, reliable and accurate method for measuring research leadership. Scientometrics 82(3): 539-553.
A new methodology for constructing a publication-level classification system of science. L Waltman, N J Van Eck, Journal of the American Society for Information Science and Technology. 6312Waltman L, Van Eck NJ (2012) A new methodology for constructing a publication-level classification system of science. Journal of the American Society for Information Science and Technology 63(12): 2378-2392.
A priori vs. a posteriori normalisation of citation indicators. The case of journal ranking. W Glä Nzel, A Schubert, B Thijs, K Debackere, Scientometrics. 872Glä nzel W, Schubert A, Thijs B, Debackere K (2011) A priori vs. a posteriori normalisation of citation indicators. The case of journal ranking. Scientometrics 87(2): 415-424.
Scopus's source normalized impact per paper (SNIP) versus a journal impact factor based on fractional counting of citations. L Leydesdorff, T Opthof, Journal of the American Society for Information Science and Technology. 6111Leydesdorff L, Opthof T (2010) Scopus's source normalized impact per paper (SNIP) versus a journal impact factor based on fractional counting of citations. Journal of the American Society for Information Science and Technology 61(11): 2365-2369.
Measuring contextual citation impact of scientific journals. H F Moed, Journal of Informetrics. 43Moed HF (2010) Measuring contextual citation impact of scientific journals. Journal of Informetrics 4(3): 265-277.
How journal rankings can suppress interdisciplinary research: A comparison between Innovation Studies and Business & Management. I Rafols, L Leydesdorff, A O'hare, P Nightingale, A Stirling, Research Policy. 417Rafols I, Leydesdorff L, O'Hare A, Nightingale P, Stirling A (2012) How journal rankings can suppress interdisciplinary research: A comparison between Innovation Studies and Business & Management. Research Policy 41(7): 1262-1282.
Source normalized indicators of citation impact: An overview of different approaches and an empirical comparison. L Waltman, N J Van Eck, Scientometrics in pressWaltman L, Van Eck NJ (2012) Source normalized indicators of citation impact: An overview of different approaches and an empirical comparison. Sciento- metrics in press.
Some modifications to the SNIP journal impact indicator. L Waltman, N J Van Eck, T N Van Leeuwen, M S Visser, Journal of Informetrics. 72Waltman L, Van Eck NJ, Van Leeuwen TN, Visser MS (2013) Some modifications to the SNIP journal impact indicator. Journal of Informetrics 7(2): 272-285.
Modifying the journal impact factor by fractional citation weighting: The audience factor. M Zitt, H Small, Journal of the American Society for Information Science and Technology. 5911Zitt M, Small H (2008) Modifying the journal impact factor by fractional citation weighting: The audience factor. Journal of the American Society for Information Science and Technology 59(11): 1856-1860.
|
[] |
[
"Towards a comprehensive knowledge of the open cluster Haffner 9",
"Towards a comprehensive knowledge of the open cluster Haffner 9"
] |
[
"Andrés E Piatti \nObservatorio Astronómico\nUniversidad Nacional de Córdoba\nLaprida 8545000CórdobaArgentina\n\nConsejo Nacional de Investigaciones Científicas y Técnicas\nAv. Rivadavia 1917C1033AAJBuenos AiresArgentina\n"
] |
[
"Observatorio Astronómico\nUniversidad Nacional de Córdoba\nLaprida 8545000CórdobaArgentina",
"Consejo Nacional de Investigaciones Científicas y Técnicas\nAv. Rivadavia 1917C1033AAJBuenos AiresArgentina"
] |
[
"MNRAS"
] |
We turn our attention to Haffner 9, a Milky Way open cluster whose previous fundamental parameter estimates are far from being in agreement. In order to provide with accurate estimates we present high-quality Washington CT 1 and Johnson BV I photometry of the cluster field. We put particular care in statistically clean the colourmagnitude diagrams (CMDs) from field star contamination, which was found a common source in previous works for the discordant fundamental parameter estimates. The resulting cluster CMD fiducial features were confirmed from a proper motion membership analysis. Haffner 9 is a moderately young object (age ∼ 350 Myr), placed in the Perseus arm -at a heliocentric distance of ∼ 3.2 kpc-, with a lower limit for its present mass of ∼ 160 M and of nearly metal solar content. The combination of the cluster structural and fundamental parameters suggest that it is in an advanced stage of internal dynamical evolution, possibly in the phase typical of those with mass segregation in their core regions. However, the cluster still keeps its mass function close to that of the Salpeter's law.
|
10.1093/mnras/stw2987
|
[
"https://arxiv.org/pdf/1611.04859v1.pdf"
] | 119,448,206 |
1611.04859
|
1aa7ecdc89fb3872c81c907bd1f2d251e5153a77
|
Towards a comprehensive knowledge of the open cluster Haffner 9
2016
Andrés E Piatti
Observatorio Astronómico
Universidad Nacional de Córdoba
Laprida 8545000CórdobaArgentina
Consejo Nacional de Investigaciones Científicas y Técnicas
Av. Rivadavia 1917C1033AAJBuenos AiresArgentina
Towards a comprehensive knowledge of the open cluster Haffner 9
MNRAS
0002016Accepted XXX. Received YYY; in original form ZZZPreprint 9 October 2018 Compiled using MNRAS L A T E X style file v3.0techniques: photometric -Galaxy: open clusters and associations: gen- eral
We turn our attention to Haffner 9, a Milky Way open cluster whose previous fundamental parameter estimates are far from being in agreement. In order to provide with accurate estimates we present high-quality Washington CT 1 and Johnson BV I photometry of the cluster field. We put particular care in statistically clean the colourmagnitude diagrams (CMDs) from field star contamination, which was found a common source in previous works for the discordant fundamental parameter estimates. The resulting cluster CMD fiducial features were confirmed from a proper motion membership analysis. Haffner 9 is a moderately young object (age ∼ 350 Myr), placed in the Perseus arm -at a heliocentric distance of ∼ 3.2 kpc-, with a lower limit for its present mass of ∼ 160 M and of nearly metal solar content. The combination of the cluster structural and fundamental parameters suggest that it is in an advanced stage of internal dynamical evolution, possibly in the phase typical of those with mass segregation in their core regions. However, the cluster still keeps its mass function close to that of the Salpeter's law.
INTRODUCTION
Different statistical procedures have been proposed with an acceptable success, in order to avoid as much as possible the field contamination in cluster colour-magnitude diagrams (CMDs) analysis (e.g. Bonatto & Bica 2007;Pavani & Bica 2007;Maia et al. 2010) The developed statistical methods basically involve: (i) dividing the full range of magnitude and colour of a given CMD into a grid whose cells have axes along the magnitude and colour directions, (ii) computing the expected number-density of field stars in each cell based on the number of comparison field stars with magnitude and colour compatible with those of the cell, and (iii) subtracting randomly the expected number of field stars from each cell. Although the methods reapply the cleaning procedure using different cell sizes in the CMDs, they are fixed each time, i.e., they do not vary across the CMDs.
From our experience in cleaning the field star contamination in the cluster CMDs, we have identified some situations which still need our attention. It frequently happens that some parts of the CMDs are more populated than others, so that fixing the size of the cells in the CMDs becomes E-mail: [email protected] a difficult task. Small cells do not usually carry out a satisfactory job in CMD regions with a scarce number of fields stars, while big cells fail in populous CMD regions. Thus, relatively bright field red giants with small photometric errors could not be subtracted and, consequently, the cluster CMD could show spurious red giant features. A compromise between minimizing the residuals left after the subtractions of field stars from the cluster CMDs and maximizing the cleaning of field stars is always desiderable.
In this paper, we present a comprehensive multi-band photometric analysis of Haffner 9 from BV I and Washington CT1 photometry. The cluster has been previously studied from 2MASS and BV I photometry. However, a relatively lower photometric accuracy and shallower limited magnitude, in combination with a not much pinpointed treatment of field decontamination led those studies to discordant results. In Section 2 we describe the collection and reduction of the available photometric data and their thorough treatment in order to build a extensive and reliable data set. The cluster structural and fundamental parameters are derived from star counts and colour-magnitude and colour-colour diagrams, respectively, as described in Section 3. The analysis of the results of the different astrophysical parameters obtained is carried out in Section 4, where implications about the stage of its dynamical evolution are suggested. Finally, Section 5 summarizes the main conclusion of this work.
DATA COLLECTION AND REDUCTION
We used CCD images 1 obtained on the nights of December 18 th and 19 th , 2004 with a 2048×2048 pixel Tektronix CCD attached to the 0.9 m telescope (scale 0.396 arcsec/pixel) at Cerro Tololo Inter-American Observatory (CTIO, Chile). Its field of view is 13.6×13.6 arcmin 2 . We used the Washington C (Canterna 1976) and Kron-Cousins R filters. The latter has a significantly higher throughput as compared with the standard Washington T1 filter so that R magnitudes can be accurately transformed to yield T1 magnitudes Geisler (1996). We used a series of bias, dome and sky flat-field exposures per filter to calibrate the CCD instrumental signature. We also utilised images for the Small Magellanic Cloud (SMC) cluster Lindsay 106, which was previously observed at La Silla (ESO, Chile) with the C and T1 filters (Piatti et al. 2007). Lindsay 106 was used here only as control cluster, i.e., to verify the quality of the present CTIO photometry. Table 1 shows the log of the observations with filters, exposure times, airmasses and seeing estimates. A large number (typically 20) of standard stars from the list of Geisler (1996) was also observed on each night. They cover wide colour and airmass ranges, so that we could calibrate properly the program stars observed on these nights.
The stellar photometry was performed using the star finding and point spread function (PSF) fitting routines in the DAOPHOT/ALLSTAR suite of programs (Stetson et al. 1990). Radially varying aperture corrections were applied to take out the effects of PSF variations across the field of view, although a quadratically varying PSF was employed. The resultant instrumental magnitudes were standardized using the equations:
c = (3.679 ± 0.021) + T1 + (C − T1) + (0.294 ± 0.014) × XC −(0.085 ± 0.005) × (C − T1),(1)r = (3.206 ± 0.021) + T1 + (0.115 ± 0.014) × XR −(0.014 ± 0.004) × (C − T1),(2)
where X represents the effective airmass. Capital and lowercase letters stand for standard and instrumental magnitudes, respectively. The coefficients were derived through the IRAF 2 routine FITPARAM. The root mean square (rms) deviations of the fitted values from the fits to the standards were 0.021 for c and 0.014 for r, which indicates that the nights were photometric. We combined all the independent measurements using the stand-alone DAOMATCH Figure 1. Observed CMD for all the stars measured in the field of haffner 9. Errorbars at the left-hand margin represent the photometric uncertainties given by DAOPHOT.
and DAOMASTER programmes, kindly provided by Peter Stetson. The final information gathered for each cluster consists of a running number per star, the x and y coordinates, the measured T1 magnitudes, C − T1 colours, and the observational errors σ(T1) and σ(C − T1). In the case of Haffner 9, for which we combined respectively four different photometric tables, we also included the number of measures performed in T1 and C − T1. Table 2 gives this information, where only a portion of it is shown for guidance regarding its form and content. The whole content of Table 2, as well as that for Lindsay 106 (Table 3), is available in the online version of the journal. The calculated internal accuracy of the photometry has been computed according to the criteria given by Stetson et al. (1990) and includes random noise, errors in the modelling and centering of the stellar profile. These internal standard errors provided by DAOPHOT for the T1 magnitude and C − T1 colour have been represented by errorbars at the left-hand margin of the observed CMD in Fig. 1. Fig. 2 shows how the differences between our T1 magnitudes and C − T1 colours and those obtained by Piatti et al. (2007) vary as a function of T1. Offsets of ∆T1 = T1 pub -T1 thiswork = -0.055 ± 0.092 and ∆(C − T1) = (C − T1) pub -(C − T1) thiswork = 0.050 ± 0.112 have been computed from 425 stars measured in common. Carraro et al. (2013, hereafter C13) have carried out a photometric BV I campaign focused on the study of open clusters preferentially located towards the Third Galactic Quadrant. They used the Y4KCAM camera attached to the CTIO 1-m telescope, operated by the SMARTS consortium during an observation run in 2005 November-December. In the afore- (1958, L) and Lauberts (1982, ESO). Table 2. CT 1 data of stars in the field of Haffner 9.
BV I photometric data sets
Star x y T 1 σ(T 1 ) n T 1 C − T 1 σ(C − T 1 ) n C−T 1 (pixel) (pixel) (mag) (mag) (mag) (mag) - - - - - - - - -- - - - - - - - - Figure 2.
Comparison of CT 1 photometry between that of Piatti et al. (2007) and this work for the control cluster Lindsay 106. mentioned work they did not include Haffner 9, although they made their BV I photometry publicly available 3 . The camera used to obtain those data was equipped with an STA 4064×4064 CCD with 15-µm pixels, yielding a scale of 0.289 /pixel and a field-of-view (FOV) of 20 ×20 . The CCD was operated without binning, at a nominal gain 3 VizieR On-line Data Catalog: J/MNRAS/428/502 of 1.44 e-/ADU, implying a readout noise of 7 e-per quadrant (four amplifiers). Their available photometry add significant value to our study, because of the larger FOV and the fainter magnitude limit reached using a different telescope/photometric system setup. We refer the reader to the work by C13 for details concerning the data processing, the measurement of photometric magnitudes and the standardization of their photometry to the BV I system.
We compared Carraro et al. (2013)'s photometry with that obtained by Hasegawa et al. (2008, hereafter H08) and concluded that the former needs to be corrected by :
V std = VC13 − 0.9 (3) B std = (B − V )C13 * 0.74 + 0.4 (4) I std = −(V − I)C13 * 0.87 + V carr * 0.05 + 0.45(5)
in order to match the latter. As for the H08 photometry, we used the ridgelines in their CMDs -thought to be the mean locus of the fiducial cluster sequence-, since the data are not available from the authors. We finally merged our Washington CT1 and the corrected available BV I data sets into only one master table, which we use in our subsequent an analysis.
ANALYSIS OF THE PHOTOMETRIC DATA
The pipeline followed in the analysis of the present photometric data sets involved: i) define the cluster central coordinates and trace its stellar density radial profile; ii) derive the cluster structural parameters; iii) decontaminate the cluster CMD from field stars and; iv) estimate the cluster fundamental parameters.
Cluster extension
In order to obtain the stellar density radial profile of Haffner 9, we started by estimating its geometrical centre. We did that by fitting Gaussian distributions to the star counts in the x and y directions. The number of stars projected along the x and y directions were counted within intervals of 20, 40, 60, 80, 100 pixel wide, and the Gaussian fits repeated each time. The fits of the Gaussians were performed using the ngaussfit routine in the stsdas/iraf package. We adopted a single Gaussian and fixed the constant to the corresponding background levels (i.e. stellar field density assumed to be uniform) and the linear terms to zero. The centre of the Gaussian, its amplitude, and its F W HM acted as variables. Finally, we averaged the five different Gaussian centres with a typical standard deviation of ± 40 pixels (± 16.0 ).
The estimated geometrical centre was then used to built the cluster stellar density profile from star counts performed within boxes of 60 pixels per side distributed throughout the whole observed field. The chosen box size allowed us to statistically sample the stellar spatial distribution. Thus, the number of stars per unit area at a given radius r can be directly calculated through the expression:
(nr+30 − nr−30)/(mr+30 − mr−30),(6)
where nr and mr represent the number of stars and boxes, respectively, included in a circle of radius r. The advantage of this method over the frequent counting of stars in annular regions around the cluster centre relies on the fact that is not required a complete circle of radius r within the observed field to compute the mean stellar density at that distance. Therefore, it is possible to estimate the background level with high precision using regions located far away from the cluster centre. With a good placement of the background level, the cluster radius (r cls ) results, in turn, in a more reliable estimate. Fig. 3 shows with open and filled circles the observed and background subtracted density profilesexpressed as number of stars per arcsec 2 -, respectively, while the errobars represent the rms errors. In the case of the background subtracted density profile we added the mean error of the background star counts. The background level and the cluster radius (r cls = 200 +50 −40 arcsec) are indicated by solid horizontal and vertical lines, respectively; their uncertainties are in dotted lines.
We fitted the background corrected density profile using both King (1962) and Plummer (1911) models in order to get independent estimates of the cluster core (rc), half-mass (r h ) and tidal (rt) radii, respectively. We used a grid of rc, r h and rt values spanning the known range of radii of open clusters (Piskunov et al. 2007) and minimised χ 2 . We finally derived rc = 70 ± 10 arcsec, r h = 125 ± 15 arcsec and rt = 400 ± 100 arcsec, respectively. In Fig. 3 we superimposed the respective King's and Plummer's curves with blue and oorange solid lines, respectively.
Field star decontamination
Because Fig. 1 reveals that both cluster and field star sequences are more or less superimposed, we must firstly separate the cluster stars from those belonging to the surround- ing fields on a statistical basis in order to meaningfully use that CMD to estimate the cluster parameters. Note that both cluster and field stars are affected by nearly the same interstellar reddening, which is indeed what causes the overlapping of their main sequences (MSs). This fact makes the cleaning of the cluster CMD even more challenging.
To filter the field stars from the CMDs, we applied a statistical procedure which consists, firstly, in adopting two CMDs from different regions located far from the cluster. The dimension of each selected field region was πr cls 2 and acted as references to statistically filter an equal circular area centred on Haffner 9.
Secondly, by starting with reasonably large boxes -typically (∆(T1),∆(C − T1)) = (1.00, 0.50) mag -centred on each star in both field CMDs and by subsequently reducing their sizes until they reach the stars closest to the boxes' centres in magnitude and colour, separately, we defined boxes which result in use of larger areas in field CMD regions containing a small number of stars, and vice versa. Note that the definition of the position and size of each box involves two field stars, one at the centre of the box and another -the closest one to box centre -placed on the boundary of that box. Piatti & Bica (2012) have shown that this is an effective way of accounting for the local field-star signature in terms of stellar density, luminosity function and/or colour distribution.
Next, we plotted all these boxes for each field CMD on the cluster CMD and subtracted the star located closest to each box centre. Since we repeated this task for each of the two field CMD box samples, we found that some stars have remained unsubtracted once or twice. Finally, we adopted as the cluster CMD that built from stars that have not been subtracted any time. In order to illustrate the performance of the statistical cleaning procedure, Fig. 4 depicts a single field-star CMD (left-hand panel) for an annular region around Haffner 9 with an area equal to that of the cluster, and the cleaned cluster CMD superimposed to the observed ones represented by black and gray filled circles, respectively (right-hand panel). As can be seen, differences in stellar composition become noticeable when comparing field and cleaned cluster CMDs. Particularly, the cluster evolved MS is now clearly seen.
Cluster's fundamental parameters
We present in Fig. 5 the whole set of CMDs and colourcolour (CC) diagrams for Haffner 9 that can be exploited from the present multi-band photometry. They include every magnitude and colour for the whole sample of observed stars, those located within the cluster radius and those considered cluster stars from the field star decontamination procedure plotted with grey, red and black filled circles, respectively. At first glance, the cleaned cluster CMDs resemble those of a cluster with a moderate age, projected on to a star field not easy to disentangle from the cluster.
The availability of three CMDs and three different CC diagrams covering wavelengths from the blue up to the near-infrarred allowed us to derive reliable ages, reddenings and distances for Haffner 9 from the matching of theoretical isochrones. In order to enter the isochrones into the CMDs and CC diagrams we used the following ratios: (Cardelli et al. 1989); E(C −T1)/E(B −V ) = 1.97 and AT 1 /E(B −V ) = 2.62 (Geisler 1996).
E(V − I)/E(B − V ) = 1.25, AV /E(B − V ) = 3.1
We started by selecting theoretical isochrones (Bressan et al. 2012) with solar metal content in order to choose that which best match the cluster's features in the CMDs. In this sense, the shape of the MS, its curvature, the relative distance between the red giants and the MS turn-off (MSTO) in magnitude and colour separately, among others, are features tightly related to the cluster age, regardless their reddenings and distances. Note also that, by considering the whole metallicity range of the Milky Way open clusters (see, e.g. Paunzen et al. 2010;Heiter et al. 2014) and by using the theoretical isochrones of Bressan et al. (2012), the differences at the zero age main sequence (ZAMS) in V − I colour is smaller than ∼ 0.08 mag. This result implies that negligible differences between the ZAMSs for the cluster metallicity and that of solar metal content would appear, keeping in mind the intrinsic spread of the stars in the V vs V − I CMD.
From our first choices, we derived the cluster reddening by shifting those isochrones in the three CC diagrams following the reddening vectors until their bluest points coincided with the observed ones. Note that this requirement allowed us to use the three CC diagrams, even though the reddening vectors run almost parallell to the cluster sequence. Finally, the mean E(B −V ) colour excesse was used to properly shift the chosen isochrones in the three CMDs in order to derive the cluster true distance modulus by shifting the isochrones along the magnitude axes. We iterated this procedure for different ages as well as for metallicities [Fe/H] = -0.5 up to 0.2 dex. We found that isochrones bracketing the cluster age (log(t yr −1 ) = 8.55) by ∆log(t yr −1 ) = ±0.05 and the cluster metallicity ([Fe/H] = 0.0 dex) by ∆[Fe/H] = ±0.1 dex represent the overall age and metallicity uncertainties owing to the observed dispersion in the cluster CMDs and CC diagrams. As for the cluster reddening and true distance modulus, we obtained E(B − V ) = 0.60 ± 0.05 mag and m − Mo = 12.5 ± 0.2 mag, respectively. Fig. 5 shows the adopted best matched isochrone overplotted on to the CMDs and CC diagrams with a blue solid line.
The cluster mass and its present mass function (MF) were derived by summing the individual masses of stars not eliminated during the complete cleaning procedure. Those individual masses were obtained by interpolation in the theoretical isochrone traced in Fig. 5 from the observed T1 magnitude of each star, properly corrected by reddening and distance modulus. We finally attained log(M cls /M ) = 2.2 ± 0.2. The uncertainty comes from propagation of the T1 magnitude errors in the mass distribution along the theoretical isochrone as well as from considering stars subtracted once in the field star cleaning procedure. Note that, when building the stellar density profile from stars not eliminated at any time, we obtained a curve which matches the King's curve drawn in Fig. 3. This very good agreement implies that the cleaning procedure subtracted an appropriate number of stars according to the stellar density of the backgroung/foreground field, so that the mass uncertainty would be smaller if we did not considered stars subtracted once. The resulting MF is shown in Fig. 6 where the errorbars come from applying Poisson statistics. For comparison porpuses we superimposed the relationship given by Salpeter (1955, slope = -2.35) for the stars in the solar neighbourhood.
Using the resulting mass and the half-mass radius r h , we computed the half-mass relaxation times using the equation (Spitzer & Hart 1971):
tr = 8.9 × 10 5 M 1/2 cls r 3/2 h mlog10(0.4M cls /m) ,(7)
where M cls is the cluster mass andm is the mean mass of the cluster stars (m = 1.7 M ). We obtained tr = 16 ± 2 Myr. If we considered non-oberved stars with masses between 1 and 0.5 M and the Salpeter's mass function, the relaxation times would increase in ∼ 10 per cent. Note that, despite the advanced state of dynamical evolution of Haffner 9 (<age/tr> = 22), the cluster still keeps its MF close to that of Salpeter's law (see Fig. 6). Finally, we employed the ASteCA suit of functions (Perren et al. 2015) to generate ≈ 2.2×10 5 synthetic CMDs of a star cluster covering ages from log(t yr −1 ) = 8.5 up to 8.6 (∆log(t yr −1 ) = 0.01), metallicities in the range Z = 0.012 -0.019 (∆Z = 0.001), interstellar extinction between 0.55 and 0.65 mag (∆E(B − V ) = 0.01 mag), distance modulus between 12.3 and 12.7 mag (∆(m − M )o = 0.05 mag) and total mass in the range 100 -250 M (∆M = 10M ), respectively.
The steps by which a synthetic star cluster for a given set of age, metalicity, distance modulus, and reddening values is generated by ASteCA is as follows: i) a theoretical isochrone is picked up, densely interpolated to contain a thousand points throughout its entire length, including the most evolved stellar phases. ii) The isochrone is shifted in colour and magnitude according to the E(B − V ) and (m−M )o values to emulate the effects these extrinsic param- eters have over the isochrone in the CMD. iii) The isochrone is trimmed down to a certain faintest magnitude according to the limiting magnitude thought to be reached. iv) An initial mass function (IMF) is sampled in the mass range [∼0.01−100] M up to a total mass value M provided that evolved CMD regions result properly populated. The distribution of masses is then used to obtain a properly populated synthetic star cluster by keeping one star in the interpolated isochrone for each mass value in the distribution. v) A random fraction of stars are assumed to be binaries, which is set by default to 50% (von Hippel 2005), with secondary masses drawn from a uniform distribution between the mass of the primary star and a fraction of it given by a mass ratio parameter set to 0.7. vi) An appropriate magnitude completeness and an exponential photometric error functions are finally applied to the synthetic star cluster. Fig. 7 shows the synthetic CMD which best matches the cluster's parameters, with the generated uncertainties in T1 and C − T1, the range of stellar masses drawn in colourscaled filled circles and the theoretical isochrone for log(t yr −1 ) = 8.55 and [Fe/H] = 0.0 dex superimposed. Dias et al. (2014) presented a catalog of mean proper motions and membership probabilities of individual stars for optically visible open clusters, among them, Haffner 9, using data from the UCAC4 catalog (Zacharias et al. 2013). We cross-correlated the stars with proper motions in our data sets and built the corresponding T1 vs. C − T1 CMD. The left-hand panel of Fig. 8 shows with grey, red and black filled circles all these stars, those with membership probabilities P > 75 and 90 per cent, respectively. As can be seen, the proper motion memberships confirm that the cleaned CMD produced in Section 3.2, which is very well matched by an isochrone of log(t yr −1 ) = 8.55 and [Fe/H] = 0.0 dex (see details in Section 3.3), corresponds to that of the cluster. This is an important probe, since previous detailed photometric studies of this object led to different fundamental parameters.
ANALYSIS AND DISCUSSION
For instance, Bica & Bonatto (2005) and Kharchenko et al. (2013) used 2MASS data (Skrutskie et al. 1997), separately, and derived an age of 140 ± 20 Myr, a true distance modulus (m − M )o = 11.40 ± 0.1 mag, and a reddening E(B −V ) = 0.50 ± 0.05 from the fit of theoretical isochrones in the J versus J − H CMD. However, Buckner & Froebrich (2014), also from 2MASS data, obtained an age of 250 ± 100 Myr, (m − M )o = 12.40 ± 0.4 mag, and E(B − V ) = 0.40 ± 0.10, respectively. All three studies assumed a solar metal content. In order to seek for any source of discrepancy in these 2MASS data analyses, we took advantage of the photometric and proper motion membership probabilities analyzed above. Fig. 8 (middle and right-hand panels) depict the 2MASS CMDs used by Bica & Bonatto (2005) and Buckner & Froebrich (2014) to derive the cluster's fundamental parameters. We plotted every star with proper motion using the same colour code as in the left-hand panel. Note that the fainter magnitude limit reached by stars with proper motion measurements is nearly similar to that of the 2MASS data. For the sake of the reader, we superimposed the isochrone of log(t yr −1 ) = 8.55 and [Fe/H] = 0.0 dex as well as those for the reddenings, distance modulii, ages and metallicities Figure 5. CMDs and CC diagrams for stars measured in the field of Haffner 9. Grey, red and black filled circles represent all the measured stars, those within the cluster radius and the cluster stars from the field star cleaning procedure, respectively. We overplotted the isochrone which best matches the cluster features (see text for details).
used by Bica & Bonatto (2005) and Buckner & Froebrich (2014) with blue and magenta solid lines, respectively.
From the examination of these CMDs some conclusions can be drawn. Firstly, the magnitude depth of 2MASS data is noticeable much brigher than that of the present data set (see Fig. 5), whereas the colour baseline of the infrared colours is much shorter than that of C − T1, which do not favour an accurate isochrone matching. Secondly, the isochrone adopted in this work reasonably matches the cluster sequences, although the scatter is larger than that seen in the T1 vs. C −T1 CMD. Seemingly, Bica & Bonatto (2005) have fitted a MS with some field contamination (middle panel). In the case of Buckner & Froebrich (2014) we especulate with the possibility that they could have considered a larger cluster area with field stars with bluer infrared colours and no proper motions. Hasegawa et al. (2008) presented BV I photometry of stars in the cluster field and estimated an age of 500 Myr, a metal content [Fe/H] = -0.4 dex, a true distance modulus of (m−M )o = 12.7 mag, and a reddening E(B−V ) = 0.75 mag. For comparison purposes, we plotted such an isochrone in the left-hand panel of Fig. 8 with a magenta solid line. Once again, the employment of a field star contaminated CMD could lead them to derive unreliable cluster parameters.
We computed Galactic coordinates using the derived cluster heliocentric distances, their angular Galactic coordinates and a Galactocentric distance of the Sun of RGC = 8.3 kpc (Hou & Han 2014, and references therein). The resulting spatial distribution is depicted in the top-left panel of Fig. 9, where we added for comparison purposes the 2167 open clusters catalogued by Dias et al. (2002, version 3.5 as of January 2016) and the schematic positions of the spiral arms (Drimmel & Spergel 2001;Moitinho et al. 2006). Haffner 9 is located at the Perseus arm, beyond the cirle around the Sun (d ∼ 2.0 kpc) where the catalogued clusters are mostly concentrated.
The age/tr ratio is a good indicator of the internal dynamical evolution, since it gives the number of times the characteristic time-scale to reach some level of energy equipartition (Binney & Merrifield 1998) has been surpassed. Star clusters with large age/tr ratios have reached a higher degree of relaxation and hence are dynamically more evolved. As Fig. 9 shows, Haffner 9 appears to have had enough time to evolve dynamically. In the figure we included in grey colour 236 open clusters analysed by Piskunov et al. (2007), who derived from them homogeneous scales of radii and masses. They derived core and tidal radii for their cluster sample, from which we calculated the half-mass radii and, with their clusters masses and eq. 4, relaxation times, by assuming that the cluster stellar density profiles can be indistinguishably reproduced by King and Plummer models. Their cluster sample are mostly distributed inside a circle of ∼ 1 kpc from the Sun. As compared to the Piskunov Since dynamical evolution implies the loss of stars (mass loss), we expect some trend of the present-day cluster mass with the age/tr ratio. This is confirmed in the top-right panel of Fig. 5, where the larger the present-day mass the less the dynamical evolution of a cluster in the solar neighbourhood, with a noticeable scatter. Haffner 9 appears to have a relatively large mass for its particular internal dynamical state. Curiously, selection against poor and old clusters could suggest the beggining of cluster dissolution, with some exceptions. Trenti et al. (2010) presented a unified picture for the evolution of star clusters on the two-body relaxation timescale from direct N-body simulations of star clusters in a tidal field. Their treatment of the stellar evolution is based on the approximation that most of the relevant stellar evolution occurs on a timescale shorter than a relaxation time, when the most massive stars lose a significant fraction of mass and consequently contribute to a global expansion of the system. Later in the life of a star cluster, two-body relaxation tends to erase the memory of the initial density profile and concentration. They found that the structure of the system, as measured by the core to half mass radius ratio, the concentration parameter c= log(rt/rc), among others, evolve toward a universal state, which is set by the efficiency of heating on the visible population of stars induced by dynamical interactions in the core of the system. In the bottom panels of Fig. 9 we plotted the dependence of the concentration parameter c with the cluster mass and the age/tr ratio, respectively. They show that our dynamically evolved cluster is within those with relatively high c values, and that star clusters tend to initially start their dynam-ical evolution with relatively small concentration parameters. Likewise, star clusters in an advanced dynamical state can also have relatively lower c values due to their smaller masses.
SUMMARY AND CONCLUSIONS
In order to continue our Washington CT1 and Johnson BV I photometric studies on Milky Way star clusters, we turned our attention to Haffner 9, a previously studied open cluster with a noticeable spread in the values obtainted of its fundamental parameters.
The analysis of the current photometric data sets leads to the following main conclusions:
(i) To disentagle cluster features from those belonging to their surrounding fields, we applied a subtraction procedure to statistically clean the cluster CMDs from field star contamination. The employed technique makes use of variable cells in order to reproduce the field CMD as closely as possibly. The stellar density profile built from stars that reamined unsubtracted very well matches that obtained from star counts carried out throughout the observed field, once the background level is subtracted. Moreover, the main cluster features in the cleaned CMDs are confirmed when proper motion membership probabilities are taken into account.
(ii) Using the cleaned cluster CMDs and CC diagrams, we estimated the cluster fundamental parameters in a selfconsistent way. The availability of three CC diagrams and three CMDs covering wavelengths from the blue up to the near-infrarred allowed us to derive reliable values of age, metallicity, reddenings and distances for Haffner 9. We exploited such a wealth in combination with theoretical isochrones computed by Bressan et al. (2012) to find out that the cluster is 350 Myr old, is placed in the Perseus arm at a solar heliocentric distance of 3.2 kpc, and has nearly solar metal content. The lower limits of its present mass is ∼ 160M . We confirmed such a limit from the generation of thousand synthetic CMDs.
(iii) We found that a less deep photometry, a narrower colour baseline and a less effective CMD cleaning procedure, could have been some sources that led previous studies to derive cluster fundamental parameters which do not correspond to the fiducial cluster features. By using the same photometric data sets as in previous works and proper motion memberships, we confirm the present cluster fundamental parameters.
(iv) Finally, we estimated the half-mass relataxion time for Haffner 9, which turned out to be ∼ 22 smaller than the cluster age. This result suggests that Haffner 9 is facing an advanced state of its internal dynamical evolution. However, the cluster still keeps its MF close to that of the Salpeter's law. When combined with the obtained structural parameters, we found that the cluster is possibly in the phase typical of those with mass segregation in their core regions. Figure 8. CMDs for stars in the field of Haffner 9 with proper motion measurements (grey filled circles). Stars with proper motion membership probabilities higher than 75 and 90 per cent are drawn with red and black filled circles, respectively. The isochrones adopted in this work is superimposed with a blue solid line, while those for the parameters estimated by Hasegawa et al. (2008), Bica & Bonatto (2005) and Buckner & Froebrich (2014) are plotted with a magenta line in the left-hand, middle and right-hand panels, respectively (see text in Section 4 for details. Figure 9. Top-left: Galactic spatial position of Haffner 9 (red filled cirled). Open clusters from the catalogue of Dias et al. (2002, version 3.5 as of January 2016) are drawn with gray dots, while the schematic positions of spiral arms (Drimmel & Spergel 2001;Moitinho et al. 2006) are traced with black solid lines. Top-right and bottom: Relationships between cluster concentration parameter (c), mass, age and relaxation time (tr). Grey dots correspond to 236 star clusters with homogeneous estimations of masses and radii derived by Piskunov et al. (2007).
Figure 3 .
3Stellar density profile obtained from star counts. Open and filled circles refer to measured and background subtracted density profiles, respectively. Blue and orange solid lines depict the fitted King and Plummer curves, respectively.
Figure 4 .
4CMDs for stars in the field of Haffner 9: a field CMD for annular region centred on the cluster and with a size equal to the cluster area, and the corresponding defined set of boxes overplotted (left-hand panel) and; the observed and cleaned CMD composed of the stars distributed within the cluster radius represented by gray and black filled circles, respectively (left-hand panel).
Figure 6 .
6The present mass function of Haffner 9. TheSalpeter (1955)' relationship for stars in the solar neighbourhood is superimposed with a solid line.
Figure 7 .
7Best-generated star cluster CMD with the uncertainties in T 1 and C − T 1 , the stellar masses in colour-scaled filled circles, and the theoretical isochrone for log(t yr −1 ) = 8.55 and [Fe/H] = 0.0 dex superimposed. et al. (2007)'s sample, Haffner 9 is placed towards the most evolved limit of the age/tr distribution (right-hand panels).
Table 1 .
1Observations log of the open cluster Haffner 9 and the control field Lindsay 106.Cluster a
R.A.
Dec.
l
b
date
filter exposure airmass seeing
(h m s)
( •
)
( • )
( • )
(sec)
( )
Lindsay 106, ESO 29-SC44
1 30 38
-76 03 16 299.82 -40.84 Dec. 19
C
2400
1.47
1.4
R
900
1.45
1.3
Haffner 9
7 24 42
-17 00 10 231.80
-0.59
Dec. 18
C
300
1.05
1.3
C
300
1.05
1.3
R
10
1.06
1.1
R
10
1.06
1.1
R
30
1.07
1.2
R
30
1.07
1.2
a Cluster identifications are from Lindsay
The images are made available to the public through http://www.noao.edu/sdm/archives.php, SMARTS Consortium, DDT, PI: Clariá. 2 IRAF is distributed by the National Optical Astronomy Observatories, which is operated by the Association of Universities for Research in Astronomy, Inc., under contract with the National Science Foundation
MNRAS 000, 1-9 (2016)
This paper has been typeset from a T E X/L A T E X file prepared by the author.MNRAS 000, 1-9 (2016)
ACKNOWLEDGEMENTSWe thank Giovanni Carraro for revising the manuscript and making useful suggestions. We thank the anonymous referee whose thorough comments and suggestions allowed us to improve the manuscript.
. E Bica, C Bonatto, 10.1051/0004-6361:20053194A&A. 443465Bica E., Bonatto C., 2005, A&A, 443, 465
. J Binney, M Merrifield, 10.1111/j.1365-2966.2007.11691.xGalactic Astronomy Bonatto C., Bica E. 3771301MNRASBinney J., Merrifield M., 1998, Galactic Astronomy Bonatto C., Bica E., 2007, MNRAS, 377, 1301
. A Bressan, P Marigo, L Girardi, B Salasnich, C Dal Cero, S Rubele, A Nanni, 10.1111/j.1365-2966.2012.21948.xMNRAS. 427127Bressan A., Marigo P., Girardi L., Salasnich B., Dal Cero C., Rubele S., Nanni A., 2012, MNRAS, 427, 127
. A S M Buckner, D Froebrich, 10.1093/mnras/stu1440MNRAS. 444290Buckner A. S. M., Froebrich D., 2014, MNRAS, 444, 290
. R Canterna, 10.1086/111878AJ. 81228Canterna R., 1976, AJ, 81, 228
. J A Cardelli, G C Clayton, J S Mathis, 10.1086/167900ApJ. 345245Cardelli J. A., Clayton G. C., Mathis J. S., 1989, ApJ, 345, 245
. G Carraro, Y Beletsky, G Marconi, 10.1093/mnras/sts038MNRAS. 428502Carraro G., Beletsky Y., Marconi G., 2013, MNRAS, 428, 502
. W S Dias, B S Alessi, A Moitinho, J R D Lépine, 10.1051/0004-6361:20020668A&A. 389871Dias W. S., Alessi B. S., Moitinho A., Lépine J. R. D., 2002, A&A, 389, 871
. W S Dias, H Monteiro, T C Caetano, J R D Lépine, M Assafin, A F Oliveira, 10.1051/0004-6361/201323226A&A. 56479Dias W. S., Monteiro H., Caetano T. C., Lépine J. R. D., Assafin M., Oliveira A. F., 2014, A&A, 564, A79
. R Drimmel, D N Spergel, 10.1086/321556ApJ. 556181Drimmel R., Spergel D. N., 2001, ApJ, 556, 181
. D Geisler, 10.1086/117799AJ. 111480Geisler D., 1996, AJ, 111, 480
. T Hasegawa, T Sakamoto, H L Malasan, 10.1093/pasj/60.6.1267PASJ. 601267Hasegawa T., Sakamoto T., Malasan H. L., 2008, PASJ, 60, 1267
. U Heiter, C Soubiran, M Netopil, E Paunzen, 10.1051/0004-6361/201322559A&A. 56193Heiter U., Soubiran C., Netopil M., Paunzen E., 2014, A&A, 561, A93
. L G Hou, J L Han, 10.1051/0004-6361/201424039A&A. 569125Hou L. G., Han J. L., 2014, A&A, 569, A125
. N V Kharchenko, A E Piskunov, E Schilbach, S Röser, R.-D Scholz, 10.1051/0004-6361/201322302A&A. 55853Kharchenko N. V., Piskunov A. E., Schilbach E., Röser S., Scholz R.-D., 2013, A&A, 558, A53
. I King, 10.1086/108756AJ. 67471King I., 1962, AJ, 67, 471
A Lauberts, 10.1093/mnras/118.2.172ESO/Uppsala survey of the ESO(B) atlas Lindsay E. M. 118172Lauberts A., 1982, ESO/Uppsala survey of the ESO(B) atlas Lindsay E. M., 1958, MNRAS, 118, 172
. F F S Maia, W J B Corradi, J F C SantosJr, 10.1111/j.1365-2966.2010.17034.x4071875MN-RASMaia F. F. S., Corradi W. J. B., Santos Jr. J. F. C., 2010, MN- RAS, 407, 1875
. A Moitinho, R A Vázquez, G Carraro, G Baume, E E Giorgi, W Lyra, 10.1111/j.1745-3933.2006.00163.xMNRAS. 36877Moitinho A., Vázquez R. A., Carraro G., Baume G., Giorgi E. E., Lyra W., 2006, MNRAS, 368, L77
. E Paunzen, U Heiter, M Netopil, C Soubiran, 10.1051/0004-6361/201014131A&A. 51732Paunzen E., Heiter U., Netopil M., Soubiran C., 2010, A&A, 517, A32
. D B Pavani, E Bica, 10.1051/0004-6361:20066240A&A. 468139Pavani D. B., Bica E., 2007, A&A, 468, 139
. G I Perren, R A Vázquez, A E Piatti, 10.1051/0004-6361/201424946A&A. 5766Perren G. I., Vázquez R. A., Piatti A. E., 2015, A&A, 576, A6
. A E Piatti, E Bica, 10.1111/j.1365-2966.2012.21694.xMNRAS. 4253085Piatti A. E., Bica E., 2012, MNRAS, 425, 3085
. A E Piatti, A Sarajedini, D Geisler, C Gallart, M Wischnjewsky, 10.1111/j.1365-2966.2007.12439.xMNRAS. 3821203Piatti A. E., Sarajedini A., Geisler D., Gallart C., Wischnjewsky M., 2007, MNRAS, 382, 1203
. A E Piskunov, E Schilbach, N V Kharchenko, S Röser, R.-D Scholz, 10.1051/0004-6361:20077073A&A. 468151Piskunov A. E., Schilbach E., Kharchenko N. V., Röser S., Scholz R.-D., 2007, A&A, 468, 151
. H C Plummer, 10.1093/mnras/71.5.460MNRAS. 71460Plummer H. C., 1911, MNRAS, 71, 460
. E E Salpeter, 10.1086/145971ApJ. 121161Salpeter E. E., 1955, ApJ, 121, 161
The Impact of Large Scale Near-IR Sky Surveys. M F Skrutskie, 10.1007/978-94-011-5784-1_4Astrophysics and Space Science Library. Garzon F., Epchtein N., Omont A., Burton B., Persi P.21025Skrutskie M. F., et al., 1997, in Garzon F., Epchtein N., Omont A., Burton B., Persi P., eds, Astrophysics and Space Science Library Vol. 210, The Impact of Large Scale Near-IR Sky Surveys. p. 25, doi:10.1007/978-94-011-5784-1˙4
. . L SpitzerJr, M H Hart, 10.1086/150855ApJ. 164399Spitzer Jr. L., Hart M. H., 1971, ApJ, 164, 399
P B Stetson, L E Davis, D R Crabtree, Astronomical Society of the Pacific Conference Series. Jacoby G. H.8CCDs in astronomyStetson P. B., Davis L. E., Crabtree D. R., 1990, in Jacoby G. H., ed., Astronomical Society of the Pacific Conference Series Vol. 8, CCDs in astronomy. pp 289-304
. M Trenti, E Vesperini, M Pasquato, 10.1088/0004-637X/708/2/1598ApJ. 7081598Trenti M., Vesperini E., Pasquato M., 2010, ApJ, 708, 1598
. N Zacharias, C T Finch, T M Girard, A Henden, J L Bartlett, D G Monet, M I Zacharias, 10.1088/0004-6256/145/2/44AJ. 14544Zacharias N., Finch C. T., Girard T. M., Henden A., Bartlett J. L., Monet D. G., Zacharias M. I., 2013, AJ, 145, 44
. T Von Hippel, 10.1086/428035ApJ. 622565von Hippel T., 2005, ApJ, 622, 565
The X-ray coronae in NuSTAR bright active galactic nuclei
March 24, 2022
Jia-Lai Kang
Department of Astronomy
CAS Key Laboratory for Research in Galaxies and Cosmology
University of Science and Technology of China
230026HefeiAnhuiChina
School of Astronomy and Space Science
University of Science and Technology of China
230026HefeiChina
Jun-Xian Wang
Department of Astronomy
CAS Key Laboratory for Research in Galaxies and Cosmology
University of Science and Technology of China
230026HefeiAnhuiChina
School of Astronomy and Space Science
University of Science and Technology of China
230026HefeiChina
Draft version, March 24, 2022 (doi: 10.3847/1538-4357/ac5d49; arXiv:2203.07118)
Keywords: Galaxies: active - Galaxies: nuclei - X-rays: galaxies
We present systematic and uniform analysis of NuSTAR data with 10-78 keV S/N > 50, of a sample of 60 SWIFT BAT selected AGNs, 10 of which are radio-loud. We measure their high energy cutoff E cut or coronal temperature T e using three different spectral models to fit their NuSTAR spectra, and show a threshold in NuSTAR spectral S/N is essential for such measurements. High energy spectral breaks are detected in the majority of the sample, and for the rest strong constraints to E cut or T e are obtained. Strikingly, we find extraordinarily large E cut lower limits (> 400 keV, up to > 800 keV) in 10 radio-quiet sources, whereas none in the radio-loud sample. Consequently and surprisingly, we find significantly larger mean E cut /T e of radio-quiet sources compared with radio-loud ones. The reliability of these measurements are carefully inspected and verified with simulations. We find a strong positive correlation between E cut and photon index Γ, which can not be attributed to the parameter degeneracy. The strong dependence of E cut on Γ, which could fully account for the discrepancy of E cut distribution between radio-loud and radio-quiet sources, indicates the X-ray coronae in AGNs with steeper hard X-ray spectra have on average higher temperature and thus smaller opacity. However, no prominent correlation is found between E cut and λ edd . In the l-Θ diagram, we find a considerable fraction of sources lie beyond the boundaries of forbidden regions due to runaway pair production, posing (stronger) challenges to various (flat) coronal geometries.
1. INTRODUCTION
The generally accepted disc-corona paradigm holds that the powerful hard X-ray emission universally found in active galactic nuclei (AGNs) is produced in the so-called corona (e.g., Haardt & Maraschi 1991, 1993). In this scenario the UV/optical photons from the accretion disk are upscattered to the X-ray band through the inverse Compton process by the hot electrons in the corona. However, the physical nature of the corona remains unclear. Particular matters of concern include, for instance, the location and geometry of the corona (Fabian et al. 2009; Alston et al. 2020), the underlying mechanism for the X-ray spectral variability in individual sources (e.g. Wu et al. 2020), potential interactions within the corona such as pair production (Fabian et al. 2015), and the relation between coronal and black hole properties (Ricci et al. 2018; Hinkle & Mushotzky 2021).
Corresponding author: Jia-Lai Kang & Jun-Xian Wang ([email protected], [email protected])
One of the most fundamental physical parameters of the corona is the temperature kT e . The typical X-ray spectrum produced by the inverse Compton scattering within the corona is a power-law continuum, with a high energy cutoff. Such a cutoff (E cut ) is a direct indicator of the coronal temperature, with E cut ∼ 2 kT e or 3 kT e for an optically thin or thick corona (Petrucci et al. 2001). The Nuclear Spectroscopic Telescope Array (NuSTAR; Harrison et al. 2013) is the first hard X-ray telescope with direct-imaging capability above 10 keV. With its broad spectral coverage of 3-78 keV, NuSTAR has enabled the measurements (or lower limits) of E cut /kT e in a number of AGNs (e.g., Ballantyne et al. 2014;Matt et al. 2015;Ursini et al. 2016;Kamraj et al. 2018;Tortosa et al. 2018;Molina et al. 2019;Rani et al. 2019;Panagiotou & Walter 2020;Porquet et al. 2021;Hinkle & Mushotzky 2021;Akylas & Georgantopoulos 2021;Kamraj et al. 2022). Meanwhile, variations of E cut /T e are also reported in a few individual sources (e.g., Keek & Ballantyne 2016;Zhang et al. 2018;Kang et al. 2021).
However, even with NuSTAR spectra, the measurements of E cut /kT e are highly challenging for most AGNs, primarily due to the limited spectral quality at the high energy end. In many sample studies, only poorly constrained lower limits could be obtained for the dominant fraction of sources in the samples (e.g. Ricci et al. 2018;Kamraj et al. 2018;Panagiotou & Walter 2020;Kamraj et al. 2022), hindering further reliable statistical studies, e.g., to probe the dependence of E cut /kT e on other physical parameters. Meanwhile, the E cut measurements are often sensitive to the choice of spectral models. From this perspective, it is essential to perform uniform spectral fitting to a statistical sample with various models adopted.
Recently, we uniformly analyzed the NuSTAR spectra of a sample of 28 radio-loud AGNs (Kang et al. 2020). We found that E cut could be ubiquitously (9 out of 11) detected in radio AGNs with NuSTAR net counts above 10^4.5, and the ubiquitous detections of E cut in FR II galaxies indicate that their X-ray emission is dominated by the thermal corona instead of the jet. For sources with lower NuSTAR counts, in contrast, only a minor fraction of E cut detections (4 out of 17) were achieved. This motivates the present work: to perform systematic analyses of the NuSTAR spectra of a sample of radio-quiet AGNs with sufficiently high signal-to-noise ratio (to avoid too many lower limits), and to statistically study the distribution of E cut /kT e , its dependence on other parameters, and its comparison with radio-loud AGNs.
The paper is organized as follows. In §2, we present the sample selection and data reduction. The spectral fitting process as well as the fitting results are shown in §3. Discussions are put in §4.
2. THE SAMPLE AND DATA REDUCTION
We match the 817 Seyfert galaxies in the 105-month BAT catalogue (Oh et al. 2018) with the archival NuSTAR observations (as of October 2020). We drop observations with exposure time < 3 ks, or with total net counts (FPMA + FPMB) < 3000, for which no valid E cut measurement can be obtained. We exclude a few exposures contaminated by solar activity or other unknown issues (through visually checking the images). Furthermore, we exclude Compton-thick or heavily obscured sources (with n H > 10^23 cm^-2 when fitted with a simple neutral absorber model). Based on the spectral fitting introduced in §3, several observations with extremely hard spectra (photon index Γ < 1.3) or poor fitting statistics (χ2ν > 1.2), for which more complicated spectral models would be required, are also dropped. After these steps, 198 sources are kept, including 20 radio-loud sources and 178 radio-quiet sources. Kang et al. (2020) presented a radio-loud sample of 28 sources with NuSTAR exposures, 20 of which are included in the sample described above, while the remaining 8 sources are classified as "beamed AGN" in the BAT catalog (Oh et al. 2018). Among them, 3C 279 was later found to be a jet-dominated blazar (e.g., Blinov et al. 2021) and is excluded from this work. In addition, we drop NGC 1275 (3C 84) due to the strong contamination of its spectra by the diffuse thermal emission of the Perseus cluster (Rani et al. 2018).
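The selection above amounts to a sequence of simple cuts on the cross-matched observation list. The following Python sketch illustrates the logic with pandas; the file name and column names (exposure_ks, net_counts, nH, gamma, chi2_nu, flagged_bad) are placeholders for whatever bookkeeping table one builds from the BAT/NuSTAR cross-match, not quantities defined in the paper.

```python
import pandas as pd

# one row per NuSTAR observation of a BAT Seyfert; column names are illustrative
obs = pd.read_csv("bat_nustar_crossmatch.csv")

selected = obs[
    (obs["exposure_ks"] >= 3)        # drop exposures shorter than 3 ks
    & (obs["net_counts"] >= 3000)    # FPMA + FPMB total net counts
    & (obs["nH"] <= 1e23)            # exclude heavily obscured sources (cm^-2)
    & (obs["gamma"] >= 1.3)          # drop extremely hard spectra
    & (obs["chi2_nu"] <= 1.2)        # drop poor fits needing more complex models
    & ~obs["flagged_bad"]            # exposures contaminated by solar activity etc.
]
print(len(selected), "observations survive the cuts")
```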
For sources with multiple NuSTAR observations, the one with the most 3-78 keV net counts is adopted. Raw data are reduced using the NuSTAR Data Analysis Software within the latest version of the HEASoft package (version 6.28), with calibration files CALDB version 20201101. These new versions of HEASoft and CALDB are applied to correct the recently noticed low-energy effective area issue of FPMA (Madsen et al. 2020), which may partly account for differences between our fitting results and previous literature. The standard pipeline nupipeline is used to generate the calibrated and cleaned event files. Following Kang et al. (2020, 2021), each source spectrum is extracted in a circular region with a radius of 60″ centered on the source using nuproducts, while the background spectrum is derived using NUSKYBGD (Wik et al. 2014), which handles the spatially non-uniform background. As the last step, spectra are rebinned using grppha to achieve a minimum of 50 counts bin^-1.
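A minimal sketch of this reduction chain, driven from Python, is given below. The observation ID, region file and output paths are placeholders, and only the standard nupipeline/nuproducts parameters (indir, steminputs, outdir, instrument, srcregionfile) are used; the NUSKYBGD background step described in the text is not reproduced here.

```python
import subprocess

obsid, stem = "60401009002", "nu60401009002"   # illustrative: the NGC 4051 exposure

# 1) calibrate and clean the event files
subprocess.run(["nupipeline", f"indir=./{obsid}", f"steminputs={stem}",
                "outdir=./pipeline_out"], check=True)

# 2) extract source products for each module (60" circular source region;
#    in the paper the background comes from NUSKYBGD rather than a local region)
for module in ("FPMA", "FPMB"):
    subprocess.run(["nuproducts", "indir=./pipeline_out", f"instrument={module}",
                    f"steminputs={stem}", "outdir=./products",
                    "srcregionfile=src_60arcsec.reg"], check=True)

# 3) rebin to >= 50 counts per bin for chi^2 fitting
#    (input file name follows the default nuproducts convention)
subprocess.run(["grppha", f"infile=./products/{stem}A01_sr.pha",
                "outfile=./products/srcA_grp.pha",
                "comm=group min 50 & exit"], check=True)
```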
We note that the E cut measurement is profoundly affected by the quality of the spectra, particularly in the high-energy band. In Fig. 1 we plot the best-fit E cut (or lower limits, derived through fitting the NuSTAR spectra with pexrav, see §3) for the 178 radio-quiet and 26 radio-loud AGNs, versus the 10-78 keV S/N of the NuSTAR FPMA net counts. Clearly the measurements of E cut for sources with low 10-78 keV S/N are dominated by poorly constrained lower limits for both radio-loud and radio-quiet sources. The lower limits systematically and significantly increase with 10-78 keV S/N at S/N < 50, and the increase saturates at S/N > 50. This indicates that a threshold in S/N is essential to derive effective constraints on E cut. Thus in this work we focus only on sources with 10-78 keV NuSTAR spectral S/N > 50, including 50 radio-quiet and 10 radio-loud sources (7 FR II, 2 FR I, and 1 core-dominated; see Tab. 1).
Figure 1. E cut or lower limits from model pexrav vs. 10-78 keV NuSTAR (FPMA) spectral S/N. A cut at 50 is adopted, and sources below it are dropped. Mean values for the radio-quiet and radio-loud samples are calculated using the Kaplan-Meier estimator within ASURV in logarithm space (hereafter the same), and the shaded regions plot the 1σ scatter of the mean derived through bootstrapping the corresponding sample (hereafter the same).
We notice that some NuSTAR observations have joint exposures from other missions such as XMM-Newton or Swift. Those data are not included in this work, mainly because different photon indices have been found between the spectra of NuSTAR and other missions (e.g., Cappi et al. 2016; Middei et al. 2019; Ponti et al. 2018), which may lead to significantly biased E cut /kT e measurements. Such a discrepancy is likely caused by imperfect inter-instrument calibration, while the fact that joint exposures are not completely simultaneous (different start/end times, different livetime distributions) can also play a part because of rapid spectral variations. A significant loss of valuable NuSTAR exposure time would be unavoidable if we required perfect simultaneity between NuSTAR and exposures from other missions. Considering that the E cut /kT e measurement is sensitive to the photon index, and to avoid the potential bias due to the fact that only a fraction of the exposures have quasi-simultaneous observations from various other missions, here we perform uniform spectral fitting to the NuSTAR spectra alone for the whole sample.
Note (Table 1): Sources are ordered by BAT ID. The 10-78 keV signal-to-noise ratios are calculated using FPMA spectra. The black hole masses are from Koss et al. (2017), unless marked with a number referring to the following literature: (1) Malizia et al. (2008); (2) McLure et al. (2006); (3) Lewis & Eracleous (2006); (4) Cappellari et al. (2009). Lbol 14−195keV is the bolometric luminosity estimated from the BAT 14-195 keV flux (Koss et al. 2017) and used for the λ edd calculation. L 0.1−200keV is the unabsorbed 0.1-200 keV luminosity, extrapolated using the best-fit results of pexrav to the NuSTAR spectra and adopting the redshifts from the 105-month BAT catalogue and H0 = 70 km s^-1 Mpc^-1. The compactness parameter l is derived from L 0.1−200keV.
3. SPECTRAL FITTING
Spectral fitting is carried out within the 3-78 keV band using XSPEC (Arnaud 1996). χ2 statistics are adopted, and all the errors together with the upper/lower limits in this paper correspond to the 90% confidence level (∆χ2 = 2.71), unless otherwise stated. The relative element abundances are set to the XSPEC default, given by Anders & Grevesse (1989). For each observation, the spectra of FPMA and FPMB are jointly fitted with a free cross-normalization constant between the two modules (Madsen et al. 2015a).
In this paper we intend to perform uniform measurements of the E cut / T e for the radio-quiet and radio-loud samples and bring them into comparison. In order to guarantee such comparison is model-independent, various models are employed, including pexrav, relxill, and relxillcp.
pexrav (Magdziarz & Zdziarski 1995) is the model we used to fit the radio-loud sample in Kang et al. (2020), which fits the spectra with an exponentially cutoff power law plus a neutral reflection component, and is the most widely used model for E cut measurement (e.g., Molina et al. 2019; Rani et al. 2019; Panagiotou & Walter 2020; Baloković et al. 2020; Kang et al. 2021). For simplicity, the solar element abundance for the reflector and an inclination of cosi = 0.45 are adopted, which are the default values of the model. We allow the photon index Γ, E cut and the reflection scaling factor R to vary freely.
relxill (García et al. 2014) also models the underlying continuum with a cutoff power law, but convolves the reflection component with the relativistic broadening of the disc. However, some parameters are hard to constrain even with these high-quality NuSTAR spectra and hence have to be frozen. The inner and outer radii of the accretion disk, Rin and Rout, are fixed at 1 ISCO and 400 gravitational radii respectively, the defaults of the model. Besides, we fix the black hole spin a = 0.998 (the fitting results are insensitive to this choice) and the inclination angle i = 30°. The accretion disk is presumed to be neutral and to have the solar iron abundance, with the corresponding parameters logxi and Afe fixed at 0 and 1, respectively. We assume a disk with constant emissivity, setting the emissivity parameter Index2 tied to Index1. The free parameters include Index1, Γ, E cut and the reflection fraction (with a different definition from the R in pexrav).
A Comptonization model, relxillcp, is also adopted to directly measure the coronal temperature T e . relxillcp is a Comptonization version of relxill, replacing the cutoff power law with a nthcomp continuum. Other parameters are set in the same way as relxill.
Meanwhile, a common component zphabs is added to all three models to represent the intrinsic photoelectric absorption, with the Galactic absorption ignored due to its inappreciable influence on NuSTAR spectra. As for the Fe Kα lines, in relxill and relxillcp the continuum reflection component and the Fe Kα line are jointly fitted, while a zgauss is added to pexrav to describe the Fe Kα line. Since a relativistically broadened Fe Kα line can not be well constrained in the majority of observations, we deal with the Gaussian component as follows.
We first fix the line at 6.4 keV in the rest frame and the line width at 19 eV (the mean Fe Kα line width in AGNs measured with Chandra HETG; Shu et al. 2010) to model a neutral narrow Fe Kα line. We then allow the line width to vary freely. If a variable line width significantly improves the fit (∆χ2 > 5), the corresponding fitting results are adopted.
We summarize below the three models adopted, in XSPEC notation, and the corresponding free parameters; a PyXspec sketch of this setup follows the list.
• zphabs * (pexrav + zgauss)
Free parameters include absorption column density n H , photon index Γ, high energy cutoff E cut and the strength of the reflection component R.
• zphabs * relxill
Free parameters include n H , Γ, E cut , the emissivity parameter Index1 and the reflection fraction.
• zphabs * relxillcp
Same as relxill, except that E cut is replaced with T e .
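As an illustration of this setup, the sketch below shows how the baseline pexrav fit could be reproduced with PyXspec; the file names, frozen values and error-command target are placeholders, and the relxill/relxillcp fits would follow the same pattern with their respective components. This is a minimal sketch of the procedure described above, assuming rebinned FPMA/FPMB spectra, not the authors' actual fitting script.

```python
import xspec

xspec.Xset.abund = "angr"                  # Anders & Grevesse (1989) abundances
xspec.Fit.statMethod = "chi"
xspec.Fit.query = "yes"

# FPMA and FPMB spectra loaded into two data groups (file names are placeholders)
xspec.AllData("1:1 srcA_grp.pha 2:2 srcB_grp.pha")
xspec.AllData.ignore("**-3.0 78.0-**")     # fit only the 3-78 keV band

# constant = FPMA/FPMB cross-normalization; zphabs = intrinsic absorption;
# pexrav = cutoff power law + neutral reflection; zgauss = narrow Fe K-alpha line
# (source redshifts for zphabs/zgauss would also be set; omitted here)
m = xspec.Model("constant*zphabs*(pexrav + zgauss)")
m.pexrav.cosIncl = 0.45                    # default inclination, kept fixed
m.pexrav.cosIncl.frozen = True
m.zgauss.LineE = 6.4                       # rest-frame line energy (keV), fixed
m.zgauss.LineE.frozen = True
m.zgauss.Sigma = 0.019                     # 19 eV width, frozen in the first pass
m.zgauss.Sigma.frozen = True

# untie the cross-normalization of the second data group (FPMB)
m2 = xspec.AllModels(2)
m2.constant.factor.link = ""

xspec.Fit.perform()
xspec.Fit.error("2.706 " + str(m.pexrav.foldE.index))   # 90% interval on E_cut
print(m.pexrav.PhoIndex.values[0], m.pexrav.foldE.values[0], m.pexrav.rel_refl.values[0])
```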
The best-fitting results of the key parameters are shown in Tab. 2. In a few sources the spectral fitting yields very high lower limits of E cut , up to 2360 keV (see §4 for further discussion on reliability of such high lower limits of E cut ). For the two sources with E cut lower limits above 800 keV (pexrav results; NGC 4051, > 2360 keV; NGC 4593, > 1420 keV), we manually and conservatively set their E cut lower limits at 800 keV. Simply adopting their best-fit lower limits would further strengthen the results of this work.
4. DISCUSSION
The best-fit E cut from pexrav is presented in Fig. 1. We plot the E cut /T e from the other two models versus 10-78 keV S/N in Fig. 2. Similar to Kang et al. (2020), we find that the E cut of this radio-loud sample can be well constrained as long as the spectra have enough S/N. With pexrav we obtain E cut measurements for 9 out of 10 radio-loud sources with 10-78 keV S/N > 50. The only radio-loud source without an E cut detection is 3C 382, for which an E cut detection was reported in another NuSTAR exposure with slightly fewer net counts than the one adopted in this work.
However, as shown in Fig. 1 and Fig. 2, the case is markedly different in the radio-quiet sample, where only lower limits to E cut could be obtained for 28 out of 50 sources (pexrav results). In Figs. 1 & 2 we also plot the mean E cut /T e of the radio-quiet and radio-loud samples. We adopt survival statistics within the package ASURV (Feigelson & Nelson 1985) to take the lower limits into account, employing the Kaplan-Meier estimator to estimate the mean E cut /T e for the two samples. As the Kaplan-Meier estimator is exceedingly sensitive to the value of the maxima, the calculation is performed in logarithm space to weaken the imbalance of statistical weights (the derived mean is then analogous to a traditional geometric mean). Since the dispersion given by the Kaplan-Meier estimator could be underestimated, we conservatively bootstrap the corresponding samples to obtain the dispersion of the mean. As shown in Tab. 3, the mean E cut /T e of the radio-quiet sample is remarkably larger than that of the radio-loud one, at a level above 3σ for all three models.
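For readers unfamiliar with the Kaplan-Meier treatment of censored data, the sketch below shows one way the log-space mean and its bootstrapped uncertainty could be computed. Detections are treated as events and lower limits as right-censored points; following the usual ASURV convention, the largest value is treated as a detection so that the estimated distribution is proper. The numbers are a small illustrative subset of the pexrav results, not the full sample, and this is a sketch of the method rather than the ASURV implementation itself.

```python
import numpy as np

def km_mean(values, detected):
    """Kaplan-Meier mean of a right-censored sample (lower limits = censored)."""
    x = np.asarray(values, dtype=float)
    d = np.asarray(detected, dtype=bool).copy()
    d[np.argmax(x)] = True                  # ASURV-like convention for the largest point
    order = np.lexsort((~d, x))             # sort by value; detections first at ties
    x, d = x[order], d[order]
    mean, surv = 0.0, 1.0
    for i in range(len(x)):
        if d[i]:
            n_at_risk = len(x) - i          # points with value >= x[i]
            drop = surv / n_at_risk         # probability mass released at this event
            mean += x[i] * drop
            surv -= drop
    return mean

def bootstrap_km_mean(values, detected, nboot=2000, seed=0):
    rng = np.random.default_rng(seed)
    idx = np.arange(len(values))
    means = np.array([km_mean(np.take(values, s), np.take(detected, s))
                      for s in rng.choice(idx, size=(nboot, len(idx)))])
    return means.mean(), means.std()

# illustrative subset: pexrav detections and (censored) lower limits, in keV
ecut     = np.array([113., 240., 54., 133., 160., 424., 434., 467., 476., 529.])
detected = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0], dtype=bool)
mean_log, sig_log = bootstrap_km_mean(np.log10(ecut), detected)
print(10**km_mean(np.log10(ecut), detected), 10**mean_log, sig_log)
```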
We note that 10 out of the 50 radio-quiet sources have considerably high E cut lower limits (> 400 keV in the pexrav model), while all E cut measurements or lower limits from the radio-loud sample are below 400 keV. Excluding these 10 large E cut lower limits would yield a lower average E cut for the radio-quiet sample (mean pexrav E cut = 248 +26 −24 keV), and the difference between the radio-quiet and radio-loud samples would no longer be statistically significant. We therefore carefully inspect these 10 individual sources in §4.1 (ordered from low to high by the E cut lower limit), comparing with results reported in the literature.
4.1. Notes on sources with extraordinarily high E cut lower limits

1. NGC 5506, Sy 1.9, E cut > 424 keV (pexrav, this work; hereafter the same for the remaining 9 sources). Consistently, Matt et al. (2015) reported E cut = 720 +130 −190 keV and a 3σ lower limit of 350 keV; Sun et al. (2018) reported E cut = 500 +100 −240 keV; Panagiotou & Walter (2020) reported a 1σ lower limit of 8400 keV. The only exception came from Baloković et al. (2020), which reported E cut = 110 ± 10 keV with contemporaneous Swift/BAT data. Baloković et al. (2020) noted in its appendix that the BAT spectrum shows a much smaller cutoff than the NuSTAR one; E cut variation and background subtraction may have played a role.

2. NGC 1566, Sy 1.5, E cut > 434 keV. Akylas & Georgantopoulos (2021) reported E cut = 336 +646 −140 keV, while Parker et al. (2019) reported an incompatible result of E cut = 167 ± 3 keV. A possible reason is that Parker et al. (2019) used quasi-simultaneous XMM-Newton data, which yield a photon index Γ pn ∼ 1.43, quite different from our result (Γ ∼ 1.84). Besides, their reflection fraction R relxill ∼ 0.09 is smaller than that in this work (R relxill ∼ 0.25). The data are actually barely simultaneous, considering a start time offset of ∼ 10 ks and the fact that the NuSTAR exposure is 57 ks while the PN exposure is 100 ks. Meanwhile, the inter-instrument calibration issue and the pile-up effect in the PN data may also have played a part here.

3. NGC 5273, Sy 1.5, E cut > 467 keV. Panagiotou & Walter (2020) reported a 1σ lower limit of 1967 keV; both Panagiotou & Walter (2020) and this work obtain Γ ∼ 1.9. Pahari et al. (2017) reported E cut = 143 +96 −40 keV and Γ ∼ 1.8; note that Pahari et al. (2017) employed quasi-simultaneous Swift-XRT data (6.5 ks XRT exposure, versus 21 ks of NuSTAR) and adopted a quite complex model, which may explain the discrepancy. Akylas & Georgantopoulos (2021) reported E cut = 115 +95 −37 keV, Γ ∼ 1.6 and R pexmon ∼ 0.74. The reason behind the discrepancy between Akylas & Georgantopoulos (2021) and our result (both fitting only NuSTAR spectra) remains unclear, and we cannot reproduce their result following the same process with the same model (the same applies to NGC 3516 and Ark 120 below).

4. NGC 3516, Sy 1.2, E cut > 476 keV. Panagiotou & Walter (2020) reported a 1σ lower limit of 4940 keV. Akylas & Georgantopoulos (2021) reported an inconsistent E cut = 89 +24 −48 keV and R pexmon ∼ 1.29.

5. WKK 1263 (IGR J12415-5750), Sy 1.5, E cut > 530 keV. Kamraj et al. (2018), Panagiotou & Walter (2020) and Akylas & Georgantopoulos (2021) reported lower limits of 224 keV, 1826 keV and 282 keV respectively, and all three works derive Γ ∼ 1.8, similar to our results. Molina et al. (2019) reported E cut = 123 +54 −47 keV, Γ ∼ 1.6 and a similar R pexrav < 0.23. The involvement of the quasi-simultaneous Swift-XRT data (5.7 ks XRT exposure, versus 16 ks of NuSTAR) in Molina et al. (2019) may account for such a discrepancy.

6. Mrk 279, Sy 1.5, E cut > 542 keV. Not reported elsewhere.

7. MCG-06-30-015, Sy 1.9, E cut > 707 keV. Panagiotou & Walter (2020) reported a 1σ lower limit of 12000 keV.

8. Ark 120, Sy 1, E cut > 744 keV. Consistently, Panagiotou & Walter (2020) reported a 1σ lower limit of 1631 keV; Hinkle & Mushotzky (2021) reported E cut = 506 +814 −200 keV; Nandi et al. (2021) reported T e = 222 +105 −107 keV; Marinucci et al. (2019) reported T e = 155 +350 −55 keV. The only statistically inconsistent result comes from Akylas & Georgantopoulos (2021), which reported E cut = 233 +147 −67 keV.

9. NGC 4593, Sy 1, E cut > 800 keV. Zhang et al. (2018), Panagiotou & Walter (2020) and Akylas & Georgantopoulos (2021) reported E cut lower limits of 450 keV, 6972 keV and 220 keV respectively. Ursini et al. (2016) reported E cut = 470 +430 −150 keV.

10. NGC 4051, Sy 1.5, E cut > 800 keV. Akylas & Georgantopoulos (2021) reported an E cut lower limit of 846 keV.
In general, our large lower limits to E cut are consistent with most of those from the literature. Discrepancies do exist in some sources, mostly due to the inclusion of data from other missions in some literature studies. In this work, the E cut /T e of the radio-loud and radio-quiet samples are measured with solely NuSTAR spectra, uniformly processed and analyzed. We therefore anticipate that the comparison between the two samples in this work is unbiased, though the specific measurement of E cut in individual sources could be altered by including quasi-simultaneous observations or using a more complex model. The spectra and the best-fit data-to-model residuals of these ten sources (as shown in the Appendix) have been visually examined and no clear systematic residuals could be identified.
4.2. The reliability of large E cut
The large E cut lower limits reported in this work, and the generally consistent results from literature studies, appear to contradict our intuition, as such large E cut lower limits are far beyond the NuSTAR spectral coverage (3-78 keV). For instance, the correction factor of an 800 keV exponential cutoff relative to a single power law is only ≈ e −0.1 (a suppression of around 10%) at 78 keV, making the measurement of large E cut possible only in a few of the brightest sources with sufficiently high NuSTAR spectral S/N at the high-energy end. However, as García et al. (2015) pointed out, the reflection component, which is sensitive to the spectral shape of the hardest coronal radiation, may assist the measurement of high E cut with NuSTAR spectra. Based on the relxill model, they showed that E cut can be constrained at values as high as 1 MeV for bright sources. Below we also demonstrate the effect of the reflection component in the model pexrav with spectral simulations. Using the NuSTAR spectra of NGC 4051 as input, with E cut set at 10^6 keV and other parameters at the best-fit values, we generate artificial spectra assuming different R in pexrav using fakeit. Fitting the artificial spectra following the same process we applied to the real spectra, we successfully constrain the E cut lower limit to be above 800 keV in 0.2%, 23% and 40% of the mock spectra, for R = 0, 1 and 2, respectively. This clearly shows that a large E cut can be better constrained in spectra with a stronger reflection component.
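The fakeit exercise described above could be scripted along the following lines with PyXspec. The response, background and exposure values are placeholders, the loop over R mirrors the R = 0, 1, 2 cases discussed in the text, and only a single realization per R is generated; this is a hedged sketch of the procedure, not the authors' code.

```python
import xspec

results = {}
for R in (0.0, 1.0, 2.0):
    xspec.AllData.clear()
    xspec.AllModels.clear()

    # input model: NGC 4051-like parameters with an effectively infinite cutoff
    m = xspec.Model("zphabs*(pexrav + zgauss)")
    m.pexrav.foldE = 1.0e6          # E_cut = 10^6 keV, i.e. no measurable break
    m.pexrav.rel_refl = R           # reflection strength under test

    # simulate an FPMA-like spectrum (response/background/exposure are placeholders)
    fs = xspec.FakeitSettings(response="nuA.rmf", arf="nuA.arf",
                              background="bkgA.pha", exposure=300000.0,
                              fileName="fake_R%.0f.pha" % R)
    xspec.AllData.fakeit(1, [fs])
    xspec.AllData.ignore("**-3.0 78.0-**")

    # refit and record the 90% lower bound on E_cut
    xspec.Fit.statMethod = "chi"
    xspec.Fit.perform()
    xspec.Fit.error("2.706 " + str(m.pexrav.foldE.index))
    results[R] = m.pexrav.foldE.error[0]   # lower bound of the confidence interval

print(results)
```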
We also check other factors which may affect the reliability of the high E cut lower limits. The NuSTAR images have been visually double-checked and confirmed to be normal. Moreover, using the traditional method of background subtraction, i.e., extracting the background within a region close to the source, instead of employing NUSKYBGD would not alter the main results here.
In Fig. 3 we plot E cut versus the 50-78 keV S/N and versus the 50-78 keV background fraction for our sample. Although the NuSTAR spectra of a considerable fraction (32%) of our sources appear background dominated at > 50 keV (i.e., with a 50-78 keV background fraction > 50%), in all but one source the net 50-78 keV (FPMA) S/N is > 3. This indicates that our spectral fitting results are unlikely to be biased by poor spectral quality or a high background level at high energies. From Fig. 3 we also see that the E cut lower limits increase with 50-78 keV S/N and decrease with the 50-78 keV background fraction. In other words, these high E cut lower limits (> 400 keV) are only obtained at relatively higher 50-78 keV S/N and lower 50-78 keV background fraction. This confirms that these high E cut lower limits are not due to a strong background or poor spectral quality at the highest energies.
Besides, we note that complex parameter degeneracies may exist between E cut and other parameters (e.g., Hinkle & Mushotzky 2021). We hence review the fitting results of individual sources using two-parameter contours, among which the six sources with E cut lower limits > 400 keV but controversial E cut detections reported in the literature are presented in Fig. 4. For these six sources, the degeneracies between E cut , Γ and R are found to be weak, with a 2σ E cut lower limit of ∼ 300 keV obtained even using the two-parameter confidence contours. In addition, we also demonstrate how the low E cut detections reported in the literature (see §4.1) deteriorate the spectral fitting in the lower panel for each source of Fig. 4. We conclude that our results are robust in the sense of fitting statistics. Finally, the three models we adopted in this work ensure the main results are model independent. As shown in Tab. 2, the E cut measurements of pexrav generally agree with those of relxill, particularly for those sources with large E cut lower limits. As for the Comptonization model, T e is often harder to constrain (more lower limits, fewer detections) than E cut , and the lower limits to T e are smaller than 1/3 E cut (Petrucci et al. 2001) in some sources. This is likely because the e-folded power law produces a smoother break (thus extending to a lower energy range, where it can be better constrained in case of large T e /E cut ) than Comptonization models (Zdziarski et al. 2003; Fabian et al. 2015). However, the overall results from the three models are concordant, i.e., sources with extremely large E cut lower limits do have relatively high T e , especially compared with radio-loud sources (see next section). We hence rule out the possibility that the large E cut /T e lower limits we obtained are due to unknown faults of certain models.
To highlight the discrepancies between our results and those in the literature for these six sources, in Fig. 4 we also mark the reported statistically inconsistent E cut detections, together with the power-law index Γ and the pexrav/pexmon reflection parameter R (when available) from the literature for comparison. We clearly see that, even considering two-parameter confidence contours, our fitting results statistically challenge those low E cut detections reported in the literature. We note that those low E cut detections are often (in 5 sources) accompanied by spectral indices flatter than our measurements, while the comparison between our R and the literature measurements does not reveal a clear trend.
4.3. The difference between the radio-quiet and radio-loud samples
We have shown that our radio-quiet sample has a considerably larger mean T e /E cut compared with the radio-loud one. To assess the statistical reliability of the difference, we need to explore various biases behind the E cut measurements which might be significant here. The first is the complex degeneracies between the spectral parameters; although we have shown above that the degeneracies appear weak in individual sources, we need to quantitatively explore whether such effects could be responsible for the different E cut between the two samples. The second is that the radio-loud sample is known to have prominently flatter spectra and a weaker reflection component than the radio-quiet one (see Fig. 6); while flatter spectra imply relatively more photons at the high-energy end, facilitating the E cut measurement, the weaker reflection could contrarily make it hard to constrain a high E cut . The measurements of E cut also rely on the spectral S/N, as shown in Fig. 1, the effect of which could vary from source to source. Last, but perhaps most important, the Kaplan-Meier estimator itself can be sensitive to the size of the sample, the fraction of censored data (lower limits), and extremely large lower limits. As shown in Fig. 1 and especially in the right panel of Fig. 2, the mean values, even calculated in logarithm space, are severely biased towards those large lower limits (using the median instead of the mean hardly improves the situation, as the median is also derived from the estimated probability distribution function when lower limits make up the majority).
To address the overall complicated biases, we employ fakeit within XSPEC to create simulated spectra for each source, using the best-fit results from pexrav but manually assigning a set of E cut values as input (this whole process is computationally expensive, so for simplicity we perform it only with the pexrav model). We repeat the spectral fitting on the mock spectra and then the measurement of the mean E cut for the mock samples with the Kaplan-Meier estimator, to examine whether our overall procedures could recover the input E cut or produce an artificial difference in the mean E cut between the two samples. As shown in Fig. 5, while the simulations can recover the input E cut for low E cut values, high input E cut values (400 keV and above) are clearly underestimated, because of the limited bandwidth of NuSTAR and because we have manually capped the largest E cut lower limits at 800 keV. Since a considerable fraction of radio-quiet sources have rather large intrinsic E cut while none of the radio-loud sources does, this indicates we may have underestimated the mean E cut for our real radio-quiet sample, further strengthening the difference between the two samples we have observed. However, no statistical difference is found between the mock radio-loud and radio-quiet samples. We therefore conclude that the aforementioned biases, taken together, are unable to account for the difference in the E cut distribution between the two samples.
Figure 5. Output mean E cut (in units of keV) from the mock samples. Note the output mean E cut saturates at ∼ 800 keV, partially because we manually set larger E cut lower limits yielded from spectral fitting to 800 keV.
The larger average E cut in radio-quiet sources is however surprising, as we would anticipate a larger observed E cut in the radio-loud sample due to potential jet contamination (Madsen et al. 2015b) or the stronger Doppler boosting of an outflowing corona in radio AGNs (e.g. Beloborodov 1999; Liu et al. 2014; Kang et al. 2020), even if the two populations have the same intrinsic coronal temperature. The key underlying reason might be the different Γ distribution of the two samples. In Fig. 6 we plot E cut versus Γ and the reflection strength R from pexrav for the two samples. We find that E cut is positively correlated with Γ, and those large E cut lower limits are mainly detected in sources with steep spectra. Besides, we find no difference in E cut between the two populations at comparable Γ. Therefore, the difference in E cut between the two populations can dominantly be attributed to the fact that E cut correlates with the photon index Γ while the radio-loud sample is dominated by sources with flat spectra. Meanwhile, E cut exhibits no clear correlation with R, while radio-quiet AGNs do show larger E cut compared with radio-loud ones at a given R, which could be attributed to the effect of Γ. We note that Kang et al. (2020) found the E cut distribution of their radio-loud sample to be indistinguishable from that of a radio-quiet sample from Rani et al. (2019). This is likely because the sample of Rani et al. (2019) is incomplete: it only collected literature sources with well-constrained E cut , and most lower limits were excluded. In fact, if we dropped the lower limits from our samples in this work, we would find no difference either in the mean E cut between the two populations. Meanwhile, Gilli et al. (2007) has shown that an average E cut above 300 keV would saturate the X-ray background at 100 keV. The fact that large E cut mainly exist in steeper spectra also renders our large mean value of E cut in radio-quiet AGNs compatible with Gilli et al. (2007), as sources with steep X-ray spectra make little contribution to the high-energy X-ray background even with a large E cut .
Figure 6. Upper: E cut versus Γ. We over-plot the mean E cut within several bins of Γ for all sources, as we find no difference in the mean E cut between radio-quiet and radio-loud sources at given Γ. Meanwhile, the output mean E cut versus output Γ derived from the mock spectra of the sample (with input E cut = 400 keV) is over-plotted as green crosses. Lower: E cut versus R.
4.4. The underlying mechanisms
A tentative positive correlation between E cut and Γ has been reported in other studies with NuSTAR (e.g. Kamraj et al. 2018; Molina et al. 2019; Hinkle & Mushotzky 2021), and previously with BeppoSAX data (e.g. Petrucci et al. 2001), however not as pronounced as we have found, likely because of smaller sample sizes or the domination by poorly constrained lower limits. For instance, the sample in Kamraj et al. (2018) consists of 46 sources, whereas E cut can be well constrained in only two of them. The samples in Molina et al. (2019) and Hinkle & Mushotzky (2021) consist of 18 and 33 sources respectively, considerably smaller than the one presented in this work; meanwhile, the inclusion of XRT and XMM-Newton data in those two works may have disturbed the measurements of E cut and Γ, as already discussed above.
Figure 8. The compactness-temperature (l-Θ) diagrams, with Θ derived from the three spectral models.
The tentative E cut -Γ correlation reported in the literature has often been attributed to the parameter degeneracy between E cut and Γ. In this work, the correlation between E cut and Γ is rather strong, and Γ is well constrained thanks to the high-quality NuSTAR spectra. We thus expect the effect of such a degeneracy to be insignificant. We perform simulations to quantify this effect in our sample. Utilizing E cut = 400 keV as input and the other best-fit spectral parameters from pexrav, we simulate mock spectra for each source. We then examine the correlation between the output E cut and output Γ for the mock sample. As shown in Fig. 6, while the parameter degeneracy does yield a weak artificial correlation between the output E cut and Γ, it is much weaker than, and negligible compared with, the observed one.
The positive correlation between E cut and Γ found in this work indicates that sources with steeper X-ray spectra tend to host hotter coronae. Subsequently, to produce the steeper spectra, the hotter coronae need to have lower opacity. The negative link between coronal temperature and opacity could partly be attributed to the fact that cooling is more efficient in coronae with higher opacity, i.e., sustainably hotter coronae are only possible with lower opacity. However, while lower opacity could lead to steeper spectra, higher temperature alters the spectral slope in the opposite direction. While it is yet unclear what drives the positive E cut -Γ correlation reported in this work, it is intriguing to compare it with how E cut varies with Γ in individual AGNs. E cut variability detected in several individual AGNs shows a common trend: when an individual source brightens in X-ray flux, its power-law spectrum gets softer and E cut increases, also revealing a positive E cut -Γ correlation (hotter-when-softer/brighter, e.g. Zhang et al. 2018; Kang et al. 2021). However, the similarity between the two types of positive E cut -Γ correlation (intrinsic: in individual AGNs, versus global: in a large sample of AGNs) does not necessarily imply common underlying mechanisms. This is because, while the intrinsic E cut -Γ correlation, which could be accompanied by dynamical/geometrical changes of the coronae such as inflation/contraction (Wu et al. 2020), reflects variations in the innermost region of individual AGNs, the global E cut -Γ correlation we find in a sample of AGNs should mainly reflect the differences in their physical properties, including SMBH mass, accretion rate and other unknown parameters. Kang et al. (2021) also found a tentative trend that E cut reversely decreases with Γ at Γ > 2.05 in one individual source, yielding a Λ shape in the E cut -Γ diagram. Such a trend, however, is not seen in the global E cut -Γ relation.
The positive global E cut -Γ correlation also implies a potential positive correlation between E cut and the Eddington ratio λ edd , as sources with higher accretion rates tend to have steeper spectra (e.g. Shemmer et al. 2006; Risaliti et al. 2009; Yang et al. 2015). However, we find no significant correlation between λ edd and Γ, or between λ edd and E cut , in our sample (see Fig. 7), consistent with the results of Molina et al. (2019), Hinkle & Mushotzky (2021) and Kamraj et al. (2022). This is likely because the uncertainties in the measurements of λ edd are large, or because the E cut -Γ correlation we find is not driven by the Eddington ratio. However, our results disagree with Ricci et al. (2018), which claimed a negative correlation between E cut and λ edd based on SWIFT BAT spectra. Note, though, that a dominant fraction (144 out of 212) of the E cut measurements reported in Ricci et al. (2018) are lower limits; besides, we are unable to reproduce the negative correlation given in Fig. 4 of Ricci et al. (2018) utilizing their data and the approach adopted in this work to estimate the median E cut , and instead find no clear correlation between E cut and λ edd using their sample and data.
We note that a couple of individual local sources with high Eddington ratios (> 1) have been reported in the literature, based on NuSTAR spectra, to have low E cut /T e (e.g., Ark 564, IRAS 04416+1215; Kara et al. 2017; Tortosa et al. 2022), seemingly suggesting a lower coronal temperature at higher Eddington ratio. While the NuSTAR spectral quality of Ark 564 is rather high (10-78 keV FPMA S/N = 88), it is not in the 105-month SWIFT/BAT catalog and thus not included in this work. The NuSTAR spectral quality of IRAS 04416+1215 (with 10-78 keV S/N of 10) is much poorer compared with the sample presented in this work, and our independent fitting to its NuSTAR spectra alone could only yield poorly constrained lower limits to its E cut or T e . Utilizing XMM-Newton and NuSTAR data, a low E cut /T e is also detected in a high-redshift source with Eddington ratio > 1 (PG 1247+267, Lanzuisi et al. 2016). However, its NuSTAR spectra also have poor S/N (∼ 20 in the rest-frame 10-78 keV band). Meanwhile, simply collecting positive detections of E cut /T e from the literature could suffer from significant publication bias.
We finally plot the samples on the well-known compactness-temperature (l-Θ) diagram. Fabian et al. (2015) showed that AGN coronae lie near the boundary of the forbidden region in the l-Θ diagram, suggesting the coronal temperature is governed and limited by runaway pair production. Following Fabian et al. (2015), we calculate the compactness, l = 4π(m p /m e )(r g /r)(L/L edd ), and the dimensionless temperature, Θ = kT e /m e c 2 . We assume r = 10 r g , adopt the unabsorbed 0.1-200 keV primary continuum luminosity extrapolated from the best-fit pexrav model to the NuSTAR spectra (listed in Table 1), and calculate L edd using the SMBH mass in Table 1 (note the λ edd presented in Table 1 was derived using the up-scaled BAT 14-195 keV luminosity, thus the ratio of the compactness parameter, calculated using the 0.1-200 keV luminosity measured with NuSTAR spectra, to λ edd could deviate from a single constant). For pexrav and relxill, T e is approximated by E cut /3 (Petrucci et al. 2001), while for relxillcp the measured T e is used directly. The l-Θ diagrams of the three models are shown in Fig. 8, with the boundaries of runaway pair production for the three geometries (Stern et al. 1995; Svensson 1996) over-plotted. Apparently, the sources in this work span a wider Θ range than that of Fabian et al. (2015), likely because of the larger sample size of this work. On the one hand, there are many sources lying clearly to the left of the slab pair line, particularly sources in the upper left corner of the l-Θ diagram. They appear to support the existence of hybrid plasma in the coronae, as hybrid plasma would shift the pair line to the left, and the shift is more prominent at the top of the line (see Fig. 6 in Fabian et al. 2017). On the other hand, both the directly measured and the conservatively estimated T e (1/3 E cut here, while 1/2 in Fabian et al. 2015) of a considerable fraction of sources lie beyond (to the right of) the slab pair line, consistent with Kamraj et al. (2022), favoring the sphere or hemisphere geometry. Considering the E cut -Γ relation shown above, this implies that the coronal geometry might be spectral-slope dependent, i.e., flatter for harder spectra and rounder for softer spectra. Furthermore, there are several sources with lower limits of Θ lying even beyond the boundaries of all three geometries, which suggests their coronae could be more extended than the 10 r g we have assumed.
Figure 9. NuSTAR source and background spectra (estimated by NUSKYBGD), the best-fit models and the data-to-model residual (pexrav) ratios of the 10 sources with large E cut lower limits (> 400 keV, ordered from low to high by the E cut lower limit). Note that in only a few of them, e.g., NGC 3516 and NGC 4051, do the spectra appear background dominated (i.e., background fluxes larger than source fluxes) above 50 keV. Spectra from both FPMA (black) and FPMB (red) modules are given and further rebinned for visualization purposes.
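As a worked example of the l and Θ definitions above, the short Python sketch below evaluates both quantities with astropy constants, using r = 10 r g and T e = E cut /3; the NGC 5506 numbers are taken from Tables 1 and 2, and small differences from the tabulated compactness arise only from the rounding of the logarithmic values.

```python
import numpy as np
from astropy import constants as const
from astropy import units as u

def compactness(L_erg_s, M_bh_msun, r_over_rg=10.0):
    """l = 4*pi*(m_p/m_e)*(r_g/r)*(L/L_Edd), dimensionless."""
    L_edd = (4 * np.pi * const.G * (M_bh_msun * const.M_sun)
             * const.m_p * const.c / const.sigma_T)
    ratio = (L_erg_s * u.erg / u.s) / L_edd
    return float((4 * np.pi * (const.m_p / const.m_e) / r_over_rg * ratio).decompose())

def theta(kTe_keV):
    """Theta = k*T_e / (m_e c^2), dimensionless."""
    return kTe_keV / (const.m_e * const.c**2).to(u.keV).value

# NGC 5506: log L(0.1-200 keV) = 43.3, log M = 5.62, E_cut > 424 keV (pexrav)
print(compactness(10**43.3, 10**5.62))   # ~880; Table 1 lists 794 from unrounded inputs
print(theta(424.0 / 3.0))                # lower limit on Theta, ~0.28
```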
Figure 2. Similar to Fig. 1 but with E cut (left) and T e (right) derived from relxill and relxillcp, respectively.
Figure 3. pexrav E cut versus 50-78 keV (FPMA) S/N and background fraction. The S/N would be elevated by a factor of √2 if considering both FPMA and FPMB, but the background fraction would remain unchanged. See §4.3 and §4.4 for further discussion on the effect of parameter degeneracy.
Figure 4. Upper panel: the Γ - E cut and R - E cut contours (with confidence levels of 1σ and 2σ plotted, corresponding to ∆χ2 = 2.3 and 4.21) of the six sources with E cut lower limits > 400 keV but with statistically inconsistent low E cut detections reported in the literature. We mark the reported small E cut detections, together with the power-law index Γ and the pexrav/pexmon reflection parameter R (when available) from the literature for comparison. Lower panel: the data-to-model ratios of the best-fitting results with E cut fixed at 10^6 keV and at the reported low values from the literature (see §4.1), respectively. For better illustration, the data have been rebinned and only the FPMA spectra are plotted.
Figure 7. E cut - λ edd and Γ - λ edd for our samples. λ edd is derived using the black hole mass from the literature and the up-scaled BAT 14-195 keV luminosity (see Tab. 1).
Table 1. Sample Details
Source | obsID | 10-78 keV S/N | Log M (M_sun) | Log Lbol 14−195keV (erg/s) | Log λ edd | Log L 0.1−200keV (erg/s) | Compactness l
Radio-quiet
Mrk 1148
60160028002
51
7.82
45.3
-0.64
44.8
161
Fairall 9
60001130003
111
8.30
45.3
-1.14
44.6
34
NGC 931
60101002002
111
7.29
44.5
-0.96
43.8
59
HB89 0241+622
60160125002
66
8.09
45.6
-0.60
44.6
62
NGC 1566
80301601002
162
5.74
42.5
-1.38
43.1
449
1H 0419-577
60101039002
129
8.07
45.7
-0.46
45.1
181
Ark 120
60001044004
125
8.07
45.1
-1.07
44.5
46
ESO 362-18
60201046002
97
7.42
44.1
-1.41
43.2
11
2MASX J05210136-2521450 60201022002
52
-
-
-
43.8
-
NGC 2110
60061061002
174
9.25
44.6
-2.79
44.2
1.5
MCG +08-11-011
60201027002
187
7.62
45.0
-0.76
44.3
78
MCG +04-22-042
60061092002
51
7.34
44.9
-0.56
44.2
147
Mrk 110
60201025002
213
7.29
45.1
-0.29
44.6
354
NGC 2992
90501623002
190
5.42
43.4
-0.14
43.7
3413
MCG-05-23-016
60001046008
380
5.86
44.4
0.40
43.7
1172
NGC 3227
60202002014
148
6.77
43.6
-1.33
42.9
22
NGC 3516
60002042004
58
7.39
44.5
-1.03
42.7
4.2
HE 1136-2304
80002031003
65
7.62
44.4
-1.40
43.8
26
NGC 3783
60101110002
123
7.37
44.6
-0.90
43.4
20
UGC 06728
60376007002
75
5.66
43.3
-0.51
44.7
21819
2MASX J11454045-1827149 60302002006
63
7.31
45.0
-0.42
44.3
181
NGC 3998
60201050002
63
8.93
42.8
-4.23
41.8
0.01
NGC 4051
60401009002
188
6.13
42.9
-1.34
41.8
8.2
Mrk 766
60001048002
98
6.82
43.8
-1.15
43.3
53
NGC 4593
60001149008
66
6.88
44.0
-1.05
43.2
38
WKK 1263
60160510002
58
8.25
44.7
-1.66
44.2
17
MCG-06-30-015
60001047003
185
5.82
43.8
-0.11
43.2
444
NGC 5273
60061350002
55
6.66
42.5
-2.26
42.3
7.9
4U 1344-60
60201041002
174
7.32
44.5
-0.95
43.7
48
IC 4329A
60001045002
341
7.84
45.1
-0.85
44.3
59
Mrk 279
60160562002
62
7.43
44.8
-0.75
44.2
119
NGC 5506
60061323002
158
5.62
44.1
0.37
43.3
794
NGC 5548
60002044006
123
7.72
44.6
-1.24
44.0
31
WKK 4438
60401022002
71
6.86
44.0
-0.98
43.2
39
Mrk 841
60101023002
51
7.81
45.0
-0.99
44.3
55
AX J1737.4-2907
60301010002
101
-
-
-
44.2
-
2MASXi J1802473-145454 60160680002
52
7.76
45.0
-0.92
44.4
79
ESO 141-G 055
60201042002
124
8.07
45.1
-1.06
44.4
39
2MASX J19373299-0613046 60101003002
77
6.56
43.6
-1.04
43.1
57
NGC 6814
60201028002
188
7.04
43.6
-1.58
42.9
12
Mrk 509
60101043002
228
8.05
45.3
-0.86
44.6
63
SWIFT J212745.6+565636 60402008004
124
7.20 (1)
-
-
43.7
51
NGC 7172
60061308002
127
8.45
44.3
-2.31
43.6
2.8
NGC 7314
60201031002
148
4.99
43.2
0.12
42.7
970
Mrk 915
60002060002
60
7.71
44.5
-1.33
43.7
18
MR 2251-178
60102025004
92
8.44
45.9
-0.66
45.3
126
NGC 7469
60101001014
72
6.96
44.5
-0.60
41.8
1.4
Mrk 926
60201029002
199
8.55
45.7
-1.01
45.0
56
NGC 4579
60201051002
64
7.80
-
-
42.2
0.43
M 81
60101049002
155
7.90
41.3
-4.72
41.1
0.03
Radio-loud
3C 109
60301011004
55
8.30 (2)
47.4
0.98
45.8
539
3C 111
60202061004
112
8.27
45.7
-0.67
44.9
82
3C 120
60001042003
207
7.74
45.3
-0.59
44.7
152
PicA
60101047002
74
7.60 (3)
44.9
-0.79
43.9
38
3C 273
10002020001
391
8.84
47.4
0.41
46.4
641
CentaurusA
60001081002
509
7.74 (4)
43.3
-2.61
42.7
1.6
3C 382
60001084002
133
8.19
45.7
-0.58
45.0
116
3C 390.3
60001082003
115
8.64
45.8
-0.99
45.1
50
4C 74.26
60001080006
131
9.60
46.1
-1.67
45.4
12
IGR J21247+5058
60301005002
175
7.63
44.9
-0.85
44.5
145
Table 2. Spectral Fitting Results
Source | obsID | Γ pexrav | R pexrav | E cut pexrav (keV) | χ2 pexrav /dof | E cut relxill (keV) | χ2 relxill /dof | T e relxillcp (keV) | χ2 relxillcp /dof
Radio-quiet
Mrk 1148
60160028002 1.79 +0.13
−0.08
< 0.45
113 +427
−47
0.91
> 65
0.92
> 18
0.92
Fairall 9
60001130003 1.96 +0.06
−0.03 0.71 +0.22
−0.17
> 396
0.91
> 400
0.94
> 182
0.96
NGC 931
60101002002 1.88 +0.06
−0.06 0.70 +0.21
−0.19
> 280
0.85
> 293
0.86
> 138
0.86
HB89 0241+622
60160125002 1.70 +0.06
−0.06 0.73 +0.33
−0.27 240 +489
−101
0.97
> 158
1.02
> 50
1.02
NGC 1566
80301601002 1.84 +0.05
−0.04 0.80 +0.15
−0.14
> 434
0.93
> 511
0.93
> 173
0.93
1H 0419-577
60101039002 1.64 +0.06
−0.05 0.38 +0.15
−0.13
54 +8
−6
0.99
54 +8
−6
0.99
16 +1
−1
1.00
Ark 120
60001044004 1.98 +0.03
−0.03 0.58 +0.14
−0.12
> 744
1.06
> 414
1.10
> 213
1.12
ESO 362-18
60201046002 1.57 +0.09
−0.08 0.58 +0.26
−0.22
133 +91
−40
1.03
135 +92
−33
1.08
> 34
1.09
2MASX J05210136-2521450 60201022002 2.06 +0.15
−0.12 0.33 +0.74
−0.30
> 111
0.99
> 129
1.00
> 49
1.00
NGC 2110
60061061002 1.67 +0.03
−0.03
< 0.03
> 327
0.95
> 382
0.96
> 217
0.99
MCG +08-11-011
60201027002 1.81 +0.04
−0.02 0.26 +0.10
−0.09 417 +688
−154
1.03
> 302
1.09
> 244
1.10
MCG +04-22-042
60061092002 1.95 +0.10
−0.09 0.59 +0.44
−0.33
> 216
0.87
> 167
0.88
> 37
0.88
Mrk 110
60201025002 1.74 +0.01
−0.01
< 0.04
160 +35
−24
1.05
159 +43
−32
1.10
57 +54
−18
1.11
NGC 2992
90501623002 1.68 +0.04
−0.04 0.08 +0.08
−0.07 395 +636
−152
1.05
> 316
1.15
> 260
1.16
MCG -05-23-016
60001046008 1.72 +0.02
−0.02 0.45 +0.05
−0.05
115 +11
−9
1.10
125 +10
−8
1.19
41 +3
−3
1.24
NGC 3227
60202002014 1.90 +0.05
−0.05 1.21 +0.22
−0.19 342 +417
−125
1.01
> 251
1.01
> 83
1.01
NGC 3516
60002042004 1.68 +0.09
−0.09 0.65 +0.39
−0.30
> 476
1.10
> 368
1.17
> 114
1.18
HE 1136-2304
80002031003 1.69 +0.10
−0.10
< 0.48
169 +871
−79
1.00
160 +573
−71
1.00
> 21
1.01
NGC 3783
60101110002 1.94 +0.07
−0.07 1.58 +0.33
−0.28
> 346
1.05
> 432
1.04
> 150
1.05
UGC 06728
60376007002 1.80 +0.10
−0.10 0.75 +0.33
−0.28 230 +933
−108
1.01
183 +452
−62
1.01
> 26
1.01
2MASX J11454045-1827149 60302002006 1.79 +0.11
−0.08 0.43 +0.33
−0.27 109 +124
−38
0.88
105 +98
−33
0.88
29 +44
−10
0.89
NGC 3998
60201050002 1.96 +0.08
−0.07
< 0.34
> 219
0.96
> 201
0.97
> 47
0.97
NGC 4051
60401009002 2.05 +0.03
−0.03 2.04 +0.22
−0.20
> 800
1.04
> 800
1.02
> 270
1.05
Mrk 766
60001048002 2.30 +0.07
−0.07 1.76 +0.40
−0.34
> 200
1.05
> 352
1.05
> 157
1.06
NGC 4593
60001149008 1.83 +0.05
−0.05 0.63 +0.25
−0.21
> 800
0.99
> 577
1.00
> 144
1.02
WKK 1263
60160510002 1.79 +0.09
−0.09
< 0.50
> 529
0.86
> 374
0.86
> 72
0.87
MCG -06-30-015
60001047003 2.29 +0.02
−0.04 1.83 +0.21
−0.20
> 707
1.07
> 720
1.05
> 280
1.07
NGC 5273
60061350002 1.90 +0.11
−0.11 1.30 +0.66
−0.50
> 467
1.11
> 362
1.11
> 85
1.12
4U 1344-60
60201041002 1.90 +0.05
−0.05 0.92 +0.17
−0.15 308 +265
−101
1.11
337 +204
−112
1.12
> 104
1.13
IC 4329A
60001045002 1.72 +0.02
−0.02 0.32 +0.05
−0.05
195 +37
−27
1.03
215 +37
−33
1.07
71 +37
−15
1.09
Mrk 279
60160562002 1.90 +0.05
−0.04 0.19 +0.20
−0.17
> 542
1.01
> 231
1.07
> 84
1.07
NGC 5506
60061323002 1.90 +0.05
−0.06 1.29 +0.22
−0.19
> 424
1.08
> 551
1.06
> 211
1.07
NGC 5548
60002044006 1.69 +0.06
−0.06 0.62 +0.19
−0.17
128 +53
−30
0.99
126 +45
−28
1.00
36 +11
−8
1.01
WKK 4438
60401022002 2.00 +0.08
−0.05 1.11 +0.44
−0.33
> 234
0.92
> 263
0.93
> 81
0.94
Mrk 841
60101023002 1.89 +0.06
−0.12 0.45 +0.45
−0.33
> 176
1.02
> 154
1.02
> 44
1.02
AX J1737.4-2907
60301010002 1.79 +0.08
−0.08 0.94 +0.29
−0.25
75 +23
−14
1.05
112 +95
−36
1.04
35 +184
−14
1.04
2MASXi J1802473-145454 60160680002 1.76 +0.09
−0.08
< 0.41
> 128
1.03
> 135
1.08
> 53
1.08
ESO 141-G 055
60201042002 1.92 +0.03
−0.03 0.67 +0.17
−0.15
> 351
1.04
> 293
1.05
> 125
1.05
2MASX J19373299-0613046 60101003002 2.45 +0.16
−0.11 2.01 +1.39
−0.65
> 143
0.99
> 217
1.13
> 137
1.14
NGC 6814
60201028002 1.83 +0.04
−0.03 0.46 +0.11
−0.10
> 311
1.06
371 +318
−108
1.10
> 147
1.11
Mrk 509
60101043002 1.75 +0.02
−0.02 0.41 +0.08
−0.07
104 +13
−10
1.06
98 +13
−8
1.08
24 +2
−2
1.12
SWIFT J212745.6+565636 60402008004 2.10 +0.05
−0.04 1.77 +0.45
−0.33
56 +7
−6
1.06
63 +11
−8
1.07
21 +8
−3
1.07
NGC 7172
60061308002 1.84 +0.06
−0.06 0.68 +0.19
−0.17 385 +1239
−174
1.05
337 +523
−137
1.05
> 54
1.05
NGC 7314
60201031002 2.06 +0.05
−0.05 1.18 +0.21
−0.19
> 267
1.05
> 346
1.07
> 188
1.07
Mrk 915
60002060002 1.81 +0.10
−0.09 0.37 +0.34
−0.27
> 378
1.03
> 286
1.05
> 69
1.06
MR 2251-178
60102025004 1.77 +0.07
−0.07
< 0.25
195 +310
−75
1.02
> 117
1.01
37 +97
−12
1.01
NGC 7469
60101001014 1.85 +0.08
−0.05 0.41 +0.30
−0.21
> 262
0.88
> 242
0.89
> 77
0.89
Mrk 926
60201029002 1.73 +0.02
−0.02
< 0.10
323 +241
−96
1.05
292 +178
−87
1.11
> 83
1.11
NGC 4579
60201051002 1.88 +0.04
−0.08
< 0.15
> 230
1.02
> 93
1.08
> 49
1.08
M 81
60101049002 1.88 +0.02
−0.02
< 0.05
358 +538
−135
1.00
225 +233
−85
1.10
> 83
1.11
Radio-loud
3C 109
60301011004 1.64 +0.16
−0.08 0.32 +0.32
−0.24
87 +86
−24
0.95
88 +97
−25
0.96
30 +60
−11
0.97
3C 111
60202061004 1.70 +0.06
−0.04
< 0.08
165 +202
−47
1.07
174 +166
−57
1.10
> 35
1.11
3C 120
60001042003 1.86 +0.03
−0.03 0.40 +0.09
−0.08 300 +188
−85
1.01
289 +138
−80
1.02
> 91
1.02
PicA
60101047002 1.72 +0.04
−0.04
< 0.10
202 +527
−87
0.98
161 +754
−74
1.00
> 29
1.01
3C 273
10002020001 1.62 +0.02
−0.01 0.05 +0.03
−0.03
226 +42
−26
1.02
> 237
1.03
> 79
1.03
CentaurusA
60001081002 1.75 +0.01
−0.01
< 0.01
335 +85
−56
1.00
209 +34
−24
1.04
101 +85
−24
1.08
3C 382
60001084002 1.76 +0.04
−0.05
< 0.13
> 297
0.95
> 268
0.98
> 105
0.98
3C 390.3
60001082003 1.72 +0.06
−0.06 0.14 +0.14
−0.12 208 +232
−73
0.98
235 +267
−85
1.00
> 46
1.00
4C 74.26
60001080006 1.80 +0.07
−0.04 0.66 +0.18
−0.15
121 +48
−22
0.99
165 +85
−42
1.01
62 +265
−23
1.01
IGR J21247+5058
60301005002 1.63 +0.04
−0.02
< 0.11
100 +22
−15
1.07
102 +22
−16
1.08
26 +6
−3
1.11
Table 3. The mean E cut /T e of our RQ and RL samples. The last row presents the statistical significance of the difference in the mean value between the two samples.
                 | pexrav E cut | relxill E cut | relxillcp T e
RQ (keV)         | 364 +45 −40  | 390 +60 −52   | 174 +23 −20
RL (keV)         | 187 +27 −24  | 188 +26 −23   | 72 +13 −11
Significance (σ) | 3.6          | 3.4           | 4.2
ACKNOWLEDGMENTS
This research has made use of the NuSTAR Data Analysis Software (NuSTARDAS) jointly developed by the ASI Science Data Center (ASDC, Italy) and the California Institute of Technology (USA). The work is supported by the National Natural Science Foundation of China (grants No. 11890693, 12033006 & 12192221). The authors gratefully acknowledge the support of the Cyrus Chun Ying Tang Foundations.
Software: HEAsoft (v6.28; HEASARC 2014), NuSTARDAS, NUSKYBGD (Wik et al. 2014), XSPEC (Arnaud 1996), ASURV (Feigelson & Nelson 1985), TOPCAT (Taylor 2005), GNU Parallel Tool (Tange 2011).
Akylas, A., & Georgantopoulos, I. 2021, arXiv e-prints, arXiv:2108.11337. https://arxiv.org/abs/2108.11337
Alston, W. N., Fabian, A. C., Kara, E., et al. 2020, Nature Astronomy, 2, doi: 10.1038/s41550-019-1002-x
Anders, E., & Grevesse, N. 1989, Geochimica et Cosmochimica Acta, 53, 197, doi: 10.1016/0016-7037(89)90286-X
Arnaud, K. A. 1996, in Astronomical Society of the Pacific Conference Series, Vol. 101, Astronomical Data Analysis Software and Systems V, ed. G. H. Jacoby & J. Barnes, 17
Ballantyne, D. R., Bollenbacher, J. M., Brenneman, L. W., et al. 2014, The Astrophysical Journal, 794, 62, doi: 10.1088/0004-637x/794/1/62
Baloković, M., Harrison, F. A., Madejski, G., et al. 2020, ApJ, 905, 41, doi: 10.3847/1538-4357/abc342
Beloborodov, A. M. 1999, ApJL, 510, L123, doi: 10.1086/311810
Blinov, D., Jorstad, S. G., Larionov, V. M., et al. 2021, MNRAS, 505, 4616, doi: 10.1093/mnras/stab1484
Cappellari, M., Neumayer, N., Reunanen, J., et al. 2009, MNRAS, 394, 660, doi: 10.1111/j.1365-2966.2008.14377.x
Cappi, M., De Marco, B., Ponti, G., et al. 2016, A&A, 592, A27, doi: 10.1051/0004-6361/201628464
Fabian, A. C., Lohfink, A., Belmont, R., Malzac, J., & Coppi, P. 2017, MNRAS, 467, 2566, doi: 10.1093/mnras/stx221
Fabian, A. C., Lohfink, A., Kara, E., et al. 2015, MNRAS, 451, 4375, doi: 10.1093/mnras/stv1218
Fabian, A. C., Zoghbi, A., Ross, R. R., et al. 2009, Nature, 459, 540, doi: 10.1038/nature08007
Feigelson, E. D., & Nelson, P. I. 1985, ApJ, 293, 192, doi: 10.1086/163225
García, J., Dauser, T., Lohfink, A., et al. 2014, ApJ, 782, 76, doi: 10.1088/0004-637X/782/2/76
García, J. A., Dauser, T., Steiner, J. F., et al. 2015, ApJL, 808, L37, doi: 10.1088/2041-8205/808/2/L37
Gilli, R., Comastri, A., & Hasinger, G. 2007, A&A, 463, 79, doi: 10.1051/0004-6361:20066334
Haardt, F., & Maraschi, L. 1991, ApJL, 380, L51, doi: 10.1086/186171
-. 1993, ApJ, 413, 507, doi: 10.1086/173020
Harrison, F. A., Craig, W. W., Christensen, F. E., et al. 2013, The Astrophysical Journal, 770, 103, doi: 10.1088/0004-637x/770/2/103
Hinkle, J. T., & Mushotzky, R. 2021, MNRAS, 506, 4960, doi: 10.1093/mnras/stab1976
Kamraj, N., Harrison, F. A., Baloković, M., Lohfink, A., & Brightman, M. 2018, The Astrophysical Journal, 866, 124, doi: 10.3847/1538-4357/aadd0d
Kamraj, N., Brightman, M., Harrison, F. A., et al. 2022, arXiv e-prints, arXiv:2202.00895. https://arxiv.org/abs/2202.00895
Kang, J., Wang, J., & Kang, W. 2020, ApJ, 901, 111, doi: 10.3847/1538-4357/abadf5
Kang, J.-L., Wang, J.-X., & Kang, W.-Y. 2021, MNRAS, 502, 80, doi: 10.1093/mnras/stab039
Kara, E., García, J. A., Lohfink, A., et al. 2017, MNRAS, 468, 3489, doi: 10.1093/mnras/stx792
Keek, L., & Ballantyne, D. R. 2016, MNRAS, 456, 2722, doi: 10.1093/mnras/stv2882
Koss, M., Trakhtenbrot, B., Ricci, C., et al. 2017, ApJ, 850, 74, doi: 10.3847/1538-4357/aa8ec9
Lanzuisi, G., Perna, M., Comastri, A., et al. 2016, A&A, 590, A77, doi: 10.1051/0004-6361/201628325
Lewis, K. T., & Eracleous, M. 2006, ApJ, 642, 711, doi: 10.1086/501419
Liu, T., Wang, J.-X., Yang, H., Zhu, F.-F., & Zhou, Y.-Y. 2014, ApJ, 783, 106, doi: 10.1088/0004-637X/783/2/106
. K K Madsen, B W Grefenstette, S Pike, arXiv:2005.00569arXiv e-printsMadsen, K. K., Grefenstette, B. W., Pike, S., et al. 2020, arXiv e-prints, arXiv:2005.00569. https://arxiv.org/abs/2005.00569
. K K Madsen, F A Harrison, C B Markwardt, 10.1088/0067-0049/220/1/8ApJS. 220Madsen, K. K., Harrison, F. A., Markwardt, C. B., et al. 2015a, ApJS, 220, 8, doi: 10.1088/0067-0049/220/1/8
. K K Madsen, F Fürst, D J Walton, 10.1088/0004-637X/812/1/14ApJ. 81214Madsen, K. K., Fürst, F., Walton, D. J., et al. 2015b, ApJ, 812, 14, doi: 10.1088/0004-637X/812/1/14
. P Magdziarz, A A Zdziarski, 10.1093/mnras/273.3.837MNRAS. 273837Magdziarz, P., & Zdziarski, A. A. 1995, MNRAS, 273, 837, doi: 10.1093/mnras/273.3.837
. A Malizia, L Bassani, A J Bird, 10.1111/j.1365-2966.2008.13657.xMNRAS. 3891360Malizia, A., Bassani, L., Bird, A. J., et al. 2008, MNRAS, 389, 1360, doi: 10.1111/j.1365-2966.2008.13657.x
. A Marinucci, D Porquet, F Tamborra, 10.1051/0004-6361/201834454A&A. 62312Marinucci, A., Porquet, D., Tamborra, F., et al. 2019, A&A, 623, A12, doi: 10.1051/0004-6361/201834454
. G Matt, M Baloković, A Marinucci, 10.1093/mnras/stu2653MNRAS. 4473029Matt, G., Baloković, M., Marinucci, A., et al. 2015, MNRAS, 447, 3029, doi: 10.1093/mnras/stu2653
. R J Mclure, M J Jarvis, T A Targett, J S Dunlop, P N Best, 10.1111/j.1365-2966.2006.10228.xMNRAS. 3681395McLure, R. J., Jarvis, M. J., Targett, T. A., Dunlop, J. S., & Best, P. N. 2006, MNRAS, 368, 1395, doi: 10.1111/j.1365-2966.2006.10228.x
. R Middei, S Bianchi, P O Petrucci, 10.1093/mnras/sty3379MNRAS. 4834695Middei, R., Bianchi, S., Petrucci, P. O., et al. 2019, MNRAS, 483, 4695, doi: 10.1093/mnras/sty3379
. M Molina, A Malizia, L Bassani, 10.1093/mnras/stz156Monthly Notices of the Royal Astronomical Society. 4842735Molina, M., Malizia, A., Bassani, L., et al. 2019, Monthly Notices of the Royal Astronomical Society, 484, 2735, doi: 10.1093/mnras/stz156
. P Nandi, A Chatterjee, S K Chakrabarti, B G Dutta, 10.1093/mnras/stab1699MNRAS. 5063111Nandi, P., Chatterjee, A., Chakrabarti, S. K., & Dutta, B. G. 2021, MNRAS, 506, 3111, doi: 10.1093/mnras/stab1699
. K Oh, M Koss, C B Markwardt, 10.3847/1538-4365/aaa7fdApJS. 2354Oh, K., Koss, M., Markwardt, C. B., et al. 2018, ApJS, 235, 4, doi: 10.3847/1538-4365/aaa7fd
. M Pahari, I M Mchardy, L Mallick, G C Dewangan, R Misra, 10.1093/mnras/stx1455MNRAS. 4703239Pahari, M., McHardy, I. M., Mallick, L., Dewangan, G. C., & Misra, R. 2017, MNRAS, 470, 3239, doi: 10.1093/mnras/stx1455
. C Panagiotou, R Walter, 10.1051/0004-6361/201937390A&A. 64031Panagiotou, C., & Walter, R. 2020, A&A, 640, A31, doi: 10.1051/0004-6361/201937390
. M L Parker, N Schartel, D Grupe, 10.1093/mnrasl/sly224MNRAS. 48388Parker, M. L., Schartel, N., Grupe, D., et al. 2019, MNRAS, 483, L88, doi: 10.1093/mnrasl/sly224
. P O Petrucci, F Haardt, L Maraschi, 10.1086/321629ApJ. 556716Petrucci, P. O., Haardt, F., Maraschi, L., et al. 2001, ApJ, 556, 716, doi: 10.1086/321629
. G Ponti, S Bianchi, T Muñoz-Darias, 10.1093/mnras/stx2425MNRAS. 4732304Ponti, G., Bianchi, S., Muñoz-Darias, T., et al. 2018, MNRAS, 473, 2304, doi: 10.1093/mnras/stx2425
. D Porquet, J N Reeves, N Grosso, V Braito, A Lobban, 10.1051/0004-6361/202141577A&A. 65489Porquet, D., Reeves, J. N., Grosso, N., Braito, V., & Lobban, A. 2021, A&A, 654, A89, doi: 10.1051/0004-6361/202141577
. B Rani, G M Madejski, R F Mushotzky, C Reynolds, Hodgson, 10.3847/2041-8213/aae48fJ. A. 86613ApJLRani, B., Madejski, G. M., Mushotzky, R. F., Reynolds, C., & Hodgson, J. A. 2018, ApJL, 866, L13, doi: 10.3847/2041-8213/aae48f
. P Rani, C S Stalin, K D Goswami, 10.1093/mnras/stz275Monthly Notices of the Royal Astronomical Society. 4845113Rani, P., Stalin, C. S., & Goswami, K. D. 2019, Monthly Notices of the Royal Astronomical Society, 484, 5113, doi: 10.1093/mnras/stz275
. C Ricci, L C Ho, A C Fabian, 10.1093/mnras/sty1879MNRAS. 4801819Ricci, C., Ho, L. C., Fabian, A. C., et al. 2018, MNRAS, 480, 1819, doi: 10.1093/mnras/sty1879
. G Risaliti, M Young, M Elvis, 10.1088/0004-637X/700/1/L6ApJL. 7006Risaliti, G., Young, M., & Elvis, M. 2009, ApJL, 700, L6, doi: 10.1088/0004-637X/700/1/L6
. O Shemmer, W N Brandt, H Netzer, R Maiolino, S Kaspi, 10.1086/506911ApJL. 64629Shemmer, O., Brandt, W. N., Netzer, H., Maiolino, R., & Kaspi, S. 2006, ApJL, 646, L29, doi: 10.1086/506911
. X W Shu, T Yaqoob, J X Wang, 10.1088/0067-0049/187/2/581The Astrophysical Journal Supplement Series. 187581Shu, X. W., Yaqoob, T., & Wang, J. X. 2010, The Astrophysical Journal Supplement Series, 187, 581, doi: 10.1088/0067-0049/187/2/581
. B E Stern, J Poutanen, R Svensson, M Sikora, M C Begelman, 10.1086/309617ApJL. 44913Stern, B. E., Poutanen, J., Svensson, R., Sikora, M., & Begelman, M. C. 1995, ApJL, 449, L13, doi: 10.1086/309617
. S Sun, M Guainazzi, Q Ni, 10.1093/mnras/sty1233MNRAS. 478Sun, S., Guainazzi, M., Ni, Q., et al. 2018, MNRAS, 478, 1900, doi: 10.1093/mnras/sty1233
. R Svensson, A&AS. 120Svensson, R. 1996, A&AS, 120, 475. https://arxiv.org/abs/astro-ph/9605078
O Tange, 10.5281/zenodo.16303login: The USENIX Magazine. 3642Tange, O. 2011, ;login: The USENIX Magazine, 36, 42, doi: 10.5281/zenodo.16303
M B Taylor, Astronomical Society of the Pacific Conference Series. P. Shopbell, M. Britton, & R. Ebert34729Astronomical Data Analysis Software and Systems XIVTaylor, M. B. 2005, in Astronomical Society of the Pacific Conference Series, Vol. 347, Astronomical Data Analysis Software and Systems XIV, ed. P. Shopbell, M. Britton, & R. Ebert, 29
. A Tortosa, S Bianchi, A Marinucci, G Matt, P Petrucci, 10.1051/0004-6361/201732382A&A. 61437Tortosa, A., Bianchi, S., Marinucci, A., Matt, G., & Petrucci, P. O. 2018, A&A, 614, A37, doi: 10.1051/0004-6361/201732382
. A Tortosa, C Ricci, F Tombesi, 10.1093/mnras/stab3152MNRAS. 5093599Tortosa, A., Ricci, C., Tombesi, F., et al. 2022, MNRAS, 509, 3599, doi: 10.1093/mnras/stab3152
. F Ursini, P O Petrucci, G Matt, 10.1093/mnras/stw2022MNRAS. 463382Ursini, F., Petrucci, P. O., Matt, G., et al. 2016, MNRAS, 463, 382, doi: 10.1093/mnras/stw2022
. D R Wik, A Hornstrup, S Molendi, 10.1088/0004-637x/792/1/48The Astrophysical Journal. 79248Wik, D. R., Hornstrup, A., Molendi, S., et al. 2014, The Astrophysical Journal, 792, 48, doi: 10.1088/0004-637x/792/1/48
. Y.-J Wu, J.-X Wang, Z.-Y Cai, 10.1007/s11433-020-1611-7Science China Physics, Mechanics, and Astronomy. 63129512Wu, Y.-J., Wang, J.-X., Cai, Z.-Y., et al. 2020, Science China Physics, Mechanics, and Astronomy, 63, 129512, doi: 10.1007/s11433-020-1611-7
. Q.-X Yang, F.-G Xie, F Yuan, 10.1093/mnras/stu2571MNRAS. 4471692Yang, Q.-X., Xie, F.-G., Yuan, F., et al. 2015, MNRAS, 447, 1692, doi: 10.1093/mnras/stu2571
. A A Zdziarski, P Lubiński, M Gilfanov, M Revnivtsev, 10.1046/j.1365-8711.2003.06556.xMNRAS. 342355Zdziarski, A. A., Lubiński, P., Gilfanov, M., & Revnivtsev, M. 2003, MNRAS, 342, 355, doi: 10.1046/j.1365-8711.2003.06556.x
. J.-X Zhang, J.-X Wang, F.-F Zhu, 10.3847/1538-4357/aacf92ApJ. 86371Zhang, J.-X., Wang, J.-X., & Zhu, F.-F. 2018, ApJ, 863, 71, doi: 10.3847/1538-4357/aacf92
|
[] |
[
"Effective Potential in the 3D Massive 2-form Gauge Superfield Theory",
"Effective Potential in the 3D Massive 2-form Gauge Superfield Theory"
] |
[
"F S Gama \nDepartamento de Física\nUniversidade Federal da Paraíba Caixa Postal 5008\n58051-970João Pessoa, ParaíbaBrazil\n",
"J R Nascimento \nDepartamento de Física\nUniversidade Federal da Paraíba Caixa Postal 5008\n58051-970João Pessoa, ParaíbaBrazil\n",
"A Yu Petrov \nDepartamento de Física\nUniversidade Federal da Paraíba Caixa Postal 5008\n58051-970João Pessoa, ParaíbaBrazil\n",
"P J Porfírio [email protected] \nDepartament of Physics and Astronomy\nUniversity of Pennsylvania\n19104PhiladelphiaPAUSA\n"
] |
[
"Departamento de Física\nUniversidade Federal da Paraíba Caixa Postal 5008\n58051-970João Pessoa, ParaíbaBrazil",
"Departamento de Física\nUniversidade Federal da Paraíba Caixa Postal 5008\n58051-970João Pessoa, ParaíbaBrazil",
"Departamento de Física\nUniversidade Federal da Paraíba Caixa Postal 5008\n58051-970João Pessoa, ParaíbaBrazil",
"Departament of Physics and Astronomy\nUniversity of Pennsylvania\n19104PhiladelphiaPAUSA"
] |
[] |
In the N = 1, d = 3 superspace, we propose a massive superfield theory formulated in terms of a spinor gauge superfield, whose component content includes a two-form field, and a real scalar matter superfield. For this model, we explicitly calculate the one-loop correction to the superfield effective potential. In particular, we show that the one-loop effective potential is independent of the gauge-fixing parameters. * Electronic address: [email protected] † Electronic address: [email protected] ‡ Electronic address: [email protected] § Electronic address:
|
10.1016/j.physletb.2019.02.052
|
[
"https://arxiv.org/pdf/1812.02495v2.pdf"
] | 119,498,036 |
1812.02495
|
91bbb7c15b7ab063c71d94216a4523d11b810f14
|
Effective Potential in the 3D Massive 2-form Gauge Superfield Theory
14 Feb 2019
F S Gama
Departamento de Física
Universidade Federal da Paraíba Caixa Postal 5008
58051-970João Pessoa, ParaíbaBrazil
J R Nascimento
Departamento de Física
Universidade Federal da Paraíba Caixa Postal 5008
58051-970João Pessoa, ParaíbaBrazil
A Yu Petrov
Departamento de Física
Universidade Federal da Paraíba Caixa Postal 5008
58051-970João Pessoa, ParaíbaBrazil
P J Porfírio [email protected]
Departament of Physics and Astronomy
University of Pennsylvania
19104PhiladelphiaPAUSA
Effective Potential in the 3D Massive 2-form Gauge Superfield Theory
14 Feb 2019
In the N = 1, d = 3 superspace, we propose a massive superfield theory formulated in terms of a spinor gauge superfield, whose component content includes a two-form field, and a real scalar matter superfield. For this model, we explicitly calculate the one-loop correction to the superfield effective potential. In particular, we show that the one-loop effective potential is independent of the gauge-fixing parameters. * Electronic address: [email protected] † Electronic address: [email protected] ‡ Electronic address: [email protected] § Electronic address:
I. INTRODUCTION
As is well known, the most studied supersymmetric models are based on a gauge multiplet, describing gauge fields and their superpartners, and a scalar multiplet describing the usual matter. Many issues related to these models in different cases were studied, both in classical and quantum contexts. Nevertheless, other supersymmetry multiplets, including those presented in [1], also deserve to be considered. One of the important examples is the tensor multiplet, whose component content includes an antisymmetric tensor field [2], which, as is well known, plays an important role since it emerges in string theory [3], and it has been studied in many other contexts, such as Lorentz symmetry violation [4], quantum equivalence [5][6][7], paramagnetism-ferromagnetism phase transition [8], and cosmological inflation [9]. The quantum impacts of the tensor multiplet were studied in the four-dimensional space-time, where it is described by the chiral spinor superfield, in [10], where the one-loop effective potential was calculated in the model including this superfield; some further development of this model has been carried out in [11]. Therefore, the natural problem consists in generalizing this study to three dimensions by treating a theory of the three-dimensional tensor multiplet, which is known to be described by the gauge spinor superfield. The corresponding superfield description of such a theory at the tree level has been developed already in [12]. Therefore, it is natural to promote this study to the quantum level, introducing a coupling of the gauge spinor superfield to some matter, and calculating the one-loop quantum corrections in this theory. This is the aim we pursue in this paper.
Our calculations are based on the methodology of calculating the superfield effective potential developed for the three-dimensional case originally in [13] and then used for various three-dimensional superfield theories in a number of papers, f.e. [14,15]. We calculate the effective potential in the one-loop approximation.
The structure of our paper is as follows. In section 2 we consider the classical action of a theory involving the real spinor superfield. In section 3, we explicitly calculate the one-loop effective potential for this theory, and in section 4, the results are summarized.
II. THE MODEL
By imposing some constraints on the field strength for the three-dimensional 2-form gauge superfield Γ AB , it is possible to show that Γ AB can be completely expressed in terms of a prepotential B α , which is an unconstrained real spinor gauge superfield [1]. Having this in mind, we start with the following definition
$$S_k[B_\alpha,\Phi] = -\frac{1}{2}\int d^5z \left[(D^\alpha G)^2 + (D^\alpha\Phi)^2\right],\tag{1}$$
where $G \equiv -D^\alpha B_\alpha$ is a gauge-invariant field strength and $\Phi$ is the usual real scalar matter superfield. The identity $D^\alpha D^\beta D_\alpha = 0$ ensures that $S_k$ is invariant under the gauge transformation
$$\Phi \to \Phi^\Lambda = \Phi\,;\qquad B_\alpha \to B^\Lambda_\alpha = B_\alpha + \tfrac{1}{2} D^\beta D_\alpha \Lambda_\beta,\tag{2}$$
with a spinor gauge parameter Λ α . The model (1) is an example of first-stage reducible theory.
Indeed, the parameter Λ α in (2) is not unique, but it is defined up to the transformation
Λ ′ β = Λ β + D β L,
where L is an arbitrary scalar superfield, in other words, there are gauge transformations for gauge parameters. The methodology for studying reducible theories has been developed in [16], and the general discussion of such theories can be found in [17]. The four-dimensional analogue of the theory (1), within supergravity context, has been considered in [6]. Now, we want to introduce mass terms for the theory (1). These terms are defined as
$$S_m[B_\alpha,\Phi] = \int d^5z \left[\, m\,\Phi G + m_B^2\, B^\alpha B_\alpha + \tfrac{1}{2} m_\Phi\, \Phi^2 \right].\tag{3}$$
The term $m\Phi G$ corresponds to the supersymmetric extension of the topological BF model [12].
It is worth noting that $m_B^2 B^\alpha B_\alpha$ explicitly breaks the gauge invariance of $S_m$ under the transformation (2).
Let us check that S k +S m indeed describes a massive gauge theory. For this, we need to obtain the free superfield equations for B α and Φ, which are derived from the principle of stationary action. Thus, we get from S k + S m :
δ(S k + S m ) δB α = D α D 2 G + mD α Φ + 2m 2 B B α = 0; (4) δ(S k + S m ) δΦ = D 2 Φ + mG + m Φ Φ = 0.(5)
On the one hand, if m B = m Φ = 0 and m = 0, then we can multiply Eq. (4) by D α /2 and use
(D 2 ) 2 = to obtain − m 2 G = 0 .(6)
We can carry out a similar calculation to show that D α Φ satisfies a Klein-Gordon equation.
On the other hand, if m B = 0 and m = 0, then we can multiply Eq. (4) by D 2 D α D γ and use D α D γ D α = 0 to obtain
D 2 D α D γ B α = 0.(7)
Substituting this back into the equation (4) and using
D 2 [D α , D β ] = 2C βα , we get − m 2 B B α = 0.(8)
It is trivial to show that Φ also satisfies a Klein-Gordon equation for m Φ = 0 and m = 0.
Therefore, we demonstrated that S k + S m describes a massive gauge theory.
Since the model under investigation S k + S m is a free superfield theory, and the main purpose of this paper is to calculate the one-loop effective potential, then we must to extend S k + S m to include interactions. Here, we define the interaction between B α and Φ as
S int [B α , Φ] = d 5 z V 0 (Φ) + V 1 (Φ)G + 1 2 V 2 (Φ)G 2 + V 3 (Φ)B α B α ,(9)
where V i (Φ)'s are analytical functions of their arguments. Note that we have ignored in (9) terms higher than quadratic in B α due to the fact that these terms will not contribute at the one-loop level to the effective potential. Moreover, in addition to (3), S int also lacks gauge invariance.
The lack of gauge invariance of S m and S int is inconvenient for quantum calculations. In order to improve the situation, we will restore the gauge symmetry by introducing a Stückelberg superfield Ω α [18]. Thus, instead of the theory S k + S m + S int , we will study in this work the following gauge-invariant theory, obtained from the previous one through adding some new terms, whose action is
S[B α , Ω α , Φ] = 1 2 d 5 z GD 2 G + ΦD 2 Φ + m Φ Φ 2 + 2V 0 (Φ) + 2 mΦ + V 1 (Φ) G + V 2 (Φ)G 2 + 2 m 2 B + V 3 (Φ) B α − W α m B B α − W α m B ,(10)with W α ≡ 1 2 D β D α Ω β .
The new action (10) is invariant under the following transformations
Φ → Φ Λ = Φ ; B α → B Λ α = B α + 1 2 D β D α Λ β ; Ω α → Ω Λ α = Ω α + m B Λ α ,(11)
with spinor gauge parameter Λ α . Moreover, S is also invariant under the gauge transformation
Φ → Φ K = Φ ; B α → B K α = B α ; Ω α → Ω K α = Ω α + D α K ,(12)
with an arbitrary scalar gauge parameter K.
Since (10) is gauge invariant, it follows that a gauge fixing is necessary for the calculation of quantum corrections to the effective potential. Thus, the gauge-fixed action is defined as the sum S + S gf , where S is given in Eq. (10) and the gauge-fixing term S gf is given by
S gf [B α , Ω α ] = 1 2 d 5 z B α Ω α − 1 α D 2 D β D α m B D β D α m B D β D α −2αm 2 B δ α β − 1 ξ D 2 D α D β B β Ω β . (13)
In particular, if we choose m = 0 and the supersymmetric Fermi-Feynman gauge α = ξ = 1, the kinetic terms take the particularly simple forms
∼ B α ( − m 2 B )B α and ∼ Ω α ( − m 2 B )Ω α .
Of course, there be also ghosts in the gauge-fixed action. Indeed, besides the usual ghosts, there are also ghosts for ghosts due to the fact that (10) describes a first-stage reducible theory.
However, since the ghosts do not interact with the scalar superfield Φ, it follows that the ghost terms do not contribute to the one-loop effective potential. For this reason, we can omit such terms. We note that the similar situation takes place in four dimensions [11].
III. ONE-LOOP CALCULATIONS
In this section, we calculate the one-loop effective potential for the theory (10). To do this, we employ the background field method [19]. Within this approach, we perform the calculations by making a linear split of the superfields into background superfields (B α , Ω α , Φ) and quantum fluctuations (b α , ω α , φ):
B α → B α + b α ; Ω α → Ω α + ω α ; Φ → Φ + φ.(14)
By definition, the effective potential depends only on the matter superfield Φ. Thus, we assume a trivial background for the gauge superfields B α , Ω α , and the derivatives of Φ:
B α = Ω α = 0; D α Φ = 0; ∂ αα Φ = 0,(15)
while a background Φ differs from zero.
For the sake of simplicity, before we consider the general problem, we first study the particular case where $m_B^2 = V_3(\Phi) = 0$. We denote the effective potential calculated in this case by $K^{(1)}_A$. The importance of this choice is based on the fact that in this case the superfield $\Omega_\alpha$ completely decouples from the theory (10). Therefore, expanding $S + S_{gf}$ around the background superfields and keeping only the quadratic terms in the quantum fluctuations, one finds
S 2 [Φ; φ, b α ] = S K + S IN T ; (16) S K = 1 2 d 5 z b α D 2 (D α D β − 1 α D β D α ) b β + φD 2 φ ;(17)S IN T = 1 2 d 5 z (m Φ + V ′′ 0 )φ 2 + 2(m + V ′ 1 )φg + V 2 g 2 ,(18)where g ≡ −D α b α , V ′ 1 ≡ dV 1 /dΦ, and V ′′ 0 ≡ d 2 V 0 /dΦ 2 .
The interaction vertices can be read off directly from S IN T , and the propagators are obtained by inverting the differential operators in S K , being given by
b α (1)b β (2) = − 1 4k 4 D 2 1 D 1α D β 1 − αD β 1 D 1α δ 2 (θ 1 − θ 2 ) ; (19) φ(1)φ(2) = D 2 1 k 2 δ 2 (θ 1 − θ 2 ).(20)
Notice in Eq. (18) that the quantum superfield b α interacts with the background one Φ through its field strength g. Thus, instead of the propagator b α (1)b β (2) , it is sufficient to use the propagator with no spinor indices g(1)g (2) , which is given by:
g(1)g(2) = D α 1 D 2β b α (1)b β (2) = D 2 1 k 2 δ 2 (θ 1 − θ 2 ).(21)
It is clear that (21) does not depend on the gauge parameter α introduced in the gauge-fixing procedure. Therefore, before we start the calculation of the one-loop effective potential $K^{(1)}_A(\Phi)$, we can already conclude that $K^{(1)}_A(\Phi)$ is gauge independent, as occurs in some other three-dimensional supergauge theories, see f.e. [14].
The propagators (20), (21), and the vertices (18) can be written in a matrix form. In order to do this, we make the definitions
$$\chi^i \equiv \begin{pmatrix} g \\ \phi \end{pmatrix};\qquad \chi_j \equiv \begin{pmatrix} g & \phi \end{pmatrix};\qquad M^i_{\ j} \equiv \begin{pmatrix} V_2 & m+V_1' \\ m+V_1' & m_\Phi + V_0'' \end{pmatrix},\tag{22}$$
so that we can show that
$$\langle\chi^i(1)\,\chi_j(2)\rangle = \frac{D_1^2}{k^2}\,\delta^i_{\ j}\,\delta^5(\theta_1-\theta_2)\,;\qquad S_{INT} = \frac{1}{2}\int d^5z\,\chi_i M^i_{\ j}\,\chi^j.\tag{23}$$
These propagators and vertices are quite similar to ones used in our previous work [14], where we have calculated K (1) (Φ) for a generic superfield higher-derivative gauge theory. Due to this similarity, we simply quote the result here:
$$K^{(1)}_A(\Phi) = \frac{1}{2}\int\frac{d^3k}{(2\pi)^3}\,\frac{1}{|k|}\left[\arctan\frac{\lambda_+}{|k|} + \arctan\frac{\lambda_-}{|k|}\right],\tag{24}$$
where the λ's are the eigenvalues of the matrix $M^i_{\ j}$, and $|k| = \sqrt{k^2}$.
Substituting the eigenvalues into (24) and calculating the integral over the momenta, we obtain
$$K^{(1)}_A(\Phi) = -\frac{1}{16\pi}\left[\left(m_\Phi + V_0''\right)^2 + 2\left(m + V_1'\right)^2 + V_2^2\right].\tag{25}$$
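As a quick consistency check (our own sketch, not part of the paper): with the standard regularized value of the momentum integral, each eigenvalue $\lambda_\pm$ ends up contributing $-\lambda_\pm^2/(16\pi)$, so the bracket in (25) should equal $\lambda_+^2+\lambda_-^2 = \mathrm{tr}\,M^2$ for the symmetric mass matrix of (22). The snippet below verifies this trace identity symbolically.

```python
import sympy as sp

V2, mV1, mPhiV0 = sp.symbols("V2 mV1 mPhiV0", real=True)  # V2, m + V1', m_Phi + V0''

# Symmetric mass matrix from Eq. (22)
M = sp.Matrix([[V2,   mV1],
               [mV1, mPhiV0]])

lam_plus, lam_minus = M.eigenvals().keys()

# Sum of squared eigenvalues = tr(M^2) = V2^2 + 2 (m+V1')^2 + (m_Phi+V0'')^2,
# which is exactly the bracket in the one-loop potential (25).
bracket = sp.simplify(lam_plus**2 + lam_minus**2)
expected = mPhiV0**2 + 2*mV1**2 + V2**2
print(sp.simplify(bracket - expected))  # prints 0
```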
Just as in the usual three-dimensional field theories, this one-loop contribution to the effective potential is UV finite, and its functional structure is given by a polynomial function of V ′′ 0 , V ′ 1 , and V 2 . Indeed, in contrast to four-dimensional theories, logarithmic functions begin to occur only at the two-loop level due to the divergences of the Feynman integrals [13,15]. Additionally, as we already said before, (25) is independent of the gauge-fixing parameter α. This result was expected because the theory (10) B (Φ) one should expand (10) around the background superfields (14) and keep the terms quadratic in the fluctuations:
S 2 [Φ; φ, b α , ω α ] = 1 2 d 5 z b α D 2 (D α D β − 1 α D β D α ) + 2m 2 B δ α β b β + ω α D 2 (D β D α − 1 ξ D α D β ) − 2αm 2 B δ α β ω β + φD 2 φ + (m Φ + V ′′ 0 )φ 2 + 2(m + V ′ 1 )φD α b α − V 2 b α D α D β b β + 2V 3 b α b α − 2 V 3 m B ω β D α D β b α + V 3 m 2 B ω α D 2 D β D α ω β ,(26)
where we have now taken into account the contributions of ω α .
The quadratic mixing terms between the quantum superfields make the calculations troublesome. Fortunately, we can overcome this complication by a non-local change of variables in the path integral, as was done in [20]. Thus, we can diagonalize (26) with the choice
φ(z) −→ φ(z) − d 5 wG(z, w) m + V ′ 1 (Φ(w)) D wα b α (w); (27) ω α (z) −→ ω α (z) + d 5 wG α β (z, w) V 3 (Φ(w)) m B D γ w D wβ b γ (w),(28)
where G(z, w) and G α β (z, w) are Green's functions, which are defined as solutions of the equations
D 2 + m Φ + V ′′ 0 G(z, w) = δ 5 (z − w);(29)D 2 1 − α m 2 B + V 3 m B D γ D α + α m 2 B − 1 ξ D α D γ G γ β (z, w) = δ α β δ 5 (z − w). (30)
It is possible to show that these functions can be expressed in the form
G(z, w) = D 2 − (m Φ + V ′′ 0 ) − (m Φ + V ′′ 0 ) 2 δ 5 (z − w); (31) G γ β (z, w) = D 2 4 1 − αm 2 B + V 3 m 2 B D β D γ − ξ − αξm 2 B D γ D β δ 5 (z − w).(32)
It is worth to point out that we assume that the quantum variable b α does not change under the transformations (27) and (28). For this reason, these transformations correspond to translations on the field space, so that the corresponding Jacobian is equal to unity.
Therefore, after the change of variables (27) and (28), the functional S 2 can be rewritten as:
S 2 = 1 2 d 5 z b α D 2 1 − m 2 B + V 3 − V 2 D α D β + D 2 m 2 B + V 3 − 1 α D β D α b β + ω α D 2 1 − α m 2 B + V 3 m 2 B D β D α + α m 2 B − 1 ξ D α D β ω β + φ D 2 + m Φ + V ′′ 0 φ − 1 2 d 5 zd 5 wb α (z)b β (w) m + V ′ 1 2 D 2 + m Φ + V ′′ 0 − (m Φ + V ′′ 0 ) 2 D α D β + V 3 m B 2 × D 2 D β D α − αm 2 B + V 3 m 2 B δ 5 (z − w).(33)
In principle, we could derive the Feynman rules for the functional (33) and calculate the one-loop supergraphs which contribute to the effective potential. However, it is much easier to perform the calculation using the well-known formula for the one-loop Euclidean effective action [21,22]
$$\Gamma^{(1)}_B[\Phi] = -\frac{1}{2}\,\mathrm{sTr}\ln O,\tag{34}$$
where sTr denotes the supertrace over the discrete and continuous indices of O.
It follows from (33) that O is a block diagonal matrix. Thus, Eq. (34) can be split into three contributions:
$$\Gamma^{(1)}_B[\Phi] = \Gamma_\omega[\Phi] + \Gamma_b[\Phi] + \Gamma_\phi[\Phi],\tag{35}$$
where
Γ ω [Φ] = 1 2 Tr ln 1 − α m 2 B + V 3 m 2 B D 2 D β D α + α m 2 B − 1 ξ D 2 D α D β ;(36)Γ b [Φ] = 1 2 Tr ln D 2 1 − m 2 B + V 3 − V 2 D α D β + m 2 B + V 3 − 1 α D 2 D β D α − m + V ′ 1 2 D 2 + m Φ + V ′′ 0 − (m Φ + V ′′ 0 ) 2 D α D β − V 3 m B 2 D 2 D β D α − αm 2 B + V 3 m 2 B ; (37) Γ φ [Φ] = − 1 2 Tr ln D 2 + m Φ + V ′′ 0 .(38)
Notice that ω α and b α are fermionic variables, so that Γ ω and Γ b got an overall plus sign.
Now, let us start with the first contribution Γ ω . First, we factor out the inverse of the ω α -propagator from (36). Thus, Eq. (36) can be rewritten as
Γ ω = 1 2 Tr ln D 2 D γ D α − 1 ξ D 2 D α D γ + 1 2 Tr ln δ γ β + 1 2 −α m 2 B + V 3 m 2 B D 2 D β D γ + ξα m 2 B D 2 D γ D β .(39)
Note that the first trace does not depend on the background superfield, then it can be disregarded. The second trace can be split into two parts with the help of the identity D α D β D α = 0.
Therefore,
Γ ω = 1 2 Tr ln δ γ λ + 1 2 −α m 2 B + V 3 m 2 B D 2 D λ D γ + 1 2 Tr ln δ λ β + ξαm 2 B 2 2 D 2 D λ D β . (40)
Again, the second trace is a constant independent of the background superfield and it can be dropped. To solve the first trace, we have to perform a series expansion of the logarithm.
Therefore,
Γ ω = − 1 2 d 5 z d 3 k (2π) 3 ∞ n=1 1 n 1 2k 2 α m 2 B k 2 + V 3 m 2 B n (D 2 ) n D α 2 D α 1 D α 3 D α 2 · · · D αn D α n−1 × D α 1 D αn δ 2 (θ − θ ′ )| θ=θ ′ .(41)
Each term of the expansion can be evaluated using the D-algebra and the following identities:
δ 2 (θ − θ ′ )| θ=θ ′ = 0 ; D α δ 2 (θ − θ ′ )| θ=θ ′ = 0 ; D 2 δ 2 (θ − θ ′ )| θ=θ ′ = 1.(42)
Thus, it is possible to show that each term in the expansion (41) vanishes. Therefore, we obtain
Γ ω [Φ] = 0.(43)
In the context of three-dimensional super-QED, a vanishing contribution to K (1) (Φ) was also found in Refs. [14,23], where was shown that the contribution of the gauge superfield to K (1) (Φ) vanishes in the Landau gauge. In contrast to [14,23], we have shown that the contribution of the Stückelberg superfield vanishes for any values of the gauge parameters α and ξ.
Now, let us consider the contribution of the quantum prepotential b α to Γ
(1)
B [Φ]
. By repeating the same reasoning that led from (36) to (40), we can prove that (37) can be rewritten as
Γ b = 1 2 Tr ln D 2 D α D γ − 1 α D 2 D γ D α + + 1 2 Tr ln δ γ λ + 1 2 D 2 m 2 B + V 3 + (m + V ′ 1 ) 2 − (m Φ + V ′′ 0 ) 2 + + V 2 + (m + V ′ 1 ) 2 (m Φ + V ′′ 0 ) − (m Φ + V ′′ 0 ) 2 D γ D λ + + 1 2 Tr ln δ λ β − α 2 2 m 2 B + V 3 − V 3 m B 2 − αm 2 B + V 3 m 2 B D 2 D β D λ .(44)
Notice that only the second trace is nonvanishing and independent of α. In order to make progress, we need the identity
δ γ λ + AD 2 D γ D λ + BD γ D λ = δ γ α + AD 2 D γ D α δ α λ + B 1 − 2 A D α D λ .(45)
Thus, by applying this identity to (44), we find
Γ b = 1 2 Tr ln δ γ α + 1 2 m 2 B + V 3 + (m + V ′ 1 ) 2 − (m Φ + V ′′ 0 ) 2 D 2 D γ D α + 1 2 Tr ln δ α λ + 1 2 − (m Φ + V ′′ 0 ) 2 V 2 + (m + V ′ 1 ) 2 (m Φ + V ′′ 0 ) [ − (m Φ + V ′′ 0 ) 2 ] ( − m 2 B − V 3 ) − (m + V ′ 1 ) 2 D α D λ .(46)
In order to evaluate the second trace (the first one is equal to zero), we shall make the simplifying assumption that m = V 1 = 0. Therefore, under such a simplifying assumption, we find
Γ b = − 1 2 d 5 z d 3 k (2π) 3 ∞ n=1 1 2 n n V n 2 (k 2 + m 2 B + V 3 ) n D α 1 D α 2 D α 2 D α 3 · · · D α n−1 D αn × D αn D α 1 δ 2 (θ − θ ′ )| θ=θ ′ .(47)
Again, with the help of the D-algebra and the identities (42), we are able to formally show that
D α 1 D α 2 D α 2 D α 3 · · · D α n−1 D αn D αn D α 1 δ 2 (θ − θ ′ )| θ=θ ′ = −2 n ( √ −k 2 ) n−1 , if n = 2ℓ + 1 0, if n = 2ℓ .(48)
Substituting this formula into (47), we obtain
$$\Gamma_b = \frac{1}{2}\int d^5z \sum_{\ell=0}^{\infty}\frac{(-1)^\ell\, V_2^{2\ell+1}}{2\ell+1}\int\frac{d^3k}{(2\pi)^3}\,\frac{(k^2)^\ell}{(k^2+m_B^2+V_3)^{2\ell+1}}.\tag{49}$$
We can evaluate this well-known integral over the momenta and sum the results over ℓ to get
$$\Gamma_b[\Phi] = -\frac{1}{16\pi}\int d^5z\; V_2\sqrt{4(m_B^2+V_3)+V_2^2}.\tag{50}$$
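The resummation leading to (50) can be spot-checked numerically (our own sketch; the values of V_2 and m_B^2 + V_3 below are arbitrary sample choices). The otherwise divergent momentum integrals in (49) are replaced by their analytically continued, dimensional-regularization-style values, expressed here through Gamma functions.

```python
import math

V2, a = 0.7, 1.3   # sample values for V_2 and a = m_B^2 + V_3 (our choice)

def I_ell(ell):
    # Regularized value of \int d^3k/(2 pi)^3 (k^2)^l / (k^2 + a)^(2l+1),
    # obtained from the standard Euclidean master integral by analytic continuation.
    return (a**(0.5 - ell) * math.gamma(ell + 1.5) * math.gamma(ell - 0.5)
            / (4 * math.pi**2 * math.gamma(2*ell + 1)))

series = 0.5 * sum((-1)**l * V2**(2*l + 1) / (2*l + 1) * I_ell(l) for l in range(30))
closed = -V2 * math.sqrt(4*a + V2**2) / (16 * math.pi)
print(series, closed)   # the two values agree, reproducing the square root in (50)
```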
The last (and easiest) contribution which is needed to be calculated is (38). We can simply repeat the same reasoning that led us to Eqs. (43) and (50), but we will not calculate explicitly Γ φ . Therefore, the final result is given by
$$\Gamma_\phi[\Phi] = -\frac{1}{16\pi}\int d^5z\,\left(m_\Phi + V_0''\right)^2.\tag{51}$$
Finally, substituting (43), (50), and (51) into (35) and using the relation $\Gamma^{(1)}_B = \int d^5z\, K^{(1)}_B$, we find
$$K^{(1)}_B(\Phi) = -\frac{1}{16\pi}\left[\left(m_\Phi + V_0''\right)^2 + V_2\sqrt{4(m_B^2+V_3)+V_2^2}\right].\tag{52}$$
Similarly to $K^{(1)}_A$ [see Eq. (25)], $K^{(1)}_B$ is UV finite and, therefore, no additional renormalization is needed. Moreover, $K^{(1)}_B$ is also independent of the gauge-fixing parameters α and ξ. In contrast to $K^{(1)}_A$, the functional structure of $K^{(1)}_B$ is not given by a polynomial function of $V_0''$, $V_2$, and $V_3$. In the N = 1, d = 3 superspace, such a non-polynomial structure is also found in one-loop effective potentials in the context of higher-derivative theories (see, for example, [14]). We conclude this section with the remark that the results (25)
IV. SUMMARY
We formulated a supersymmetric theory of a three-dimensional two-form field. In the superfield language, this theory is described by a spinor prepotential B α. We started with a gauge-invariant strength G defined in terms of B α, and further introduced a mass term for this field, a coupling of this field to a usual scalar superfield Φ, and a Stückelberg superfield in order to implement gauge symmetry in the presence of the mass term. Afterwards, we calculated the one-loop effective potential of Φ in the resulting theory, using a functional approach. The effective potential turns out to be finite, as it must in three-dimensional theories. We explicitly demonstrated that our results are rather analogous to the one-loop results in supergauge theories constructed on the base of the usual vector supermultiplet.
Essentially, the main result of our paper is a first example of a successful formulation of a consistent coupling of the three-dimensional spinor superfield to scalar matter, with the theory turning out to possess gauge symmetry under transformations different from those in usual supersymmetric QED, together with a successful calculation of quantum corrections in this theory. Effectively, the main conclusion is that we developed a new supergauge theory with a consistent coupling.
Further development of our study could consist in the development of a non-Abelian generalization of our theory and in the study of higher-loop corrections. We expect to do these studies in forthcoming papers.
with $m_B^2 = V_3(\Phi) = 0$ is classically equivalent to a theory with two massive real scalar superfields, even though G is by definition a field strength. However, it is not clear whether $K^{(1)}(\Phi)$ is independent of α when $m_B^2, V_3(\Phi) \neq 0$. Thus, let us move on and calculate $K^{(1)}(\Phi)$ in the general case $m_B^2, V_3(\Phi) \neq 0$. We denote the effective potential calculated in this case by $K^{(1)}_B$. Again, in order to evaluate the K
and (52), which were obtained by different methods, coincide with each other when $m = m_B = V_1 = V_3 = 0$. This shows that $K^{(1)}_B$ obtained through evaluation of the matrix trace is consistent with $K^{(1)}$
Acknowledgments. Authors are grateful to R. V. Maluf for valuable discussions. P.J. Porfírio would like to acknowledge the Brazilian agency CAPES (PDE process number
This work was partially supported by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq). The work by A. Yu. P. has been partially supported by the CNPq project No. 2018-01) for the financial support171759/2018-01) for the financial support. This work was partially supported by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq). The work by A. Yu. P. has been partially supported by the CNPq project No. 303783/2015-0.
Superspace or One Thousand and One Lessons in Supersymmetry. S J Gates, M T Grisaru, M Rocek, W Siegel, arXiv:hep-th/0108200Front. Phys. 581S. J. Gates, M. T. Grisaru, M. Rocek and W. Siegel, Superspace or One Thousand and One Lessons in Supersymmetry, Front. Phys. 58, 1 (1983) [arXiv:hep-th/0108200].
. W Siegel, Phys. Lett. B. 85333W. Siegel, Phys. Lett. B 85, 333 (1979).
. M Kalb, P Ramond, Phys. Rev. D. 92273M. Kalb and P. Ramond, Phys. Rev. D 9, 2273 (1974).
. B Altschul, Q G Bailey, V A Kostelecký, arXiv:0912.4852Phys. Rev. D. 8165028gr-qcB. Altschul, Q. G. Bailey, and V. A. Kostelecký, Phys. Rev. D 81, 065028 (2010) [arXiv:0912.4852 [gr-qc]];
. K N Lau, M D Seifert, arXiv:1309.2241Phys. Rev. D. 9525023hep-thK. N. Lau and M. D. Seifert, Phys. Rev. D 95, 025023 (2017) [arXiv:1309.2241 [hep-th]].
. M T Grisaru, N K Nielsen, W Siegel, D Zanon, Nucl. Phys. B. 247157M. T. Grisaru, N. K. Nielsen, W. Siegel and D. Zanon, Nucl. Phys. B 247, 157 (1984).
. I L Buchbinder, S M Kuzenko, Nucl. Phys. B. 308162I. L. Buchbinder and S. M. Kuzenko, Nucl. Phys. B 308, 162 (1988).
. I L Buchbinder, E N Kirillova, N G Pletnev, arXiv:0806.3505Phys. Rev. D. 7884024hep-thI. L. Buchbinder, E. N. Kirillova, and N. G. Pletnev, Phys. Rev. D 78, 084024 (2008) [arXiv:0806.3505 [hep-th]];
. T De Paula Netto, I L Shapiro, arXiv:1605.06600Phys. Rev. D. 9424040hep-thT. de Paula Netto and I. L. Shapiro, Phys. Rev. D 94, 024040 (2016) [arXiv:1605.06600 [hep-th]].
. R G Cai, R Q Yang, arXiv:1504.00855Phys. Rev. D. 9246001hep-thR. G. Cai and R. Q. Yang, Phys. Rev. D 92, 046001 (2015) [arXiv:1504.00855 [hep-th]];
. C , C. Y.
. Y B Zhang, Y Y Wu, Y T Jin, M H Chai, Z Hu, Zang, arXiv:1603.04149Phys. Rev. D. 93126001gr-qcZhang, Y. B. Wu, Y. Y. Jin, Y. T. Chai, M. H. Hu, and Z. Zang, Phys. Rev. D 93, 126001 (2016) [arXiv:1603.04149 [gr-qc]].
. S Aashish, A Padhy, S Panda, A Rana, arXiv:1808.04315Eur. Phys. J. C. 78887gr-qcS. Aashish, A. Padhy, S. Panda, and A. Rana, Eur. Phys. J. C 78, 887 (2018) [arXiv:1808.04315 [gr-qc]].
. F S Gama, M Gomes, J R Nascimento, A Y Petrov, A J Da Silva, arXiv:1501.04061Phys. Rev. D. 916129901Phys. Rev. D. hep-thF. S. Gama, M. Gomes, J. R. Nascimento, A. Y. Petrov and A. J. da Silva, Phys. Rev. D 91, no. 6, 065038 (2015) Erratum: [Phys. Rev. D 91, no. 12, 129901 (2015)] [arXiv:1501.04061 [hep-th]].
. C A S Almeida, F S Gama, R V Maluf, J R Nascimento, A Y Petrov, arXiv:1506.04001Phys. Rev. D. 92885003hep-thC. A. S. Almeida, F. S. Gama, R. V. Maluf, J. R. Nascimento and A. Y. Petrov, Phys. Rev. D 92, no. 8, 085003 (2015) [arXiv:1506.04001 [hep-th]].
. M A M Gomes, R R Landim, C A S Almeida, arXiv:hep-th/0005004Phys. Rev. D. 6325005M. A. M. Gomes, R. R. Landim and C. A. S. Almeida, Phys. Rev. D 63, 025005 (2001) [arXiv:hep-th/0005004].
. A F Ferrari, M Gomes, A C Lehum, J R Nascimento, A Y Petrov, E O Silva, A J Da Silva, arXiv:0901.0679Phys. Lett. B. 678500hep-thA. F. Ferrari, M. Gomes, A. C. Lehum, J. R. Nascimento, A. Y. Petrov, E. O. Silva and A. J. da Silva, Phys. Lett. B 678, 500 (2009) [arXiv:0901.0679 [hep-th]].
. F S Gama, J R Nascimento, A Yu, Petrov, arXiv:1307.3190Phys. Rev. D. 8845021hep-thF. S. Gama, J. R. Nascimento, and A. Yu. Petrov, Phys. Rev. D 88, 045021 (2013) [arXiv:1307.3190 [hep-th]].
. M Gomes, A C Lehum, J R Nascimento, A Yu, A J Petrov, Da Silva, arXiv:1210.6863Phys. Rev. D. 8727701hep-thM. Gomes, A. C. Lehum, J. R. Nascimento, A. Yu. Petrov, and A. J. da Silva, Phys. Rev. D 87 027701 (2013) [arXiv:1210.6863 [hep-th]].
. I A Batalin, G A Vilkovisky, Phys. Rev. 282567I. A. Batalin and G. A. Vilkovisky, Phys. Rev. D28, 2567 (1983).
. J Gomis, J Paris, S Samuel, arXiv:hep-th/9412228Phys. Rep. 259J. Gomis, J. Paris and S. Samuel, Phys. Rep. 259, 1 (1995) [arXiv:hep-th/9412228].
. H Ruegg, M Ruiz-Altaba, arXiv:hep-th/0304245Int. J. Mod. Phys. A. 193265H. Ruegg and M. Ruiz-Altaba, Int. J. Mod. Phys. A 19, 3265 (2004) [arXiv:hep-th/0304245].
. I L Buchbinder, S D Odintsov, I L Shapiro, Effective Action in Quantum Gravity. IOP PublishingBristol and PhiladelphiaI. L. Buchbinder, S. D. Odintsov, and I. L. Shapiro. Effective Action in Quantum Gravity (IOP Publishing, Bristol and Philadelphia, 1992).
. A A Ostrovsky, G A Vilkovisky, J. Math. Phys. 29702A. A. Ostrovsky and G. A. Vilkovisky, J. Math. Phys. 29, 702 (1988);
. I L Buchbinder, B S Merzlikin, arXiv:1505.07679Nucl. Phys. B. 90080hep-thI. L. Buchbinder and B. S. Merzlikin, Nucl. Phys. B 900, 80 (2015) [arXiv:1505.07679 [hep-th]].
I L Buchbinder, S M Kuzenko, Ideas and Methods of Supersymmetry and Supergravity. IOP PublishingI. L. Buchbinder and S. M. Kuzenko, Ideas and Methods of Supersymmetry and Supergravity (IOP Publishing, Bristol and Philadelphia, 1995).
. M T Grisaru, M Rocek, R Von Unge, arXiv:hep-th/9605149Phys. Lett. B. 383415M. T. Grisaru, M. Rocek, and R. von Unge, Phys. Lett. B 383, 415 (1996) [arXiv:hep-th/9605149].
A Yu, Petrov, arXiv:hep-th/0106094Quantum superfield supersymmetry. A. Yu. Petrov, Quantum superfield supersymmetry [arXiv:hep-th/0106094].
|
[] |
[
"On the oscillation rigidity of a Lipschitz function on a high-dimensional flat torus",
"On the oscillation rigidity of a Lipschitz function on a high-dimensional flat torus"
] |
[
"Dmitry Faifman ",
"Vitali Bo'az Klartag ",
"Milman "
] |
[] |
[] |
Given an arbitrary 1-Lipschitz function f on the torus T n , we find a k-dimensional subtorus M ⊆ T n , parallel to the axes, such that the restriction of f to the subtorus M is nearly a constant function. The k-dimensional subtorus M is chosen randomly and uniformly. We show that when k ≤ c log n/(log log n + log 1/ε), the maximum and the minimum of f on this random subtorus M differ by at most ε, with high probability.
|
10.1007/978-3-319-09477-9_10
|
[
"https://arxiv.org/pdf/1402.5589v1.pdf"
] | 34,289,419 |
1402.5589
|
fc93515a7e58f5c3e88b3ad9b40cf4f4e2719c76
|
On the oscillation rigidity of a Lipschitz function on a high-dimensional flat torus
23 Feb 2014
Dmitry Faifman
Bo'az Klartag
Vitali Milman
On the oscillation rigidity of a Lipschitz function on a high-dimensional flat torus
23 Feb 2014
Given an arbitrary 1-Lipschitz function f on the torus T n , we find a k-dimensional subtorus M ⊆ T n , parallel to the axes, such that the restriction of f to the subtorus M is nearly a constant function. The k-dimensional subtorus M is chosen randomly and uniformly. We show that when k ≤ c log n/(log log n + log 1/ε), the maximum and the minimum of f on this random subtorus M differ by at most ε, with high probability.
Introduction
A uniformly continuous function f on an n-dimensional space X of finite volume tends to concentrate near a single value as n approaches infinity, in the sense that the ε-extension of some level set has nearly full measure. This phenomenon, which is called the concentration of measure in high dimension, is frequently related to a transitive group of symmetries acting on X. The prototypical example is the case of a 1-Lipschitz function on the unit sphere S n , see [MS,Le,Gr2].
One of the most important consequences of the concentration of measure is the emergence of spectrum, as was discovered in the 1970-s by the third named author, see [M1, M2, M3]. The idea is that not only the distinguished level set has a large ε-extension in the sense of measure, but actually one may find structured subsets on which the function is nearly constant. When we have a group G acting transitively on X, this structured subset belongs to the orbit {gM 0 ; g ∈ G} where M 0 ⊂ X is a fixed subspace. The third named author noted also some connections with Ramsey theory, which were developed in two different directions: by Gromov in [Gr1] in the direction of metric geometry, and by Pestov [P1, P2] in the unexpected direction of dynamical systems.
The phenomenon of spectrum thus follows from concentration, and it is no surprise that most of the results in Analysis establishing spectrum appeared as a consequence of concentration. In this note, we demonstrate an instance where no concentration of measure is available, but nevertheless a geometrically structured level set arises.
School of Mathematical Sciences, Tel Aviv University, Tel Aviv 69978; [email protected], [email protected]
To state our result, consider the standard flat torus T n = R n /Z n = (R/Z) n , which inherits its Riemannian structure from R n . We say that M ⊂ T n is a coordinate subtorus of dimension k if it is the collection of all n-tuples (θ j ) n j=1 ∈ T n with fixed n − k coordinates. Given a manifold X and f : X → R we denote the oscillation of f along X by
Osc(f ; X) = sup X f − inf X f.
Theorem 1. There is a universal constant c > 0, such that for any n ≥ 1, 0 < ε ≤ 1 and a function f : T^n → R which is 1-Lipschitz, there exists a k-dimensional coordinate subtorus M ⊂ T^n with $k = \frac{c\log n}{\log\log(3n)+|\log\varepsilon|}$, such that Osc(f ; M ) ≤ ε.
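Purely as a toy illustration of this phenomenon (our own sketch, not from the paper; the dimension used here is far below what the theorem actually requires, and the test function is our own choice), one can take a simple 1-Lipschitz function on T^n and measure its oscillation on a random coordinate subtorus:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 500, 2                      # ambient dimension and subtorus dimension (our choice)

# A 1-Lipschitz function on T^n: a pointwise maximum of 1-Lipschitz functions
# of single coordinates is again 1-Lipschitz.
def f(theta):
    return np.max(np.sin(2 * np.pi * theta) / (2 * np.pi))

# Random coordinate subtorus: k free coordinates, the remaining n - k fixed.
free = rng.choice(n, size=k, replace=False)
base = rng.uniform(0, 1, size=n)

grid = np.linspace(0, 1, 60)
values = []
for t1 in grid:
    for t2 in grid:
        theta = base.copy()
        theta[free] = (t1, t2)
        values.append(f(theta))
print("Osc(f; M) on a random 2-dimensional coordinate subtorus:", max(values) - min(values))
```

The printed oscillation is tiny: with many fixed coordinates, the maximum over them already nearly saturates the value of f, so the restriction to the random subtorus is almost constant.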
Note that the collection of all coordinate subtori equals the orbit {gM 0 ; g ∈ G} where M 0 ⊂ T n is any fixed k-dimensional coordinate subtorus, and the group G = R n ⋊ S n acts on T n by translations and permutations of the coordinates. Theorem 1 is a manifestation of spectrum, yet its proof below is inspired by proofs of the Morrey embedding theorem, and the argument does not follow the usual concentration paradigm. We think that the spectrum phenomenon should be much more widespread, perhaps even more than the concentration phenomenon, and we hope that this note will be a small step towards its recognition.
Proof of the theorem
We write | · | for the standard Euclidean norm in R^n and we write log for the natural logarithm. The standard vector fields ∂/∂x_1, . . . , ∂/∂x_n on R^n are well defined also on the quotient T^n = R^n/Z^n. These n vector fields are the "coordinate directions" on the unit torus T^n. Thus, the partial derivatives ∂_1 f, . . . , ∂_n f are well-defined for any smooth function f : T^n → R, and we have $|\nabla f|^2 = \sum_{i=1}^n (\partial_i f)^2$. A k-dimensional subspace E ⊂ T_x T^n is a coordinate subspace if it is spanned by k coordinate directions. For f : T^n → R and M ⊂ T^n a submanifold, we write ∇_M f for the gradient of the restriction f|_M : M → R.
Throughout the proof, c, C will always denote universal constants, not necessarily the same at each appearance. Since the Riemannian volume of T n equals one, Theorem 1 follows from the case α = 1 of the following:
Theorem 2. Let n ≥ 1, 0 < ε ≤ 1, 0 < α ≤ 1 and $1 \le k \le \frac{c\log n}{\log\log(5n)+|\log\varepsilon|+|\log\alpha|}$. Let f : T^n → R be a locally-Lipschitz function such that, for p = k(1 + α),
$$\int_{T^n} |\nabla f|^p \le 1.\tag{1}$$
Then there exists a k-dimensional coordinate subtorus M ⊂ T^n with Osc(f ; M ) ≤ ε.
The plan of the proof is as follows. First, for some large k we find a k-dimensional coordinate subtorus M where the derivative is small on average, in the sense that
M |∇ M f | p 1/p is small.
The existence of such a subtorus is a consequence of the observation that at every point most of the partial derivatives in the coordinate directions are small. We then restrict our attention to this subtorus, and take any two pointsx,ỹ ∈ M . Our goal is to show that
f (x) − f (ỹ) < ε.
To this end we construct a polygonal line fromx toỹ which consists of intervals of length 1/2. For every such interval [x, y] we randomly select a point Z in a (k − 1)-dimensional ball which is orthogonal to the interval [x, y] and is centered at its midpoint. We then show that |f (x) − f (Z)| and |f (y) − f (Z)| are typically small, since |∇ M f | is small on average along the intervals [x, Z] and [y, Z].
We proceed with a formal proof of Theorem 2, beginning with the following computation:
Lemma 3. For any n ≥ 1, 0 < ε ≤ 1, 0 < α ≤ 1 and 1 ≤ k ≤ c log n log log(5n)+| log ε|+| log α| , we have that k ≤ n/2 and
2k δ 2 n 1/p ≤ √ k · δ(2)
where p = (1 + α)k and δ = α
16(1 + α) · ε k 3/2 .(3)
Proof. Take c = 1/200. The desired conclusion (2) is equivalent to 4k 2−p ≤ δ 2p+4 n 2 , which in turn is equivalent to
2 8p+18 · α + 1 α 2p+4 · k 2p+8 ≤ ε 2p+4 n 2 .(4)
Since c ≤ 1/12 we have that 6p ≤ 12k ≤ log n/| log ε| and hence ε 2p+4 n 2 ≥ ε 6p n 2 ≥ n.
Since α + 1 ≤ 2 then in order to obtain (4) it suffices to prove
32 α · k 2p+8 ≤ n.(5)
Since c ≤ 1/24 and k ≤ c log n/(log log(5n)) then 24k log k ≤ log n. Since k ≤ c log n | log α|+log(log 5) then 24k log 32 α ≤ log n. We conclude that 12k log 32 α · k ≤ log n, and hence 32 α · k 12k ≤ n.
However, p = (1 + α)k and hence 2p + 8 ≤ 12k. Therefore the desired bound (5) follows from (6). Since k ≤ 1 2 log n ≤ n/2, the lemma is proven. Our standing assumptions for the remainder of the proof of Theorem 2 are that n ≥ 1, 0 < ε ≤ 1, 0 < α ≤ 1 and 1 ≤ k ≤ c log n log log(5n) + | log ε| + | log α| (7) where c > 0 is the constant from Lemma 3. We also denote
p = (1 + α)k(8)
and we write e 1 , . . . , e n for the standard n unit vectors in R n .
Lemma 4. Let v ∈ R n and let J ⊂ {1, . . . , n} be a random subset of size k, chosen uniformly from the collection of all n k subsets. Consider the k-dimensional subspace E ⊂ R n spanned by {e j ; j ∈ J} and let P E be the orthogonal projection operator onto E in R n . Then,
E|P E v| p 1/p ≤ α 8(1 + α) · ε k · |v|.
Proof. We may assume that v = (v 1 , . . . , v n ) ∈ R n satisfies |v| = 1. Let δ > 0 be defined as in (3). Denote I = {i; |v i | ≥ δ}. Since |v| = 1, we must have |I| ≤ 1/δ 2 . We claim that
$$\mathbb{P}(I\cap J = \emptyset) \ge 1 - \frac{2k}{\delta^2 n}.\tag{9}$$
Indeed, if $2k/(\delta^2 n) \ge 1$ then (9) is obvious. Otherwise, $|I| \le \delta^{-2} \le n/2 \le n-k$ and
$$\mathbb{P}(I\cap J = \emptyset) = \prod_{j=0}^{k-1}\frac{n-|I|-j}{n-j} \ \ge\ \left(1-\frac{|I|}{n-k+1}\right)^{k} \ \ge\ \left(1-\frac{2}{\delta^2 n}\right)^{k} \ \ge\ 1-\frac{2k}{\delta^2 n}.$$
Thus (9) is proven. Consequently,
E|P E v| p = E j∈J v 2 j p/2 ≤ 2k δ 2 n + E 1 {I∩J=∅} · j∈J v 2 j p/2 ≤ 2k δ 2 n + k · δ 2 p/2 ,
where 1 A equals one if the event A holds true and it vanishes otherwise. By using the inequality (a + b) 1/p ≤ a 1/p + b 1/p we obtain
E|P E v| p 1/p ≤ 2k δ 2 n 1/p + √ k · δ ≤ 2 √ k · δ = α 8(1 + α) · ε k ,
where we used (3) and Lemma 3.
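The combinatorial estimate (9) used in this proof is easy to probe numerically (our own sketch, with arbitrary sample values of n, k and δ; the set I is taken of the maximal allowed size 1/δ²):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, delta = 1000, 5, 0.2          # sample values; |I| <= 1/delta^2 = 25
I = set(range(25))                   # a set of the maximal allowed size

trials = 20000
hits = sum(I.isdisjoint(rng.choice(n, size=k, replace=False)) for _ in range(trials))

empirical = hits / trials
bound = 1 - 2 * k / (delta**2 * n)
print(f"P(I and J disjoint) ~ {empirical:.3f}  >=  {bound:.3f}  (lower bound from (9))")
```

The empirical probability (about 0.88 for these values) indeed stays above the bound 0.75 given by (9).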
Corollary 5. Let f : T n → R be a locally-Lipschitz function with T n |∇f | p ≤ 1.
Then there exists a k-dimensional coordinate subtorus M ⊂ T n such that
M |∇ M f | p 1/p ≤ α 8(1 + α) · ε k .(10)
Proof. The set of all coordinate k-dimensional subtori admits a unique probability measure, invariant under translations and coordinate permutations. Let M be a random coordinate k-subtorus, chosen with respect to the uniform distribution. All the tangent spaces T x T n are canonically identified with R n , and we let E ⊂ R n denote a random, uniformly chosen k-dimensional coordinate subspace. Then we may write
E M M |∇ M f | p = T n E E |P E ∇f | p ≤ A p T n |∇f | p ≤ A p ,
where A = α 8(1+α) · ε k and we used Lemma 4. It follows that there exists M that satisfies (10).
The following lemma is essentially Morrey's inequality (see [EG,Section 4.5]).
Lemma 6. Consider the k-dimensional Euclidean ball B(0, R) = {x ∈ R k ; |x| ≤ R}. Let f : B(0, R) → R be a locally-Lipschitz function, and let x, y ∈ B(0, R) satisfy |x − y| = 2R. Recall that p = (1 + α)k. Then,
|f (x) − f (y)| ≤ 4 1 + α α · k 1 2(1+α) · R 1− k p B(0,R) |∇f (x)| p dx 1/p .(11)
Proof. We may reduce matters to the case R = 1 by replacing f (x) by f (Rx); note that the right-hand side of (11) is invariant under such replacement. Thus x is a unit vector, and y = −x. Let Z be a random point, distributed uniformly in the (k − 1)-dimensional unit ball
B(0, 1) ∩ x ⊥ = {v ∈ R k ; |v| ≤ 1, v · x = 0},
where v · x is the standard scalar product of x, v ∈ R k . Let us write
E|f (x) − f (Z)| ≤ E|x − Z| 1 0 |∇f ((1 − t)x + tZ)| dt (12) ≤ 2E|∇f ((1 − T )x + T Z)| = 2 B(0,1) |∇f (z)|ρ(z)dz,
where T is a random variable uniformly distributed in [0, 1], independent of Z, and where ρ is the probability density of the random variable (1 − T )x + T Z. Then, ρ((1 − r)x + rz) = c k r k−1 when z ∈ B(0, 1) ∩ x ⊥ , 0 < r < 1. We may compute c k as follows:
1 = c k 1 0 1 r k−1 V k−1 (r)dr = c k V k−1 (1) = c k π k−1 Γ k+1 2 ,
where V k−1 (r) is the (k − 1)-dimensional volume of (k − 1)-dimensional Euclidean ball of radius r. Denote q = p/(p − 1). Then,
B(0,1) ρ q = 1 0 c k r k−1 q V k−1 (r)dr = c q k V k−1 (1) (k − 1)(1 − q) + 1 = p − 1 p − k Γ k+1 2 π k−1 q−1 ,
and hence
B(0,1) ρ q 1/q = p − 1 p − k 1/q Γ k+1 2 π k−1 1/p (13) ≤ 1 + α α 1/q k k/2 π k−1 1/p ≤ 1 + α α · k 1 2(1+α) .
Denote C α,k = 1+α α · k 1 2(1+α) . From (12), (13) and the Hölder inequality,
E|f (x)−f (Z)| ≤ 2 B(0,1) |∇f | p 1 p B(0,1) ρ q 1 q ≤ 2C α,k B(0,1) |∇f | p 1 p .(14)
A bound similar to (14) holds also for E|f (y) − f (Z)|, since y = −x. By the triangle inequality,
$$|f(x)-f(y)| \le \mathbb{E}|f(y)-f(Z)| + \mathbb{E}|f(Z)-f(x)| \le 4C_{\alpha,k}\left(\int_{B(0,1)}|\nabla f|^p\right)^{1/p}.$$
Proof of Theorem 2. According to Corollary 5 we may pick a coordinate subtorus M = T^k so that
$$\left(\int_{M}|\nabla_M f|^p\right)^{1/p} \le \frac{\alpha}{8(1+\alpha)}\cdot\frac{\varepsilon}{k}.\tag{15}$$
Given any two points x, y ∈ M, let us show that
$$|f(x)-f(y)| \le \varepsilon.\tag{16}$$
The distance between x and y is at most √k/2. Let us construct a curve, in fact a polygonal line, starting at x and ending at y which consists of intervals of length 1/2. For instance, we may take all but the last two intervals to be intervals of length 1/2 lying on the geodesic between x and y. The last two intervals need to connect two points whose distance is at most 1/2, and this is easy to do by drawing an isosceles triangle whose base is the segment between these two points.
We do not know whether the dependence on the dimension in Theorem 1 is optimal. Better estimates may be obtained if the subtorus M ⊂ T^n is allowed to be an arbitrary k-dimensional rational subtorus, which is not necessarily a coordinate subtorus.
Acknowledgements. We would like to thank Vladimir Pestov for his interest in this work. The second-named author was supported by a grant from the European Research Council (ERC).
Let [x_j, x_{j+1}] be any of the intervals appearing in the polygonal line constructed above. Let B ⊂ T^k = M be a geodesic ball of radius R = 1/4 centered at the midpoint of [x_j, x_{j+1}]. This geodesic ball on the torus is isometric to a Euclidean ball of radius R = 1/4 in R^k. Lemma 6 applies, and implies that
Since the number of intervals in the polygonal line is at most
where we used (15) in the last passage. The points x, y ∈ M were arbitrary, hence Osc(f; M) ≤ ε.
Remarks. 1. It is evident from the proof of Theorem 2 that the subtorus M is chosen randomly and uniformly over the collection of all k-dimensional coordinate subtori. It is easy to obtain that with probability at least 9/10, we have that Osc(M; f) ≤ ε.
2. The assumption that f is locally-Lipschitz in Theorem 2 is only used to justify the use of the fundamental theorem of calculus in (12). It is possible to significantly weaken this assumption; it suffices to know that f admits weak derivatives ∂_1 f, . . . , ∂_n f and that (1) holds true, see [EG, Chapter 4] for more information. It is a bit surprising that the conclusion of the theorem holds also for non-continuous, unbounded functions, with many singular points, as long as (1) is satisfied in the sense of weak derivatives. The singularities are necessarily of a rather mild type, and a variant of our proof yields a subtorus M on which the function f is necessarily continuous with Osc(f; M) ≤ ε.
3. Another possible approach to the problem would be along the lines of the proof of the classical concentration theorems - namely, finding an ε-net of points in a subtorus, where all the coordinate partial derivatives of the function are small. However, this approach requires some additional a-priori data about the function, such as a uniform bound on the Hessian.
Evans, Lawrence C.; Gariepy, Ronald F.; Measure theory and fine properties of functions. Studies in Advanced Mathematics. CRC Press, Boca Raton, FL, 1992.
Gromov, Mikhael; Filling Riemannian manifolds. J. Differential Geom. 18 (1983), no. 1, 1-147.
Gromov, Mikhael; Isoperimetry of waists and concentration of maps. Geom. Funct. Anal. (GAFA) 13 (2003), no. 1, 178-215.
Ledoux, Michel; The concentration of measure phenomenon. Mathematical Surveys and Monographs, 89. American Mathematical Society, Providence, RI, 2001.
Milman, Vitali D.; Geometric theory of Banach spaces. II. Geometry of the unit ball. (Russian) Uspehi Mat. Nauk 26 (1971), no. 6 (162), 73-149.
Milman, Vitali D.; The spectrum of bounded continuous functions which are given on the unit sphere of a B-space. (Russian) Funkcional. Anal. i Priložen. 3 (1969), no. 2, 67-79.
Milman, Vitali D.; Asymptotic properties of functions of several variables that are defined on homogeneous spaces. Soviet Math. Dokl. 12 (1971), 1277-1281; translated from Dokl. Akad. Nauk SSSR 199 (1971), 1247-1250 (Russian).
Milman, Vitali D.; Schechtman, Gideon; Asymptotic theory of finite-dimensional normed spaces. With an appendix by M. Gromov. Lecture Notes in Mathematics, 1200. Springer-Verlag, Berlin, 1986.
Pestov, Vladimir; Ramsey-Milman phenomenon, Urysohn metric spaces, and extremely amenable groups. Israel J. Math. 127 (2002), 317-357.
Pestov, Vladimir; Dynamics of infinite-dimensional groups. The Ramsey-Dvoretzky-Milman phenomenon. University Lecture Series, 40. American Mathematical Society, Providence, RI, 2006.
|
[] |
[
"A note on V-binomials' recurrence for Lucas sequence V n companion to U n sequence",
"A note on V-binomials' recurrence for Lucas sequence V n companion to U n sequence"
] |
[
"Andrzej Krzysztof Kwaśniewski [email protected] \nInstitute of Combinatorics and its Applications\nInstitute of Combinatorics and its Applications\nPL-15-674 Bia lystok, Konwaliowa 11/11, PL-15-674 Bia lystok, Konwaliowa 11/11Winnipeg, WinnipegManitoba, ManitobaCanada, Poland, Canada, Poland\n"
] |
[
"Institute of Combinatorics and its Applications\nInstitute of Combinatorics and its Applications\nPL-15-674 Bia lystok, Konwaliowa 11/11, PL-15-674 Bia lystok, Konwaliowa 11/11Winnipeg, WinnipegManitoba, ManitobaCanada, Poland, Canada, Poland"
] |
[] |
Following [2] (2009) we deliver a V-binomials' recurrence formula for the Lucas sequence V_n, the companion of the U_n sequence [1] (1878). This formula appears neither in [1] (1878), nor in [2] (2009), nor in [3] (1915), nor in [4] (1936), nor in [5] (1949), nor in any of the other references [1-29] quoted here as "Lucas (p, q)-people" references (a list that is far from complete). Meanwhile, the V-binomials' recurrence formula for the Lucas sequence V_n follows easily from the original Theorem 17 in [2], absent from the quoted papers except, of course, for [2]. Our formula may and should be confronted with the [3] (1915) Fontené recurrence, i.e. identities (6) or (7) in [7] (1969), which, as we indicate, also stem easily from Theorem 17 in [2].

1 Preliminaries

Notation: a, b with a ≠ b is used in [1] (1878) for the roots of the equation x² = Px − Q, or (a, b) ≡ (u, v) in [2] (2009) for the roots of the equation x² = ℓx − 1. The identification (a, b) ≡ (p, q), i.e. p, q, is used in recent and not-so-recent "Lucas (p, q)-people" publications (see the incomplete list of references [2-29] and the p,q-references therein). Lucas (p, q)-people would then use the U-identifications:
| null |
[
"https://arxiv.org/pdf/1011.3015v2.pdf"
] | 117,616,062 |
1011.3015
|
492ab46c4eab69e6f56c0ede5536d34f712ee5df
|
A note on V-binomials' recurrence for Lucas sequence V n companion to U n sequence
21 Nov 2010
Andrzej Krzysztof Kwaśniewski [email protected]
Institute of Combinatorics and its Applications
Institute of Combinatorics and its Applications
PL-15-674 Bia lystok, Konwaliowa 11/11, PL-15-674 Bia lystok, Konwaliowa 11/11Winnipeg, WinnipegManitoba, ManitobaCanada, Poland, Canada, Poland
A note on V-binomials' recurrence for Lucas sequence V n companion to U n sequence
21 Nov 2010. AMS Classification Numbers: 05A10, 05A30. Keywords: Lucas sequence, generalized binomial coefficients.
Following [2] (2009) we deliver a V-binomials' recurrence formula for the Lucas sequence V_n, the companion of the U_n sequence [1] (1878). This formula appears neither in [1] (1878), nor in [2] (2009), nor in [3] (1915), nor in [4] (1936), nor in [5] (1949), nor in any of the other references [1-29] quoted here as "Lucas (p, q)-people" references (a list that is far from complete). Meanwhile, the V-binomials' recurrence formula for the Lucas sequence V_n follows easily from the original Theorem 17 in [2], absent from the quoted papers except, of course, for [2]. Our formula may and should be confronted with the [3] (1915) Fontené recurrence, i.e. identities (6) or (7) in [7] (1969), which, as we indicate, also stem easily from Theorem 17 in [2].

1 Preliminaries

Notation: a, b with a ≠ b is used in [1] (1878) for the roots of the equation x² = Px − Q, or (a, b) ≡ (u, v) in [2] (2009) for the roots of the equation x² = ℓx − 1. The identification (a, b) ≡ (p, q), i.e. p, q, is used in recent and not-so-recent "Lucas (p, q)-people" publications (see the incomplete list of references [2-29] and the p,q-references therein). Lucas (p, q)-people would then use the U-identifications:
$$n_{p,q} = \sum_{j=0}^{n-1} p^{\,n-j-1} q^{\,j} = U_n = \frac{p^n - q^n}{p - q}\,, \qquad 0_{p,q} = U_0 = 0\,, \qquad 1_{p,q} = U_1 = 1\,,$$
where p, q now denote the roots of the equation x² = sx + t, hence p + q = s and pq = −t, and the empty sum convention was used for 0_{p,q} = 0. Usually one assumes p ≠ q. In general also s ≠ t, though according to the context [10] (1989) s = t may happen to be the case of interest.
The Lucas U-binomial coefficients $\binom{n}{k}_U \equiv \binom{n}{k}_{p,q}$ are then defined as follows.

Definition 1. Let U be as in [1], i.e. $U_n = n_{p,q}$. Then the U-binomial coefficients, for any $n, k \in \mathbb{N} \cup \{0\}$, are defined as
$$\binom{n}{k}_U = \binom{n}{k}_{p,q} = \frac{n_{p,q}!}{k_{p,q}!\,(n-k)_{p,q}!} = \frac{n_{p,q}^{\underline{k}}}{k_{p,q}!}\,, \qquad (1)$$
where $n_{p,q}! = n_{p,q}\cdot(n-1)_{p,q}\cdots 1_{p,q}$ and $n_{p,q}^{\underline{k}} = n_{p,q}\cdot(n-1)_{p,q}\cdots(n-k+1)_{p,q}$.
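For concreteness, the following minimal Python sketch (ours, not from the source) computes $U_n = n_{p,q}$ from the recurrence $U_{n+1} = sU_n + tU_{n-1}$ implied by $p + q = s$, $pq = -t$, and evaluates the U-binomial coefficient of Definition 1; with $s = t = 1$ these are the Fibonacci numbers and the classical Fibonomial coefficients.

```python
from fractions import Fraction

def U_sequence(s, t, N):
    """U_0, ..., U_N for the Lucas sequence U_{n+1} = s*U_n + t*U_{n-1}, U_0 = 0, U_1 = 1."""
    U = [0, 1]
    while len(U) <= N:
        U.append(s * U[-1] + t * U[-2])
    return U

def u_binomial(n, k, s=1, t=1):
    """U-binomial coefficient (n choose k)_U of Definition 1, computed exactly."""
    if k < 0 or k > n:
        return Fraction(0)
    U = U_sequence(s, t, n)
    result = Fraction(1)
    for j in range(1, k + 1):
        result *= Fraction(U[n - j + 1], U[j])   # builds the falling product over the factorial term by term
    return result

# With s = t = 1 (Fibonacci case) the 6th row of Fibonomials is 1, 8, 40, 60, 40, 8, 1.
print([int(u_binomial(6, k)) for k in range(7)])
```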
Definition 2. Let V be as in [1], i.e. $V_n = p^n + q^n$, hence $V_0 = 2$ and $V_1 = p + q = s$. Then the V-binomial coefficients, for any $n, k \in \mathbb{N} \cup \{0\}$, are defined as
$$\binom{n}{k}_V = \frac{V_n!}{V_k!\,V_{n-k}!} = \frac{V_n^{\underline{k}}}{V_k!}\,, \qquad (2)$$
where $V_n! = V_n\cdot V_{n-1}\cdots V_1$ and $V_n^{\underline{k}} = V_n\cdot V_{n-1}\cdots V_{n-k+1}$.
One easily generalizes L-binomial to L-multinomial coefficients [29] .
Definition 3. Let L be any natural-number-valued sequence, i.e. $L_n \in \mathbb{N}$, and let $s \in \mathbb{N}$. The L-multinomial coefficient is then identified with the symbol
$$\binom{n}{k_1, k_2, \ldots, k_s}_L = \frac{L_n!}{L_{k_1}!\cdots L_{k_s}!}\,, \qquad (3)$$
where $k_i \in \mathbb{N}$, $i = 1, 2, \ldots, s$, and $\sum_{i=1}^{s} k_i = n$. Otherwise it is equal to zero.
Naturally, for any natural n, k and $k_1 + \cdots + k_m = n - k$ the following holds:
$$\binom{n}{k}_L \cdot \binom{n-k}{k_1, k_2, \ldots, k_m}_L = \binom{n}{k, k_1, k_2, \ldots, k_m}_L\,. \qquad (4)$$

2 V-binomial coefficients' recurrence
The authors of [2] prove (Th. 17) the following nontrivial recurrence for the general case of $\binom{r+s}{r,\,s}_{L[p,q]}$ L-binomial arrays in multinomial notation. Let $s, r > 0$. Then
$$\binom{r+s}{r,\,s}_{L[p,q]} = g_1(r,s)\cdot\binom{r+s-1}{r-1,\,s}_{L[p,q]} + g_2(r,s)\cdot\binom{r+s-1}{r,\,s-1}_{L[p,q]}\,, \qquad (5)$$
where $\binom{r}{r,\,0}_L = \binom{s}{0,\,s}_L = 1$ and
$$L[p,q]_{r+s} = g_1(r,s)\cdot L[p,q]_r + g_2(r,s)\cdot L[p,q]_s\,. \qquad (6)$$
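As a quick numerical sanity check of this Theorem-17 mechanism for the U-sequence $L_n = U_n = n_{p,q}$, the Python sketch below (ours; the sample values $p = 3$, $q = 2$ are arbitrary) verifies the addition rule (6) and the binomial recurrence (5) simultaneously for one concrete splitting of the coefficients, namely $g_1 = p^s$, $g_2 = q^r$ (by the $p \leftrightarrow q$ symmetry, $g_1 = q^s$, $g_2 = p^r$ works equally well).

```python
import itertools
from fractions import Fraction
from math import prod

p, q = 3, 2                                   # sample roots, p != q (our choice, purely illustrative)

def U(n):
    """U_n = (p**n - q**n) / (p - q); integer-valued for integer p, q."""
    return (p**n - q**n) // (p - q)

def binom_U(r, s):
    """Two-part U-multinomial (r+s choose r, s)_U = U_{r+s}! / (U_r! * U_s!)."""
    fact = lambda m: prod(U(j) for j in range(1, m + 1))
    return Fraction(fact(r + s), fact(r) * fact(s))

for r, s in itertools.product(range(1, 8), repeat=2):
    g1, g2 = p**s, q**r                       # one splitting realising both (5) and (6) at once
    assert U(r + s) == g1 * U(r) + g2 * U(s)                                   # cf. (6)
    assert binom_U(r, s) == g1 * binom_U(r - 1, s) + g2 * binom_U(r, s - 1)    # cf. (5)

print("U-binomial recurrence of Theorem 17 type verified on the tested range")
```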
Taking into account the U-addition formula, i.e. the first of the two trigonometric-like L-addition formulas (42) from [1] [see also [20], [22]] (L[p, q] = L = U, V), i.e.
$$2U_{r+s} = U_rV_s + U_sV_r\,, \qquad 2V_{r+s} = V_rV_s + U_sU_r\,, \qquad (7)$$
one readily recognizes that the U-binomial recurrence from Corollary 18 in [2] is identical with the U-binomial recurrence (58) of [1]. However, there is no companion V-binomial recurrence in [1] (1878) or in [2] (2009).
This V-binomial recurrence is now given in the form of (5) adapted to the L[p, q] = V[p, q] = V-Lucas sequence case:
$$\binom{r+s}{r,\,s}_{V[p,q]} = h_1(r,s)\,\binom{r+s-1}{r-1,\,s}_{V[p,q]} + h_2(r,s)\,\binom{r+s-1}{r,\,s-1}_{V[p,q]}\,, \qquad (8)$$
where $p \neq q$ and $\binom{r}{r,\,0}_L = \binom{s}{0,\,s}_L = 1$,
$$V_{r+s} = h_1(r,s)\,V_r + h_2(r,s)\,V_s\,, \qquad (9)$$
and where ($p \neq q$)
$$h_1\cdot\big(p^rq^s - q^rp^s\big) = p^{r+s}q^s - q^{r+s}p^s\,,$$
$$h_2\cdot\big(q^rp^s - p^rq^s\big) = p^{r+s}q^r - q^{r+s}p^r\,.$$
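A quick numerical sanity check of (8) and (9) (our own Python sketch; the values $p = 3$, $q = 2$ are arbitrary, and $r \neq s$ is imposed so that the $2\times 2$ linear system defining $h_1$, $h_2$ is nonsingular):

```python
import itertools
from fractions import Fraction
from math import prod

p, q = 3, 2                                    # arbitrary sample with p != q

def V(n):
    return p**n + q**n                         # V_n = p^n + q^n

def binom_V(r, s):
    """Two-part V-multinomial (r+s choose r, s)_V = V_{r+s}! / (V_r! * V_s!)."""
    fact = lambda m: prod(V(j) for j in range(1, m + 1))
    return Fraction(fact(r + s), fact(r) * fact(s))

for r, s in itertools.product(range(1, 7), repeat=2):
    if r == s:                                 # the linear system for h1, h2 degenerates when r = s
        continue
    h1 = Fraction(p**(r + s) * q**s - q**(r + s) * p**s, p**r * q**s - q**r * p**s)
    h2 = Fraction(p**(r + s) * q**r - q**(r + s) * p**r, q**r * p**s - p**r * q**s)
    assert V(r + s) == h1 * V(r) + h2 * V(s)                                   # cf. (9)
    assert binom_V(r, s) == h1 * binom_V(r - 1, s) + h2 * binom_V(r, s - 1)    # cf. (8)

print("V-binomial recurrence (8) verified for r != s on the tested range")
```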
The recurrent relations (13) and (14) in [28] for n p,q -binomial coefficients are special cases of this paper formula (5) i.e. of Th. 17 in [2] with straightforward identifications of g 1 , g 2 in (13) and in (14) in [28] as well as this paper recurrence (6) for L = U [p, q] n = n p,q sequence.
$$g_1 = p^r\,, \quad g_2 = q^s\,, \qquad (12)$$
or
$$g_1 = q^r\,, \quad g_2 = p^s\,, \qquad (13)$$
while $(s+r)_{p,q} = p^s\, r_{p,q} + q^r\, s_{p,q} = (r+s)_{q,p} = q^r\, s_{p,q} + p^s\, r_{p,q}$.
Now let A be any natural-number-valued, or even complex-number-valued, sequence. One readily sees that also the (1915) Fontené recurrence for Fontené-Ward generalized A-binomial coefficients, i.e. the equivalent identities (6), (7) in [7], are special cases of this paper's formula (5), i.e. of Th. 17 in [2], with straightforward identifications of $g_1$, $g_2$ in formula (5), while this paper's recurrence (6) becomes a trivial identity. Namely, the identities (6) and (7) from [7] (1969) read correspondingly:
$$\binom{r+s}{r,\,s}_A = 1\cdot\binom{r+s-1}{r-1,\,s}_A + \frac{A_{r+s}-A_r}{A_s}\,\binom{r+s-1}{r,\,s-1}_A\,, \qquad (15)$$
$$\binom{r+s}{r,\,s}_A = \frac{A_{r+s}-A_s}{A_r}\cdot\binom{r+s-1}{r-1,\,s}_A + 1\cdot\binom{r+s-1}{r,\,s-1}_A\,, \qquad (16)$$
where $p \neq q$ and $\binom{r}{r,\,0}_L = \binom{s}{0,\,s}_L = 1$. And finally we have the tautology identity
$$A_{s+r} \equiv \frac{A_{r+s}-A_s}{A_r}\cdot A_r + 1\cdot A_s\,. \qquad (17)$$
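Identity (15) holds for any sequence with nonvanishing terms, so it is easy to test numerically. The sketch below (ours; the test sequence A is arbitrary) checks the Fontené-Ward recurrence (15) with exact rational arithmetic.

```python
import itertools
from fractions import Fraction
from math import prod

A = [0, 2, 3, 5, 8, 13, 21, 34, 55]         # arbitrary positive test sequence (A_0 unused)

def binom_A(r, s):
    """Generalized two-part A-binomial (r+s choose r, s)_A = A_{r+s}! / (A_r! * A_s!)."""
    fact = lambda m: prod(A[j] for j in range(1, m + 1))
    return Fraction(fact(r + s), fact(r) * fact(s))

for r, s in itertools.product(range(1, 5), repeat=2):
    lhs = binom_A(r, s)
    rhs = binom_A(r - 1, s) + Fraction(A[r + s] - A[r], A[s]) * binom_A(r, s - 1)
    assert lhs == rhs, (r, s)                # Fontené's recurrence, cf. (15)

print("Fontené-Ward recurrence (15) verified on the tested range")
```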
As for combinatorial interpretations of L-binomial or L-multinomial coefficients we leave that subject apart from this note because this note is to be deliberately short. Nevertheless we direct the reader to some papers and references therein; these are herethe following: [30] (2010), [2] (2009), [29] (2009), [10] (1989), [11] (1991), [13] (1992), [14] (1993), [15] (1994), [16] (1994), [26] (2004), [26] (2004), [29] (2009) and to this end see [8].
[20] W. Bajguz
$$n_{p,q} = \sum_{j=0}^{n-1} p^{\,n-j-1} q^{\,j} = U_n = \frac{p^n - q^n}{p - q}\,, \qquad 0_{p,q} = U_0 = 0\,, \qquad 1_{p,q} = U_1 = 1\,,$$
where p, q now denote the roots of the equation x² = sx + t ≡ x² = Px − Q, hence p + q = s = P and pq = Q = −t, and the empty sum convention was used for 0_{p,q} = 0. Usually one considers, as we do now, the p ≠ q case. In general also s ≠ t, though according to the context [11] (1989) s = t may happen to be the case of interest.
The Lucas U-binomial coefficients $\binom{n}{k}_U \equiv \binom{n}{k}_{p,q}$ are then defined ([1] (1878), [5] (1936), [6] (1949), [7] (1964), [8] (1969)) as follows.

Definition 1. Let U be as in [1], i.e. $U_n = n_{p,q}$. Then the U-binomial coefficients, for any $n, k \in \mathbb{N} \cup \{0\}$, are defined as
$$\binom{n}{k}_U = \binom{n}{k}_{p,q} = \frac{n_{p,q}!}{k_{p,q}!\,(n-k)_{p,q}!} = \frac{n_{p,q}^{\underline{k}}}{k_{p,q}!}\,, \qquad (1)$$
where $n_{p,q}! = n_{p,q}\cdot(n-1)_{p,q}\cdots 1_{p,q}$ and $n_{p,q}^{\underline{k}} = n_{p,q}\cdot(n-1)_{p,q}\cdots(n-k+1)_{p,q}$.

Definition 2. Let V be as in [1], i.e. $V_n = p^n + q^n$, hence $V_0 = 2$ and $V_1 = p + q = s \equiv P$. Then the V-binomial coefficients, for any $n, k \in \mathbb{N} \cup \{0\}$, are defined as
$$\binom{n}{k}_V = \frac{V_n!}{V_k!\,V_{n-k}!} = \frac{V_n^{\underline{k}}}{V_k!}\,, \qquad (2)$$
where $V_n! = V_n\cdot V_{n-1}\cdots V_1$ and $V_n^{\underline{k}} = V_n\cdot V_{n-1}\cdots V_{n-k+1}$.
One easily generalizes L-binomial to L-multinomial coefficients [31] .
Definition 3 Let L be any natural numbers' valued sequence i.e. L n ∈ N and s ∈ N. L-multinomial coefficient is then identified with the symbol n k 1 , k 2 , ..., k s L = L n ! L k 1 ! · ... · L ks ! (3)
where k i ∈ N and s i=1 k i = n for i = 1, 2, ..., s. Otherwise it is equal to zero.
Naturally for any natural n, k and k 1 + ... + k m = n − k the following holds
L[p, q] r+s = g 1 (r, s) · L[p, q] r + g 2 (r, s) · L[p, q] s .(6)
Compare the above now common knowledge formulas (5) and (6) with :
[JM] formula (2) and formula in between (1') and (2) in [6] (1949) or [Gould] compare formula (7) in [8] (1969) with this note formula(5), or [KaKi] see formulas (51) and (40) in [15] (1992) or [G-V] see this correspondence in section (10.1) of [11] (1989) or [K-W] see this correspondence in [12] (1989) or [Corcino] compare this note formula (5) with the Theorem 1 formulas (13) and (14) -with the simple proof just checking -in [30] (2008) or [MD1] compare this note formulas (5) and (6) with the special case formulas (2) and (1) in [33] v [1] [here only special form T λ tiling cobweb admissible natural numbers' valued sequences are admitted] where note that (1) and (2) formulas are given original combinatorial interpretations in terms of tilings of the so called cobweb posets and (2) is given a combinatorial proof or [MD2] compare this note formulas (5) and (6) with (16) and (11) in [33] v [2] where a trivial derivation of nontrivial (16) is supplied -in reverse order with respect to the corresponding derivation in [2].
Historical Note As accurately noticed by Knuth and Wilf in [12] the recurrent relations for Fibonomial coefficients appeared already in 1878 Lukas work (see: p. 27 , formula (58) in [1]. In our opinion -Lucas Théorie des Fonctions Numériques Simplement Périodiques is the far more nonaccidental context for binomial-type coefficients exhibiting their relevance to hyperbolic trigonometry [see for example [22], [24]]. Indeed. Taking here into account the U -addition formula i.e. the first of two trigonometric-like L-addition formulas (42) from [1] [see also [22], [24]] (L[p, q] = L = U, V ) i.e.
$$2U_{r+s} = U_rV_s + U_sV_r\,, \qquad 2V_{r+s} = V_rV_s + U_sU_r\,. \qquad (7)$$
one readily recognizes that the U -binomial recurrence from the Corollary 18 in [2] is identical with the U -binomial recurrence (58) [1]. See also Proposition 2.2. in [32] (2010).
However there is no companion V -binomial recurrence neither in [1] (1878) nor in [2] (2009)as well as all other quoted papers. Here let us note on the way as aside remark, that we know [34] quite promising analogues of addition rules for both companion sequences. Namely
U r+s = U r V s − p n q n U r−s , V r+s = V r V s − p n q n V r−s .(8)
Now, it is not difficult to realize that the sought-for V-binomial recurrence may be given in the form of (5) adapted to the L[p, q] = V[p, q] = V-Lucas sequence case. Here it is.
$$\binom{r+s}{r,\,s}_{V[p,q]} = h_1(r,s)\,\binom{r+s-1}{r-1,\,s}_{V[p,q]} + h_2(r,s)\,\binom{r+s-1}{r,\,s-1}_{V[p,q]}\,, \qquad (9)$$
where $p \neq q$ and $\binom{r}{r,\,0}_L = \binom{s}{0,\,s}_L = 1$, and for $r \neq s$
$$V_{r+s} = h_1(r,s)\,V_r + h_2(r,s)\,V_s\,, \qquad V_{2r} = \big(h_1(r,r) + h_2(r,r)\big)\cdot V_r\,, \qquad (10)$$
and where ($p \neq q$) and $r \neq s$, while
$$h_1\cdot\big(p^rq^s - q^rp^s\big) = p^{r+s}q^s - q^{r+s}p^s\,, \qquad r \neq s\,, \qquad (11)$$
and
$$h_2\cdot\big(q^rp^s - p^rq^s\big) = p^{r+s}q^r - q^{r+s}p^r\,, \qquad r \neq s\,. \qquad (12)$$
Now for $r = s$, having in mind that $V_k = p^k + q^k$ and taking into account the $p$ interchange with $q$ symmetry, we may establish the identification (13) for $P \neq 0$. (Recall [1] (1878) that p, q are the roots of $x^2 - Px + Q$, therefore $p + q = P$ and $p\cdot q = Q$.)
$$h_1(r,r) = \frac{p^{2r}}{p^r + q^r}\,, \qquad h_2(r,r) = \frac{q^{2r}}{p^r + q^r}\,. \qquad (13)$$
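A one-line check of (13) (our Python sketch; $p = 3$, $q = 2$ chosen arbitrarily with $P = p + q \neq 0$): with $h_1(r,r)$ and $h_2(r,r)$ as above one indeed recovers $V_{2r} = (h_1(r,r) + h_2(r,r))\,V_r$ from (10).

```python
from fractions import Fraction

p, q = 3, 2                                   # arbitrary sample values, p != q, P = p + q != 0
V = lambda n: p**n + q**n

for r in range(1, 10):
    h1 = Fraction(p**(2 * r), p**r + q**r)    # h_1(r, r), cf. (13)
    h2 = Fraction(q**(2 * r), p**r + q**r)    # h_2(r, r), cf. (13)
    assert (h1 + h2) * V(r) == V(2 * r)       # V_{2r} = (h_1(r,r) + h_2(r,r)) V_r, cf. (10)

print("identification (13) reproduces V_{2r} on the tested range")
```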
The matters are much easier in L n = U [p, q] n = n p,q = p n −q n p−q U -Lucas sequence well elaborated case. See for example formula (2) and formula in between (1') and (2) in [6] (1949) and come back to acapit starting with Compare -above.
Indeed. One may proceed as above with V -Lucas sequence immediately noting that for r = s h 1 + h 2 = p r + q r as it should be, See below. The recurrent relations (13) and (14) in [30] for n p,q -binomial coefficients are special cases of this paper formula (5) i.e. of Th. 17 in [2] with straightforward identifications of g 1 , g 2 in (13) and in (14) in [30] as well as this paper recurrence (6) for L = U [p, q] n = n p,q sequence.
g 1 = p r , g 2 = q s ,(14)
or
g 1 = q r , g 2 = p s ,(15)
while (s + r) p,q = p s r p,q + q r s p,q = (r + s) q,p = q r s p,q + p s r p,q .
hence -in multinomial notation and choosing (14) we have
r + s r, s U ≡ r + s r, s p,q = p r · r + s − 1 r − 1, s U + q s · r + s − 1 r, s − 1 U ,(17)
where $\binom{r}{r,\,0}_U = \binom{s}{0,\,s}_U = 1$. Now let A be any natural-number-valued, or even complex-number-valued, sequence. One readily sees that also the (1915) Fontené recurrence for Fontené-Ward generalized A-binomial coefficients, i.e. the equivalent identities (6), (7) in [8], are special cases of this paper's formula (5), i.e. of Th. 17 in [2], with straightforward identifications of $g_1$, $g_2$ in formula (5), while this paper's recurrence (6) becomes a trivial identity. Namely, the identities (6) and (7) from [8] (1969) read correspondingly:
$$\binom{r+s}{r,\,s}_A = 1\cdot\binom{r+s-1}{r-1,\,s}_A + \frac{A_{r+s}-A_r}{A_s}\,\binom{r+s-1}{r,\,s-1}_A\,, \qquad (18)$$
$$\binom{r+s}{r,\,s}_A = \frac{A_{r+s}-A_s}{A_r}\cdot\binom{r+s-1}{r-1,\,s}_A + 1\cdot\binom{r+s-1}{r,\,s-1}_A\,, \qquad (19)$$
where $p \neq q$ and $\binom{r}{r,\,0}_L = \binom{s}{0,\,s}_L = 1$. And finally we have the tautology identity
$$A_{s+r} \equiv \frac{A_{r+s}-A_s}{A_r}\cdot A_r + 1\cdot A_s\,. \qquad (20)$$
As for combinatorial interpretations of L-binomial or L-multinomial coefficients, we leave that subject aside, because this note is meant to be deliberately short. Nevertheless we direct the reader to some papers and the references therein; these are the following: [32] (2010), [2] (2009), [31] (2009), [33] (2009), [11] (1989), [13] (1991), [15] (1992), [16] (1993), [17] (1994), [18] (1994), [28] (2004), and to this end see [9]. The list is far from complete. For example, the work of Arthur T. Benjamin on combinatorial interpretations (http://www.math.hmc.edu/benjamin/) should also be noted.
Final remark. The above presentation, definitions and recurrent formulas like (5) and (6) extend correspondingly to Horadam W and H sequences, while (10)-(12) just stay the same under the replacement $V \to H$, i.e. $V_n = p^n + q^n \to H_n = A\cdot p^n + B\cdot q^n$, while (13) becomes
$$h_1(r,r) = \frac{A\cdot p^{2r}}{A\cdot p^r + B\cdot q^r}\,, \qquad h_2(r,r) = \frac{B\cdot q^{2r}}{A\cdot p^r + B\cdot q^r}\,.$$
To this end recall that Lucas sequences are special cases of Horadam sequences, i.e. the extension looks like (Lucas [1] (1878))
$$U_n = \frac{p^n - q^n}{p - q} \ \longrightarrow\ \frac{A\cdot p^n - B\cdot q^n}{p - q} = W_n \quad (\text{Horadam: [35], [36] (1965), [37], [38], [39]}).$$
To this end note also that Horadam W sequence is a special case of Horadam H sequence with obvious choice of A and B in the latter.
A more detailed presentation of Horadam binomials' properties we leave to the subsequent note.
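The Horadam version of (13) can be checked in exactly the same way as the Lucas one; a short Python sketch (ours; the parameters A, B, p, q are arbitrary):

```python
from fractions import Fraction

A, B, p, q = 2, 5, 3, 2                       # arbitrary illustrative parameters
H = lambda n: A * p**n + B * q**n             # Horadam-type companion sequence H_n

for r in range(1, 10):
    denom = A * p**r + B * q**r
    h1 = Fraction(A * p**(2 * r), denom)
    h2 = Fraction(B * q**(2 * r), denom)
    assert (h1 + h2) * H(r) == H(2 * r)       # H_{2r} = (h_1(r,r) + h_2(r,r)) H_r

print("Horadam analogue of (13) verified on the tested range")
```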
a,b a = b in[1] (1878) is used for the roots of the equationx 2 = P x−Q or (a, b) ≡ (u, v) in[2] (2009) for the roots of the equation x 2 = ℓx−1. The identification (a, b) ≡ (p, q) i.e. p, q are used in "'Lucas (p, q)-people"' publications recently and in recent past (look into not complete list of references[2 -33] and p, q-references therein). Lucas (p, q)-people would then use U -identifications:
, k 2 , ..., k m L = n k, k 1 , k 2 , ..., k m L (4) 2 V -binomial coefficients' recurrence The authors of [2] prove trivially an observation named the Theorem 17 i.e the following nontrivial recurrence for the general case of r+s r,s L[p,q] Lbinomial arrays in multinomial notation. Let s, r > 0. Let L = L[p, q] be any zero characteristic field valued sequence (L n = 0). Then r + s r, s L[p,q] = g 1 (r, s) · r + s − 1 r − 1, s L[p,q] + g 2 (r, s) · r + s − 1 r, s − 1 L[p,q]
, A.K. Kwaśniewski, On generalization of Lucas symmetric functions and Tchebycheff polynomials, Integral Transforms and Special Functions, Vol. 8, Numbers 3-4, (1999): 165-174.
[25] Karen Sue Briggs and J. B. Remmel, A p,q-analogue of a Formula of Frobenius, Electron. J. Comb. 10 (2003), No R9.
[26] J. B. Remmel and Michelle L. Wachs, Rook Theory, Generalized Stirling Numbers and (p,q)-Analogues, The Electronic Journal of Combinatorics 11 (Nov 22, 2004), No R84.
[27] Karen Sue Briggs, A Rook Theory Model for the Generalized p,q-Stirling Numbers of the First and Second Kind, Formal Power Series and Algebraic Combinatorics, Séries Formelles et Combinatoire Algébrique, San Diego, California, 2006.
[28] Roberto B. Corcino, On p,q-Binomial Coefficients, INTEGERS: Electronic Journal of Combinatorial Number Theory 8 (2008), No A29.
Bruce E. Sagan, Carla D. Savage, Combinatorial Interpretation of Binomial Coefficient Analogues Related to Lucas Sequences, INTEGERS: Electronic Journal of Combinatorial Number Theory (2010), to appear.

A note on V-binomials' recurrence for V-Lucas sequence companion to U-Lucas sequence

Andrzej Krzysztof Kwaśniewski
Member of the Institute of Combinatorics and its Applications, Winnipeg, Manitoba, Canada
[21] Alexandru Ioan Lupas, A guide of Fibonacci and Lucas polynomials,
Octogon, Math. Magazine, vol.7 , No 1, (1999): 2-12.
[22] W. Bajguz, On generalized of Tchebycheff polynomials, Integral Trans-
forms and Special Functions, Vol. 9, No. 2 (2000), pp. 91-98
[23] Eric R. Tou, Residues of Generalized Binomial Coefficients Modulo a
Product of Primes, senior thesis, Spring (2002):, Department of Math-
ematics and Computer Science, Gustavus Adolphus College, St. Peter,
MN, http : //sites.google.com/site/erikrtou/home , promotor John
M. Holte.
[24] Karen Sue Briggs, Q-analogues and p, q-analogues of rook numbers and
hit numbers and their extensions, Ph.D. thesis, University, of California,
San Diego (2003).
[25] [29] M. Dziemiańczuk, Generalization of Fibonomial Coefficients,
arXiv:0908.3248v1 [v1] Sat, 22 Aug (2009), 13:18:44 GMT
[30] PL-15-674 Bia lystok, Konwaliowa 11/11, Poland
e-mail: [email protected]
Summary
Following [2] (2009) we deliver V -binomials' recurrence formula for Lucas
sequence V = V n n≥0 companion to U = U n n≥0 sequence [1] (1878). This
formula is not present neither in [1] (1878) nor in [2] (2009), nor in [3] (1915),
nor in [5] (1936), nor in [6] (1949) and neither in all other quoted here as
"'Lucas (p, q)-people"' references [1-34]. Meanwhile V -binomials' recurrence
formula for Lucas sequence V n easily follows from the original Theorem 17
in [2]. Our formula may and should be confronted with [3] (1915) Fontené
recurrence i.e. (6) or (7) identities in [8] (1969) which, as we indicate, also
stem easily from the Theorem 17 in [2].
AMS Classification Numbers: 05A10 , 05A30
Keywords: Lucas sequence, generalized binomial coefficients
1 Preliminaries -notation.
Lucas Edouard, Théorie, des Fonctions Numériques Simplement Priodiques. Douglas Lind Fibonacci Association1Edouard LUCAS Théorie des Fonctions Numériques Simplement Pri- odiques, American Journal of Mathematics, Volume 1, (1878): 184-240 (Translated from the French by Sidney Kravitz , Edited by Douglas Lind Fibonacci Association 1969
Generalizing the combinatorics of binomial coefficients via ℓ-nomials with corrections noted (thanks to Bruce Sagan) , Integers. Nicholas A Loehr, Carla D Savage, to appearNicholas A. Loehr, Carla D. Savage August 26, 2009 Generalizing the combinatorics of binomial coefficients via ℓ-nomials with corrections noted (thanks to Bruce Sagan) , Integers, to appear.
Fonten Georges Généralisation d'une formule connue. Nouvelles Annales de Mathématiques. 112Fonten Georges Généralisation d'une formule connue, Nouvelles An- nales de Mathématiques (4) 15 (1915) p. 112
A calculus of sequences. Ward Morgan, Amer.J. Math. 58Ward Morgan, A calculus of sequences, Amer.J. Math. 58 (1936): 255- 266 .
The product of Sequences with the common linear recursion formula of Order 2. D Jarden, T Motzkin, Riveon Lematimatica. 3Jarden D., Motzkin T.,The product of Sequences with the common linear recursion formula of Order 2, Riveon Lematimatica 3 , (1949): 25-27.
Generalized Binomial Coefficients. R F Torretoo, J A Fuchs, The Fibonacci Quarterly. 2Torretoo R. F. , Fuchs J. A. Generalized Binomial Coefficients, The Fibonacci Quarterly,vol. 2, (1964): 296-302 .
The Bracket Function and Fontené-Ward Generalized binomial Coefficients with Applications to Fibonomial Coefficients. H W Gould, The Fibonacci Quarterly. 7Gould H.W. The Bracket Function and Fontené-Ward Generalized bi- nomial Coefficients with Applications to Fibonomial Coefficients, The Fibonacci Quarterly vol.7, (1969): 23-40.
I R M A Strasbourg, 229/S-08 Actes de Seminaire LotharingienProceedings of the 11th Winter School on Abstract Analysis. Frolk, Zdenkthe 11th Winter School on Abstract AnalysisPalermoBernd Voigt A common generalization of binomial coefficientsBernd Voigt A common generalization of binomial coefficients, Stirling numbers and Gaussian coefficients Publ. I.R.M.A. Strasbourg, 1984, 229/S-08 Actes de Seminaire Lotharingien, p. 87-89. , In: Frolk, Zdenk (ed.): Proceedings of the 11th Winter School on Abstract Analysis. Circolo Matematico di Palermo, Palermo, 1984, pp. 339-359.
Interpolation and combinatorial functions. Luis Verde-Star, Studies in Applied Mathematics. 79Luis Verde-Star, Interpolation and combinatorial functions. Studies in Applied Mathematics, 79: (1988):65-92.
Determinant Paths and Plane Partitions. Ira M Gessel, X G Viennot, 24Preprint; see(10.3Ira M. Gessel, X. G. Viennot,Determinant Paths and Plane Partitions, (1989):-Preprint; see(10.3) page 24
White, p,q-Stirling Numbers and Set Partition Statistics. M Wachs, D , Journal of Combinatorial Theory, Series A. 56M. Wachs, D. White, p,q-Stirling Numbers and Set Partition Statistics, Journal of Combinatorial Theory, Series A 56, (1991): 27-46.
A (p, q)-Oscillator Realization of Two-parameter Quantum Algebras. R Chakrabarti, R Jagannathan, J. Phys. A: Math. Gen. 24711R. Chakrabarti and R. Jagannathan, A (p, q)-Oscillator Realization of Two-parameter Quantum Algebras, J. Phys. A: Math. Gen. 24,(1991): L711.
Normal ordering for deformed boson operators and operator-valued deformed Stirling numbers. J Katriel, M Kibler, J. Phys. A: Math. Gen. 25J.Katriel, M. Kibler, Normal ordering for deformed boson operators and operator-valued deformed Stirling numbers, J. Phys. A: Math. Gen. 25,(1992):2683-2691.
A unified combinatorial approach for q-(and p,q-)Stirling numbers. A De Medicis, P Leroux, J. Statist. Plann. Inference. 34A. de Medicis and P. Leroux, A unified combinatorial approach for q-(and p,q-)Stirling numbers, J. Statist. Plann. Inference 34 (1993): 89-105.
Wachs: sigma -Restricted Growth Functions and p,q-Stirling Numbers. Michelle L , J. Comb. Theory, Ser. A. 682Michelle L. Wachs: sigma -Restricted Growth Functions and p,q-Stirling Numbers. J. Comb. Theory, Ser. A 68(2), (1994):470-480.
SeungKyung ParkP-Partitions and q-Stirling Numbers. Journal of Combinatorial Theory, Series A. 68SeungKyung ParkP-Partitions and q-Stirling Numbers , Journal of Combinatorial Theory, Series A, 68 , (1994): 33-52.
Generalized Stirling Numbers. Convolution Formulae and p, q-Analogues. A De Médicis, P Leroux, Canad. J. Math. 47A. De Médicis, P. Leroux, Generalized Stirling Numbers. Convolution Formulae and p, q-Analogues, Canad. J. Math. 47 , ((1995):474-499.
From generalized binomial symbols to Beta and Alpha sequences. Mirek Majewski, Andrzej Nowicki, Papua New Guinea Journal of Mathematics, Computing and Education. 4Mirek Majewski and Andrzej Nowicki, From generalized binomial sym- bols to Beta and Alpha sequences, Papua New Guinea Journal of Math- ematics, Computing and Education, 4, (1998): 73-78.
On GCD-morphic sequences. M Dziemiańczuk, W Bajguz, arXiv:0802.1303v1IeJNART40v1] SunM. Dziemiańczuk, W.Bajguz, On GCD-morphic sequences,IeJNART: Volume (3), September (2009): 33-37. arXiv:0802.1303v1, [v1] Sun, 10 Feb 2008 05:03:40 GMT
Lucas Edouard, Théorie, des Fonctions Numériques Simplement Priodiques. Douglas Lind Fibonacci Association1Edouard LUCAS Théorie des Fonctions Numériques Simplement Pri- odiques, American Journal of Mathematics, Volume 1, (1878): 184-240 (Translated from the French by Sidney Kravitz , Edited by Douglas Lind Fibonacci Association 1969.
Generalizing the combinatorics of binomial coefficients via ℓ-nomials with corrections noted (thanks to Bruce Sagan) , Integers. Nicholas A Loehr, Carla D Savage, to appearNicholas A. Loehr, Carla D. Savage August 26, 2009 Generalizing the combinatorics of binomial coefficients via ℓ-nomials with corrections noted (thanks to Bruce Sagan) , Integers, to appear.
Fonten Georges Généralisation d'une formule connue. Nouvelles Annales de Mathématiques. 15112Fonten Georges Généralisation d'une formule connue, Nouvelles An- nales de Mathématiques (4) 15 (1915): 112
Extended Theory of Lucas functions. Derric Henry Lehmer, Ann. of Math. 2Derric Henry Lehmer , Extended Theory of Lucas functions, Ann. of Math. (2) 31 (1930): 418-448.
A calculus of sequences. Ward Morgan, Amer.J. Math. 58Ward Morgan, A calculus of sequences, Amer.J. Math. 58 (1936): 255- 266.
The product of Sequences with the common linear recursion formula of Order 2. D Jarden, T Motzkin, Riveon Lematimatica. 3Jarden D., Motzkin T.,The product of Sequences with the common linear recursion formula of Order 2, Riveon Lematimatica 3 , (1949): 25-27.
Generalized Binomial Coefficients. R F Torretoo, J A Fuchs, The Fibonacci Quarterly. 2Torretoo R. F. , Fuchs J. A. Generalized Binomial Coefficients, The Fibonacci Quarterly,vol. 2, (1964): 296-302 .
The Bracket Function and Fontené-Ward Generalized binomial Coefficients with Applications to Fibonomial Coefficients. H W Gould, The Fibonacci Quarterly. 7Gould H.W. The Bracket Function and Fontené-Ward Generalized bi- nomial Coefficients with Applications to Fibonomial Coefficients, The Fibonacci Quarterly vol.7, (1969): 23-40.
I R M A Strasbourg, 229/S-08 Actes de Seminaire LotharingienProceedings of the 11th Winter School on Abstract Analysis. Frolk, Zdenkthe 11th Winter School on Abstract AnalysisPalermoBernd Voigt A common generalization of binomial coefficientsBernd Voigt A common generalization of binomial coefficients, Stirling numbers and Gaussian coefficients Publ. I.R.M.A. Strasbourg, 1984, 229/S-08 Actes de Seminaire Lotharingien, p. 87-89. , In: Frolk, Zdenk (ed.): Proceedings of the 11th Winter School on Abstract Analysis. Circolo Matematico di Palermo, Palermo, 1984, pp. 339-359.
Interpolation and combinatorial functions. Luis Verde-Star, Studies in Applied Mathematics. 79Luis Verde-Star, Interpolation and combinatorial functions. Studies in Applied Mathematics, 79: (1988):65-92.
Determinant Paths and Plane Partitions. Ira M Gessel, X G Viennot, 24Preprint; see(10.3Ira M. Gessel, X. G. Viennot,Determinant Paths and Plane Partitions, (1989):-Preprint; see(10.3) page 24
The Power of a Prime that Divides a Generalized Binomial Coefficient. D E Knuth, H S Wilf, J. Reine Angev. Math. 396D. E. Knuth, H. S. Wilf, The Power of a Prime that Divides a General- ized Binomial Coefficient, J. Reine Angev. Math. 396 (1989) : 212-219.
White, p,q-Stirling Numbers and Set Partition Statistics. M Wachs, D , Journal of Combinatorial Theory, Series A. 56M. Wachs, D. White, p,q-Stirling Numbers and Set Partition Statistics, Journal of Combinatorial Theory, Series A 56, (1991): 27-46.
A (p, q)-Oscillator Realization of Two-parameter Quantum Algebras. R Chakrabarti, R Jagannathan, J. Phys. A: Math. Gen. 24711R. Chakrabarti and R. Jagannathan, A (p, q)-Oscillator Realization of Two-parameter Quantum Algebras, J. Phys. A: Math. Gen. 24,(1991): L711.
Normal ordering for deformed boson operators and operator-valued deformed Stirling numbers. J Katriel, M Kibler, J. Phys. A: Math. Gen. 25J.Katriel, M. Kibler, Normal ordering for deformed boson operators and operator-valued deformed Stirling numbers, J. Phys. A: Math. Gen. 25,(1992):2683-2691.
A unified combinatorial approach for q-(and p,q-)Stirling numbers. A De Medicis, P Leroux, J. Statist. Plann. Inference. 34A. de Medicis and P. Leroux, A unified combinatorial approach for q-(and p,q-)Stirling numbers, J. Statist. Plann. Inference 34 (1993): 89-105.
Wachs: sigma -Restricted Growth Functions and p,q-Stirling Numbers. Michelle L , J. Comb. Theory, Ser. A. 682Michelle L. Wachs: sigma -Restricted Growth Functions and p,q-Stirling Numbers. J. Comb. Theory, Ser. A 68(2), (1994):470-480.
SeungKyung ParkP-Partitions and q-Stirling Numbers. Journal of Combinatorial Theory, Series A. 68SeungKyung ParkP-Partitions and q-Stirling Numbers , Journal of Combinatorial Theory, Series A, 68 , (1994): 33-52.
Generalized Stirling Numbers. Convolution Formulae and p, q-Analogues. A De Médicis, P Leroux, Canad. J. Math. 47A. De Médicis, P. Leroux, Generalized Stirling Numbers. Convolution Formulae and p, q-Analogues, Canad. J. Math. 47 , ((1995):474-499.
From generalized binomial symbols to Beta and Alpha sequences. Mirek Majewski, Andrzej Nowicki, Papua New Guinea Journal of Mathematics, Computing and Education. 4Mirek Majewski and Andrzej Nowicki, From generalized binomial sym- bols to Beta and Alpha sequences, Papua New Guinea Journal of Math- ematics, Computing and Education, 4, (1998): 73-78.
On GCD-morphic sequences. M Dziemiańczuk, W Bajguz, arXiv:0802.1303v1IeJNART40v1] SunM. Dziemiańczuk, W.Bajguz, On GCD-morphic sequences,IeJNART: Volume (3), September (2009): 33-37. arXiv:0802.1303v1, [v1] Sun, 10 Feb 2008 05:03:40 GMT
Kwaniewski On generalization of Lucas symmetric functions and Tchebycheff polynomials. W Bajguz, A K , Integral Transforms and Special Functions. 83W.Bajguz, A.K.Kwaniewski On generalization of Lucas symmetric functions and Tchebycheff polynomials ,Integral Transforms and Special Functions Vol.8 Numbers 3-4, (1999): 165-174 .
. Alexandru Ioan Lupas, Lucas Fibonacci, Polynomials, Octogon, Math. Magazine. 71Alexandru Ioan Lupas, A guide of Fibonacci and Lucas polynomials, Octogon, Math. Magazine, vol.7 , No 1, (1999): 2-12.
W Bajguz, On generalized of Tchebycheff polynomials. 9W. Bajguz, On generalized of Tchebycheff polynomials, Integral Trans- forms and Special Functions, Vol. 9, No. 2 (2000), pp. 91-98
Residues of Generalized Binomial Coefficients Modulo a Product of Primes, senior thesis. Eric R Tou, Spring; Gustavus Adolphus College, St. Peter, MNDepartment of Mathematics and Computer ScienceEric R. Tou, Residues of Generalized Binomial Coefficients Modulo a Product of Primes, senior thesis, Spring (2002):, Department of Math- ematics and Computer Science, Gustavus Adolphus College, St. Peter, MN, http : //sites.google.com/site/erikrtou/home , promotor John M. Holte.
Q-analogues and p, q-analogues of rook numbers and hit numbers and their extensions. Karen Sue Briggs, University, of California, San DiegoPh.D. thesisKaren Sue Briggs, Q-analogues and p, q-analogues of rook numbers and hit numbers and their extensions, Ph.D. thesis, University, of California, San Diego (2003).
A p,q-analogue of a Formula of Frobenius. Karen Sue Briggs, J B Remmel, Electron. J. Comb. 109Karen Sue Briggs and J. B. Remmel, A p,q-analogue of a Formula of Frobenius, Electron. J. Comb. 10 (2003), No R9.
Rook Theory, Generalized Stirling Numbers and (p,q)-Analogues. J B Remmel, Michelle L Wachs, The Electronic Journal of Combinatorics. 11R 84J. B. Remmel and Michelle L. Wachs, Rook Theory, Generalized Stir- ling Numbers and (p,q)-Analogues , The Electronic Journal of Combi- natorics 11 , (Nov 22, 2004), No R 84.
A Rook Theory Model for the Generalized p,q-Stirling Numbers of the First and Second Kind, Formal Power Series and Algebraic Combinatorics. Karen Sue Briggs, Series Formélles et Combinatoire Algébrique. San Diego, CaliforniaKaren Sue Briggs, A Rook Theory Model for the Generalized p,q-Stirling Numbers of the First and Second Kind, Formal Power Series and Alge- braic Combinatorics, Series Formélles et Combinatoire Algébrique San Diego, California 2006
. Roberto B Corcino, On P,Q-Binomial, Coefficients, INTEGERS: ELECTRONIC JOURNAL OF COMBINATORIAL NUMBER THE-ORY. 829Roberto B. Corcino, ON p,q-Binomial Coefficients, INTEGERS: ELECTRONIC JOURNAL OF COMBINATORIAL NUMBER THE- ORY 8 ,(2008), No A29.
M Dziemiańczuk, arXiv:0908.3248v1[v118:44 GMTGeneralization of Fibonomial Coefficients. 13M. Dziemiańczuk, Generalization of Fibonomial Coefficients, arXiv:0908.3248v1 [v1] Sat, 22 Aug (2009), 13:18:44 GMT
Combinatorial Interpretation of Binomial Coefficient Analogues Related to Lucas Sequences. Bruce E Sagan, Carla D Savage, INTEGERS: ELECTRONIC JOURNAL OF COMBINATORIAL NUMBER THE-ORY. to appearBruce E. Sagan, Carla D. Savage, Combinatorial Interpretation of Bi- nomial Coefficient Analogues Related to Lucas Sequences, INTEGERS: ELECTRONIC JOURNAL OF COMBINATORIAL NUMBER THE- ORY (2010), (2010), to appear
M Dziemiańczuk, arXiv:0802.3473v154:09 GMTReport On Cobweb Posets' Tiling Problem. 55v2] ThuM. Dziemiańczuk, Report On Cobweb Posets' Tiling Problem, arXiv:0802.3473v1 [v1] Sun, Sun, 24 Feb 2008 00:54:09 GMT, [v2] Thu, 2 Apr 2009 11:05:55 GMT
From MathWorld-A Wolfram Web Resource. Eric W Weisstein, Lucas Sequence, Weisstein, Eric W. Lucas Sequence. From MathWorld-A Wolfram Web Resource. http : //mathworld.wolf ram.com/LucasSequence.html.
. A F Horadam, A Generalized Fibonacci Sequence, American. Mathematical. Monthly. 68A. F. Horadam, A Generalized Fibonacci Sequence, American. Mathe- matical. Monthly 68 (1961):455-59.
Basic properties of a certain generalized sequence of numbers. A Horadam, The Fibonacci Quart. 3A.F Horadam, Basic properties of a certain generalized sequence of numbers, The Fibonacci Quart., 3 (1965), 161-176.
Generating functions for powers of a certain generalized sequence of numbers. A Horadam, Duke Math. J. 32A.F Horadam, Generating functions for powers of a certain generalized sequence of numbers, Duke Math. J., 32 (1965), 437-446.
M Elmore, Fibonacci functions, Fibonacci Quart. 4M. Elmore, Fibonacci functions, Fibonacci Quart. 4 (1967): 371-382.
Special properties of the sequence W (a, b; p, q), Fibonacci Quarterly. A F Horadam, 5A.F. Horadam, Special properties of the sequence W (a, b; p, q), Fi- bonacci Quarterly 5, 424-434 (1967).
|
[] |
[
"SOME TIGHT CONTACT FOLIATIONS CAN BE APPROXIMATED BY OVERTWISTED ONEŚ",
"SOME TIGHT CONTACT FOLIATIONS CAN BE APPROXIMATED BY OVERTWISTED ONEŚ"
] |
[
"Alvaro Del Pino "
] |
[] |
[] |
A contact foliation is a foliation endowed with a leafwise contact structure. In this remark we explain a turbulisation procedure that allows us to prove that tightness is not a homotopy invariant property for contact foliations.• the leaves of (N × S 1 , F 0 , ξ 0 = E ∩ F 0 ) are tight, • the leaves of (N × S 1 , F 1 , ξ 1 = E ∩ F 1 ) are overtwisted.
|
10.1007/s00013-017-1139-8
|
[
"https://arxiv.org/pdf/1709.03773v1.pdf"
] | 58,910,858 |
1709.03773
|
d2df7040edc9f249e6b0ab6f74b14eac16709991
|
SOME TIGHT CONTACT FOLIATIONS CAN BE APPROXIMATED BY OVERTWISTED ONEŚ
Alvaro Del Pino
SOME TIGHT CONTACT FOLIATIONS CAN BE APPROXIMATED BY OVERTWISTED ONEŚ
A contact foliation is a foliation endowed with a leafwise contact structure. In this remark we explain a turbulisation procedure that allows us to prove that tightness is not a homotopy invariant property for contact foliations.• the leaves of (N × S 1 , F 0 , ξ 0 = E ∩ F 0 ) are tight, • the leaves of (N × S 1 , F 1 , ξ 1 = E ∩ F 1 ) are overtwisted.
Statement of the results
Let M 2n+1+q be a closed smooth manifold. Let F 2n+1 be a smooth codimension-q foliation on M . We say that (M, F) can be endowed with the structure of a contact foliation if there is a hyperplane field ξ 2n ⊂ F such that, for every leaf L of F, (L, ξ| L ) is a contact manifold.
In [CPP,Theorem 1.1] it was shown that (M 4 , F 3 ) admits a leafwise contact structure if there exists a 2-plane field tangent to F. This was later extended in [BEM] to foliations of any dimension, any codimension, and admitting a leafwise formal contact structure. In both cases, the foliations produced have all leaves overtwisted; therefore, the meaningful question is whether one can construct and classify contact foliations with tight leaves.
Instability of tightness under homotopies. If the foliation F is fixed, the parametric Moser trick [CPP,Lemma 2.8] implies that any two homotopic contact foliations (F, ξ 0 ) and (F, ξ 1 ) are actually isotopic by a flow tangent to the leaves. In particular, if F is fixed, tightness is preserved under homotopies. Our main result states that this is not the case anymore if F is allowed to move: Theorem 1. Let N be a closed orientable 3-manifold. There is a path of contact foliations (N × S 1 , F s , ξ s ), s ∈ [0, 1], satisfying:
• the leaves of (N × S 1 , F 0 , ξ 0 ) are tight, • the leaves of (N × S 1 , F s , ξ s ) are overtwisted, for all s > 0.
Foliations transverse to even-contact structures. Given a codimension-1 contact foliation (M 2n+2 , F 2n+1 , ξ 2n ) and a line field X transverse to F, it is immediate that the codimension-1 distribution E = ξ ⊕ X is maximally non-integrable. Such distributions are called even-contact structures.
The kernel or characteristic foliation of E is a line field W ⊂ E uniquely defined by the expression [W, E] ⊂ E. Given an even-contact structure E, any codimension-1 foliation transverse to its kernel is imprinted with a leafwise contact structure. It is natural to study the moduli of contact foliations arising in this manner from E. Our second result states: Theorem 2. Let N be a closed orientable 3-manifold. There are foliations F 0 and F 1 and an evencontact structure E such that:
Proof. During the proof of Theorem 1 we shall see that the contact foliations (F s , ξ s ), s ∈ [0, 1], are imprinted by the same even-contact structure E.
This result is in line with the theorem of McDuff [McD] stating that even-contact structures satisfy the complete h-principle: one should expect this flexibility to manifest in other ways.
Acknowledgements. This note developed during a visit of the author to V. Ginzburg in UCSC and it was V. Ginzburg that posed the question of whether tightness could potentially be stable under deformations. In this occasion the question is certainly more clever than the small observation that provides the (negative) answer. The author is also grateful to F. Presas for reading this note and providing valuable suggestions. The author is supported by the grant NWO Vici Grant no. 639.033.312.
Turbulisation of contact foliations
We will now explain how to turbulise a contact foliation along a loop of legendrian knots.
2.1. Local model around a loop of legendrian knots. Let (N, ξ) be a contact 3-manifold. Any legendrian knot K ⊂ (N, ξ) has a tubular neighbourhood with the following normal form:
(Op(K) ⊂ N, ξ) ∼ = (D 2 × S 1 , ξ leg = ker(cos(z)dx + sin(z)dy)),
where (x, y, z) are the coordinates in D 2 × S 1 . A convenient way of thinking about the model is that it is simply the space of oriented contact elements of the disc. In particular, any diffeomorphism φ of D 2 relative to the boundary induces a contactomorphism C(φ) of the model, also relative to the boundary, as follows:
C(φ)(x, y, z) = (φ(x, y), dφ(z)).
Here we think of z as an oriented line in T (x,y) D 2 and we make dφ act by pushforward.
We can now define a contact foliation
(M leg = D 2 × S 1 × S 1 , F leg = t D 2 × S 1 × {t}, ξ leg ),
which is simply the trivial bundle over S 1 with fibre the local model we just described. Given a contact foliation (M, F, ξ) and an embedded torus K : S 1 × S 1 → M such that K t = K(t, −) is a legendrian knot on a leaf of F, it follows that there is an embedding (M leg , F leg , ξ leg ) → (M, F, ξ) providing a local model around K. It is sufficient to describe the turbulisation process in (M leg , F leg , ξ leg ).
2.2. Fixing the even-contact structure. Our aim now is to fix an even-contact structure E leg in (M leg , F leg ) imprinting ξ leg . The reason why we do not simply choose ξ leg ⊕ ∂ t is that E leg should allow us to turbulise.
Take polar coordinates (r, θ) on D². Construct a diffeomorphism φ : D² → D² such that:
• φ is of the form φ(r, θ) = (f(r), θ) for some function f : [0, 1] → [0, 1],
• f restricts to the identity in the complement of [1/2, 2/3],
• f compresses the interval [1/2, 2/3] towards the point 1/2.
See Figure 1 for a depiction of the graph of f (and the numerical sketch below for one explicit choice). Fix a vector field h(r)∂_r, with h(r) < 0 in the region r ∈ (1/2, 2/3) and h(r) = 0 everywhere else, whose time-1 map is the function f(r). There exists a unique vector field X in D² × S¹ satisfying:
• X is a contact vector field for the structure ξ_leg,
• X is a lift of h(r)∂_r.
In particular, X has a negative radial component in the region r ∈ (1/2, 2/3).
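The profile h is not canonical; any non-positive bump supported in (1/2, 2/3) will do. The following minimal Python sketch (ours; the specific bump is an arbitrary choice satisfying the conditions above) builds one such h and integrates the flow of h(r)∂_r for unit time, displaying the resulting compression map f.

```python
import math

def h(r):
    """A smooth non-positive bump supported in (1/2, 2/3); any such choice works."""
    a, b = 0.5, 2.0 / 3.0
    if r <= a or r >= b:
        return 0.0
    s = (r - a) / (b - a)                            # rescale (a, b) to (0, 1)
    return -5.0 * math.exp(-1.0 / (s * (1.0 - s)))   # strictly negative inside, flat to all orders at the ends

def f(r0, steps=20000):
    """Time-1 map of the flow of h(r) d/dr, computed with a simple RK4 integrator."""
    r, dt = r0, 1.0 / steps
    for _ in range(steps):
        k1 = h(r)
        k2 = h(r + 0.5 * dt * k1)
        k3 = h(r + 0.5 * dt * k2)
        k4 = h(r + dt * k3)
        r += dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    return r

for r0 in (0.3, 0.5, 0.55, 0.6, 0.65, 2.0 / 3.0, 0.9):
    print(f"f({r0:.3f}) = {f(r0):.4f}")   # identity outside [1/2, 2/3], pushed towards 1/2 inside
```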
By construction the 3-distribution E leg (x, y, z, t) = ξ leg ⊕ ∂ t + X(x, y, z) is an even-contact structure whose kernel is W leg = ∂ t + X(x, y, z) and whose imprint on (M leg , F leg ) is precisely ξ leg . Similarly, the foliation F leg is simply the pullback of the line field F 1 = ∂ r . F 1 and L are transverse to one another. We can find a homotopy of line fields (F s ) s∈[0,1] in S satisfying:
• F 1 = ∂ r , • F s is transverse to L, for all s, • F s is isotopic to F 1 for every s > 0,
• F 0 is as in the last frame of Figure 2: it has a closed orbit bounding a (half) Reeb component.
This path of line fields lifts to a path of codimension-1 foliations F leg,s in M leg . F leg,1 is simply F leg and F leg,s is isotopic to it for every positive s. F leg,0 has a single compact leaf, which is diffeomorphic to T 3 ; this leaf bounds a Reeb component whose interior leaves are diffeomorphic to R 2 × S 1 . Transversality of L with respect to F s implies that E leg imprints a contact foliation ξ leg,s on each F leg,s . Proof. The open leaves, as contact manifolds, are open subsets of the standard model (D 2 × S 1 , ξ leg ), which is tight. The compact T 3 leaf is obtained by glueing the boundary components of a neighbourhood of the convex T 2 = ∂(D 2 × S 1 , ξ leg ); it is contactomorphic to the space of oriented contact elements of the 2-torus and therefore tight.
Let us package this construction:
Definition 4. Let (M, F, ξ) be a contact foliation. Suppose there is a region U ⊂ M such that (U, F, ξ) is diffeomorphic to the model (M leg , F leg , ξ leg ). We say that the homotopy (M, F s , ξ s ) s∈[0,1] given by the procedure just described is the turbulisation of (M, F, ξ) along U .
Remark 5. There is an alternate way to describe the turbulisation process. (M leg , F leg , ξ leg ) is simply the space of oriented contact elements of the foliation (D 2 × S 1 , t∈S 1 D 2 × {t}): the foliation of the solid torus by its disc slices. Then, the turbulisation process upstairs amounts to turbulising (D 2 ×S 1 , t∈S 1 D 2 ×{t}) and applying the contact elements construction. In particular, this highlights the fact that indeed the resulting leaves are tight. This construction also works for higher dimensional contact foliations.
Applications
3.1. Proof of Theorems 1 and 2. In [Dy] K. Dymara proved that there are legendrian links in overtwisted contact manifolds that intersect every overtwisted disc; that is, their complement is tight. Such a link is said to be non-loose. Let (N, ξ) be an overtwisted contact manifold with K a non-loose legendrian link. Consider the contact foliation
(M, F 1 , ξ 1 ) = (N × S 1 , t∈S 1 N × {t}, ξ),
where we abuse notation and write ξ for the leafwise contact structure lifting (N, ξ). Take U to be the tubular neighbourhood of K × S 1 ⊂ M and apply the turbulisation process to (M, F 1 , ξ 1 ) (on each component) to yield a path of contact foliations (M, F s , ξ s ) s∈[0,1] . It is immediate that (M, F s , ξ s ) is diffeomorphic to (M, F 1 , ξ 1 ) if s is positive, because the foliations themselves are diffeomorphic and Gray's stability applies. In particular, the leaves of all of them are overtwisted. We claim that (M, F 0 , ξ 0 ) has all leaves tight. This is clear for the leaves in the Reeb components, as shown in Lemma 3. Similarly, the leaves outside of the Reeb components are tight because a neighbourhood of the non-loose legendrian link has been removed. We conclude by recalling that every closed overtwisted 3-manifold admits a non-loose legendrian link: the legendrian push-off of the binding of a supporting open book [EVV].
Remark 6. The foliation (M, F 1 ) is taut, since it admits a transverse S 1 . As pointed out by V. Shende during a talk of the author: we are trading tautness of the foliation to achieve tightness of the leaves.
A more general statement.
A slightly more involved argument shows:
Theorem 7. Let M be a 4-manifold. Suppose that M admits a contact foliation (F, ξ) with tight leaves. Then M admits a contact foliation (F 0 , ξ 0 ) with tight leaves that can be approximated by contact foliations (F s , ξ s ) s∈(0,1] with overtwisted leaves.
Proof. Find an embedded curve γ : S 1 → M transverse to F. This provides a S 1 -family of Darboux balls (D 3 , ξ std ) along γ:
(M std , F std , ξ std ) = (D 3 × S 1 , t∈S 1 D 3 × {t}, ξ std ) → (M, F, ξ)
Choose a legendrian knot K ⊂ (D 3 , ξ std ) and lift it to K × S 1 ⊂ (M std , F std , ξ std ) ⊂ (M, F, ξ). Turbulisation in a neighbourhood of K × S 1 yields a contact foliation (M, F , ξ ). The leaves of (M, F , ξ ) are still tight.
The interior of the Reeb component we just inserted is diffeomorphic, as a contact foliation, to the model (M leg , F leg , ξ leg ). Given a homotopically essential transverse knot η ⊂ (D 2 × S 1 , ξ leg ) we may perform a Lutz twist along η to yield an overtwisted contact structure ξ OT in D 2 × S 1 . The resulting local model along η reads:
(D 2 × S 1 , ξ Lutz = ker(f (r)dz + g(r)dθ)) where (r, θ, z) are the coordinates in a neighbourhood of η = {r = 0} and r → (f (r), g(r))/|f, g| is an immersion of [0, 1] onto S 1 that is injective for r ∈ [0, 1 − δ) and satisfies:
$$\begin{cases}(f(r), g(r)) = (1, r^2) & \text{if } r \in [0, \delta],\\ f(r) = 0 & \text{if } r \in \{1/4, 3/4\},\\ g(r) = 0 & \text{if } r \in \{0, 1/2, 1-\delta\},\\ (f(r), g(r)) = (1, (r-1+\delta)^2) & \text{if } r \in [1-\delta/2, 1].\end{cases}$$
The Lutz twist can be introduced parametrically [CPP] to replace (M leg , F leg , ξ leg ) ⊂ (M, F , ξ ) by (M leg , F leg , ξ OT ) in a t-invariant fashion. This produces a new contact foliation (M, F 1 , ξ 1 ) from (M, F , ξ ).
Set K (z) = (1/4, 0, z) ∈ (D 2 × S 1 , ξ Lutz ) ⊂ (D 2 × S 1 , ξ OT ); we shall prove that it is non-loose. The quasi-prelagrangian tori
{r = r 0 > 1/4} ⊂ (D 2 × S 1 , ξ Lutz ) ⊂ (D 2 × S 1 , ξ OT )
are incompressible in (D 2 × S 1 , ξ OT ) \ K due to our choice of η and K . We invoke [Co,Théorème 4.2]: (D 2 × S 1 , ξ OT ) \ K is universally tight if and only if it is universally tight after removing any finite collection of such tori. Choose the tori at radii r = 1/2, 1 − δ. The reader can check that the pieces {r < 1/2}, {1/2 < r < 1 − δ} have standard tight R 3 as their universal cover. The remaining piece, which intersects (D 2 × S 1 , ξ Lutz ) in {r > 1 − δ}, is contactomorphic to the complement of η in (D 2 × S 1 , ξ leg ) and is therefore tight as well.
We turbulise in a neighbourhood of
K × S 1 ⊂ (M leg , F leg , ξ OT ) ⊂ (M, F 1 , ξ 1 )
to produce the claimed family (M, F s , ξ s ) s∈[0,1] and conclude the proof.
The reader can check that the resulting foliation (M, F 0 , ξ 0 ) is in the same formal class as (M, F, ξ), since F 0 is obtained from F by turbulising twice and the even-contact structures inducing ξ and ξ 0 differ from one another by a parametric (full) Lutz-twist.
A natural question to pose in light of Theorem 7 is whether any M 4 admitting a formal contact foliation admits a foliation with tight leaves; the fundamental geometric issue towards achieving this is that it seems extremely delicate to ensure that no overtwisted disc is really present. For Theorem 7 the main idea was to introduce the overtwisted discs in a controlled fashion so that they could later be destroyed.
Figure 1. The function f.
2.3. Turbulisation. Consider the surface S = [0, 1] × S¹ with coordinates (r, t); M_leg projects onto S in the obvious way. Under this projection the kernel W_leg is mapped to the line field L = ∂_t + h(r)∂_r.
Lemma 3. The contact foliations in the homotopy (M_leg, F_leg,s, ξ_leg,s), s ∈ [0, 1], have all leaves tight.
Figure 2. The solid lines represent (the foliations induced by) the path of line fields (F_s)_{s∈[0,1]}. The dotted ones with arrows on top represent the line field L.
Existence and classification of overtwisted contact structures in all dimensions. M S Borman, Y Eliashberg, E Murphy, Acta Math. 215M.S. Borman, Y. Eliashberg, and E. Murphy. Existence and classification of overtwisted contact structures in all dimensions. Acta Math. 215.2 (2015), pp. 281-361.
h-Principle for Contact Foliations. R Casals, A Pino, F Presas, Int. Math. Res. Not. 20R. Casals, A. del Pino, and F Presas. h-Principle for Contact Foliations. Int. Math. Res. Not. 20 (2015), pp. 10176-10207.
Recollement des variétés de contact tendues. V Colin, Bull. Soc. Math. France. 127V. Colin. Recollement des variétés de contact tendues. Bull. Soc. Math. France 127 (1999), pp. 43-96.
Legendrian knots in overtwisted contact structures on S 3. K Dymara, Ann. Global Anal. Geom. 19K. Dymara. Legendrian knots in overtwisted contact structures on S 3 . Ann. Global Anal. Geom. 19.3 (2001), pp. 293-305.
Torsion and open book decompositions. J Etnyre, D V Vela-Vick, Int. Math. Res. Not. 22J. Etnyre and D.V. Vela-Vick. Torsion and open book decompositions. Int. Math. Res. Not. 22 (2010), pp. 4385-4398.
Applications of convex integration to symplectic and contact geometry. D Mcduff, Ann. Inst. Fourier. 37D. McDuff. Applications of convex integration to symplectic and contact geometry. Ann. Inst. Fourier 37 (1987), pp. 107-133.
. Budapestlaan. 6Utrecht University, Department of Mathematicsmail address: [email protected] University, Department of Mathematics, Budapestlaan 6, 3584 Utrecht, The Netherlands E-mail address: [email protected]
|
[] |
[
"On Generation of Photons Carrying Orbital Angular Momentum in the Helical Undulator",
"On Generation of Photons Carrying Orbital Angular Momentum in the Helical Undulator"
] |
[
"A Afanasev \nDepartment of Physics\nGeorge Washington University\n20052WashingtonDCUSA\n",
"A Mikhailichenko \nCLASSE\n14853IthacaNYUSA\n"
] |
[
"Department of Physics\nGeorge Washington University\n20052WashingtonDCUSA",
"CLASSE\n14853IthacaNYUSA"
] |
[] |
We analyze properties of electromagnetic radiation in helical undulators with a particular emphasis on the orbital angular momentum of the radiated photons. We demonstrate that all harmonics higher than the first one radiated in a helical undulator carry an orbital angular momentum. We discuss some possible applications of this phenomenon and the ways of effective generation of these photons in a helical undulator. We call for review of results of experiments performed where the higher harmonics radiated in a helical undulator might be involved.
| null |
[
"https://arxiv.org/pdf/1109.1603v2.pdf"
] | 119,297,502 |
1109.1603
|
d35f6acf9db715106ce214cfeb4f1043130019dc
|
On Generation of Photons Carrying Orbital Angular Momentum in the Helical Undulator
11/ 09/2011
A Afanasev
Department of Physics
George Washington University
20052WashingtonDCUSA
A Mikhailichenko
CLASSE
14853IthacaNYUSA
On Generation of Photons Carrying Orbital Angular Momentum in the Helical Undulator
11/09/2011
We analyze properties of electromagnetic radiation in helical undulators with a particular emphasis on the orbital angular momentum of the radiated photons. We demonstrate that all harmonics higher than the first one radiated in a helical undulator carry an orbital angular momentum. We discuss some possible applications of this phenomenon and the ways of effective generation of these photons in a helical undulator. We call for review of results of experiments performed where the higher harmonics radiated in a helical undulator might be involved.
OVERVIEW
Photons are powerful probes of the structure of matter. Depending on their wavelength, photons give us insight about phenomena at widely different scales, ranging from astrophysics to physics of elementary particles.
Motivated by the needs of nuclear and particle physics, here we will focus on angular momentum properties of electromagnetic radiation. The fact that circularly polarized plane-wave photons carry an angular momentum of ħ was demonstrated in a classical experiment [1]. Description of photons in terms of spherical waves and multipole expansion is also well established [2], [3] for radiative processes at atomic scales, with photons characterized by multiple units of angular momentum. Note, however, that separation of spin and orbital angular momentum of a photon cannot be done in a gaugeinvariant way [4].
About 20 years ago, Allen and collaborators re-discovered 1 that a special type of beams (called Laguerre-Gaussian), predicted as non-plane wave solutions of Maxwell equations, can carry large angular momentum associated with their helical wave fronts [5]. When quantized, such beams can be described in terms of "twisted photons" [6]. Orbital angular momentum (OAM) light beams found numerous applications in optics, communications, biophysics, mechanics of micro-particles, and probes of Bose-Einstein condensates [7]. It was demonstrated [8] that in photo ionization of atoms by twisted photons the new selection rules apply that involve more than one unit of angular momentum.
Other possible application might be in the photo-cathodes for generation of highly polarized beams of electron with high brightness, where manipulation with the energy range of the photons carrying OAM might help in closing undesirable channels in emitting of photoelectrons from the levels prohibited by angular-momentum conservation laws. Recently, it was pointed out that twisted photons can be produced in MeV and GeV range via the mechanism of Compton backscattering [9].
Beams of twisted photons with pre-set angular momentum may emerge as a new and productive tool in nuclear and particle physics with electromagnetic probes that would help to control angular momentum of particles or quantum states generated in photoproduction processes. Applications may include: (a) production of high-spin states in the laser-based searches for dark matter particle candidates [10], [11]; (b) meson and baryon spectroscopy of high-spin states; (c) possible formation of new high-spin particles at high energies; (d) studies parity-violating effects in atomic transitions [12], [13]; e) searches for axion-like particles through analyzing polarization properties of light passing through strong magnetic fields [14].
This paper deals with a method of generation of OAM radiation based on the use of a helical undulator. Undulator radiation (UR) is a valuable source of radiation in the energy range indicated. A lot of publications are dedicated to UR properties [15]-[19]. One of the features of UR is that the radiation is emitted in harmonics [15],

$$\omega_n=\frac{n\,\Omega}{1-\bar\beta\cos\vartheta}\,,\qquad \Omega=\bar\beta c/D_u\,,\qquad D_u=\lambda_u/2\pi\,,$$

where n = 1, 2, 3, … enumerates the harmonics of the basic frequency Ω, λ_u = 2πD_u is the spatial period of the undulator field, β̄ = v̄/c, v̄ is the particle's average longitudinal velocity in the undulator, and ϑ is the azimuthal angle with respect to the observer. Unfortunately, existing publications did not clarify the question about the OAM carried by the photons radiated at harmonics higher than the first one, corresponding to n = 1. This is especially important for helical undulators, where a charged particle emits circularly-polarized photons. Although for a photon the orbital part of the angular momentum and the spin part (=1) cannot be separated, the peculiarities of the generation of radiation in a helical undulator allow one to distinguish the part of the radiation which carries away the angular momentum of the electron in the undulator; hence it is possible to distinguish the photons with OAM. As we will demonstrate, all harmonics with n > 1 carry OAM. The physics explanation of why the electron in a helical undulator generates radiation with OAM was not presented clearly in the literature.
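As a rough orientation, the harmonic photon energies implied by the relation above can be evaluated numerically. The following sketch assumes the Doppler-type relation reconstructed above together with the usual ultrarelativistic estimate β̄ ≈ 1 − (1+K²)/2γ²; all beam and undulator parameters are illustrative and are not taken from this paper.

```python
import numpy as np

# Illustrative sketch (parameters assumed, not taken from the text): photon energies
# of the first few harmonics from omega_n = n*Omega / (1 - beta_bar*cos(theta)).
gamma = 1.0e4          # electron Lorentz factor
K = 0.7                # undulator (deflection) parameter
lambda_u = 10.0e-3     # undulator period lambda_u [m]
c = 2.99792458e8       # [m/s]
hbar = 1.0545718e-34   # [J s]
eV = 1.602176634e-19   # [J]

D_u = lambda_u / (2.0 * np.pi)                    # reduced period D_u = lambda_u / 2 pi
beta_bar = 1.0 - (1.0 + K**2) / (2.0 * gamma**2)  # average longitudinal velocity / c (approximation)
Omega = beta_bar * c / D_u                        # basic frequency Omega = beta_bar*c/D_u

theta = 0.5 / gamma                               # observation angle [rad]
for n in (1, 2, 3, 5):
    omega_n = n * Omega / (1.0 - beta_bar * np.cos(theta))
    print(f"n = {n}:  photon energy ~ {hbar * omega_n / eV / 1e3:.2f} keV")
```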
In this paper we analyze undulator radiation in a helical undulator for its ability to generate photons with OAM, and arrive at a simple and clear explanation of why all harmonics other than the lowest one (n = 1) carry OAM. This fact was not taken into consideration in the past in a number of experiments in atomic and nuclear physics performed with the higher harmonics from helical undulators; see Ref. [20] for a few examples. Since spin polarization effects may be different if OAM photons are involved, a re-analysis of the relevant experiments may be in order.
SOURCE OF ANGULAR MOMENTA
Angular momentum of radiation was described first in [22]. In [23], the concept of angular momentum of radiation was developed further. The most general description of the angular momentum was presented in [24]; see also [25]-[27], [30]. The angular momentum carried by EM radiation is defined as
$$\vec M=\frac{1}{c^2}\int \vec r\times(\vec E\times\vec H)\,dV=\frac{1}{c^2}\int\left[\vec E\,(\vec r\cdot\vec H)-\vec H\,(\vec r\cdot\vec E)\right]dV\,,\qquad(1)$$
where the integration is over the entire volume, and the radius vector $\vec r$ is directed from the rotation axis to the point in the volume where the EM fields are located, see Fig. 1. For the component along the axis this gives

$$M_z=\frac{1}{c^2}\int\left[E_z\,(\vec r\cdot\vec H)-H_z\,(\vec r\cdot\vec E)\right]dV\,.\qquad(2)$$
The conditions for the appearance of a longitudinal component of light are described in Refs. [24], [27]. To explain the appearance of the longitudinal component, the authors of Ref. [24] considered an EM wave with restricted transverse dimensions, demonstrating that the longitudinal component is associated with the derivative of the envelope function of the EM field in the transverse direction.
For a helically-polarized EM wave there is no necessity for this, however, as the longitudinal component is present in this type of wave intrinsically once the observation point shifts off the axis. The easiest way to prove this is to consider a time-dependent periodic EM field of general form. Without loss of generality, this can be demonstrated with a TM field (the electric and magnetic photons correspond to the representation of the field as TM and TE waves, respectively). This type of field can be defined by the equations
$$(\nabla\times\vec E)_z=\frac{\partial E_y}{\partial x}-\frac{\partial E_x}{\partial y}=-\frac{\partial B_z}{\partial t}=0\,.\qquad(3)$$
The components of the electromagnetic field can be represented as follows [31],
$$E\equiv E_x-iE_y=\frac{\partial W}{\partial u}+\frac{\partial \overline W}{\partial\bar u}\,,\qquad
E_z=\Bigl(\frac{\partial^2}{\partial z^2}-\frac{1}{c^2}\frac{\partial^2}{\partial t^2}\Bigr)\!\int\! W(u,\bar u,z,t)\,dz\,,\qquad
B_z=0\,,\qquad B\equiv B_x-iB_y=i\,\frac{\partial}{\partial t}\!\int\! E(u,\bar u,z,t)\,dz\,,\qquad(4)$$
where $u=x+iy$, $\bar u=x-iy$, $i\equiv\sqrt{-1}$, and z is a longitudinal coordinate,

$$\frac{\partial}{\partial u}\equiv\frac{1}{2}\Bigl(\frac{\partial}{\partial x}-i\frac{\partial}{\partial y}\Bigr)\,,\qquad \frac{\partial}{\partial \bar u}\equiv\frac{1}{2}\Bigl(\frac{\partial}{\partial x}+i\frac{\partial}{\partial y}\Bigr)\,.\qquad(5)$$
With the definition (5), the Laplacian can be expressed as follows:
$$\nabla^2\equiv\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}+\frac{\partial^2}{\partial z^2}=4\,\frac{\partial^2}{\partial u\,\partial\bar u}+\frac{\partial^2}{\partial z^2}\,.\qquad(6)$$

The complex potential $W(x,y,z,t)\equiv W(u,\bar u,z,t)$ satisfies the equation

$$\Bigl(4\,\frac{\partial^2}{\partial u\,\partial\bar u}+\frac{\partial^2}{\partial z^2}-\frac{1}{c^2}\frac{\partial^2}{\partial t^2}\Bigr)W(u,\bar u,z,t)=0\,.\qquad(7)$$
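Identity (6) is easy to verify symbolically. The following sketch uses sympy (a tooling choice, not part of the original text) to check that 4∂²/∂u∂ū reproduces the transverse Laplacian for an arbitrary test function.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = sp.exp(x) * sp.sin(y) + x**3 * y - sp.cos(x * y)   # arbitrary smooth test function

# Wirtinger-type derivatives of Eq. (5):
#   d/du    = (1/2)(d/dx - i d/dy),   d/dubar = (1/2)(d/dx + i d/dy)
d_du    = lambda g: (sp.diff(g, x) - sp.I * sp.diff(g, y)) / 2
d_dubar = lambda g: (sp.diff(g, x) + sp.I * sp.diff(g, y)) / 2

lhs = sp.expand(4 * d_dubar(d_du(f)))                  # 4 * d^2 f / (du dubar)
rhs = sp.expand(sp.diff(f, x, 2) + sp.diff(f, y, 2))   # transverse Laplacian of f
print(sp.simplify(lhs - rhs))                          # prints 0, confirming Eq. (6)
```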
By introducing the potential $U$ through $W(u,\bar u,z,t)=\partial U/\partial z$, the longitudinal component of the electric field from (4) can be represented as

$$E_z=\Bigl(\frac{\partial^2}{\partial z^2}-\frac{1}{c^2}\frac{\partial^2}{\partial t^2}\Bigr)U(u,\bar u,z,t)=-4\,\frac{\partial^2 U}{\partial u\,\partial\bar u}\,.\qquad(8)$$
From the last expression one can see that any plane electromagnetic wave propagating in the z-direction, $U(u,\bar u,z,t)=U(z,t)=U(z-ct)$, does not have a longitudinal component, and, hence, does not accelerate particles moving along the straight line (z-coordinate) and does not carry angular momentum. One can see from Eq. (8) that the wave should have a transverse structure, $\partial^2 U/\partial u\,\partial\bar u\neq 0$, to be able to carry angular momentum (and accelerate particles), or to satisfy the condition

$$\Bigl(\frac{\partial^2}{\partial z^2}-\frac{1}{c^2}\frac{\partial^2}{\partial t^2}\Bigr)U(u,\bar u,z,t)\neq 0\,.$$

It could be satisfied if the potential can be factorized as $U(u,\bar u,z,t)=U(u,\bar u,z)\cdot f(z-ct)$, where $U(u,\bar u,z)\equiv U(x,y,z)\equiv U(\vec r)$ is a function of coordinates only. Thus the source of the longitudinal component is associated with a nonzero derivative $\partial U/\partial u\neq 0$. This property was used in Ref. [24] to demonstrate the appearance of a longitudinal component at the edge of a cylindrical EM beam. The other way to obtain a nonzero result from expression (8), as we claimed earlier, is helical symmetry of the EM wave. Indeed, the helical symmetry can be described as
$$\widetilde W(\vec r,t)=W(\vec r,t)\cdot\exp(i2\pi z/\lambda)=W(u,\bar u,z,t)\bigl[\cos(2\pi z/\lambda)+i\sin(2\pi z/\lambda)\bigr]\,,\qquad(9)$$
where λ stands for the spatial period along the coordinate z (the wavelength of radiation). Formula (9) describes a helical field with left helicity, i.e. while the observer moves in the positive direction along z, the potential pattern rotates counterclockwise. Since the superposition rule is in effect here, formula (9) can be treated as the representation of one of two orthogonal polarizations. The electric field can be presented in the following form:
$$E\equiv E_x-iE_y=e^{i2\pi z/\lambda}\,\frac{\partial W}{\partial u}+e^{-i2\pi z/\lambda}\,\frac{\partial \overline W}{\partial\bar u}\,,\qquad
E_z=\Bigl(\frac{\partial^2}{\partial z^2}-\frac{1}{c^2}\frac{\partial^2}{\partial t^2}\Bigr)\!\int\! W(u,\bar u,z,t)\,e^{-i2\pi z/\lambda}\,dz\,.\qquad(10)$$

Most important is the case of a dipole harmonic, which describes a helical undulator or a wiggler. In this case the only harmonics of interest are the ones having dipole symmetry. For a multipole harmonic the potential solution of (8) looks as follows [31]:

$$W(u,\bar u,z,t)=\sum_{m=1}^{\infty}u^m\Bigl[G_{m-1}(z,t)-\frac{u\bar u}{4(m+1)}\,\hat\Box\,G_{m-1}(z,t)+\frac{(u\bar u)^2}{32(m+1)(m+2)}\,\hat\Box^2\,G_{m-1}(z,t)-\dots\Bigr]\,,\qquad(11)$$

where $\hat\Box\equiv\partial^2/\partial z^2-(1/c^2)\,\partial^2/\partial t^2$ and the functions $G_{m-1}(z,t)$ describe the value of the multipole field at the axis $u=0$. For a field depending on z and t only through the combination $z-ct$ all correction terms vanish, and

$$W(u,\bar u,z,t)=\sum_{m=1}^{\infty}u^m\,G_{m-1}(z-ct)\,.\qquad(12)$$
Substituting this expression into (10), one can obtain
$$E_z=\mathrm{Re}\sum_{m=1}^{\infty}u^m\Bigl(\frac{\partial^2}{\partial z^2}-\frac{1}{c^2}\frac{\partial^2}{\partial t^2}\Bigr)\!\int\! G_m(z-ct)\,e^{-i2\pi z/\lambda}\,dz=\mathrm{Re}\sum_{m=1}^{\infty}u^m\cdot\Bigl(-\frac{2\pi}{\lambda}\Bigr)\cdot G_m(z-ct)\,e^{-i2\pi z/\lambda}\,.\qquad(13)$$
In polar coordinates, $u=r\,e^{i\varphi}$, the latter can be expressed as
$$E_z=\mathrm{Re}\sum_{m=1}^{\infty}r^m\cdot\Bigl(-\frac{2\pi}{\lambda}\Bigr)\cdot G_m(z-ct)\,e^{-i2\pi z/\lambda+i m\varphi}\,.\qquad(14)$$
One can see that the longitudinal component is equal to zero on the axis, but it grows as the off-axis distance increases. For example, the dipole helical longitudinal component of the field has the form $E_z^D=G_D(z-ct)\,r\cos(2\pi z/\lambda+\varphi)$.
The simplest way to recognize the source of the angular momenta is by considering the process of radiation in a reference frame moving with the average velocity of the electron, $\bar v=\bar\beta c$. In the moving frame the source of radiation is an electron orbiting along a circle of radius $r'$, see Fig. 1. We are interested in the radiation directed tangentially to the instant trajectory in this system, where $(\vec E\times\vec H)\perp\vec r\,'$, so the integral (1) can be evaluated as follows

$$\vec M=\frac{1}{c^2}\int \vec r\times(\vec E\times\vec H)\,dV=\frac{1}{c^2}\int\!\!\!\int \vec r\times(\vec E\times\vec H)\,dA\;c\,dt\cong\frac{r'}{c}\int\!\!\!\int \vec S\,dA\,dt=\frac{r'}{c}\int I\,dt\,,\qquad(15)$$

where dA stands for the element of area, $\vec S=\vec E\times\vec H$ is the Poynting vector, and $I=\int\vec S\cdot d\vec A=\int(\vec E\times\vec H)\cdot d\vec A$ is the intensity of radiation. One can see that

$$\frac{d\vec M}{dt}=\frac{r'\,I}{c}\,.\qquad(16)$$

On the other hand, for the electron losing its energy by synchrotron radiation (SR) and moving at a constant radius, the change of energy in the moving frame can be transformed as follows:

$$I=\frac{d\varepsilon}{dt}=c\,\frac{dp}{dt}=\frac{c}{r'}\,\frac{d}{dt}\bigl[\vec r\,'\times\vec p\,\bigr]=\frac{c}{r'}\,\frac{d\vec M}{dt}\,,\qquad(17)$$

i.e. the loss of momentum of the electron is equal to the momentum carried away by SR.

Figure 1. In the moving frame, the radiation has the electron as a point source moving along the circle with the radius $r'=D_u K/\gamma$. In the Lab frame the cone of radiation is tilted toward the direction of motion (z-axis) by the angle 1/γ, so the projector-type radiation is emitted from the off-axis location.

In formula (15) the integral $\int I\,dt$ is the total energy carried away by the radiation,

$$\int I\,dt=\Delta\varepsilon'\,,\qquad(18)$$

where $K=eH D_u/mc^2$ is a so-called undulator parameter or simply a K-factor. The total momentum carried away by the radiation comes to $\Delta p'=\Delta\varepsilon'/c$ (19), and the total angular momentum carried away by the radiation is

$$\vec M_{tot}=\bigl[\vec r\,'\times\Delta\vec p\,\bigr]=\bigl[\vec r\,'\times\Delta\vec p_\perp\bigr]\,,\qquad(20)$$

directed along $\hat z$. We would like to emphasize here the OAM carried away by the higher harmonics as it is represented in Fig. 1. (Note that the first harmonic is not shown in Fig. 1, see Fig. 2a.) Since the intensity of radiation at the first harmonic is centered on the axis, it does not depend on the instantaneous position of the electron, hence it does not carry OAM. The quantum number associated with the angular momentum can be represented as follows,
$$j\,\hbar=\vec M=\frac{1}{c^2}\int \vec r\times(\vec E\times\vec H)\,d^3r\,.\qquad(21)$$
It can be seen in Fig.1 that since the source of radiation is located at the off-axis position (at the circle) the radiation is carrying away a linear momentum, and hence, the angular momentum. Simply speaking, the electron orbit is decreasing its radius.
One can see that by changing the K-factor it is possible to change the angular momentum of the radiation, as more and more harmonics become involved in the process of radiation. The intensity of radiation carried away by the harmonics is described by the formula [15], [16], [19]

$$\frac{dI_n}{d\Omega_o}=\frac{e^2\,\omega_n^2}{2\pi c}\left[\beta_\perp'^2\,J_n'^2(x)+\Bigl(\frac{\cos\vartheta-\bar\beta}{\sin\vartheta}\Bigr)^2 J_n^2(x)\right]\,,\qquad x=\frac{n\,\beta_\perp\sin\vartheta}{1-\bar\beta\cos\vartheta}\,,\qquad(22)$$

where $\bar\beta\cong\beta\,(1-\beta_\perp^2/2)$. One can see that if $\bar\beta=0$, then formula (22) coincides with Schott's formula [33]. Graphs of the normalized angular distribution $dI_n/d\Omega_o$ (in units of $e^2\omega_0^2/c$) for the three lower harmonics, n = 1, n = 2 and n = 5, for K = 0.7 are represented in Fig. 2.
In the coordinate system moving with the average velocity of the electron, the electron moves along a circular trajectory with an instant radius $r'=\beta'\varepsilon'/eH$ (Fig. 1). The intensity of radiation by the electron is given by summation of (22) over all harmonics and integration over the polar angle, so it gives again the total intensity of the synchrotron radiation (23). (In the moving frame the electron is spinning along the circle with a radius $r'=D_u K/\gamma$, radiating SR within a cone whose angular opening is defined by the local gamma-factor in the moving frame. So one can see that the source of radiation in the moving frame is located at this radius, and hence the Poynting vector associated with the electron carries orbital angular momentum with respect to the axis.) One peculiarity associated with helical motion in an undulator is that the orbital momentum of the electron increases while its energy drops. Indeed, as the transverse momentum can be represented as $p_\perp=mc\beta_\perp\gamma=mcK$, the angular momentum is

$$M=r'\cdot p_\perp=r'\cdot mc\,\beta_\perp\gamma=r'\cdot mcK=\frac{mc\,K^2 D_u}{\gamma}\,,\qquad(24)$$

where the identity $K=\gamma\beta_\perp$ was used.
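For orientation, Eq. (24) can be evaluated numerically. The parameters below (γ, K and the period) are assumed for illustration only; the 24.5 mm period anticipates the device discussed in the next section.

```python
import numpy as np

# Rough estimate (all parameters assumed for illustration) of the classical orbital
# angular momentum per electron from Eq. (24), M = m c K^2 D_u / gamma.
gamma = 1.0e4              # electron Lorentz factor
K = 1.0                    # undulator parameter
lambda_u = 24.5e-3         # undulator period [m]

m_e = 9.1093837e-31        # electron mass [kg]
c = 2.99792458e8           # [m/s]
hbar = 1.0545718e-34       # [J s]

D_u = lambda_u / (2.0 * np.pi)
r_prime = K * D_u / gamma            # helix radius r' = K D_u / gamma
p_perp = m_e * c * K                 # transverse momentum p_perp = m c K  (uses K = gamma * beta_perp)
M = r_prime * p_perp                 # Eq. (24)

print(f"helix radius r' = {r_prime * 1e6:.3f} um")
print(f"angular momentum per electron M = {M:.3e} J s  (= {M / hbar:.3e} hbar)")
```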
HELICAL UNDULATOR
Practically any helical undulator which is able to generate a field with K ~ 1 is suitable for generation of higher harmonics. But preference is given to the ones with a possibility to change polarization, for example the SC one described in [28], allowing an easy change of K and of polarization. The coils of the undulator in Fig. 2 are wound above a copper tube, with six-wire strands stuck together into a flat SC cable. The period of winding is 24.5 mm. The outer diameter is 10 mm, and the inner diameter clear for the beam is Ø_inner = 8 mm. The direction of current in the corresponding outer-layer coil is shown by the arrows [28]. With such an aperture the SC undulator can generate K ~ 1.5 for this period. For a larger period the achievable values of the K-factor grow exponentially for the fixed aperture.
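As a consistency check of the quoted K ~ 1.5, the K-factor can be estimated from the on-axis field and the period. The sketch below restates the text's K = eH D_u/mc² in SI form, which is an assumption of units on our part; the field values are illustrative.

```python
import numpy as np

e = 1.602176634e-19     # [C]
m_e = 9.1093837e-31     # [kg]
c = 2.99792458e8        # [m/s]

def undulator_K(B0, lambda_u):
    """Deflection parameter K = e * B0 * lambda_u / (2 * pi * m_e * c)  (SI form, assumed)."""
    return e * B0 * lambda_u / (2.0 * np.pi * m_e * c)

lambda_u = 24.5e-3      # period [m], as quoted above
for B0 in (0.3, 0.5, 0.66, 0.9):        # illustrative on-axis fields [T]
    print(f"B0 = {B0:.2f} T  ->  K = {undulator_K(B0, lambda_u):.2f}")
```

For this period, an on-axis field of roughly 0.66 T corresponds to K close to 1.5, which is consistent with the value mentioned above.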
DISCUSSION
The fact that electromagnetic radiation from any helical undulator at higher harmonics carries the orbital angular momentum was not realized for many years.
The physics explanation of the appearance of angular momenta for the photons radiated by an electron in a helical undulator is clear, however. It is associated with the fact that the electron moves along a helical trajectory twisted around the axis, and radiates while the point of radiation is shifted away from the axis.
Since in experiments with undulator radiation performed in the past it was not realized that the higher harmonics carry OAM, it is appropriate to review the results obtained in such experiments, especially the ones for transitions between states with different angular momenta.
Production of polarized positrons with higher undulator harmonics also needs revision. In the positron production scheme with an undulator for the ILC, the initial suggestion was to operate at a low K-factor, K < 0.4 [29], where the content of higher harmonics is minimal (even at K ~ 0.7 only 50% of the radiation is carried by the higher harmonics). Meanwhile the baseline of the ILC deals with an undulator having K ~ 0.92 [32], where the content of higher harmonics dominates. As the electron-positron pair should carry away the orbital momentum, their energies should be close to each other. It means that the energy of the positron (electron) should be about half of the energy of the quantum, within narrow margins. Therefore polarized positron production by higher undulator harmonics should be suppressed by the phase volume and by the polarization behavior, as positrons with half the energy carry ~50% polarization at most.
SUMMARY
The radiation in a helical undulator at higher harmonics dominantly carries orbital angular momentum, which should be taken into account. The physical meaning of this phenomenon is clear: the radiating electron moves along a helix of nonzero radius, so the source of instantaneous radiation is shifted from the axis. This fact was ignored in all experiments done with helical undulators in the past. We suggest a revision of all results obtained with usage of the higher undulator harmonics generated in a helical undulator. In addition, the theory of the interaction of OAM photons with matter (ionization, Compton scattering and electron-positron pair production) also requires revision.
The definition of brightness (brilliance) for the sources with helical undulators should be modified also.
In conclusion we note that usage of photons with OAM might open new ways in obtaining highly polarized electrons from photocathodes.
One of the authors (AM) thanks Evgenyi Bessonov for useful discussions.
Footnote: primes denote the quantities calculated in the moving system of reference; v′ is the electron's velocity and r_e is the classical electron radius. Note also that the radius r′ is an invariant under Lorentz transformation.

Figure 2. The angular distribution of the intensity of radiation on harmonics in the moving frame.

Figure 2. SC helical undulator with the possibility to change the K-factor and polarization.

It is clear that the size of the source of undulator radiation cannot be less than the helix radius r′, which forces usage of beams with the highest γ and an undulator with the shortest period, operating at lower K. So for an ERL having γ = 10⁴, K = 1 and the LCLS undulator period this comes to ~1/6 µm. It is also clear that operation with a low K-factor forces operation with a higher current to compensate the loss of photon flux ∝ K².
Footnote: see Ref. [2], page 401, Appendix.

Footnote: strictly speaking, only the part of the radiation with n > 1 carries away the angular momentum. However, for K ≥ 1 the intensity of radiation at the first harmonic is < 10%, so for this evaluation the difference might be neglected.
R. A. Beth, Phys. Rev. 50, 115 (1936).
W Heitler, The Quantum Theory of Radiation. OxfordW.Heitler, "The Quantum Theory of Radiation", Oxford, 1954.
. R Roy, B P Nigam, Nuclear Physics. John WilleyR.Roy, B.P.Nigam," Nuclear Physics", John Willey, 1967.
V. B. Berestetskii, E. M. Lifshitz, L. P. Pitaevskii, "Quantum Electrodynamics", vol. 4 of "Course of Theoretical Physics", Butterworth-Heinemann, 1971.
Orbital Angular Momentum of Light and Transformation of Lagguerre-Gaussian Laser Modes. L Allen, M W Beijersbergen, R J C Spreeuw, J P Woerdman, Phys.Rev. A. 4511L.Allen, M.W.Beijersbergen, R.J.C.Spreeuw, J.P.Woerdman, "Orbital Angular Momentum of Light and Transformation of Lagguerre-Gaussian Laser Modes", Phys.Rev. A, Vol.45, No 11, 1 June 1992.
Longitudinal Dispersion of Orbital Angular Momentum Modes in High-Gain free-Electron Lasers. E Hemsing, A Marinelli, S Reiche, J Rosenzweig, Phys.Rev.Sp.Topics-Accelerators and Beams. 1170704E.Hemsing, A.Marinelli, S.Reiche, J.Rosenzweig, "Longitudinal Dispersion of Orbital Angular Momentum Modes in High-Gain free-Electron Lasers", Phys.Rev.Sp.Topics-Accelerators and Beams 11, 070704 (2008).
The Twisted Photon Associated to Hyper-Hermitian Four Manifolds. M Dunajski, Journal of Geometry and Physics. 30M.Dunajski, "The Twisted Photon Associated to Hyper-Hermitian Four Manifolds", Journal of Geometry and Physics 30 (1999) 266-281.
Twisted Photons. G Molina-Terriza, J Torres, Ltorner , Nature Physics. 3G.Molina-Terriza, J.Torres, LTorner, "Twisted Photons", Nature Physics, vol 3, May 2007.
Bose-Einstein condensation. A Griffin, D W Snoke, S Stringari, Cambridg U. PressA.Griffin, D.W.Snoke, S.Stringari, (editors),"Bose-Einstein condensation", Cambridg U. Press (1995).
Photoionization with Orbital Angular Momentum Beams. A Picon, Optics Express. 36604A.Picon et al., "Photoionization with Orbital Angular Momentum Beams", Optics Express 3660, 15 Feb. 2010/Vol 18, No 4.
Generation of High-Energy Photons with Large Orbital Algular Momentum by Backscattering. U D Jentschura, V G Serbo, PRL. 10613001U.D.Jentschura, V.G.Serbo, "Generation of High-Energy Photons with Large Orbital Algular Momentum by Backscattering, PRL, 106, 013001(2011).
Proposed Experiment to Produce and Detect Light Pseudoscalars. K Van Bibber, Phys.Rev.Lett. 59K. Van Bibber et al., "Proposed Experiment to Produce and Detect Light Pseudoscalars", Phys.Rev.Lett.59:759-762,1987;
No Light Shining Through a Wall. C Robilliard, Phys. Rev. Lett. 99C. Robilliard et al., "No Light Shining Through a Wall", Phys. Rev. Lett. 99: 190403, 2007;
Search for Axion-like Particles Using a Variable Baseline Photon Regeneration Technique. A Chou, Phys.Rev.Lett. 10080402A. Chou et al., "Search for Axion-like Particles Using a Variable Baseline Photon Regeneration Technique", Phys.Rev.Lett.100:080402,2008;
Experimental Limit on Optical-Photon Coupling to Light Neutral Scalar Boson. A Afanasev, Phys.Rev.Lett. 101120401A.Afanasev, et.al. " Experimental Limit on Optical-Photon Coupling to Light Neutral Scalar Boson", Phys.Rev.Lett., 101, 120401 (2008);
New ALPS Results on Hidden-Sector Lightweights. K Ehret, Phys.Lett. 689K.Ehret et al., "New ALPS Results on Hidden-Sector Lightweights", Phys.Lett.B689:149-155,2010.
Limits of Electrodynamics: Paraphotons?. L Okun, Sov.Phys.JETP. 56502L. Okun, "Limits of Electrodynamics: Paraphotons?", Sov.Phys.JETP 56:502,1982;
New Experimental Limit on Photon Hidden-Sector Paraphoton Mixing. A Afanasev, Phys.Lett. B. 679A.Afanasev, et.al. "New Experimental Limit on Photon Hidden-Sector Paraphoton Mixing", Phys.Lett. B 679 (2009), 317-320.
Sensitive Magnetometry Based on Nonlinear Magneto-Optical Rotation. D Budker, D F Kimball, S M Rochester, V V Yashchuk, M Zolotorev, Phys.Rev. 6243403D. Budker, D.F. Kimball, S.M. Rochester, V.V. Yashchuk, M. Zolotorev, "Sensitive Magnetometry Based on Nonlinear Magneto-Optical Rotation", Phys.Rev. A62 (2000) 043403.
Search for Parity Nonconservation in Atomic Dysprosium. A T Nguyen, D Budker, D Demille, M Zolotorev, Phys.Rev. 56A.T.Nguyen, D.Budker, D.DeMille, M.Zolotorev, "Search for Parity Nonconservation in Atomic Dysprosium", Phys.Rev. A56 (1997) 3453-3463.
Arion <---> Photon Oscillations In A Steady Magnetic Field. P A Sikivie ; A, Anselm, Erratum-ibid.52:695Phys.Rev.Lett. 511480Yad. Fiz.P. Sikivie, "Experimantal Tests of the Invisible Axion", Phys.Rev.Lett.51:1415,1983, Erratum-ibid.52:695,1984; A.A. Anselm, "Arion <---> Photon Oscillations In A Steady Magnetic Field" Yad. Fiz. 42, 1480 (1985);
Effects Of Nearly Massless, Spin Zero Particles On Light Propagation In A Magnetic Field. L Maiani, R Petronzio, E Zavattini, Phys. Lett. B. 175359L. Maiani, R. Petronzio and E. Zavattini, "Effects Of Nearly Massless, Spin Zero Particles On Light Propagation In A Magnetic Field", Phys. Lett. B 175, 359 (1986);
Search for nearly massless, weakly coupled particles by optical techniques. R Cameron, Phys.Rev. 47BNLR. Cameron et al., "Search for nearly massless, weakly coupled particles by optical techniques", Phys.Rev.D47:3707-3725,1993; C.Y.Scarlett, et al., "An Anomalous Curvature Experiment E840", BNL, Apr. 27, 2006.
Undulator Radiation. D F Alferov, Yu A Bashmakov, E G Bessonov, Sov. Phys. Tech. Phys. 1810D.F.Alferov, Yu.A.Bashmakov, E.G.Bessonov, " Undulator Radiation", Sov. Phys. Tech. Phys. Vol.18, No 10, 1974, p. 1336-1339.
The Undulator as a Source of Electromagnetic Radiation. D F Alferov, Yu A Bashmakov, K A Belovintsev, E G Bessonov, Part.Accel. 9D.F.Alferov, Yu.A.Bashmakov, K.A.Belovintsev, E.G.Bessonov, " The Undulator as a Source of Electromagnetic Radiation", Part.Accel. 9 (1979) 223-236
A Short-Period Helical Wiggler as an Improved Source of Synchrotron Radiation. B M Kincaid, Journal of Applied Physics. 487B.M.Kincaid, "A Short-Period Helical Wiggler as an Improved Source of Synchrotron Radiation", Journal of Applied Physics, Vol.48, No 7, July 1977.
. A A Sokolov, I M Ternov, Izvestia Vuzov Fizika. 543A.A.Sokolov, I.M.Ternov et al., Izvestia Vuzov Fizika, N 5, 43 (1968);
. Zs. F. Phys. 2111Zs. F. Phys. 211, 1 (1968).
Relativistic Electron. A A Sokolov, I M Ternov, Nauka (UDK 539.12MoscowA.A.Sokolov, I.M.Ternov," Relativistic Electron", Moscow, Nauka (UDK 539.12) 1974.
X-MCD twin-Helical Undulator beam line BL25SU of Spring. S Suga, Journal of Magnetism and Magnetic Materials. 233260S.Suga et. al. "X-MCD twin-Helical Undulator beam line BL25SU of Spring", Journal of Magnetism and Magnetic Materials, vol. 233, issues 1-2, July 2001, p.60.
Performance of a Very High Resolution Soft X-ray Beamline BL25SU with a Twin-Helical Undulator at SPring-8. Y Saiton, 10.1063/1.1287626Rev. Sci. Instrum. 713254Y.Saiton et. al. "Performance of a Very High Resolution Soft X-ray Beamline BL25SU with a Twin-Helical Undulator at SPring-8", Rev. Sci. Instrum. 71, 3254 (2000); doi:10.1063/1.1287626 .
High Pressure Experiments with Synchrotron Radiation. S Nasu, 10.1023/A:1012663330441Hyperfine Interactions. 1131-4S.Nasu, "High Pressure Experiments with Synchrotron Radiation", Hyperfine Interactions, Volume 113, Numbers 1-4, 97-109, DOI: 10.1023/A:1012663330441
The Angular Momentum of an Electromagnetic Wave, Sadovskii Effect and the Generation of Magnetic Fields in a Plasma. I V Sokolov, Physics-Uspekhi. 161I.V.Sokolov,"The Angular Momentum of an Electromagnetic Wave, Sadovskii Effect and the Generation of Magnetic Fields in a Plasma", Physics-Uspekhi, Vol. 161, No10, Oct. 1991, http://ufn.ru/ufn91/ufn91_10/Russian/r9110g.pdf .
Der Drehimpuls des Lichtes. M Abraham, Physikalische Zeitschrift, XV. M.Abraham, "Der Drehimpuls des Lichtes", Physikalische Zeitschrift, XV, (1914) pp. 915-918.
Notes on the Theory of Radiation. C G Darwin, Proc. R. Soc. Lond. A 1932. R. Soc. Lond. A 1932136C.G.Darwin, "Notes on the Theory of Radiation", Proc. R. Soc. Lond. A 1932, 136, 36-52.
States, Waves and Photons: A Modern introduction to Light. J W Simmons, M J Guttmann, Addison-WesleyJ.W.Simmons, M.J.Guttmann, "States, Waves and Photons: A Modern introduction to Light", Addison-Wesley, 1970.
Classical Electricity and Magnetism. W K H Panofsky, M Phillips, Addison-Wesley PC, IncW.K.H.Panofsky, M.Phillips, "Classical Electricity and Magnetism", Addison- Wesley PC, Inc,1956.
Classical Electrodynamics. J Jackson, J.Wiley&Sons IncJ.Jackson, "Classical Electrodynamics", J.Wiley&Sons Inc., 1998.
On the longitudinal Component in Light. G F Fitzgerald, Philosophical Magazine and Journal of Science. XLII, No. CCLVIG.F.FitzGerald, "On the longitudinal Component in Light", Philosophical Magazine and Journal of Science, Vol. XLII, No. CCLVI, July-December 1896 , pp.260-270
SC Undulator with the Possibility to Change its Strength and Polarization by Feeding Current. A Mikhailichenko, 24811NYA.Mikhailichenko, "SC Undulator with the Possibility to Change its Strength and Polarization by Feeding Current", TUP248, PAC11, NY 2011.
ILC Undulator-Based Positron Source, Tests and Simulations. A Mikhailichenko, Proc. nullWEZAB01, PAC07, Albuquerque, NMA.Mikhailichenko, ILC Undulator-Based Positron Source, Tests and Simulations", WEZAB01, PAC07, Albuquerque, NM 2007, Proc., pp.1974-1978.
Orbital angular momentum: origins, behavior and applications. A M Yao, M J Padgett, Advances in Optics and Photonics. 3A.M.Yao, M.J.Padgett," Orbital angular momentum: origins, behavior and applications", Advances in Optics and Photonics 3, 161-204 (2011).
A Mikhailichenko, 3D Electromagnetic Field. Representation and Measurements. Cornell, Wilson Lab.A.Mikhailichenko, "3D Electromagnetic Field. Representation and Measurements", CBN 95-16, Cornell, Wilson Lab., 1995.
. REPORT-2007-001ILC Reference Design Report. III-Accelerator. pIII-48ILC Reference Design Report, ILC REPORT-2007-001, vol III-Accelerator, pIII-48.
On the Electron Theory of Matter and on Radiation. G A Schott, Phil. Mag. 15Electromagnetic RadiationG.A.Schott, "On the Electron Theory of Matter and on Radiation", Phil. Magazine, Feb. 1907; "Electromagnetic Radiation", Phil. Mag. 15 , 752, 1912.
Quantum decay law: Critical times and the Equivalence of approaches

D. F. Ramírez Jiménez and N. G. Kelkar
Departamento de Fisica, Universidad de los Andes, Cra. 1E No. 18A-10, Santafe de Bogotá, Colombia

27 Sep 2018 · arXiv:1809.10673 · DOI: 10.1088/1751-8121/aaf9f3 · https://arxiv.org/pdf/1809.10673v2.pdf

Abstract. Methods based on the use of Green's functions or the Jost functions and the Fock-Krylov method are apparently very different approaches to understand the time evolution of unstable states. We show that the two former methods are equivalent up to some constants and as an outcome find an analytic expression for the energy density of states in the Fock-Krylov amplitude in terms of the coefficients introduced in the Green's functions and the Jost functions methods. This model-independent density is further used to obtain an analytical expression for the survival amplitude and study its behaviour at large times. Using these expressions, we investigate the origin of the oscillatory behaviour of the decay law in the region of the transition from the exponential to the non-exponential at large times. With the objective to understand the failure of nuclear and particle physics experiments in observing the non-exponential decay law predicted by quantum mechanics for large times, we derive analytical formulae for the critical transition time, t_c, from the exponential to the inverse power law behaviour at large times. Evaluating τ_c = Γt_c for some particle resonances and narrow nuclear states which have been tested experimentally to verify the exponential decay law, we conclude that the large time power law in particle and nuclear decay is hard to find experimentally.
I. INTRODUCTION
The decay law of an unstable system can be shown classically to be of an exponential nature but in a quantum mechanical analysis, this law is an approximation which fails for short and large times [1][2][3][4].
The former case is described by a quadratic function in t [5] and the latter by an inverse power law in t [3]. The non-exponential behaviour predicted by quantum mechanics has intrigued experimental nuclear and particle physicists who performed experiments (see [6] and references therein) with nuclei such as 222 Rn, 60 Co, 56 Mn and measured the decay law for several half-lives to disappointingly find only an exponential decay law. Even though the non-exponential behaviour at large times was confirmed in an experiment measuring the luminescence decays of many dissolved organic materials after pulsed laser excitation [7], the failure of the nuclear and particle physics experiments raised questions about observation such as: (i) how long should one wait or in other words, what is the critical transition time (τ C ) from the exponential to a power law behaviour, (ii) does the interaction with the environment affect the measurement and (iii) could it be possible that every measurement resets the decay to an exponential one, thus making the non-exponential behaviour not observable. There exist different points of view [3,8] and the above questions still seem to be open for discussions. Apart from all this, there exist different theoretical formalisms for the quantum mechanical treatment of the time evolution and decay of an unstable state [9][10][11][12][13][14]. In order to at least partly answer the above questions, it is essential to investigate if the different formalisms agree only globally on the short and large time behaviour or also in details such as the prediction of the critical transition times from the exponential to the power law as well as the exponent in the power law behaviour. With this objective, in the present work, we investigate some of the most popularly known approaches for the calculation of survival probabilities, namely, the method of García-Calderón (GC) [10] and collaborators [11,15] which uses the Green's functions, the method of W. van Dijk and Y. Nogami (DN) [13,14] and the Fock-Krylov (FK) method [9] which involves the Fourier transform of an energy density. We show that the seemingly different methods are indeed equivalent and derive analytical expressions for the survival amplitudes as well as their large time behaviour. A natural continuation of this investigation is to study the critical time for the transition from the exponential to the non-exponential power law behaviour. Here, we also obtain an analytical expression for the critical time and apply it to study the decay of nuclear and particle resonances.
The paper is organized as follows: in Section II, we present the basic results of the GC, DN and FK approaches without entering into the details of the derivations. In Section III, we shall show how the GC and DN approaches lead to the survival amplitude as written in the Fock-Krylov method (with a density based on a relation from statistical physics) and we shall obtain an expression for the energy density of the initial state. In Section IV, we shall derive an expression for the survival amplitude in terms of Incomplete Gamma functions and thereby study its behaviour for large t. Finally, we present the analysis of an isolated resonance and obtain an analytic expression to find the critical transition time from the exponential to the power law behaviour, in Section V. Here, we compare our results with an existing work on an isolated resonance [16]. Application of the results to realistic resonances in Section VI, unveils some reasons for the non-observability of non-exponential decay in nuclear and particle physics. In Section VII, we present an analysis of the oscillatory transition region produced by the interference of the exponential and power law decay at large times. We discuss the origin of the oscillatory term and present an expression for the modulating function which describes it. In Section VIII, we summarize our findings.
II. SURVIVAL PROBABILITIES
In general terms, if H is the Hamiltonian of a system and its initial state is Ψ(0) , the state of system Ψ(t) at a time t > 0 is given as a solution of the Schrödinger equation
$$i\,\frac{d}{dt}\,|\Psi(t)\rangle=H\,|\Psi(t)\rangle\,.\qquad(1)$$
The quantum decay law is the probability (called non-decay or survival probability) that the state at time t is in its initial state and is given by $|\langle\Psi(0)|\Psi(t)\rangle|^2=|\langle\Psi(0)|e^{-iHt}|\Psi(0)\rangle|^2$. Starting with the survival amplitude A(t), one can write it as the projection of the state $|\Psi(t)\rangle$ on the state $|\Psi(0)\rangle$:
$$A(t)=\langle\Psi(0)|\Psi(t)\rangle=\langle\Psi(0)|e^{-iHt}|\Psi(0)\rangle\,,\qquad(2)$$
and the survival probability mentioned above is simply
$$P(t)=|A(t)|^2\,.\qquad(3)$$

Both the survival amplitude and the survival probability are equal to one when t = 0, because $A(0)=\langle\Psi(0)|\Psi(0)\rangle=1$ and, from (3), $P(0)=|A(0)|^2=1$.
We note here that in the context of the time evolution of an unstable state, a widely discussed quantity is also the so called "nonescape probability" which is essentially the probability that the particle remains confined inside the interaction or potential region after a time t. Though the two concepts of survival and nonescape probabilities are closely related, there are instances when they could be significantly different [17]. We refer the reader to [11,17] for some interesting discussions on this topic.
A. Fock-Krylov method
A popular method for computing the survival amplitude is the Fock-Krylov (FK) approach [9]. In general, this method involves expanding the initial state in eigenstates of a complete set of observables which commute with the Hamiltonian. If we let H, β be this set of observables and let $|E,b\rangle$ be an eigenstate of them:

$$H\,|E,b\rangle=E\,|E,b\rangle\,,\qquad(4)$$
$$\beta\,|E,b\rangle=b\,|E,b\rangle\,,\qquad(5)$$
then the normalized initial state can be expanded in this basis as
$$|\Psi(0)\rangle=\int_{E_{min}}\!dE\int db\;|E,b\rangle\langle E,b|\Psi(0)\rangle\,.\qquad(6)$$
Let us now consider an intermediate unstable state (resonance) formed in a scattering process such as
A + a → R * → A + a.
Since the initial unstable state, $|\Psi(0)\rangle$, cannot be an eigenstate of the (hermitian) Hamiltonian, an expansion as in (6) (assuming a continuous spectrum) in terms of the energy eigenstates of the decay products A and a can be considered to express $|\Psi(0)\rangle$ as
$$|\Psi(0)\rangle=\int dE\,a(E)\,|E\rangle\,,\qquad(7)$$
where $|E\rangle$ is the eigenstate and E the total energy of the system A + a. Substituting now for $|\Psi(0)\rangle$ in (2), we get,
$$A(t)=\int dE\int dE'\,a^*(E')\,a(E)\,\langle E'|e^{-iHt}|E\rangle=\int dE\int dE'\,a^*(E')\,a(E)\,e^{-iEt}\,\delta(E-E')\qquad(8)$$
$$\phantom{A(t)}=\int dE\,|a(E)|^2\,e^{-iEt}\,.\qquad(9)$$
The proper normalization of Ψ tells us that |a(E)| 2 should have the dimension of (1/E) and hence can be associated with an energy density of states. Thus, in the Fock-Krylov method,
$$A(t)=\int_{E_{th.}}^{\infty}dE\,\rho(E)\,e^{-iEt}\,,\qquad(10)$$
where E th. is the minimum sum of the masses of the decay products.
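As an illustration of Eq. (10), the survival probability can be computed numerically for a simple model density. The Breit-Wigner shape and the parameter values below are assumptions made for this sketch and are not the densities derived later in the paper.

```python
import numpy as np

# Minimal sketch of Eq. (10): survival probability for an assumed Breit-Wigner density
# truncated at threshold E = 0 (hbar = 1; E_r and Gamma are illustrative numbers).
E_r, Gamma = 10.0, 0.5

E = np.arange(0.0, 400.0, 1.0e-3)
rho = 1.0 / ((E - E_r)**2 + Gamma**2 / 4.0)
rho /= np.sum(rho) * (E[1] - E[0])            # normalize so that A(0) = 1

def survival(t):
    A = np.sum(rho * np.exp(-1j * E * t)) * (E[1] - E[0])
    return abs(A)**2

for t in (0.0, 5.0, 20.0):
    print(f"t = {t:5.1f}:  P(t) = {survival(t):.3e},  exp(-Gamma*t) = {np.exp(-Gamma * t):.3e}")
```

For intermediate times the numerically computed P(t) tracks exp(-Γt) closely; the small deviations come from the threshold cutoff.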
B. Statistical physics based approach
The advantage of the FK method is that it is not necessary to solve the Schrödinger equation and in cases where one does not know Ψ(0) , one can proceed to evaluate A(t) if the "spectral function" or the energy distribution of the resonant state is known. Such an approach was given in [18,19] in order to analyze realistic cases of nuclear and particle resonances. The authors noted that theoretically, many different forms of ρ(E) are available but they may not necessarily have a connection with experiments.
One of the experimental signatures for the existence of a resonance is the sharp jump in the phase shift δ(E), as a function of energy. The energy derivative of the phase shift displays the typical Lorentzian form associated with a resonance [20] and has different interpretations. One of its first appearances in relation with resonances was in the definition of Wigner's time delay [21]. Though Wigner's work dealt with the single channel case, the energy derivative, dδ l (E)/dE, for a resonance occurring in the l th partial wave in scattering can be shown to be the difference between the time spent by the interacting particles with and without interaction in a given region of space [22][23][24]. This interpretation led the authors in [18,19] to find the connection between dδ l (E)/dE and ρ l (E) for the density of states in a resonance.
In calculating the second and third virial coefficients B and C for the equation of state of a gas, $pV=RT\,[1+B/V+C/V^2+\dots]$, Beth and Uhlenbeck [25] (the derivation of their result is reproduced in [26], see also [27,28]) found that the difference between the density of states with interaction, $n_l$, and without, $n_l^{(0)}$, is given by the derivative of the scattering phase shift $\delta_l$ as,
$$\rho_l^{BU}(E)=n_l(k)-n_l^{(0)}(k)=\frac{2l+1}{\pi}\,\frac{d\delta_l(E)}{dE}\,,\qquad(11)$$
where k and E are the momentum and energy in the centre-of-mass system of the scattering particles, respectively. If a resonance is formed during the scattering process, E becomes the energy of the resonance in its rest frame. In the absence of interaction, since no resonance can be produced, one would expect the density of states ρ(E) to be zero. If the interaction is switched off, n l will tend to n (0) l from above. Therefore, the authors concluded that, as long as n l − n (0) l ≥ 0 (which is at least the case for an isolated resonance), one can write for the continuum probability density of states of the decay products in a resonance,
$$\rho_l^{BU}(E)=\mathrm{const.}\;\frac{d\delta_l(E)}{dE}\,.\qquad(12)$$
Finally, using the phase shift values extracted from scattering experiments, the authors calculated the survival amplitude, Eq. (10) with the substitution of Eq. (12) as
$$A_l(t)=\int_{E_{th.}}^{\infty}dE\;\frac{d\delta_l(E)}{dE}\;e^{-iEt}\,.\qquad(13)$$
Analytical expressions for the above have been provided in a recent work [20] with the use of the Mittag-Leffler theorem.
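A minimal numerical sketch of Eqs. (11)-(13) for l = 0 is given below: a Breit-Wigner-like phase shift (an assumed functional form, not one extracted from data) gives a density (1/π)dδ₀/dE that integrates to approximately one across the resonance.

```python
import numpy as np

# Sketch of Eqs. (11)-(12) for l = 0 with an assumed Breit-Wigner-type phase shift.
E_r, Gamma = 10.0, 0.5

def delta0(E):
    return np.arctan2(Gamma / 2.0, E_r - E)        # rises by ~pi across the resonance

E = np.linspace(0.0, 40.0, 200000)
rho = np.gradient(delta0(E), E) / np.pi            # (1/pi) d(delta_0)/dE, Eq. (11) with l = 0

print("integral of rho over E ~", np.sum(rho) * (E[1] - E[0]))   # close to 1
```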
Before we proceed to the next subsections, let us clarify the notation used in this work. For mathematical simplicity, we set $2m=\hbar=1$ and hence $k^2=E$. For a resonance pole given by $E_r-i\Gamma_r/2$ in the complex energy plane, $k_r^2=\epsilon_r-i\Gamma_r/2$, where $\epsilon_r=E_r-E_{th}$, with $E_{th}$ being the threshold energy (or the sum of the masses of the decay products of the resonance). Having shifted the energies by an amount $E_{th}$, the lower limit on the integral for the survival amplitude will be 0 instead of the threshold energy $E_{th}$.
C. Green's function method
Another method for obtaining the survival amplitude is to solve (1) using Green's functions. Finding the Green's function may be a laborious undertaking, however, there exists an elegant approach proposed by Garcia-Calderon (GC) [10] (and followed up in [11,15,29]) which overcomes this difficulty. The GC approach uses resonant states for calculating the Green's function, the corresponding wave function and the survival amplitude. Since those resonant states are intimately connected with the poles of the S-matrix (see [20] and references therein for realistic studies and [30] for different pole structures in scattering), it is possible to express A(t) analytically, in particular, in terms of error functions. In what follows, we shall briefly discuss the GC method and recommend Refs [11,15,31] to the interested reader for details of the formalism.
The GC method is based on building the wave function through the Green's function of a system using resonant states. Let us consider the system to be a particle of mass m without spin, moving under the influence of a central potential V (r) of finite range R, which, at time t = 0, is described by an initial wave function ψ(r, 0). If ψ(r, t) is the state of the system after a time t (here ψ(r, t) is actually the wave function, Ψ, times r), for S-waves, it must satisfy the Schrödinger equation:
$$-\frac{\partial^2\psi(r,t)}{\partial r^2}+V(r)\,\psi(r,t)=i\,\frac{\partial\psi(r,t)}{\partial t}\,.\qquad(14)$$
Here, we take = 2m = 1. Using Green's functions, it is possible to show that the wave function ψ(r, t)
can be written as
$$\psi(r,t)=\sum_n C_n(k_n)\,u_n(r,k_n)\,M(k_n,t)\,,\qquad(15)$$
where the sum is over all poles of the S-matrix. The authors in [15] make use of the fact that for a finite range interaction, the outgoing Green's function as a function of the momentum, k, can be extended analytically to the whole complex k plane where it has an infinite number of poles. As is well known, purely imaginary poles in the upper half of the complex k plane correspond to bound states and those in the lower half plane correspond to virtual states. Complex poles are however found only in the lower half of the complex k plane and corresponding to every pole k n = a n − ib n (a n , b n > 0), there exists due to time reversal invariance, a complex pole, k −n , situated symmetrically with respect to the imaginary axis, i.e., k −n = −k * n . In [15], the authors considered examples with potentials having no bound states so that all poles were located only in the lower half of the complex k plane.
Coming back to (15), M (k n , t) is the integral
$$M(k_n,t)=\frac{i}{2\pi}\int_{-\infty}^{\infty}\frac{e^{-itx^2}}{x-k_n}\,dx\,,\qquad(16)$$
$u_n(r,k_n)$ is the resonant state associated with the pole $k=k_n$ and is the solution of the differential equation [31]

$$\frac{d^2u_n(r,k_n)}{dr^2}+\bigl[k_n^2-V(r)\bigr]\,u_n(r,k_n)=0\,,\qquad(17)$$
with boundary conditions
$$u_n(0,k_n)=0\,,\qquad(18)$$
$$\left.\frac{du_n(r,k_n)}{dr}\right|_{r=R}=ik_n\,u_n(R,k_n)\,,\qquad(19)$$
and C n (k n ) is given by
$$C_n(k_n)=\int_0^R\psi(r,0)\,u_n(r,k_n)\,dr\,.\qquad(20)$$
The survival amplitude, in this case is given by,
$$A(t)=\langle\Psi(r,0)|\Psi(r,t)\rangle=\sum_n C_n(k_n)\,\bar C_n(k_n)\,M(k_n,t)\,,\qquad(21)$$

where the coefficient $\bar C_n(k_n)$ is:

$$\bar C_n(k_n)=\int_0^R\psi^*(r,0)\,u_n(r,k_n)\,dr\,.\qquad(22)$$
Each pair of coefficients $C_n(k_n)$ and $\bar C_n(k_n)$ for a given n satisfy certain properties (see [15] for details).
If we evaluate the integral M for large t using the steepest descent method (see Appendix A), it is possible to show that
$$A(t)=-\frac{1}{\sqrt{4\pi}}\,e^{i\pi/4}\,\mathrm{Im}\Bigl[\sum_p\frac{C_p(k_p)\,\bar C_p(k_p)}{k_p^3}\Bigr]\,t^{-3/2}+O(t^{-5/2})\,,\qquad(23)$$
where the sum is over all the fourth-quadrant poles of the S-matrix in the complex k plane. For large t, the survival amplitude is proportional to t −3/2 and the survival probability is proportional to t −3 .
This result for l = 0 is consistent with the expectation of $t^{-(2l+3)}$ (for the l-th partial wave) in the literature [3,18,19,32]. It is also consistent with the density given by $\rho_l(E)\propto d\delta_l(E)/dE$, since one expects the phase shift to behave as $\delta_l\sim k^{2l+1}$ near threshold, which eventually leads to the above power law at large times (see Sections 4.3 and 5.2 in [20]). In the experimental observation of the non-exponential decay [7], however, the exponent was found to vary between -2 and -4. A similar observation can be found in [33], where, within a model of a two-level system coupled to the continuum, the authors found an exponent of -4.
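The $t^{-3}$ tail of the survival probability can be seen numerically with a density that vanishes like √E at threshold. The density below is an illustrative Breit-Wigner-type form with assumed parameters, not one of the model-independent densities derived later in the paper.

```python
import numpy as np

# Sketch of the large-time power law: an s-wave-like density rho(E) ~ sqrt(E) near
# threshold (illustrative Breit-Wigner-type shape, hbar = 1, assumed parameters).
E_r, Gamma = 1.0, 0.5

E = np.arange(0.0, 200.0, 5.0e-4)
rho = np.sqrt(E) / ((E - E_r)**2 + Gamma**2 / 4.0)
rho /= np.sum(rho) * (E[1] - E[0])            # normalize A(0) = 1

def survival(t):
    A = np.sum(rho * np.exp(-1j * E * t)) * (E[1] - E[0])
    return abs(A)**2

P1, P2 = survival(100.0), survival(200.0)
print(P1, P2, P2 / P1)    # the ratio is close to (1/2)**3 = 0.125, i.e. P(t) ~ t**(-3)
```

At these times the exponential contribution is already negligible and doubling t reduces P(t) by roughly a factor of eight, as expected for the $t^{-3}$ law.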
A small note regarding the steepest descents method is in order here before closing this subsection. This method has been used earlier in [10,34,35] in the context of arriving at the above result but in a somewhat different manner as compared to the present work where it is used to directly evaluate A(t). The authors in [34] for example, use this method in order to obtain the retarded Green's function, g(r, r ; t), entering into the definition of the time evolved wave function, namely, ψ(r, t) = R 0 g(r, r ; t)ψ(r , 0)dr , which eventually defines the survival amplitude. The contours of integration in [34] and in the present work are hence also different.
D. Jost and Moshinsky functions method
In an attempt to obtain the expression for the wave function of a decaying quantum system, the authors W. van Dijk and Y. Nogami (DN) in Ref. [13], proposed a method which involved the description of the wave function as a linear combination of the the Moshinsky functions, M (k, r, t) [36], each of which is associated with a pole of the scattering matrix, S. In a follow-up work [14], the authors used this formalism to study the survival and nonescape probabilities of decaying quantum systems. In this subsection, we shall describe the DN approach for the evaluation of survival probabilities in some detail, in order to later compare it with the GC and FK approaches discussed before.
The authors in [14] begin by considering the case of S-wave unstable states and attempt to find a solution of the time dependent Schrödinger equation with a central potential V(r) of finite range. The scattering solutions are expressed in terms of Jost functions such that for the case of no bound states,
$$\psi(r,t)=\frac{2}{\pi}\int_0^\infty\frac{k^2}{|f(k)|^2}\,c(k)\,u(k,r)\,e^{-ik^2t}\,dk\,,\qquad(24)$$
where k > 0 and k 2 is the corresponding energy. Here, ψ(r, t) is the wave function, Ψ, times r. c(k) is given as,
c(k) = ∞ 0 ψ(r, 0)u(k, r) dr,(25)
with ψ(r, 0) being the initial wave function which is restricted to the interaction region and the function u(k, r) is defined as
$$u(k,r)=\frac{1}{2ik}\bigl[f(k)\,f(-k,r)-f(-k)\,f(k,r)\bigr]\,.\qquad(26)$$
f (k, r) is the Jost solution of the time independent Schrödinger equation [37] (with potential V(r)) and
f (k) is the Jost function related to it as f (±k) = f (±k, 0). The function u(k, r) is real and u(k, r) and c(k) are both entire and even in the parameter k.
Using (24) the survival amplitude is written as,
$$A(t)=\int_0^\infty\psi^*(r,0)\,\psi(r,t)\,dr=\frac{2}{\pi}\int_0^\infty dk\,\frac{k^2}{|f(k)|^2}\,c(k)\,e^{-ik^2t}\left[\int_0^\infty\psi^*(r,0)\,u(k,r)\,dr\right].\qquad(27)$$
Before proceeding to the comparison of the approaches reviewed in this section, we mention in passing that the time evolution of unstable states can in principle be considered with the use of complex energy "eigenfunctions" too [38]. These are indeed the wave functions introduced by Gamow in his tunneling theory of alpha decay. The Gamow function represents a decaying state of the physical system in a situation in which there are no incident particles and hence should behave asymptotically as a purely outgoing wave. The effectiveness of this method, in spite of several shortcomings has been discussed in [38]. Another work worth mentioning in the context of the present investigations is Ref. [39] where the authors performed a comparison of the Hermitian and non-Hermitian formulation for the time evolution of quantum decay and showed that they lead to an identical description for a large class of well-behaved potentials.
III. ENERGY DENSITY OF THE INITIAL STATE
Having introduced the different approaches for the calculation of survival amplitudes, we shall now examine the expressions, Eq. (13), (21) and (27) to obtain a definition of the energy density of states in the GC and DN formalisms and compare the survival probabilities in these two approaches with that of the frequently used Fock-Krylov method.
A. GC formalism
We begin by writing the integral M (k n , t) as
$$M(k_n,t)=\frac{i}{2\pi}\int_0^\infty\frac{e^{-itx^2}}{x-k_n}\,dx-\frac{i}{2\pi}\int_0^\infty\frac{e^{-itx^2}}{x+k_n}\,dx=\frac{i}{2\pi}\int_0^\infty\Bigl(\frac{1}{x-k_n}-\frac{1}{x+k_n}\Bigr)e^{-itx^2}\,dx=\frac{1}{2\pi i}\int_0^\infty\frac{2k_n}{k_n^2-x^2}\,e^{-itx^2}\,dx\,,\qquad(28)$$
and making the change of variable E = x 2 , we have
$$M(k_n,t)=\frac{1}{2\pi i}\int_0^\infty\frac{k_n}{\sqrt E\,(k_n^2-E)}\,e^{-itE}\,dE\,.\qquad(29)$$
Since

$$\frac{k_n}{k_n^2-E}=\frac{1}{k_n}\Bigl(1+\frac{E}{k_n^2-E}\Bigr)\,,$$
the integral takes the form:
$$M(k_n,t)=\frac{1}{2\pi i}\int_0^\infty\frac{\sqrt E}{k_n\,(k_n^2-E)}\,e^{-itE}\,dE+\frac{1}{2\pi i\,k_n}\int_0^\infty\frac{e^{-itE}}{\sqrt E}\,dE\,.\qquad(30)$$
Substituting (30) in (21), we obtain:
$$A(t)=\int_0^\infty\frac{1}{2\pi i}\Bigl[\sum_n C_n(k_n)\,\bar C_n(k_n)\,\frac{\sqrt E}{k_n\,(k_n^2-E)}\Bigr]e^{-itE}\,dE+\frac{1}{2\pi i}\sum_n\frac{C_n(k_n)\,\bar C_n(k_n)}{k_n}\int_0^\infty\frac{e^{-itE}}{\sqrt E}\,dE\,.\qquad(31)$$
The last term is zero because of the properties of the coefficients C n . The final form of the survival amplitude is:
$$A(t)=\int_0^\infty\frac{1}{2\pi i}\Bigl[\sum_n C_n(k_n)\,\bar C_n(k_n)\,\frac{\sqrt E}{k_n\,(k_n^2-E)}\Bigr]e^{-itE}\,dE\,.\qquad(32)$$
We can see that A(t) is the Fourier transform of the series given in the square brackets. In other words, the GC approach leads to a survival amplitude which is very similar in form to that of the Fock-Krylov method. Comparing Eq. (32) with the FK amplitude given in Eq. (10), we consider identifying the quantity in square brackets with the energy density ρ(E) of the initial state and write
$$\rho^{GC}(E)=\frac{1}{2\pi i}\sum_n C_n(k_n)\,\bar C_n(k_n)\,\frac{\sqrt E}{k_n\,(k_n^2-E)}\,,\qquad(33)$$
If we perform the last sum with poles of the fourth quadrant only, this energy density can be written as
$$\rho^{GC}(E)=\frac{1}{\pi}\,\mathrm{Im}\sum_p C_p(k_p)\,\bar C_p(k_p)\,\frac{\sqrt E}{k_p\,(k_p^2-E)}\,.\qquad(34)$$
We shall see in a later section that in the case of an isolated resonance, with a pole at say $k_r$, the respective coefficient $C_r(k_r)\,\bar C_r(k_r)$ is simply given as (see (82)):

$$C_r(k_r)\,\bar C_r(k_r)=1+i\,\frac{\mathrm{Im}\,k_r}{\mathrm{Re}\,k_r}\,.$$
Replacing the above in (34) for an isolated resonance,
$$\rho^{GC}_{iso}(E)=\frac{1}{\pi}\,\mathrm{Im}\Bigl[\frac{\sqrt E}{k_r\,(k_r^2-E)}\Bigr]+\frac{1}{\pi}\,\mathrm{Im}\Bigl[i\,\frac{\mathrm{Im}\,k_r}{\mathrm{Re}\,k_r}\,\frac{\sqrt E}{k_r\,(k_r^2-E)}\Bigr]\,.\qquad(35)$$
In order to confirm our identification of the quantity in square brackets in (32) with the density of states, we note as mentioned earlier, that (a) the energy derivative of the scattering phase shift, dδ l /dE, in the vicinity of a resonance, can be derived analytically by making use of the properties of the S-matrix and a theorem of Mittag-Leffler. For the case of an s-wave resonance, it is given by [20],
$$\frac{d\delta_0(E)}{dE}=\mathrm{Im}\Bigl[\frac{\sqrt E}{k_r\,(k_r^2-E)}\Bigr]\,,\qquad(36)$$
and (b) the Beth and Uhlenbeck formula (11) allows us to relate the energy derivative of the phase shift with the density of states in an s-wave resonance as:
$$\rho_0^{BU}(E)=\frac{1}{\pi}\,\frac{d\delta_0(E)}{dE}\,,\qquad(37)$$
so that
$$\rho^{GC}_{iso}(E)=\rho_0^{BU}(E)+\frac{1}{\pi}\,\mathrm{Im}\Bigl[i\,\frac{\mathrm{Im}\,k_r}{\mathrm{Re}\,k_r}\,\frac{\sqrt E}{k_r\,(k_r^2-E)}\Bigr]\,.\qquad(38)$$
The density of states as given by the Beth-Uhlenbeck formula is the same as the first term in (35). The second term in (35) can be seen to be a small correction to the first term for narrow resonances. The reason for the correction term not appearing in the Beth-Uhlenbeck (BU) formula could be due to the approximations made in the derivation of the BU formula and remains to be investigated. With the above confirmation, we conclude that the Green's function approach of GC (taken for the case of an isolated s-wave resonance) and the Fock-Krylov method with the density given using the Beth-Uhlenbeck formula, are equivalent.
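The comparison can be made quantitative with a short numerical sketch of Eqs. (35) and (36) for a single assumed pole; the correction term is indeed small for a narrow resonance. The pole position and width below are illustrative values only.

```python
import numpy as np

# Sketch comparing the isolated-resonance density of Eq. (35) with the Beth-Uhlenbeck
# density (1/pi) d(delta_0)/dE of Eq. (36); units 2m = hbar = 1, parameters assumed.
eps_r, Gamma = 4.0, 0.2
k_r = np.sqrt(eps_r - 0.5j * Gamma)          # fourth-quadrant pole, k_r^2 = eps_r - i*Gamma/2

def rho_BU(E):
    return np.imag(np.sqrt(E) / (k_r * (k_r**2 - E))) / np.pi

def correction(E):
    return np.imag(1j * (k_r.imag / k_r.real) * np.sqrt(E) / (k_r * (k_r**2 - E))) / np.pi

for E in (2.0, 3.9, 4.0, 4.1, 6.0):
    print(f"E = {E:4.1f}:  rho_BU = {rho_BU(E): .5f},  rho_GC = {rho_BU(E) + correction(E): .5f}")
```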
B. DN formalism
Let us start by considering the integral in the square brackets in (27). It is the complex conjugate of c(k) given by (25). Thus
A(t) = 2 π ∞ 0 k 2 |c(k)| 2 |f (k)| 2 e −ik 2 t dk.(39)
Performing a change of variable k 2 = E, we have:
A(t) = ∞ 0 √ E π c( √ E) f ( √ E) 2 e −iEt dE.(40)
Comparing the above expression with the Fock-Krylov amplitude, the energy density in the DN formalism is given by,
ρ DN (E) = √ E π c( √ E) f ( √ E) 2 .(41)
If we consider the integrand in (39) without the exponential part e −ik 2 t , then using (25) and the property of the Jost function f * (k) = f (−k) for real k, we get:
(k) ≡ 2 π k 2 |c(k)| 2 |f (k)| 2 = 2 π ∞ 0 ∞ 0 ψ(r, 0)ψ * (r , 0) k 2 u(k, r)u(k, r ) f (k)f (−k) dr dr.(42)
Now, taking the function in square brackets (let us call it I(k, r, r )) and considering the definition of the S-matrix in terms of the Jost functions,
S(k) = f (k) f (−k) ,(43)
we get,
I(k, r, r ) = − 1 4 S(k)f (−k, r) − f (k, r) f (−k, r ) − f (k, r )/S(k) .(44)
Taking into account that k p , p = 1, 2, . . . are the poles of S-matrix in the fourth-quadrant of the complex k plane, the S matrix has additional poles −k * p and zeros −k p and k * p , where all these zeros and poles are simple [40]. The poles of the function I(k, r, r ) are then situated in the same quadrant as the poles and zeros of the S-matrix. If b p are the residues of the S-matrix in the fourthquadrant, the residues corresponding to its poles of the third-quadrant are −b * p [20] while the residues of the inverse of the S-matrix, in terms of b p are (see Appendix B):
Res 1/S(k), k = −k p = −b p ,(45)Res 1/S(k), k = k * p = b * p .(46)
Thus, the residues of the function I(k, r, r ) can be expressed in terms of the residues of the S-matrix. If we call ι(k p , r, r) the residues of this function corresponding to the poles of the fourth-quadrant, then:
Res I(k, r, r ), k = k p = − 1 4 b p f (−k p , r)f (−k p , r ) ≡ ι(k p , r, r ),(47)
Res
I(k, r, r ), k = −k * p = 1 4 b * p f * (−k p , r)f * (−k p , r ) = −ι * (k p , r, r ),(48)Res I(k, r, r ), k = −k p = 1 4 b p f (−k p , r)f (−k p , r ) = −ι(k p , r, r ),(49)
Res
I(k, r, r ), k = k * p = − 1 4 b * p f * (−k p , r)f * (−k p , r ) = ι * (k p , r, r ).(50)
In the calculation of these residues, we used the property f * (−k * , r) = f (k, r) for complex k [41]. From the Mittag-Leffler theorem and taking into account that I(0, r, r ) = 0, we have:
I(k, r, r ) = 4k 2 Re p ι(k p , r, r ) k p (k 2 − k 2 p ) .(51)
Finally, after some lengthy algebra (see Appendix C), it is possible to write (k) in the following form:
(k) = 2 π k 2 Re p ia p (k p ) k p (k 2 p − k 2 ) ,(52)
where the coefficients a p (k p ) are given by,
a p (k p ) ≡ 4i ∞ 0 ∞ 0 ψ(r, 0)ψ * (r , 0)ι(k p , r, r ) dr dr.(53)
Noting the definition of (k) in (42) and substituting (52) in (39), the survival amplitude in the DN formalism becomes,
A(t) = ∞ 0 2 π Re k 2 p ia p (k p ) k p (k 2 p − k 2 ) e −ik 2 t dk ,(54)
which, after a change of variable E = k 2 , can be expressed as,
A(t) = ∞ 0 1 π Re √ E p ia p (k p ) k p (k 2 p − E) e −iEt dE = ∞ 0 1 π Im p a p (k p ) k p √ E k 2 p − E e −iEt dE ,(55)
so that
ρ DN (E) = 1 π Im p a p (k p ) k p √ E k 2 p − E .(56)
The above expression for the survival amplitude is the same as that in Eq. (34) up to the constants C p (k p )C p (k p ) and a p (k p ). Note however that there is a subtle difference between the constants of the GC and DN formalism. C p (k p )C p (k p ) of the GC formalism depend solely on the resonant poles k p . However, the constants a p (k p ) which apparently depend on only k p , in principle depend on all other existing poles through their dependence on the residues ι(k p , r, r ) (see Eqs (47) and (B5)).
C. Comparison of the GC and DN coefficients
The coefficients in the GC formalism written as a double integral:
C p (k p )C p (k p ) = R 0 R 0 ψ(r, 0)ψ * (r , 0)u p (r, k p )u p (r , k p ) drdr ,(57)
where R is the range of the potential, are not equal to the coefficients a p (k p ) unless
u p (r, k p ) = b p i f (−k p , r) .(58)
The above expression is deduced by substituting the definition of ι(k n , r, r ) in the integral (53) and comparing with (57). In principle, this result shows that the resonant state associated with the fourthquadrant pole k = k p which is also a pole of the S-matrix may be computed in terms of the residues of the S-matrix at the corresponding pole and the Jost function.
From the Riemman-Lebesgue theorem we know that (k) → 0 when k → ∞. This implies that
lim k→∞ (k) = lim k→∞ 2 π k 2 Re p ia p (k p ) k p (k 2 p − k 2 ) = 0 ⇒ Im p a p (k p ) k p = 0.(59)
Since A(0) = 1, from (55) we have that:
∞ 0 1 π Im p a p (k p ) k p √ E k 2 p − E dE = 1.(60)
However,
√ E k 2 p − E = 1 √ E k 2 p k 2 p − E − 1 .(61)
Using the condition (59), Eq. (60) takes the form:
1 π Im p a p (k p ) k p ∞ 0 1 √ E k 2 p k 2 p − E − 1 dE = 1 π Im p k p a p (k p ) ∞ 0 dE √ E k 2 p − E = 1.(62)
Since the integral in the last equation is equal to iπ/k p , (62) reduces to 1
Im i p a p (k p ) = Re p a p (k p ) = 1.(63)
The properties (59) and (63) satisfied by the coefficients a p (k p ) are the same as those satisfied by C p (k p )C p (k p ). 1 The integral was calculated following this theorem: If f (z) is a single-valued analytic function in the domain 0 < Arg z <
IV. ANALYTICAL EXPRESSION FOR THE SURVIVAL AMPLITUDE
Analytical expressions for the survival amplitude, A(t), of a resonance given by a Breit-Wigner form for the energy density can be found in [43,44]. In [20], the analytical expressions for A(t) were derived using generalized expressions for the energy density (derived using the analytical properties of the Smatrix and the Mittag-Leffler theorem) within the Fock-Krylov method. The expressions were shown to reduce to those arising from the Breit-Wigner form alone plus corrections. Here, we shall find an analytical expression for the survival amplitude given in (32) and (55), study their asymptotic behaviour and analyse the transition region between the exponential and non-exponential decay law. Apart from obtaining analytical expressions for the transition time, we shall examine some nuclear and particle decays and the relevance of the results for an experimental observation of the non-exponential decay law.
Since the energy density and hence the survival amplitude in the GC and DN formalisms have been shown in the previous section to be equivalent up to the constants appearing in Eqs (32) and (55), it is convenient to write both equations in one compact expression before computing the survival amplitude.
Thus, if we define the coefficient γ p (k p ) as:
γ p (k p ) = C p (k p )C p (k p ), for GC formalism, a p (k p ), for DN formalism,(64)
then both the energy densities can be written in a common form as:
ρ(E) = 1 π Im p γ p (k p ) k p √ E k 2 p − E .(65)
The coefficients γ p (k p ) satisfy the same properties as C p (k p )C p (k p ) and a p (k p ), i.e.,
Re p γ p (k p ) = 1,(66)Im p γ p (k p ) k p = 0.(67)
A. Survival Amplitude in terms of the incomplete gamma function
In the present section, we shall provide analytical expressions for the survival amplitudes in a combined form which is valid for both methods. Analytical formulae for the survival amplitudes evaluated in [15] within the Green's function method, were presented in terms of the error functions. Here we present the expressions using incomplete gamma functions.
Substituting (65) in (21), the survival amplitude is given as
A(t) = ∞ 0 ρ(E)e −iEt dE = 1 2πi p γ p (k p ) k p ∞ 0 √ E k 2 p − E e −iEt dE − 1 2πi p γ * p (k p ) k * p ∞ 0 √ E k * p 2 − E e −iEt dE .(68)
Using (D4) and (D6) (see appendix D), we get
A(t) = 1 2πi p γ p (k p ) k p 2πik p e −ik 2 p t + i √ π 2 k p e −ik 2 p t Γ − 1 2 , −ik 2 p t − 1 2πi p γ * p (k p ) k * p i √ π 2 k * p e −ik * p 2 t Γ − 1 2 , −ik * p 2 t = p γ p (k p )e −ik 2 p t + 1 4 √ π p γ p (k p )e −ik 2 p t Γ − 1 2 , −ik 2 p t − γ * p (k p )e −ik * p 2 t Γ − 1 2 , −ik * p 2 t .(69)
In order to ensure that the survival amplitude at t = 0 is unity, using Γ − 1 2 , 0 = −2 √ π together with the property (66) gives:
A(0) = p γ p (k p ) − 1 2 p γ p (k p ) − γ * p (k p ) = Re p γ p (k p ) = 1.(70)
Using the properties of the incomplete gamma functions [45]:
Γ(α + 1, z) = αΓ(α, z) + z α e −z(71)
with α = 1/2 and z = −ik 2 p t, we can write (69) as
A(t) = p γ p (k p )e −ik 2 p t + 1 4 √ π p γ p (k p ) 2 −ik 2 p t −1/2 − 2e −ik 2 p t Γ − 1 2 , −ik 2 p t − 1 4 √ π p γ * p (k p ) 2 −ik * p 2 t −1/2 − 2e −ik 2 p t Γ − 1 2 , −ik * p 2 t = p γ p (k p )e −ik 2 p t − 1 2 √ π p γ p (k p )e −ik 2 p t Γ 1 2 , −ik 2 p t − γ * p (k p )e −ik * p 2 t Γ 1 2 , −ik * p 2 t + 1 √ πt e 3iπ/4 Im p γ p (k p ) k p .
The last term is zero due to the property (67). Finally,
A(t) = p γ p (k p )e −ik 2 p t − 1 2 √ π p γ p (k p )e −ik 2 p t Γ 1 2 , −ik 2 p t − γ * p (k p )e −ik * p 2 t Γ 1 2 , −ik * p 2 t .(72)
The above expression is equivalent to Eq. (4.21) of Ref. [10] given in terms of the M functions. For a given p, each term depends on the pole associated with the index p and it is possible to define partial survival amplitudes for the pole k p as:
A p (t) = γ p (k p )e −ik 2 p t − 1 2 √ π γ p (k p )e −ik 2 p t Γ 1 2 , −ik 2 p t − γ * p (k p )e −ik * p 2 t Γ 1 2 , −ik * p 2 t ,(73)
such that the survival amplitude takes the simple form:
A(t) = p A p (t).(74)
B. Behaviour at large times
Using the asymptotic expansion of the incomplete gamma function [46]:
e z Γ α, z ∼ z α−1 + (α − 1)z α−2 + · · ·(75)
and ignoring exponential terms, we have,
A(t) ∼ − 1 2 √ π p γ p (k p ) 1 k p (−it) −1/2 − 1 2k 3 p (−it) −3/2 + 1 2 √ π p γ * p (k p ) 1 k * p (−it) −1/2 − 1 2k * p 3 (−it) −3/2 = − 1 2 √ π (−it) −1/2 · 2i Im p γ p (k p ) k p + 1 4 √ π (−it) −3/2 · 2i Im p γ p (k p ) k 3 p .(76)
The first term is zero due to the properties of the coefficients, γ p (k p ), and
A(t) ∼ − e iπ/4 √ 4π Im p γ p (k p ) k 3 p t −3/2 .(77)
The above result has also been obtained in appendix A by computing A(t) with the steepest descent method. In both cases, the results are consistent and the survival probability is proportional to t −3 for large t. We remind the reader that the above analysis has been performed for S-waves.
C. Survival amplitude for an isolated resonance
If the system under analysis has only one resonant pole k r , the expression deduced for the survival amplitude at any time t as well as that for large times can be written in a simple form:
A(t) = γ r (k r )e −ik 2 r t − 1 2 √ π γ r (k r )e −ik 2 r t Γ 1 2 , −ik 2 r t − γ * r (k r )e −ik * r 2 t Γ 1 2 , −ik * r 2 t ,(78)A(t) ∼ − e iπ/4 √ 4π Im γ r (k r ) k 3 r t −3/2 .(79)
In this case, the conditions that the coefficients C r andC r must satisfy are reduced to Re γ r (k r ) = 1,
Im γ r (k r ) k r = 0.(80)
leading to [16] γ r (k r ) = 1 + i Im k r Re k r = k r Re k r .
Alternatively, Eq. (78) and (79) can be written as
A(t) = k r Re k r e −ik 2 r t − 1 2 √ π Re k r k r e −ik 2 r t Γ 1 2 , −ik 2 r t − k * r e −ik * r 2 t Γ 1 2 , −ik * r 2 t ,(83)A(t) ∼ − e iπ/4 √ 4π Re k r Im 1 k 2 r t −3/2 .(84)
An inspection of (83) reveals that, for intermediate times, the survival amplitude can be described by an exponential function, i.e.,
A r (t) ≈ k r Re k r e −ik 2 r t .(85)
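As an illustration of Eqs. (83)-(85), the sketch below (not the authors' numerical code) evaluates the exact isolated-resonance survival amplitude with mpmath's incomplete gamma function for an assumed narrow pole with x_r ≈ 0.1, and compares it with the exponential approximation (85) at intermediate times and with the power law (84) at large times. The pole position is a hypothetical test value, not a fit to any physical resonance.

```python
import mpmath as mp

mp.mp.dps = 30                       # generous precision for the large-t regime
I = mp.mpc(0, 1)
k_r = mp.mpc(1.0, -0.05)             # assumed fourth-quadrant pole, x_r ~ 0.1
g = k_r / k_r.real                   # gamma_r(k_r) = k_r / Re k_r, Eq. (82)

def A(t):
    """Exact survival amplitude, Eq. (83) (equivalently Eq. (78))."""
    z1 = -I * k_r**2 * t
    z2 = -I * mp.conj(k_r)**2 * t
    bracket = (g * mp.exp(z1) * mp.gammainc(mp.mpf('0.5'), z1)
               - mp.conj(g) * mp.exp(z2) * mp.gammainc(mp.mpf('0.5'), z2))
    return g * mp.exp(z1) - bracket / (2 * mp.sqrt(mp.pi))

def A_exp(t):
    """Exponential approximation of Eq. (85)."""
    return g * mp.exp(-I * k_r**2 * t)

def A_pow(t):
    """Leading large-time power law, Eq. (84)."""
    return (-mp.exp(I * mp.pi / 4) / (mp.sqrt(4 * mp.pi) * k_r.real)
            * mp.im(1 / k_r**2) * t**mp.mpf('-1.5'))

print(abs(A(0)))                     # -> 1, the normalisation of Eq. (70)
print(abs(A(20)), abs(A_exp(20)))    # intermediate times: exponential dominates
print(abs(A(500)), abs(A_pow(500)))  # large times: the t^(-3/2) law takes over
```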
The quantum mechanical description of the decay law leads to a non-exponential behaviour at very short and very large times with the intermediate region being dominated by the exponential decay law.
In what follows, we shall concentrate on the transition region from the exponential to the power law at large times. Considering the case of an isolated resonance, the critical time for the transition to the power law is investigated and its relevance for an experimental observation of the power law is discussed.
V. CRITICAL TIME
It would be useful if we could find the parameters on which the critical time for the survival amplitude to go from an exponential to a power law behaviour depends. With this objective, we shall study the intersection of the intermediate and large time survival probabilities. We define the critical time between these behaviours as t c , such that
k r Re k r e −ik 2 r tc 2 = − e iπ/4 √ 4π Re k r Im 1 k 2 r t −3/2 c 2 .(86)
Since k_r² = ε_r − iΓ_r/2 and defining τ_c = Γ_r t_c, it is convenient to write (86) as
k r 2 e −τc = Γ 3 r 4π Im 1 k 2 r 2 τ −3 c .(87)
Let C be the constant defined by
C = Γ 3 r 4π 1 k r Im 1 k 2 r 2 ,(88)
which is always positive. The transition time, τ c = Γ r t c , shall be the zero of the function
f (τ c ) = e −τc − Cτ −3 c = τ 3 c e −τc − C τ 3 c .(89)
Let the auxiliary function g(τ c ) together with its derivative be given by
g(τ c ) = τ 3 c e −τc − C,(90)g (τ c ) = τ 2 c (3 − τ c )e −τc .(91)
g(τ c ) has two critical points: τ c = 0 and τ c = 3. In the interval 0 < τ c < 3, g (τ c ) > 0; and in the interval τ c > 3, g (τ c ) < 0. This means that τ c = 3 is a maximum and τ c = 0 is a minimum. The values of g at those points are g(0) = −C and g(3) = 27e −3 − C. When τ c → ∞, g → −C.
Since C > 0, g is positive on some interval or negative for all τ c > 0, and this depends on the sign of its maximum. If 27e −3 − C < 0, the maximum is negative and g < 0 in τ c > 0. If C = 27e −3 , the maximum is zero and g ≤ 0 in τ c > 0; but, if 27e −3 − C > 0, g will have two zeros and will be positive in the interval formed by those zeros. Now, it is easy to find the zeros of f and this depends on the values of C. We have three cases:
i) First case: If C > 27e −3 , f (τ c ) has no zeros and is negative for τ c > 0, this means that e −τc < Cτ −3 c : there is no critical time and thus, the power law behaviour always dominates.
ii) Second case: If C = 27e −3 , f (τ c ) has one zero and is negative or null in τ c > 0, this implies e −τc ≤ Cτ −3 c : there is only one critical point and the power law behaviour dominates again.
iii) Third case: If C < 27e −3 , f (τ c ) has two zeros τ c1 and τ c2 such that τ c1 < τ c2 . f (τ c ) > 0 for τ c1 < τ c < τ c2 (and e −τc > Cτ −3 c here). f (τ c ) < 0 for values of τ c out of this interval and e −τc < Cτ −3 c : the exponential behaviour is more dominant than the power law behaviour in τ c1 < τ < τ c2 , but, for τ > τ c2 , it is the power law that dominates. We shall see that τ c2 can be identified as the critical time for the transition from the exponential to the power law.
Let us now see if it is possible to write the parameter C in terms of x_r = Γ_r/(2ε_r). For a given resonance pole E_r − iΓ_r/2 in the complex energy plane, ε_r is defined as E_r − E_th, where E_th for example is the sum of the masses of the decay products of an unstable particle with mass E_r. Since
1/k_r² = |1/k_r²| exp(−i Arg k_r²),
we have
Im(1/k_r²) = −|1/k_r²| sin(Arg k_r²),
and C is equal to
C = (2/π) (Γ_r/2)³ (1/|k_r²|³) sin²(Arg k_r²).
But |k_r²| = ε_r √(1 + x_r²) and sin(Arg k_r²) = −x_r/√(1 + x_r²). Thus,
C = (2/π) x_r⁵/(1 + x_r²)^{5/2} = (2/π) [x_r/√(1 + x_r²)]⁵.    (92)
Thus C can be written as a function of x r . An upper bound of C can be obtained if we see that the term x r 1 + x 2 r −1/2 is always less than one for any value of x r . Thus,
C < 2 π .(93)
This bound is less than 27e −3 = 1.34 . . . and the third case applies always.
Coming back to the definition of τ c through the zeros of the function f (τ c ) in Eq. (89), we can write it as
− τ 3 e −τ /3 = − 3 √ C 3 .
With a change of variables, z = −τ /3, we can write
ze z = − 3 √ C 3 .(94)
Here we note that the Lambert function W (x) is the inverse function of the function x = W (x)e W (x) .
Although this function has infinitely many branches, we are interested in the real branches: the principal one, denoted by W_0(x), which takes the values W_0(x) ≥ −1 for x ≥ −1; and the second one, W_{−1}(x), which takes the values W_{−1}(x) ≤ −1 for −1 ≤ x ≤ 0. Thus the solution of Eq. (94) would be, according to the definition of the Lambert function,
z = − τ 3 = W − 3 √ C 3 .
In order to decide which branch to use, we recall that 0 < C < 2/π. Thus, the argument of the Lambert function satisfies −(1/3)(2/π)^{1/3} = −0.2867513379 < −C^{1/3}/3 < 0. If we use the branch W_0(x), we obtain a critical time that satisfies 0 < τ < 1.3484499515: this interval corresponds to the solutions τ_c1, the smaller times, which we discard.
However, if we use the other branch, the critical time satisfies τ > 5.6426374987 and we associate it with the transition time τ c . The appropriate solution is given by,
τ c = −3W −1 − 3 √ C 3 .(95)
The above formula for τ c is model independent if the energy density ρ(E) entering the calculation of the survival amplitude is model independent. This density as mentioned above, is up to a factor, the same as the one obtained in [20] solely using the properties of the S matrix and a theorem of Mittag-Leffler. In [32], the authors had also obtained an equation similar to Eq. (94) but for a Breit-Wigner form of ρ(E) without a threshold factor. The solution of the equation in [32] can be written as
τ BW c = −W −1 − 4 π x 2 r .(96)
In Table I, values of τ c (this work) and τ BW c (as in [32]) for some values of x r are compared. The absence of the threshold factor (apart from the use of a Breit-Wigner form) gives rise to smaller critical times τ BW c as compared to τ c evaluated from the model independent form involving the correct threshold for ρ(E) as in (36). The second column displays some fitted values, τ f it c , to be discussed below. Though, in principle, most real life resonances would correspond to x r < 1, it is interesting to note that for C = 2/π (or x r → ∞), we have the lower bound of the critical time: τ c = 5.6426375.
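The first column of Table I can be reproduced with a few lines of code. The sketch below (not the authors' code) evaluates C(x_r) from Eq. (92) and τ_c from Eq. (95) using the k = −1 branch of the Lambert W function available in SciPy.

```python
import numpy as np
from scipy.special import lambertw

def C_of_x(x_r):
    # Eq. (92): C = (2/pi) * (x_r / sqrt(1 + x_r^2))**5
    return (2.0 / np.pi) * (x_r / np.sqrt(1.0 + x_r**2))**5

def tau_c(x_r):
    # Eq. (95): tau_c = -3 W_{-1}(-C^{1/3} / 3)
    return -3.0 * lambertw(-np.cbrt(C_of_x(x_r)) / 3.0, k=-1).real

for x_r in (0.1, 0.3, 0.5, 1.0):
    print(x_r, round(tau_c(x_r), 1))   # expected: 21.1, 14.8, 11.9, 8.7
```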
The critical time for the transition from the exponential to the power law at large times was also studied in [16] in the context of a single isolated resonance. Determining the transition time from a numerical calculation of the survival probabilities for several values of the variable R = ε_r/Γ_r and observing its behaviour as a function of this variable, the authors guessed a logarithmic form for the transition time as follows:
τ_c^fit = A ln(R) + B,    (97)
and obtained the values A = 5.41 and B = 12.25 from a fitting procedure. In Fig. 1 we compare τ_c^fit with τ_c evaluated from the analytical expression (95) as a function of the variable R. It must be noted that (i) even though τ_c and τ_c^fit are very similar for most values of R, for small R, which corresponds to the case of broad resonances (such as the sigma meson for example [47]), they can be quite different and (ii) whereas τ_c has a finite lower limit of about 5.64 mentioned above, τ_c^fit can even take negative values for very small R. The region of R < 1 has indeed been found to be important in the literature in connection with the decay of artificial quantum structures [48]. In [48], the authors found that the decay law could have a non-exponential form at all times for the range 0 < R ≤ 0.3. The decay of a single ultracold atom was also shown to be non-exponential in [49] below R = 0.3. An examination of the inset in Fig. 1 shows that it is indeed below R = 0.3 that the analytical expression (95) and the fitted one, Eq. (97), start differing.
VI. NON-EXPONENTIAL DECAY OF PARTICLES AND NUCLEI
We shall now apply the results obtained in this work to study some unstable states which have been investigated experimentally. In Table II, we list the critical times for the beginning of the nonexponential (power) law for the particles and nuclei which have been studied in literature [6,[50][51][52].
The transition time as calculated in the present work appears many half-lives later than the number of observed half-lives. From the values given in the table, it is evident that (a) it was necessary to wait much longer to observe the power law (b) but waiting so long would also destroy most of the sample with nothing left for measurement. One could then think of observing the broader resonances such as the sigma meson with a width of a few hundred MeV leading to a very small τ c ∼ 9, however, such a width corresponds to a lifetime of about 10 −23 s, making the observation once again not possible.
VII. INTERFERENCE REGION
The survival probability for narrow resonances, i.e., for x r = Γ r /2 r 1, typically displays an exponential decay law followed by an oscillatory transition region (several half-lives τ C = Γ r t c later) which is then followed by the power law at large times. The origin of the oscillations lies in the interference of the exponential and power law behaviours. In this section, we shall investigate the large time transition region and obtain an analytical expression to describe it.
A. Origin of the oscillatory term
As we have already seen, the survival amplitude can be expressed as a sum of two parts: one describing an exponential decay, A e and another term A p with a power law behaviour. Thus, the total amplitude A(t) is given by,
A(t) = A e (t) + A p (t),(98)
with
A e (t) = k r Re k r e −ik 2 r t ,(99)A p (t) = − 1 2 √ π Re k r k r e −ik 2 r t Γ 1 2 , −ik 2 r t − k * r e −ik * r 2 t Γ 1 2 , −ik * r 2 t = − e iπ/4 √ 4π Re k r Im 1 k 2 r t −3/2 + O(t −5/2 ), t → ∞,(100)
where k 2 r = r − iΓ r /2. The survival probability is
P (t) = |A(t)| 2 = |A e (t)| 2 + |A p (t)| 2 + 2 Re A e (t)A * p (t) = P e (t) + P p (t) + 2 Re A e (t)A * p (t) ,(101)
where P e (t) = |A e (t)| 2 and P p (t) = |A p (t)| 2 . Given the fact that the oscillatory behaviour becomes evident on a logarithmic scale, we rewrite the above equation as,
P (t) = P e (t) + P p (t) 1 + 2 Re A e (t)A * p (t) |A e (t)| 2 + |A p (t)| 2 = I(t) P e (t) + P p (t) ,(102)
where we have defined a modulating function, I(t), such that
I(t) = 1 + 2 Re A e (t)A * p (t) |A e (t)| 2 + |A p (t)| 2 .(103)
Taking the logarithm on both sides of the above equation, we can write, ln P (t) = ln P e (t) + P p (t) + ln I(t).
Eq. (104) hints that the modulating function I(t) must give rise to the oscillations and this is indeed confirmed in Fig. 2. The modulating function I(t) is shown in Fig. 3 on a linear scale.
B. Analytical expression for the modulating function
If we naively replace Eqs. (99) and (100) (first line) in Eq. (103), the modulating function can be written as
I(t) = 1 + 2 Re[A_e(t) A_p*(t)] / (|A_e(t)|² + |A_p(t)|²)
     = 1 − (1/√π) Re{ γ_r(k_r) e^{−ik_r²t} [ γ_r(k_r) e^{−ik_r²t} Γ(1/2, −ik_r²t) − γ_r*(k_r) e^{−ik_r*²t} Γ(1/2, −ik_r*²t) ]* } / { |γ_r(k_r) e^{−ik_r²t}|² + (1/4π) |γ_r(k_r) e^{−ik_r²t} Γ(1/2, −ik_r²t) − γ_r*(k_r) e^{−ik_r*²t} Γ(1/2, −ik_r*²t)|² },    (105)
where γ_r(k_r) is given by Eq. (82). Thus, the above equation as such would be quite difficult to analyze and hence we consider approximating A_p(t) simply by the power law behaviour at large times. Such an approximation is quite good for small values of x_r, where the critical time for the transition from the exponential to the power law behaviour (as seen in an earlier section) is quite large. Thus the expressions which will be derived below will be valid only for resonances with x_r ≪ 1. With τ = Γ_r t and k_r² = ε_r − iΓ_r/2, we now write
−ik_r²t = −(1/2)τ − iω_r τ,    (106)
where ω r is defined as
ω_r = ε_r/Γ_r.    (107)
The expressions (99) and (100) can now be written as,
A e (t) = k r Re k r e −ik 2 p t = k r Re k r e −τ /2 e −iωrτ ,(108)A p (t) = − e iπ/4 √ 4π Re k r Im 1 k 2 r t −3/2 = − e iπ/4 Re k r Γ 3 r 4π Im 1 k 2 r τ −3/2 ,(109)
and the modulating function becomes
I(t) = 1 + 2 Re A e (t)A * p (t) |A e (t)| 2 + |A p (t)| 2 = 1 − 2 k r Γ 3 r 4π Im 1 k 2 r e −τ /2 τ −3/2 Re e −iωrτ e −iπ/4 e i Arg kr k r 2 e −τ + Γ 3 r 4π Im 1 k 2 r 2 τ −3 .(110)
Introducing the constant C given by Eq. (88), we get,
I(τ ) = 1 + D e −τ /2 τ −3/2 e −τ + Cτ −3 cos ω r τ + π/4 − Arg k r ,(111)
where D is given by,
D = − 2 k r Γ 3 r 4π Im 1 k 2 r .(112)
The modulating function so derived allows us to infer that:
i) I(τ ) oscillates about I = 1 with a frequency ω r .
ii) The function is modulated with an amplitude
m(τ ) = e −τ /2 τ −3/2 e −τ + Cτ −3 .(113)
which is expected to be maximum at the critical time.
iii) Apart from the above, the function I(τ ) is expected to present problems for small values of τ (since we approximated A p (t) by its behaviour at large times).
iv) Since ω r = 1/2x r , I(τ ) is expected to oscillate a lot if x r is small. This will not be the case for x r close to or bigger than unity (see for example the case of the broad σ meson where one observes no oscillation at all [18]). In Fig. 4, we compare the modulating function calculated using Eq. (105) (with the complete analytical expressions for A p (t)) and that using the approximation of the large time behaviour mentioned above.
As expected, the function presents problems at small times but the approximation of using the large time behaviour instead of the exact expression is quite good. Before resolving the problem at small times, let us first study the function m(τ ).
C. Analysis of m(τ )
If we write the function as follows:
m(τ ) = 1 e −τ /2 τ 3/2 + Ce x/2 τ −3/2 ,
then its derivative is given as
m (τ ) = 1 2 1/2 e τ /2 (τ − 3)(τ 3 − Ce τ ) (e −τ /2 τ 3/2 + Ce x/2 τ −3/2 ) 2 .
The critical times in this function are τ = 0, τ c1 , 3, τ c2 (in increasing order), where the second and fourth ones are solutions of τ 3 − Ce τ = 0 and as analyzed in section V has two real solutions. It is easy to see that τ = 0 and τ = 3 are minima, while τ = τ c1 , τ c2 are maxima. In Fig. 5 we show m(τ ) for x r = 0.1.
Here, τ c1 = 0.0184942 and τ c2 = 21.143362 with the latter corresponding to the critical time for the transition from the exponential to the power law behaviour (see Table I). The most relevant observation here is that m(τ ) does display a maximum at the critical time as expected. However, in order to have an m(τ ) that describes the modulating function correctly, we must get rid of the maximum close to τ = 0.
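The two quoted values follow from solving τ³ e^{−τ} = C numerically; a minimal sketch with SciPy's brentq, bracketing the two roots on either side of the maximum of g(τ) at τ = 3 and assuming C is given by Eq. (92) with x_r = 0.1, is:

```python
import numpy as np
from scipy.optimize import brentq

C = (2/np.pi) * (0.1/np.sqrt(1.01))**5        # Eq. (92) for x_r = 0.1
g = lambda tau: tau**3 * np.exp(-tau) - C     # Eq. (90)

tau_c1 = brentq(g, 1e-6, 3.0)
tau_c2 = brentq(g, 3.0, 60.0)
print(tau_c1, tau_c2)     # approximately 0.0185 and 21.14, as quoted above
```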
One way of doing this could be by constructing a function of τ − τ_c such that at τ = τ_c it is given by m(τ_c), which is 1/(2√C). The best way to do this is by expanding 1/m(τ) in a series in τ − τ_c, so that,
1 √ C m(τ ) = ∞ n=0 τ − τ c n 2 n n! n s=0 n k Γ − 1 2 Γ − 1 2 − k + (−1) n+k Γ 5 2 Γ 5 2 − k 2 τ c k = 2 + 1 4 1 − 3 τ c 2 τ − τ c 2 + 1 48 36 τ 2 c − 108 τ 3 c τ − τ c 3 + 1 192 1 − 12 τ c + 54 τ 2 c − 204 τ 3 c + 477 τ 4 c τ − τ c 4 + · · ·(114)
In order to decide on the number of relevant terms in the expansion, in Fig. 6 we display m(τ ) calculated by truncating the series at different number of terms. We see that already up to the fourth order, we obtain a good estimate of the exact m(τ ). There is no peak at small times. We must mention that m(τ ) (and hence also the modulating function) is not symmetric about τ = τ c . Hence, if we define the constants
m 2 = 1 8 1 − 3 τ c 2 ,(115)m 3 = 1 96 36 τ 2 c − 108 τ 3 c ,(116)m 4 = 1 384 1 − 12 τ c + 54 τ 2 c − 204 τ 3 c + 477 τ 4 c ,(117)
then and the modulating function can be written as
m(τ ) = 1/2 √ C 1 + m 2 τ − τ c 2 + m 3 τ − τ c 3 + m 4 τ − τ c 4 ,(118)I(τ ) = 1 + D 2 √ C cos ω r τ + π/4 − Arg k r 1 + m 2 τ − τ c 2 + m 3 τ − τ c 3 + m 4 τ − τ c 4 .(119)
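A quick numerical comparison of the exact envelope m(τ) of Eq. (113) with the truncated expansion (118) for the x_r = 0.1 example is given in the sketch below; the close agreement near τ_c and the mild deviation a few units away are consistent with the behaviour discussed around Fig. 6.

```python
import numpy as np

C = (2/np.pi) * (0.1/np.sqrt(1.01))**5       # Eq. (92) with x_r = 0.1
tau_c = 21.143362                            # larger root of tau^3 = C e^tau

def m_exact(t):                              # Eq. (113)
    return np.exp(-t/2) * t**-1.5 / (np.exp(-t) + C * t**-3.0)

m2 = (1 - 3/tau_c)**2 / 8                                              # Eq. (115)
m3 = (36/tau_c**2 - 108/tau_c**3) / 96                                 # Eq. (116)
m4 = (1 - 12/tau_c + 54/tau_c**2 - 204/tau_c**3 + 477/tau_c**4) / 384  # Eq. (117)

def m_series(t):                             # Eq. (118)
    d = t - tau_c
    return 0.5/np.sqrt(C) / (1 + m2*d**2 + m3*d**3 + m4*d**4)

for t in (tau_c - 4, tau_c, tau_c + 4):
    print(round(t, 2), m_exact(t), m_series(t))
```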
In Fig. 7, we compare this expression with that of the modulating function given by Eq. (105). In a small region around the critical time, the two expressions coincide exactly, with small differences away from this region. Finally, in Fig. 8, we compare the exact survival probability with that using the approximate form of the modulating function. Apart from this, the green enveloping curves show the survival probability evaluated without the oscillatory part. These curves are evaluated by writing the modulating function in (111) with the maximum and minimum values of the cosine term, i.e.,
I ± (τ ) = 1 + D e −τ /2 τ −3/2 e −τ + Cτ −3 (±1),(120)
and P ± (τ ) = I ± (τ ) P e (τ ) + P p (τ ) . Though somewhat obvious, it is interesting to note that the two curves, P + (τ ) and P − (τ ) coincide in all regions except for the transition region where they separate.
This implies that m(τ ) can indeed be used to define the transition region between the exponential and the non-exponential region at large times.
VIII. SUMMARY AND CONCLUSIONS
Writing the survival amplitudes based on the Green's function method (GC) as well as the Jost functions method (DN), as a Fourier transform similar to the one used in the Fock-Krylov method, it is shown that the GC and DN approaches are equivalent upto some constants. Such a rewriting allows one to define the densities ρ GC (E) and ρ DN (E) which are then compared with the definition of a density obtained from a statistical physics motivated expression. The latter is obtained from a relation given by Beth and Uhlenbeck which relates the density of states, ρ BU l (E), to the energy derivative of the scattering phase shift, dδ l /dE, in the l th partial wave. A theorem of Mittag-Leffler further allows ρ BU l (E) to be expressed in terms of the poles of the S-matrix [20], thus making the comparison with ρ GC (E) and ρ DN (E) straightforward. For the case of an isolated s-wave resonance, ρ GC (E) and ρ DN (E)
give the same expression as ρ BU 0 (E) plus a small correction term. Noting that the coefficients appearing in the GC and DN formalism satisfy the same conditions, a general analytic form for the survival amplitude in terms of the incomplete gamma functions is also derived. Apart from this, the analysis for large times is done by applying the steepest descent method as well as using the asymptotic expansion for the incomplete gamma function. The results obtained in both cases are the same, in particular, the t −3 power law for s-wave resonances, which is consistent with most of the literature (see however, [7,33] for exceptions).
The equation deduced for the survival amplitude allowed us to easily separate the exponential and power law behaviours and define a critical time at the intersection of the survival probability for intermediate and large times. A detailed analysis of the transition region reveals interesting aspects as well as the origin of the oscillatory behaviour of the survival probability in this region. An analytical expression for the critical transition time, τ c , is obtained in terms of the Lambert W function. Calculations of τ c for the decays which have been measured experimentally up to several half-lives with the objective of observing the power law behaviour reveal the reason for the negative results of these experiments. The number of half-lives after which the power law starts, for example, for a narrow nuclear resonance such as 56 Mn is about 300, whereas the experiment was carried out only up to 45 half-lives. However, performing measurements up to 300 half-lives would be practically impossible since the exponential decay law would destroy almost all the sample by the time the narrow resonance reaches the power law. Broad resonances such as the σ meson reach the power law much earlier, however, the lifetime is too short making the experimental observation once again difficult. The results of the present work indicate that the non-exponential behaviour of nuclear and particle resonances at large times is hard to observe.
Appendix A: Evaluation of Survival Amplitude for large times using the Steepest Descent
Method
The integrals required for the steepest descent method have the form 2 ,
C f (z)e tφ(z) dz,(A1)
where t > 0, f (z) and φ(z) are analytic functions in a region D and C ∈ D is the contour (not necessarily closed) of integration. In our case, we need to evaluate the integral M (k n , t), which is identically equal to Eq. (A1) if f (z) = (z − k n ) −1 , φ(z) = −iz 2 , C = z ∈ C : Im z = 0 and Im k n < 0 (as of now, we do not include the coefficient i/2π).
The function φ(z) has one saddle-point of order n = 2 at z = 0 since φ (0) = 0 and φ (0) = −2i = 2e −iπ/2 = ae iα . The directions of steepest descent are θ = − α n + (2m + 1) π n , m = 0, 1, . . . , n − 1.
In our case,
θ = 3 4 π, 7 4 π.
The contour C must be deformed such that it follows these directions. In Fig. 9, we show how the contour C is deformed. The line BOA is the contour C. The straight lines OC and OB are the steepest descent directions.
For linking the integrals in those contours, we have to close them with the arcs of circumference CD and BA, both of radius R. Since the integrand is analytic in the contour OCD, using the Cauchy theorem, we get,
DO = − OC + CD . (A2)
For the contour OAB, however, the integrand has a pole depending on the fact if k n satisfies the condition − π 4 < Arg z < 0 or not. The residue theorem allows us to write
OA = OB + BA −2πie −ik 2 n t F (k n ),(A3)
where the function F (k n ) is defined by
F (k n ) = 1 − π 4 < Arg z < 0, 0 i. o. c. .(A4)
Adding (A3) and (A4), we obtain
DO + OA = DA = OB + BA − OC − CD −2πie −ik 2 n t F (k n ).(A5)
Since, in the limit R → ∞, both BA and CD tend to zero, (A5) takes the following form:
∞ −∞ e −itx 2 x − k n dx = Arg z=− π 4 e −itz 2 z − k n dz − Arg z= 3π 4 e −itz 2 z − k n dz − 2πie −ik 2 n t F (k n ) = e −iπ/4 ∞ 0 e −tr 2 re −iπ/4 − k n dr − e i3π/4 ∞ 0 e −tr 2 re i3π/4 − k n dr − 2πie −ik 2 n t F (k n ).(A6)
The principal contribution to the value of the integrals for t large, comes from a neighbourhood of r = 0.
Expanding re −iπ/4 − k n −1 and re i3π/4 − k n −1 in a Taylor series up to the third order around r = 0 and calculating the integrals, we have:
∞ −∞ e −itx 2 x − k n dx = i √ π k n e iπ/4 t −1/2 − i √ π 2k 3 n e i3π/4 t −3/2 + O(t −5/2 ).(A7)
We ignore the exponential term because for large t, it is negligible with respect to the negative power of t. Substituting (A7) in (21) and using properties 2) and 3) mentioned in section II C, we have In [20], the authors show that for a system under the influence of a central potential of finite range R, if the S-matrix for the orbital angular momentum l is written as a product, i.e, S l (k) = e −2iRk n (k + k ln )(k − k * ln ) (k − k ln )(k + k * ln ) m
A(t) = 1 4 √ π e i3πiζ lm + k iζ lm − k ,(B1)
where k ln with n = 1, 2, . . . corresponding to the resonant poles of the S-matrix in the fourth-quadrant and iζ lm , with m = ±1, ±2, . . . (with the plus signs corresponding to the bound and minus to the virtual states respectively) corresponding to the poles on the imaginary axis of the complex k plane, then the residue of S l (k) at k = k ln is b ln = Res S l (k), k = k ln = 2ik ln e −2iRk ln tan Arg k ln p =n (k ln + k lp )(k ln − k * lp ) (k ln − k lp )(k ln + k * lp ) m iζ lm + k ln iζ lm − k ln , (B2) and the residue of S l (k) at k = −k * ln is
Res S l (k), k = −k * ln = −b * ln .(B3)
If the system under consideration has no bound and virtual states, Eqs (B1) and (B2) are simplified to:
S l (k) = e −2iRk n (k + k ln )(k − k * ln ) (k − k ln )(k + k * ln ) ,(B4)
b ln = 2ik ln e −2iRk ln tan Arg k ln p =n
(k ln + k lp )(k ln − k * lp ) (k ln − k lp )(k ln + k * lp ) ,(B5)
and Eq. (B3) remains the same. Here, we are interested in computing the residues of 1/S l (k) at k = −k ln and k = k * ln . For the former pole:
Res 1/S l (k), k = −k ln = lim Since the integrand has a branch point at z = 0, we take the principal branch of √ z such that | Arg z| < π. If α satisfies Arg α ∈ (− π 2 , 0) and R < |α|, using the residue theorem, we have:
I(α) = 2πiα 1/2 e −iαt .(D2)
If α is not inside or on the contour, the Cauchy's theorem leads us to, For the segment AO, z = x. When R → ∞, For the segment BA, let z = Re −iθ , where θ ∈ 0, π 2 . The integral on this segment is:
AO = − OA = − R 0 √ x x − α e −itx dx → − ∞ 0 √ x x − α e −itx dx.BA = − π/2 0 R 1/2 e −iθ/2 Re −iθ − α e −tR sin θ−itR cos θ −iRe −iθ dθ.
Taking the modulus of this integral and supposing R > |α|, we have:
BA ≤ π/2 0 R 3/2 R − |α| e −tR sin θ dθ ≤ R 3/2 R − |α| π/2 0 e −2Rtθπ dθ = π 2t R 1/2 R − |α| 1 − e −Rt .
Here, we used sin θ ≥ 2θ/π with θ ∈ 0, π 2 . When R → ∞, BA → 0, and, BA → 0.
In the limit R → ∞, I(α) is equal to
I(α) = − ∞ 0 √ x x − α e −itx dx + e −iπ/4 ∞ 0 √ y y − iα e −ty dy.
The integral [54] ∞ 0 E ν E + σ e −sE dE = Γ(ν + 1)e σs σ ν Γ(−ν, σs), Re ν > −1, Re s > 0, | Arg σ| < π,
where Γ(α, z) is the incomplete gamma function [45], allows us to write I(α) as
I(α) = − ∞ 0 √ x x − α e −itx dx + e −iπ/4 Γ 3 2 e −iαt −iα 1/2 Γ − 1 2 , −iαt .(D5)
Finally,
∞ 0 √ x α − x e −itx dx = I(α) + i √ π 2 α 1/2 e −iαt Γ − 1 2 , −iαt , | Arg α| < π.(D6)
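The tabulated integral quoted above from [54] can be spot-checked numerically. The sketch below compares a direct mpmath quadrature of the left-hand side with the right-hand side for ν = 1/2 and an arbitrarily chosen complex σ with |Arg σ| < π; both the value of σ and of s are assumptions made only for the test.

```python
import mpmath as mp

mp.mp.dps = 25
nu, s = mp.mpf('0.5'), mp.mpf('0.7')
sigma = mp.mpc(1, 1)                     # test value with Arg sigma = pi/4

lhs = mp.quad(lambda E: E**nu * mp.exp(-s*E) / (E + sigma), [0, mp.inf])
rhs = mp.gamma(nu + 1) * mp.exp(sigma*s) * sigma**nu * mp.gammainc(-nu, sigma*s)
print(lhs)
print(rhs)                               # should agree to working precision
```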
FIG. 1: Comparison of critical times as a function of the variable R = ε_r/Γ_r using Eqs. (95) and (97). The inset shows the same figure for a smaller range of R.
FIG. 2: Survival probability (on a logarithmic scale). The blue curve is the full survival probability and the red line displays the behaviour without the modulating function (shown in black at the top). The ratio x_r = Γ_r/2ε_r here is chosen to be 0.1.
FIG. 3: Modulating function shown in Fig. 2, now on a linear scale.
FIG. 4: Approximate and exact form of the modulating function.
FIG. 5: m(τ) given by Eq. (113).
FIG. 6: m(τ) with the corresponding correction terms given in (114): blue line (second order), violet (third order) and green line (fourth order).
FIG. 7: Exact modulating function (red line) and the approximate one (blue line).
FIG. 8: Survival probability calculated exactly (blue line) and calculated using the approximate modulating function (red line). Green lines show the curves evaluated without the oscillatory part in the modulating function (see text).
FIG. 9: Contour of integration for the integral (16) and the directions of steepest descent of its integrand.
Appendix B: Residues of the inverse of the S-matrix
FIG. 10: Contour of integration for computing the integral (D1).
(−k ln − k lp )(−k ln + k * lp ) (−k ln + k lp )(−k ln − k * lp ) = −2ik ln e −2iRk ln tan Arg k ln p =n (k ln + k lp )(k ln − k * lp ) (k ln − k lp )(k ln + k * lp ) = −b ln , ln − k lp )(k * ln + k * lp ) (k * ln + k lp )(k * ln − k * lp ) = 2ik ln e −2iRk ln tan Arg k ln p =n (k ln + k lp )(k ln − k * lp ) (k ln − k lp )(k ln + k * lp ) 0)ψ * (r , 0)ι * (k n , r, r ) dr dr. (C1) If we write the double integral of the second term of the right side of (C1) as the conjugate of some double integral and use the fact that ι(k, r, r ) = ι(k, r , r), then, 0)ψ * (r , 0)ι * (k n , r, r ) dr dr = r, 0)ψ * (r , 0)ι(k n , r, r ) dr dr * r, 0)ψ * (r , 0)ι(k n , r, r ) dr dr .(C3) Taking the integral in the square brackets and defining a(k n ) r, 0)ψ * (r , 0)ι(k n , r, r ) dr dr,(C4) allows us to write Eq. (C3) in the following form:
e −iαt , Re α = 0, Im α = 0, Arg α ∈ (− π 2 ,
For the segment OB, z = −iy. When R → ∞,
TABLE I: Critical values τ_c = Γ_r t_c of the transition to the non-exponential power law behaviour of the survival probability as a function of the parameter x_r = Γ_r/2ε_r in the model independent (95), Breit-Wigner parametrization (96) and fitted parametrization (97) cases.

x_r    τ_c (95)    τ_c^fit (97)    τ_c^BW (96)
0.1    21.1        21              15.6
0.2    17.1        17.2            12.4
0.3    14.8        15              10.4
0.4    13          13.5            9
0.5    11.9        12.3            7.8
0.6    11          11.3            6.8
0.7    10.2        10.4            5.9
0.8    9.6         9.7             5.0
0.9    9.1         9.1             4.2
1      8.7         8.5             3.3
TABLE II: Critical values τ_c = Γ_r t_c of the transition to the non-exponential behaviour for experimentally measured particles and nuclei.

Decay                                      Lifetime     x_r = Γ_r/2ε_r    τ_c (95)    Number of half-lives measured
56Mn(3+) → 56Fe(2+) + e− + ν̄_e [6]         2.5789 h     1.2 × 10^−26      316         45
222Rn → 218Po + α [50, 51]                  3.8235 d     1.2 × 10^−28      339         27, 40
K+ → µ+ ν_µ [52]                            12.443 ns    4.5 × 10^−17      204         7.3
K+ → π+ π0 [52]                             12.265 ns    8.4 × 10^−17      201         4
2 π < 0,
We follow the notation from Ablowitz and Fokas' book for steepest descent method. See[53], chapter 6.
Acknowledgments

One of the authors (N. G. K.) acknowledges the support from the Faculty of Science, Universidad de los Andes, Colombia, through grant no. P18.160322.001-17.

Appendix C: Detailed computation of (k) from I(k, r, r′)

Appendix D: Evaluation of integrals for computing the survival amplitude

For calculating the survival amplitude given by (68), we need to study the integral
where C is the contour shown in Fig. 10 and t > 0.
. L A Khalfin, Zh. Eksp. Teor. Fiz. 331371L. A. Khalfin, Zh. Eksp. Teor. Fiz. 33, 1371 (1957).
. K Urbanowski, Acta Phys. Pol. B. 481847K. Urbanowski, Acta Phys. Pol. B 48 1847 (2017);
. Eur. Phys. J D. 71118ibid, Eur. Phys. J D 71, 118 (2017).
. L Fonda, G C Ghirardi, A Rimini, Rep. Prog. Phys. 41587L. Fonda, G. C. Ghirardi and A. Rimini, Rep. Prog. Phys. 41, 587 (1987).
. F Giraldi, Eur. Phys. J. D. 70229F. Giraldi, Eur. Phys. J. D 70 229 (2016).
. J Levitan, Phys. Lett. A. 129267J. Levitan, Phys. Lett. A 129, 267 (1988);
. H Nakazato, S Pascazio, Mod. Phys. Lett. A. 103103H. Nakazato and S. Pascazio, Mod. Phys. Lett. A 10, 3103 (1995).
. E B Norman, S B Gazes, S G Crane, D A Bennett, Phys. Rev. Lett. 602246E. B. Norman, S. B. Gazes, S. G. Crane and D. A. Bennett, Phys. Rev. Lett. 60, 2246 (1988).
. C Rothe, S I Hintschich, A P Monkman, Phys. Rev. Lett. 96163601C. Rothe, S. I. Hintschich and A. P. Monkman, Phys. Rev. Lett. 96, 163601 (2006).
. J Lawrence, J. Opt. B: Quantum Semiclass. Opt. 4446J. Lawrence, J. Opt. B: Quantum Semiclass. Opt. 4, S446 (2002).
. V Fock, N Krylov, JETP. 1793V. Fock and N. Krylov, JETP 17, 93 (1947).
G García-Calderón, Resonant States and the Decay Process. A. Frank and K. B. WolfSpringer-VerlagSymmetries in PhysicsG. García-Calderón, Resonant States and the Decay Process, "Symmetries in Physics", eds. A. Frank and K. B. Wolf, Springer-Verlag, p. 252-272 (1992).
. G García-Calderón, J L Mateos, M Moshinsky, Phys. Rev. Lett. 74337G. García-Calderón, J. L. Mateos and M. Moshinsky, Phys. Rev. Lett. 74, 337 (1995).
. H Nakazato, M Namiki, S Pascazio, Int. J. Mod. Phys. B. 10247H. Nakazato, M. Namiki and S. Pascazio, Int. J. Mod. Phys. B 10, 247 (1996).
. W Van Dijk, Y Nogami, Phys. Rev. Lett. 832867W. van Dijk and Y. Nogami, Phys. Rev. Lett. 83, 2867 (1999).
. W Van Dijk, Y Nogami, Phys. Rev. C. 6524608W. van Dijk and Y. Nogami, Phys. Rev. C 65, 024608 (2002).
. G García-Calderón, J L Mateos, M Moshinsky, Annals of Phys. 249430G. García-Calderón, J. L. Mateos and M. Moshinsky, Annals of Phys. 249, 430 (1996).
. G García-Calderon, V Riquer, R Romo, J. Phys. A. 344155G. García-Calderon, V. Riquer and R. Romo, J. Phys. A 34, 4155 (2001).
. Wytse Van Dijk, Phys. Rev. E. 9363307Wytse van Dijk, Phys. Rev. E 93, 063307 (2016).
. N G Kelkar, M Nowakowski, J. Phys. A. 43385308N. G. Kelkar and M. Nowakowski, J. Phys. A 43, 385308 (2010).
. N G Kelkar, M Nowakowski, K P Khemchandani, Phys. Rev. C. 7024601N. G. Kelkar, M. Nowakowski and K. P. Khemchandani, Phys. Rev. C 70, 024601 (2004).
. D F Jiménez, N G Kelkar, arXiv:1802.09467Ann. Phys. 39618D. F. Ramírez Jiménez and N. G. Kelkar, Ann. Phys. 396, 18 (2018); arXiv:1802.09467 (2018).
. E P Wigner, Phys. Rev. 98145E. P. Wigner, Phys. Rev. 98, 145 (1955).
. F T Smith, Phys. Rev. 118349F. T. Smith, Phys. Rev. 118, 349 (1960).
. N G Kelkar, M Nowakowski, Phys. Rev. A. 7812709N. G. Kelkar and M. Nowakowski, Phys. Rev. A 78, 012709 (2008).
. N G Kelkar, Phys. Rev. Lett. 99210403N. G. Kelkar, Phys. Rev. Lett. 99, 210403 (2007).
. E Beth, G E Uhlenbeck, Physica. 4915E. Beth and G. E. Uhlenbeck, Physica 4, 915 (1937).
K Huang, Statistical Mechanics. New YorkWileyK. Huang, Statistical Mechanics, Wiley, New York (1987).
. R F Dashen, S Ma, H J Bernstein, Phys. Rev. 137345R. F. Dashen, S. Ma and H. J. Bernstein, Phys. Rev. 137, 345 (1969).
. R F Dashen, R Rajaraman, Phys. Rev. D. 10708R. F. Dashen and R. Rajaraman, Phys. Rev. D 10, 708 (1974).
The Decay Process: An Exactly Soluble Example and its Implications. G García-Calderón, G Loyola, M Moshinsky, Symmetries in Physics. A. Frank and K. B. WolfSpringer-VerlagG. García-Calderón, G. Loyola and M. Moshinsky, The Decay Process: An Exactly Soluble Example and its Implications, "Symmetries in Physics", eds. A. Frank and K. B. Wolf, Springer-Verlag, p. 273-292 (1992).
. Xian-Wei Kang, J A Oller, Eur. Phys. J. C. 77399Xian-Wei Kang and J. A. Oller, Eur. Phys. J. C 77, 399 (2017).
. G García-Calderón, R Peierls, Nucl. Phys. A. 265443G. García-Calderón and R. Peierls, Nucl. Phys. A 265, 443 (1976).
. J Bogdanowicz, M Pindor, R Raczka, Found. Phys. 25833J. Bogdanowicz, M. Pindor and R. Raczka, Found. Phys. 25, 833 (1995).
. D S Onley, A Kumar, Am. J. Phys. 60432D. S. Onley and A. Kumar, Am. J. Phys. 60, 432 (1992).
. G García-Calderon, I Maldonado, J Villavicencio, Phys. Rev. A. 7612103G. García-Calderon, I. Maldonado and J. Villavicencio, Phys. Rev. A 76, 012103 (2007).
. G García-Calderon, Advances in Quantum Chemistry. 60407G. García-Calderon, Advances in Quantum Chemistry 60, 407 (2010).
. M Moshinsky, Phys. Rev. 84625M. Moshinsky, Phys. Rev. 84, 525 (1951); ibid 88, 625 (1952);
. G García-Caldern, A Rubio, Phys. Rev. A. 553361G. García-Caldern and A. Rubio, Phys. Rev. A 55, 3361 (1997).
C J Joachain, Quantum Collision Theory. North-Holland; AmsterdamC. J. Joachain, Quantum Collision Theory (North-Holland, Amsterdam 1975).
. R M Cavalcanti, C A A De Carvalho, Revista Brasileira de Ensino de Física. 21464R. M. Cavalcanti and C. A. A. de Carvalho, Revista Brasileira de Ensino de Física 21, 464 (1999).
. G García-Calderón, A Máttar, J Villavicencio, Phys. Scr. T. 15114076G. García-Calderón, A. Máttar and J. Villavicencio, Phys. Scr. T 151, 014076 (2012).
A I Baz, Ya B Zeldovich, A M Perelomov, Scattering, Reactions and Decay in Nonrelativistic Quantum Mechanics. SpringfieldIsrael Program for Scientific TranslationsA.I. Baz, Ya. B. Zeldovich, A.M. Perelomov, Scattering, Reactions and Decay in Nonrelativistic Quantum Mechanics, Israel Program for Scientific Translations, Springfield, 1969.
A G Sitenko, Scattering Theory. Springer-VerlagA. G. Sitenko, Scattering Theory, Springer-Verlag, 1991.
The Theory of Functions of a Complex Variable. A G Sveshnikov, A N Tikhonov, Mir PublishersA. G. Sveshnikov and A. N. Tikhonov, The Theory of Functions of a Complex Variable, Mir Publishers, 1974.
. A Brzeski, J Lukierski, Acta Physica Polonia. 6577A. Brzeski and J. Lukierski, Acta Physica Polonia, Vol B6, 577 (1975).
. K Raczynska, K Urbanowski, arXiv:1802.01441K. Raczynska and K. Urbanowski, preprint, arXiv:1802.01441 (2018).
N N Lebedev, Special functions and their applications. Dover Publications IncN. N. Lebedev, Special functions and their applications, Dover Publications Inc. (1975).
E T Copson, Asymptotic Expansions. Cambridge University PressE. T. Copson, Asymptotic Expansions, Cambridge University Press (1965).
. C Patrignani, Chin. Phys. C. 40100001C. Patrignani et al., Chin. Phys. C 40, 100001 (2016).
. G García-Calderon, J Villavicencio, Phys. Rev. A. 7362115G. García-Calderon and J. Villavicencio, Phys. Rev. A 73, 062115 (2006).
. G García-Calderon, R Romo, Phys. Rev. A. 9322118G. García-Calderon and R. Romo, Phys. Rev. A 93, 022118 (2016).
. E Rutherford, Stizungsber. Akad. Wiss. Wien, Math.-Naturwiss. Kl., Abt. 2A. 120303E. Rutherford, Stizungsber. Akad. Wiss. Wien, Math.-Naturwiss. Kl., Abt. 2A 120, 303 (1911).
. D K Butt, A R Wilson, J. Phys. A. 51248D. K. Butt and A. R. Wilson, J. Phys. A 5, 1248 (1972).
. N N Nikolaev, Sov. Phys. Usp. 95Usp. Fiz. NaukN. N. Nikolaev, Usp. Fiz. Nauk 95, 506 (1968) [Sov. Phys. Usp. 11, 522 (1968)].
M J Ablowitz, A S Fokas, Complex Variables: Introduction and Applications. Cambridge University Press2nd editionM. J. Ablowitz and A. S. Fokas, Complex Variables: Introduction and Applications, Cambridge University Press, 2 nd edition (2003).
A. Erdélyi, Table of Integral Transforms, Vol. I, McGraw-Hill (1954).
Effect of Bjerrum pairs on electrostatic properties in an electrolyte solution near charged surfaces: A mean-field approach

Jun-Sik Sin
Natural Science Center, Kim Il Sung University, Taesong District, Pyongyang, Democratic People's Republic of Korea
* Electronic address: jssin@ryongnamsan.edu.kp

4 Feb 2022 (arXiv:2202.01988; DOI: 10.1039/d1cp01114f)

PACS numbers: 82.45.Gj, 82.39.Wj, 87.17.Aa
Keywords: Bjerrum pair; ion association; orientational ordering; electric double layer; non-uniform size effect
In this paper, we investigate the consequences of ion association, coupled with the considerations of finite size effects and orientational ordering of Bjerrum pairs as well as ions and water molecules, on electric double layer near charged surfaces. Based on the lattice statistical mechanics accounting for finite sizes and dipole moments of ions, Bjerrum pairs and solvent molecules, we consider the formation of Bjerrum pairs and derive the mathematical expressions for Bjerrum pair number density as well as cation/anion number density and water molecule number density.We reveal the several significant phenomena. Firstly, it is shown that our approach naturally yields the equilibrium constant for dissociation-association equilibrium between Bjerrum pairs and ions. Secondly, at low surface charge densities, an increase in the bulk concentration of Bjerrum pairs enhances the permittivity and decreases the differential capacitance. Next, for cases where Bjerrum pairs in an alcohol electrolyte solution have a high value of dipole moment, Bjerrum pair number density increases with decreasing distance from the charged surface, and differential capacitance and permittivity is high compared to ones for the cases with lower values of Bjerrum-pair dipole moments. Finally, we show that the difference in concentration and dipole moment of Bjerrum pairs can lead to some variation in osmotic pressure between two similarly charged surfaces.
I. INTRODUCTION
Thermodynamic properties of electrolyte solutions are determined by interactions among the three species in solution, namely, solvent molecules, anions, and cations.
In general, strong electrolytes exist in the form of strong acids, strong bases and salts, a typical example being NaCl. When NaCl dissolves in a highly polar solvent such as water, the substance is fully dissociated into cations and anions by ion-dipole interactions with solvent molecules. However, in a solvent with a lower relative permittivity, such as methanol, NaCl is not always completely dissociated into cations and anions, some fraction of ions is paired. i.e. in a low polar solvent, strong electrolytes behave as a weak electrolyte.
On the other hand, a weak electrolyte forms ions by interaction with water molecules, a well-known example being acetic acid. Acetic acid provides solvated protons and acetate ions by interaction with water molecules. Acetic acid molecules are not fully dissociated into ions when the solvent is water.
In order to estimate the thermodynamic properties of an electrolyte, the dissociation constant of ion pairs formed in an electrolyte must be known [1][2][3][4][5][6][7][8][9][10][11].
Bjerrum [12] was the first to propose the concept of ion pair, assuming that all oppositely charged ions within a certain distance of a central ion are paired. Although he obtained an estimate of the association constant by using statistical theories, the drawback of the method is that in solutions with low permittivity, the critical distance involved in defining ion pair is unreasonably large.
To overcome the shortcoming, Fuoss [13,14] suggested the theory that the cations in the solution are assumed to be conducting spheres of a certain radius and the anions to be point charges. However, Fuoss theory has also several difficulties as it does not consider dielectric saturation effects of the solvent and depends on the choice of the effective size for the ions.
Fisher and Levin in [16,17] extended the Debye-Hückel theory [15] by accounting for the existence of Bjerrum pairs, and explained phase separation and criticality in electrolyte solutions, in good agreement with simulation results [18][19][20].
The authors of [22] studied the effect of Bjerrum pairs on the screening length by means of a modified Poisson-Boltzmann theory accounting for simple association-dissociation equilibrium between free ions and Bjerrum pairs. Consequently, they elucidated that in a solvent of lower polarity the screening length can be significantly larger than the Debye length, as reported by Leunissen et al. [21].
In fact, the formation of ion pairs not only reduces the concentration of free ions participating in screening but also increases relative permittivity of ionic solution by excluding less water dipoles, attributed to decrease of free ions and orientational ordering of ion pairs. However, the previous theories [22,23] could not take into account the difference in size between ions and solvent molecules, and required too high value of dipole moment of a solvent molecule to fit relative permittivity of aqueous solution. Moreover, attention in the studies has been focused on the properties of bulk electrolyte such as screening length and relative permittivity as a function of salt concentration.
On the other hand, recent studies [30][31][32][33][34][35][36][37][38][39][40][41][42] developed the free-energy based mean field theories significantly improved by considering the non-uniform size effects of ions and solvent molecules and orientational ordering of solvent dipoles, but any of them did not present estimate of how Bjerrum pairs affect the electrostatic properties close to charged surfaces.
In this paper, we incorporate not only steric effects but also orientational ordering of Bjerrum pair dipoles into mean-field approach based on lattice statistical mechanics. In other words, we extend the previously developed mean-field theories [32][33][34][35][36][37] in order to include the formation, size effect and orientational ordering of Bjerrum pairs. We demonstrate that Bjerrum pairs have a significant effect on electrostatic properties in electrolyte solution near a charged surface as well as electrostatic interaction between two similarly charged surfaces.
In particular, we focus on an important role of ion-pair association energy and dipole moment of Bjerrum pairs in determining such electrostatic properties.
II. THEORY
We consider an electrolyte solution with a monovalent electrolyte with cations/anions of elementary charge e. The number density for cations, anions, Bjerrum pairs and solvent molecules are denoted as n + , n − , n B and n w , respectively, while bulk number densities of them n +b , n −b , n Bb , n wb . The anions and cations have the same bulk concentrations n b , satisfying electro-neutrality.
The solvent molecules are modeled as dipoles having permanent dipole moment of p w . We further assume that some fraction of the cations and anions form Bjerrum pairs that are modeled as dipoles with moment p B = Le, where length L denotes a mean separation between the paired ions. The bulk number density of free ions is n +b , the number density of Bjerrum pairs being n Bb , satisfying n +b + n Bb = n b , where n b is associated with the bulk ion concentration as n b = c b × N A × 1000 and N A is the Avogadro number.
Here, −J is the ion-pair association energy which accounts for the electrostatic attraction and short-range interactions. We restrict ourselves to positive J values, J > 0. Note that negative infinity(J = −∞) denotes the case without Bjerrum pairs. We assign the effective volume of the solvent molecules, cations, anions, and Bjerrum pairs as V w , V + , V − and V B , respectively. We assume that all lattice sites are occupied by cations, anions, Bjerrum pairs and solvent molecules. The total free energy density of the system consisting of the electrode and an aqueous electrolyte can be written as follows.
f = −ε_0 (∇ψ)²/2 + e(n_+ − n_−)ψ + ⟨ρ_w(ω) p_w E cos ω⟩ − ST − µ_+ n_+ − µ_− n_− − µ_w n_w + (⟨ρ_B(ω) p_B E cos ω⟩ − µ_B n_B),    (1)
where ⟨g(ω)⟩ = ∫ g(ω) 2π sin(ω) dω / (4π), in which ω is the angle between the dipole moment vector and the normal to the charged surface. ε_0 is the permittivity of the vacuum and E the electric field strength, while p_w = |p_w|, p_B = |p_B|, E = |E|, n_B = ⟨ρ_B(ω)⟩, n_w = ⟨ρ_w(ω)⟩.
In Eq.(1), the first term represents the self-energy of electric field, the second one is the electrostatic potential of anions and cations, the third is the electrostatic potential of solvent dipoles, the fifth one accounts for entropy contribution to the free energy, and the next three terms mean the chemical potential of cations, anions and solvent molecules and the final terms corresponds to the free energy related to Bjerrum pairs. T is the absolute temperature and k B is the Botlzmann constant. The number of arrangement can be written as the following expression
W = (n + + n − + n w + n B )! n + !n − !n w !n B ! · n B ! ρ B (ω 1 )!ρ B (ω 2 )!...ρ B (ω m )! · n w ! ρ w (ω 1 )!ρ w (ω 2 )!...ρ w (ω m )! ,(2)
which can be used only for low salt concentrations. Here we consider n w = m i=1 ρ w (ω i ) and
n B = m i=1 ρ B (ω i ).
However, Eq. (2) can be applied to lower ionic concentrations, satisfying volume fraction
φ < 0.1.
Although the above formula is suitable for the cases where either all ions and dipoles have an identical size or bulk ion cocentration is low, previous studies [34,35] show that the formula remains some reasonable also for medium salt concentration.
S = k B ln W = k B [(n + + n − + n B + n w ) ln (n + + n − + n B + n w ) − n + ln n + − n − ln n − ] +k B − ρ w ln ρ w n w − ρ B ln ρ B n B .(3)
It is assumed that in the present physical system, the incompressibility condition is always satisfied.
1 = n + V + + n − V + + n B V B + n w V w .(4)
The number densities of cations, anions, Bjerrum pairs and solvent dipoles and electrostatic potential are obtained from the fact that the free energy of the system has an extreme value at thermodynamic equilibrium of the whole system, satisfying the incompressibility condition. The Lagrangian of the system is expressed as follows
L = f dx − λ (x) (1 − n + V − n − V − n w V w − n B V B ),(5)
where λ (x) is a local Lagrange parameter. The Euler-Lagrange equation with respect to the cation number density yields the following equation.
δL δn + = ∂L ∂n + = eψ − µ + + k B T ln (n + / (n + + n − + n w + n B )) + λV + = 0.
As x gets far away from the charged surface, the following equations are satisfied.
ψ = 0, λ = λ b , n + = n +b , n − = n −b , n B = n Bb .(7)
Considering the above facts and substituting Eq. (7) into Eq. (6) results in the following expression for chemical potential of cations.
µ + = k B T ln n +b (n +b + n −b + n Bb + n wb ) + λ b V + .(8)
In the same way, chemical potentials of the anions, solvent molecules and Bjerrum pairs are as follows
µ − = k B T ln n −b (n +b + n −b + n Bb + n wb ) + λ b V − .(9)µ w = k B T ln n wb (n +b + n −b + n Bb + n wb ) + λ b V w .(10)µ B = k B T ln n Bb (n +b + n −b + n Bb + n wb ) + λ b V B .(11)
From the fact that a Bjerrum pair is formed by combination of a cation and an anion, we now can recognize the following relation between chemical potentials of Bjerrum pairs, cations and anions
µ B = µ + + µ − + J.(12)
With the help of Eqs. (8,9,11), the relation between bulk ion number density and bulk Bjerrum pair number density can be established straightforwardly.
k B T ln n Bb (n +b + n −b + n Bb + n wb ) + λ b V B − J − k B T ln n +b (n +b + n −b + n Bb + n wb ) + λ b V + − k B T ln n −b (n +b + n −b + n Bb + n wb ) + λ b V − = 0. (13)
After further manipulation, we obtain the following expression for the equilibrium constant.
K = (n b − n Bb ) 2 n Bb = n +b n −b n Bb = (n +b + n −b + n Bb + n wb ) exp (−λ b (V + + V − − V B ) /k B T ) exp (−J/k B T ) .(14)
Like in previous studies [23], Eq. (14) means that the number density of Bjerrum pairs increases with magnitude of ion-pair association energy. The formula also says that the larger the difference between the sum of volumes of a cation and an anion and the volume of a Bjerrum pair, the more Bjerrum pairs are formed.
Inserting Eq. (8) into Eq.(6) and after further manipulations, we obtain the following equations n + n +b n +b + n −b + n Bb + n wb n + + n − + n B + n w = exp (− (hV
+ + eψ) /k B T ) ,(15)n − n −b n +b + n −b + n Bb + n wb n + + n − + n B + n w = exp (− (hV − − eψ) /k B T ) ,(16)n B n Bb n +b + n −b + n Bb + n wb n + + n − + n B + n w = exp (−hV B /k B T ) sinh (p B E/ (k B T )) (p B E/ (k B T )) ,(17)n w n wb n +b + n −b + n Bb + n wb n + + n − + n B + n w = exp (−hV w /k B T ) sinh (p w E/ (k B T )) p w E/ (k B T ) ,(18)where h = λ − λ b .
Multiplying Eqs. (15,16,17,18) by V + , V − , V B , V w , respectively, and adding the each equations, we obtain
n +b + n −b + n Bb + n wb n + + n − + n B + n w = n +b V + e (−hV + −eψ)/k B T + n −b V − e (−hV − +eψ)/k B T +n wb V w e −hVw/k B T sinh (p w E/ (k B T )) p w E/ (k B T ) + n Bb V B e −hV B /k B T sinh (p B E/ (k B T )) p B E/ (k B T ) .(19)
Substituting Eq. (19) into Eqs. (15)-(18) results in the expressions for the number densities of cations, anions, Bjerrum pairs and solvent dipoles:
n_+ = n_{+b} e^{(-h V_+ - eψ)/k_B T}/D,   (20)

n_- = n_{-b} e^{(-h V_- + eψ)/k_B T}/D,   (21)

n_w = n_{wb} e^{-h V_w/k_B T} [sinh(p_w E/(k_B T))/(p_w E/(k_B T))]/D,   (22)

n_B = n_{Bb} e^{-h V_B/k_B T} [sinh(p_B E/(k_B T))/(p_B E/(k_B T))]/D,   (23)
where
D = n_{+b} V_+ e^{(-h V_+ - eψ)/k_B T} + n_{-b} V_- e^{(-h V_- + eψ)/k_B T} + n_{wb} V_w e^{-h V_w/k_B T} sinh(p_w E/(k_B T))/(p_w E/(k_B T)) + n_{Bb} V_B e^{-h V_B/k_B T} sinh(p_B E/(k_B T))/(p_B E/(k_B T)).
It should be noted that for the case without Bjerrum pairs, our approach corresponds to that of [31-33,35]. Substituting Eqs. (20)-(23) into Eq. (19), the following equation is obtained:
n_{+b} [e^{(-h V_+ - eψ)/k_B T} - 1] + n_{-b} [e^{(-h V_- + eψ)/k_B T} - 1] + n_{wb} [e^{-h V_w/k_B T} sinh(p_w E/(k_B T))/(p_w E/(k_B T)) - 1] + n_{Bb} [e^{-h V_B/k_B T} sinh(p_B E/(k_B T))/(p_B E/(k_B T)) - 1] = 0.   (24)
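To make the structure of Eqs. (20)-(24) concrete, the following Python sketch evaluates the local number densities at a point where ψ, E and h are given, and determines h from the closure relation Eq. (24) by bisection. The parameter dictionary, the bisection bracket and the point-wise treatment are assumptions made for illustration; in a full calculation these relations are coupled to the Poisson equation, Eq. (26).

import math

kB = 1.380649e-23      # J/K
e = 1.602176634e-19    # C
T = 300.0
kT = kB * T

def sinhc(x):
    # sinh(x)/x with a safe small-argument limit (even function)
    x = abs(x)
    return 1.0 + x * x / 6.0 if x < 1e-6 else math.sinh(x) / x

def weights(psi, E, h, p):
    # Unnormalized Boltzmann weights appearing in Eqs. (20)-(23).
    # p: dict with bulk densities n+b, n-b, nwb, nBb, volumes V+, V-, Vw, VB
    # and dipole moments pw, pB (all SI; the numbers are user-supplied assumptions).
    return (p['n+b'] * math.exp((-h * p['V+'] - e * psi) / kT),
            p['n-b'] * math.exp((-h * p['V-'] + e * psi) / kT),
            p['nwb'] * math.exp(-h * p['Vw'] / kT) * sinhc(p['pw'] * E / kT),
            p['nBb'] * math.exp(-h * p['VB'] / kT) * sinhc(p['pB'] * E / kT))

def densities(psi, E, h, p):
    # Local densities n+, n-, nw, nB from Eqs. (20)-(23)
    w = weights(psi, E, h, p)
    D = p['V+'] * w[0] + p['V-'] * w[1] + p['Vw'] * w[2] + p['VB'] * w[3]
    return tuple(wi / D for wi in w)

def solve_h(psi, E, p, lo=-1e9, hi=1e9, tol=1e-3):
    # Bisection for h from the closure Eq. (24); the bracket [lo, hi] is an assumption
    def f(h):
        w = weights(psi, E, h, p)
        return ((w[0] - p['n+b']) + (w[1] - p['n-b'])
                + (w[2] - p['nwb']) + (w[3] - p['nBb']))
    a, b = lo, hi
    fa = f(a)
    for _ in range(200):
        m = 0.5 * (a + b)
        fm = f(m)
        if abs(b - a) < tol:
            break
        if (fa > 0) == (fm > 0):
            a, fa = m, fm
        else:
            b = m
    return 0.5 * (a + b)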
Performing the minimization of the Lagrangian with respect to ψ(r), the following equation is obtained
δL/δψ = (∂/∂r)(∂L/∂(∇ψ)) - ∂L/∂ψ = 0.   (25)
Therefore, we obtain the Poisson equation that determines electrostatic potential, i.e
∇·(ε_0 ε_r ∇ψ) = -e(n_+ - n_-),   (26)
where effective relative permittivity is given by the following equation
ε_r = 1 + P/(ε_0 E) = 1 + n_w p_w L(p_w E/(k_B T))/(ε_0 E) + n_B p_B L(p_B E/(k_B T))/(ε_0 E),   (27)
where L(x) = coth(x) - 1/x is the Langevin function. The following boundary conditions are used for solving the Poisson equation:
ψ(x → ∞) = 0,  E(x = 0) = -σ/(ε_0 ε_r(x = 0)),   (28)

where σ is the surface charge density of the charged surface.
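A minimal Python sketch of the Langevin function and of the field-dependent permittivity, Eq. (27), is given below. The numerical check at the end uses the effective water dipole moment quoted in the caption of Fig. 2 together with an assumed bulk water number density, and is only meant to show that the zero-field limit of Eq. (27) lands near the bulk permittivity of water.

import math

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
kB = 1.380649e-23         # Boltzmann constant, J/K

def langevin(x):
    # L(x) = coth(x) - 1/x, with a series expansion for small x
    if abs(x) < 1e-4:
        return x / 3.0 - x**3 / 45.0
    return 1.0 / math.tanh(x) - 1.0 / x

def relative_permittivity(E, n_w, p_w, n_B, p_B, T=300.0):
    # Eq. (27): eps_r = 1 + n_w p_w L(p_w E/kT)/(eps0 E) + n_B p_B L(p_B E/kT)/(eps0 E)
    kT = kB * T
    if abs(E) < 1e-12:  # Langevin limit L(x) ~ x/3 gives the zero-field value
        return 1.0 + (n_w * p_w**2 + n_B * p_B**2) / (3.0 * eps0 * kT)
    return (1.0
            + n_w * p_w * langevin(p_w * E / kT) / (eps0 * E)
            + n_B * p_B * langevin(p_B * E / kT) / (eps0 * E))

# Assumed bulk water density (55.5 mol/L) and the 4.8 D effective moment from Fig. 2
n_w = 55.5 * 1000.0 * 6.022e23
p_w = 4.8 * 3.336e-30
print(relative_permittivity(0.0, n_w, p_w, 0.0, 0.0))   # about 79, close to 78.5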
So far we have discussed the electrostatic properties of the electrolyte near a single charged plate. Now, let us study the case of two parallel charged surfaces. The authors of [43] proved that when the free energy density does not explicitly depend on the spatial variables, the osmotic pressure between two charged surfaces is determined by the following formula
f - (∂f/∂ψ')ψ' = const = -P,   (29)
where P is the local pressure, which is the sum of the osmotic pressure Π and the bulk pressure P_bulk. Here, we emphasize that an alternative way to derive Eq. (30) is to integrate Eq. (26), as demonstrated in [44-47].
After substituting Eq. (1) in Eq. (29), we get
P = -ε_0 E^2/2 - e_0 z ψ(n_+ - n_-) + μ_+ n_+ + μ_- n_- + μ_w(ω) ρ_w(ω) + μ_B(ω) ρ_B(ω) + TS.   (30)
Considering the fact that as H → ∞, P (H = ∞) = P bulk , we get the following equation
P_bulk = λ_b = (2n_b + n_{wb} + n_{Bb}) k_B T.   (31)
Comparing Eq.(30) and Eq.(31), we get the following mathematical expression
Π = -ε_0 E^2/2 + k_B T h - k_B T n_w (p_w E/(k_B T)) L(p_w E/(k_B T)) - k_B T n_B (p_B E/(k_B T)) L(p_B E/(k_B T)).   (32)
If we neglect the difference in size between different ions and solvent dipoles and the formation of Bjerrum pairs, Eq. (32) reduces to the same formula as in [44,45].
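For completeness, a direct transcription of Eq. (32) into Python is shown below. The inputs (field, h and the local dipole densities, all taken at the same point such as the midplane) are assumptions of the sketch, and the k_B T factor multiplying h is kept exactly as it appears in Eq. (32); it should be checked against the derivation if h is interpreted as the pressure-like quantity λ - λ_b.

import math

eps0 = 8.8541878128e-12
kB = 1.380649e-23

def langevin(x):
    return x / 3.0 - x**3 / 45.0 if abs(x) < 1e-4 else 1.0 / math.tanh(x) - 1.0 / x

def osmotic_pressure(E, h, n_w, p_w, n_B, p_B, T=300.0):
    # Literal transcription of Eq. (32), evaluated at a single point (e.g. the midplane)
    kT = kB * T
    xw = p_w * E / kT
    xB = p_B * E / kT
    return (-0.5 * eps0 * E * E
            + kT * h
            - kT * n_w * xw * langevin(xw)
            - kT * n_B * xB * langevin(xB))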
Taking into account Eq. (31) provides a modified expression for the dissociation-association reaction.
K = (n_b - n_{Bb})^2/n_{Bb} ≃ (1/V_w) exp(-(V_+ + V_- - V_B)/V_w) exp(-J/k_B T).   (33)
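A one-line evaluation of Eq. (33) is sketched below; the solvent molecular volume V_w and the sample value of J are assumptions introduced only to show the order of magnitude that the expression produces.

import math

def dissociation_constant(V_plus, V_minus, V_B, V_w, J_over_kT):
    # Eq. (33): K ~ (1/V_w) exp(-(V_+ + V_- - V_B)/V_w) exp(-J/k_B T)
    return (1.0 / V_w) * math.exp(-(V_plus + V_minus - V_B) / V_w) * math.exp(-J_over_kT)

nm3 = 1e-27  # m^3
K = dissociation_constant(0.10 * nm3, 0.10 * nm3, 0.15 * nm3, 0.03 * nm3, J_over_kT=4.0)
print("K =", round(K / (1000.0 * 6.022e23), 2), "mol/L")   # with an assumed V_w = 0.03 nm^3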
It should be emphasized that our approach has an important advantage as compared to the method of [23]. The approach can account not only for different sizes of ions, solvent molecules and Bjerrum pairs but also for different values of the Bjerrum-pair dipole moment.
In fact, the approach of [23] is based on the assumption that ions, solvent molecules and Bjerrum pairs have equal size. As a result, the lattice parameter does not denote the effective size of water dipoles. Moreover, the dipole moment of a solvent molecule has an unreasonable value (9.8 D), much higher than the realistic value.
However, unlike in [23], our approach can use the effective volumes of ions, Bjerrum pairs and solvent molecules and reasonable values of the dipole moments of a solvent molecule and a Bjerrum pair.
III. RESULTS AND DISCUSSION
Taking into account steric effects and solvent polarization, we first consider the variations in counterion number density, Bjerrum pair number density, water molecule number density and relative permittivity.
In order to set the same dissociation constants as in [23], we use ε_p = 78, V_+ = V_- = 0.1 nm^3, V_B = 0.15 nm^3, T = 300 K for the aqueous solution, while ε_p = 20, V_+ = V_- = 0.1 nm^3, V_B = 0.18 nm^3, T = 300 K for the ethylalcohol electrolyte solution; the reason for these volumes can be understood from Eq. (33). In references [22,23], it was widely recognized that in aqueous electrolyte solutions the dipole moment of a Bjerrum pair is lower than that of water dipoles, whereas in alcohol electrolyte solutions it may be higher than that of alcohol dipoles.
Fig. 2(a) depicts the spatial variation in the counterion number density for different values of J (J = -∞, 2k_B T, 4k_B T). Here c_b = 0.10 mol/L, V_- = 0.10 nm^3, V_+ = 0.10 nm^3, V_B = 0.15 nm^3, σ = 0.03 C/m^2, p_B = 0.5p_w.
Physically, a higher value of J signifies a larger decrease of the bulk electrolyte ion concentration, leading to a corresponding increase of the Bjerrum pair number density. It is clearly seen that as the distance is decreased, the influence of ion association on the counterion distribution is weakened, and therefore the counterion curves show very little deviation from one another. Fig. 2(b) depicts the Bjerrum pair number density as a function of the distance from a charged surface. In this case, the Bjerrum pair number density is hardly changed with the distance from the electrode. This is attributed to the fact that in the electric field made by a low surface charge density, a Bjerrum pair with a low dipole moment is only weakly forced. Fig. 2(c) depicts the water molecule number density as a function of the distance from a charged surface. As we can expect, water dipoles are depleted near a charged surface. In addition, it is seen that the lowering of the counterion number density due to ion association increases the water molecule number density, according to the incompressibility condition. In order to get a more complete understanding of the effect of ion association on electrostatic properties, it is necessary to consider the electrostatic properties at the charged surface as a function of the surface potential. An increase in the surface potential leads to an enhanced competition between water dipoles and Bjerrum pairs, providing a larger amount of water dipoles. As a result, as the surface potential is increased, the Bjerrum pair number density is excluded from the closest proximity of the charged surface. Ion association results in an increase of the relative permittivity at the charged surface. The reason for this enhancement is that, as in Fig. 3(c), the presence of Bjerrum pairs leads to an increase in the water molecule number density. At low surface potentials (< 0.2 V), the difference in permittivity between different cases is clearly exhibited, but at the higher values of surface potential (> 0.2 V) it disappears. It is clearly seen that ion association lowers the differential capacitance. Because the bulk ion number density (c_b = 0.1 mol/L) is not so high, the Bjerrum pair number density is also low and consequently the differential capacitance is affected only by the decrease in bulk electrolyte concentration, and not by the polarization of Bjerrum pairs. It should be noted that for the low values of the Bjerrum-pair dipole moment (p_B = 0.5p_w, p_w), the Bjerrum pair number density decreases with decreasing distance from the charged surface, whereas for the high values (p_B = 2p_w, 2.5p_w), the Bjerrum pair number density increases with decreasing distance. In the same way as in [36], this is a consequence of the competition between water dipoles and Bjerrum pairs to occupy the region near a charged surface. The important thing is that in the case having a high value of the Bjerrum pair dipole moment, the counterion number density at the charged surface is lower than the corresponding ones for lower values. This is understood by comparing Fig. 6(b) and Fig. 6(c). In fact, the dipoles with different sizes and different dipole moments compete with each other to occupy locations close to the charged wall. In [36], it was confirmed that the ratio of dipole moment to dipole size is the key factor in the competition. As shown in Fig. 6(b) and Fig. 6(c), in the cases where p_B = 2.5p_w, the alcohol molecule number density decreases from 12 mol/L to 5 mol/L, whereas the Bjerrum pair number density increases from 0.3 mol/L to 0.8 mol/L.
Then, the counterion number density at the charged surface increases according to the incompressibility condition. Fig. 6(d) shows that a higher value of the Bjerrum-pair dipole moment results in a higher value of the relative permittivity. This is obvious by combining the Bjerrum-pair number density (see Fig. 6(b)) with the permittivity formula, Eq. (27).
For the cases where p_B = 2p_a, 2.5p_a, the rapid decrease of the alcohol dipole number density at the charged surface is attributed to the competition between different dipoles. In particular, at high voltages, the differential capacitance for the case where p_B = 2.5p_a is much higher than the corresponding one for the other cases. In the same way as in Fig. 7(d), the reason for this behavior is the competition between alcohol molecules and Bjerrum pairs. In fact, as ion association is enhanced, the bulk ion concentration is decreased and consequently, the formation of the electric double layer near a charged surface requires a higher electric force. As a result, an enhancement of ion association results in an increase of the centerline potential. It is also seen that at low bulk counterion concentrations, the difference in Bjerrum-pair dipole moment hardly affects the centerline potential. This fact is attributed to the lowering of the screening effect of the electrolyte solution due to the decrease in free ion number density. Fig. 10(a) depicts the electrostatic potential at the centerline between two charged surfaces as a function of the separation for different cases with J = 2k_B T and p_B = 0.5p_a, p_a, 2p_a, 2.5p_a. It is shown that an increase of the separation between the two charged surfaces decreases the electrostatic potential at the centerline, and reduces the difference between the potentials for the various cases. In addition, an increase in the dipole moment of the Bjerrum pair yields an increase in the centerline potential. The reason can be explained as follows: as shown in Fig. 8(d), the formation of Bjerrum pairs enhances the relative permittivity, that is, the more the Bjerrum pairs, the higher the permittivity; and the higher the value of the dipole moment of the Bjerrum pair, the higher the permittivity. On the other hand, if the permittivity of the electrolyte solution is high, the screening property of the solution gets weak. Therefore, the centerline potential increases as the dipole moment of the Bjerrum pair increases. Fig. 10(b) depicts the electrostatic potential at the centerline between two charged surfaces as a function of the surface potential for different cases with J = 2k_B T and p_B = 0.5p_a, p_a, 2p_a, 2.5p_a.
It is evident from the figure that an increase of Bjerrum pair dipole moment results in an increase of the centerline potential. The difference in centerline potential between different cases is also enhanced with increasing the surface potential. Fig. 11(a) depicts the osmotic pressure between two charged surfaces as a function of the separation for different cases with J = 2k B T, 4k B T and p B = 0.5p w , p w , while Fig. 11(b) depicts the osmotic pressure as a function of the surface potential for these cases. It is evident from the figure that as ion association is enhanced, the osmotic pressure is increased.
In fact, the osmotic pressure is proportional to the value of h at the centerline between the charged surfaces. On the other hand, h generally increases monotonically with the electrostatic potential. Considering the surface-potential and spatial dependence of the centerline potential (see Fig. 9(a) and (b)), the above-mentioned facts are confirmed. Fig. 12(a) depicts the osmotic pressure between two charged surfaces as a function of the separation for different cases with J = 2k_B T, p_B = 0.5p_a, p_a, 1.5p_a, 2p_a, while Fig. 12(b) depicts the osmotic pressure as a function of the surface potential for these cases. It is evident from Fig. 12(a) and (b) that as the Bjerrum pair dipole moment increases, the osmotic pressure increases. In the same way as in Fig. 11(a) and (b), this should be explained by the increase of the centerline potential due to the increase of the dipole moment value.
The recent experiment of [29] elucidated that in the range of φ values close to 0.1, the screening length varies non-monotonically with the bulk ionic concentration, while the authors of [23] theoretically demonstrated that the screening length can increase with ionic concentration for p_B > p_w, unlike in classical Poisson-Boltzmann theory. We use the same formula for the inverse screening length as in [23]. Fig. 13(a) shows that the relative permittivity decreases with bulk ion concentration, reaches a minimum value and then increases. The decreasing behavior is attributed to the reduction in the number of solvent molecules due to excluded volume effects of ions. The increase in the relative permittivity is due to the enhancement in the formation of Bjerrum pairs with increasing bulk ion concentration. Fig. 13(b) represents the non-monotonic behavior of κ_eff for the case of p_B > p_w, as suggested in [23,29].
κ_eff = e √(2(n_b - n_B)/(ε_0 ε_r k_B T)).   (34)
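The following Python sketch evaluates Eq. (34) together with the pairing equilibrium for the free-ion density. Here ε_r is supplied as a fixed external number, which is an assumption: with ε_r held constant the scan is monotonic in the concentration, and the non-monotonic behavior of Fig. 13(b) only appears when ε_r is evaluated self-consistently from Eq. (27). The value of K is likewise an assumed illustrative input.

import math

eps0 = 8.8541878128e-12
kB = 1.380649e-23
e = 1.602176634e-19
N_A = 6.022e23

def kappa_eff(c_b_molar, K_molar, eps_r, T=298.0):
    # Free ions from (n_b - n_B)**2 / n_B = K, then Eq. (34) for the inverse screening length
    n_b = c_b_molar * 1000.0 * N_A
    K = K_molar * 1000.0 * N_A
    b = 2.0 * n_b + K
    n_B = 0.5 * (b - math.sqrt(b * b - 4.0 * n_b * n_b))
    return e * math.sqrt(2.0 * (n_b - n_B) / (eps0 * eps_r * kB * T))

for c in (0.01, 0.1, 0.5, 1.0):
    print(c, "mol/L -> screening length", round(1e9 / kappa_eff(c, 0.05, 20.0), 2), "nm")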
The above facts imply that the present theory is an effective tool for studying Bjerrum pairs in electrolyte solutions.
Although the present theory uses a value of the single water dipole moment lower than the 9.8 D of [23], the value is still higher than the dipole moment of a water molecule in bulk liquid water (2.4 D-2.6 D) [49]. In fact, as shown in [48], this value of the single water molecule dipole moment can be further decreased to 3.1 D by taking into account also the electronic polarizability and cavity field of water molecules, which also gives a more correct Onsager limit for the bulk solution. However, that method cannot provide an analytical solution for the osmotic pressure between two charged surfaces.
In the future, we should find a free energy formulation where the model value of a single water dipole moment can be additionally decreased by taking into account other factors such as correlations between water dipoles.
IV. CONCLUSION
In the present study, we have investigated the effects of ion association on the electrostatic properties in the electrical double layer near charged surfaces by using a mean-field theory taking into account non-uniform size effect of ions, Bjerrum pairs and solvent molecules and orientational ordering of solvent dipoles and Bjerrum pair dipoles.
Our approach accounts for not only different sizes of ions, solvent molecules and Bjerrum pairs but also different dipole moments of solvent molecules and Bjerrum pairs.
In order to assess the effect of ion association on the electric double layer, we have studied the variations in the counterion number density, solvent molecule number density, Bjerrum pair number density and relative permittivity with the distance from a charged surface and with the surface potential for different values of the Bjerrum pair concentration and the Bjerrum-pair dipole moment. We have demonstrated that in an aqueous solution, especially at low surface potentials, ion association brings about a decrease of the counterion number density and an increase of the Bjerrum pair number density, water molecule number density and relative permittivity.
We have further demonstrated that in the alcohol electrolyte solution, the increase of dipole moment of a Bjerrum pair provides increase of counterion number density, Bjerrum pair number density and relative permittivity and decrease of alcohol molecule number density, especially at high surface potentials and at the locations close to the charged surface.
In aqueous solutions, ion association affects differential capacitance by only decreasing bulk ion concentration. However, the dipole moment of Bjerrum pairs does not affect the differential capacitance. In alcohol electrolyte solution, for the case where the dipole moment of a Bjerrum pair is higher than twice alcohol dipole moment, at high surface potentials, the differential capacitance is much higher than for the case having Bjerrum pair with smaller dipole moments.
We have also unveiled how the ion association plays an intrinsic role in establishing the electrostatic interaction between the charged surfaces. We believe that these findings will be significant in developing a more complete theory of electrolyte solution. The present study, accordingly, may act as a theoretical tool which exploits the effects of ion association.
V. CONFLICT OF INTEREST
There are no conflicts to declare.
FIG. 1: (Color online) Schematic of the Electric Double Layer formed near a charged surface. The electrolyte solution contains cations (red circles), anions (yellow circles), Bjerrum pairs (green circles) and solvent molecules (blue circles).
FIG. 2: (Color online) The counterion number density (a), Bjerrum pair number density (b), water molecule number density (c) and relative permittivity (d) as a function of the distance near a charged surface for the cases with different values of ion-pair association energy (J = -∞, 2k_B T, 4k_B T). Circles, solid line and dashed line denote J = -∞, 2k_B T and 4k_B T, respectively. Here the solvent is water, which has a relative permittivity of 78.5, molecular weight of 18 g/mol and mass density of 1000 kg/m^3, and the temperature is T = 300 K; the bulk ion concentration c_b = 0.1 mol/L, the volume of an anion V_- = 0.10 nm^3, the volume of a cation V_+ = 0.10 nm^3, the volume of a Bjerrum pair V_B = 0.15 nm^3, the surface charge density σ = 0.03 C/m^2, the dipole moment of a Bjerrum pair p_B = 0.5p_w. Applying the permittivity formula, Eq. (27), to the case of bulk solutions, it is derived that the dipole moment of a water molecule is 4.8 D and the dipole moment of an ethylalcohol molecule is 4.26 D, where 1 D = 3.336 × 10^-30 C·m.
FIG. 3: (Color online) The counterion number density (a), Bjerrum pair number density (b), water molecule number density (c) and relative permittivity (d) as a function of the surface voltage for the cases with different values of ion-pair association energy (J = -∞, 2k_B T, 4k_B T). Circles, solid line and dashed line denote J = -∞, 2k_B T and 4k_B T, respectively. Other parameters are the same as in Fig. 2.
Fig. 2(d) depicts the relative permittivity as a function of the distance from a charged surface. Here we first note that in the presence of Bjerrum pairs the bulk relative permittivity is high compared to the case without Bjerrum pairs. According to the permittivity formula of Eq. (27), this is attributed to the fact that the electrolyte permittivity is proportional to the water molecule number density and the Bjerrum pair number density. As a result, the permittivity in the presence of Bjerrum pairs is higher than that in their absence.
Fig. 3(a) depicts the counterion number density as a function of the surface potential for different values of ion-pair association energy (J = -∞, 2k_B T, 4k_B T). For all the cases, as pointed out in [35], a counterion number density curve first increases with surface potential, reaches a maximal value and then decreases. In fact, this non-monotonic behavior is attributed to steric effects of ions and water molecules. Here, we focus on the effect of ion association on electrostatic properties. From the figure, it is seen that in the region of low surface potential (< 0.2 V), ion association lowers counterion number densities at the charged surface. However, at high surface potentials, the difference in counterion number density at the charged surface for different strengths of ion association is diminished by counterion saturation close to the charged surface.
Fig. 3(b) depicts the Bjerrum pair number density at a charged surface as a function of the surface potential for three values of J (J = -∞, 2k_B T, 4k_B T). In the presence of Bjerrum pairs, the density decreases with increasing surface potential and nearly approaches zero.

FIG. 4: (Color online) The differential capacitance as a function of the surface voltage for the cases with different values of ion-pair association energy (J = -∞, 2k_B T, 3k_B T, 4k_B T). Circles, solid line, dashed line and dash-dotted line denote J = -∞, 2k_B T, 3k_B T and 4k_B T, respectively. Other parameters are the same as in Fig. 2.
Fig. 3(c) depicts the water molecule number density at a charged surface as a function of the surface potential for three different values of ion-pair association energy (J = -∞, 2k_B T, 4k_B T). Ion association lowers the bulk counterion number density, resulting in an increase of the water molecule number density at the charged surface. In particular, it is seen that at low surface potentials (< 0.2 V) the difference in water molecule number density between different cases is clearly exhibited, whereas in the higher region of surface potential (> 0.2 V) the difference disappears.
Fig. 3(d) depicts the relative permittivity at a charged surface as a function of the surface potential for three different values of ion-pair association energy (J = -∞, 2k_B T, 4k_B T).
FIG. 5: (Color online) The counterion number density (a), Bjerrum pair number density (b), water molecule number density (c) and relative permittivity (d) as a function of the distance from a charged surface for the cases with different values of a Bjerrum dipole moment (p_B = 0.5p_a, p_a, 2p_a, 2.5p_a). Circles, solid line, dashed line, plus signs denote p_B = 0.5p_a, p_a, 2p_a, 2.5p_a, respectively. Here the solvent is ethylalcohol, which has a relative permittivity of 20, molecular weight of 46 g/mol and mass density of 789 kg/m^3, and T = 300 K, c_b = 0.5 mol/L, J = 4k_B T, V_- = 0.10 nm^3, V_+ = 0.1 nm^3, V_B = 0.18 nm^3, σ = 0.01 C/m^2.
Fig. 4 depicts the differential capacitance as a function of surface potential. All the differential capacitances are non-monotonic functions of the surface potential and show the same behavior. They first increase at low voltages, then have maxima at intermediate voltages and slowly decrease toward zero at higher voltages.
Fig. 5(a) depicts the counterion number density as a function of the distance from the charged surface for different values of the Bjerrum pair dipole moment in alcohol electrolyte solution for σ = +0.01 C/m^2. It is seen that the counterion number densities for all the cases are equal for low surface charge densities. This is explained by the fact that the low surface charge density induces a weak electric field, so that there is no variation due to the difference in dipole moments of Bjerrum pairs. Fig. 5(b) depicts the Bjerrum pair number density as a function of the distance from the charged surface for the cases having different dipole moments of a Bjerrum pair.
Fig. 5(c) depicts the water molecule number density as a function of the distance from the charged surface for different dipole moments of a Bjerrum pair. A low surface charge density induces a negligible difference in the number density of Bjerrum pairs. Fig. 5(d) depicts the relative permittivity of the electrolyte solution near a charged surface as a function of the distance from the charged surface for different dipole moments of a Bjerrum pair. This shows a clear trend in which the relative permittivity of the electrolyte solution decreases as the dipole moment of a Bjerrum pair increases. This is explained by the permittivity formula of Eq. (27).
Fig. 6(a)-(d) depict the counterion number density, Bjerrum pair number density, alcohol molecule number density and relative permittivity as a function of the distance from the charged surface for different dipole moments of a Bjerrum pair in alcohol electrolyte solution for σ = +0.1 C/m^2, respectively.
FIG. 6: (Color online) The counterion number density (a), Bjerrum pair number density (b), water molecule number density (c) and relative permittivity (d) as a function of the distance near a charged surface for the cases with different values of a Bjerrum dipole moment (p_B = 0.5p_a, p_a, 2p_a, 2.5p_a). Circles, solid line, dashed line, plus signs denote p_B = 0.5p_a, p_a, 2p_a, 2.5p_a, respectively. Here the surface charge density is σ = 0.1 C/m^2 and other parameters are the same as in Fig. 5.
FIG. 7: (Color online) The counterion number density (a), Bjerrum pair number density (b), water molecule number density (c) and relative permittivity (d) as a function of the surface voltage for the cases with different values of a Bjerrum dipole moment (p_B = 0.5p_a, p_a, 2p_a, 2.5p_a). Circles, solid line, dashed line, plus signs denote p_B = 0.5p_a, p_a, 2p_a, 2.5p_a, respectively. Other parameters are the same as in Fig. 5.

As alcohol is a low polar solvent, the Debye length shortens compared to aqueous electrolyte solutions. As a consequence, as shown in Fig. 5(a-d) and 6(a-d), the electrostatic properties in alcohol electrolyte solution are mainly changed only inside the 1 nm region from the charged surface.
Fig. 7(a) depicts the counterion number density as a function of the surface potential for different values of the Bjerrum dipole moment. It can be seen that in the region of low surface potential (< 0.1 V), different values of the Bjerrum pair dipole moment (p_B = 0.5p_a, p_a, 2p_a, 2.5p_a) do not produce a difference in the counterion number density. However, at high surface potentials (0.1 V < ζ < 0.2 V), the counterion number density at the charged surface for a high value of the Bjerrum pair dipole moment is higher than the ones for a lower value of the dipole moment. The reason for this phenomenon is that, as mentioned above, there exists a competition between alcohol dipoles and Bjerrum pair dipoles near the charged surface. An increase in surface voltage enhances the difference in counterion number density between the different cases.

FIG. 8: (Color online) The differential capacitance as a function of the surface voltage for the cases with different values of a Bjerrum dipole moment (p_B = 0.5p_a, p_a, 2p_a, 2.5p_a). Circles, solid line, dashed line, plus signs denote p_B = 0.5p_a, p_a, 2p_a, 2.5p_a, respectively. Other parameters are the same as in Fig. 5.
Fig. 7(b) depicts the Bjerrum pair number density at a charged surface as a function of the surface potential for four Bjerrum pair dipole moments. It is clear that in the region (> 0.1 V), for the cases with high values of the Bjerrum pair dipole moment, the Bjerrum pair number density rapidly increases due to the above-mentioned competition.
Fig. 7(c) depicts the alcohol molecule number density at a charged surface as a function of the surface potential for four Bjerrum pair dipole moments (p_B = 0.5p_a, p_a, 2p_a, 2.5p_a).
Fig. 7(d) depicts the relative permittivity at a charged surface as a function of the surface potential for different Bjerrum pair dipole moments. It should be noted that for the cases with p_B = 0.5p_a, p_a, 2p_a, the permittivity curves behave in the same way, whereas the permittivity for the case with p_B = 2.5p_a decreases very slowly with the surface voltage. This can be explained as follows: on one hand, at high potentials, the alcohol molecule number density drastically diminishes with the surface potential, whereas the Bjerrum pair number density sharply increases with the potential. On the other hand, the Bjerrum-pair dipole moment is larger than the alcohol dipole moment. Considering the two facts, Eq. (27) ensures that the permittivity decreases slowly.

FIG. 9: (Color online) For similarly charged surfaces, (a) variation of the centerline potential with the separation distance between the charged surfaces for ψ(x = H/2) = ψ(x = -H/2) = +0.5 V. (b) Variation of the centerline potential with the surface potential for different values of ion-pair association energy and a Bjerrum dipole moment. The separation distance between charged surfaces is H = 5 nm. Circles, solid line, triangles, dashed line and plus signs represent the cases having (J = -∞, without Bjerrum pairs), (J = 2k_B T, p_B = 0.5p_w), (J = 2k_B T, p_B = p_w), (J = 4k_B T, p_B = 0.5p_w), (J = 4k_B T, p_B = p_w), respectively. The solvent is water and other parameters are the same as in Fig. 2.
Fig. 8 depicts the differential capacitance as a function of surface potential for the cases where p_B = 0.5p_a, p_a, 2p_a, 2.5p_a. It is noted that for a larger Bjerrum-pair dipole moment, the differential capacitance is higher than the ones for smaller values of the Bjerrum-pair dipole moment.

FIG. 10: (Color online) For similarly charged surfaces, (a) variation of the centerline potential with the separation distance between the charged surfaces for ψ(x = H/2) = ψ(x = -H/2) = +0.5 V. (b) Variation of the centerline potential with the surface potential for different values of ion-pair association energy and a Bjerrum dipole moment. The separation distance between charged surfaces is H = 2 nm. Circles, solid line, triangles, dashed line and plus signs represent the cases having (J = -∞, without Bjerrum pairs), (J = 2k_B T, p_B = 0.5p_a), (J = 2k_B T, p_B = p_a), (J = 2k_B T, p_B = 2p_a), (J = 2k_B T, p_B = 2.5p_a), respectively. The solvent is ethylalcohol and other parameters are the same as in Fig. 5.
Fig. 9(a) depicts the electrostatic potential at the centerline between two charged surfaces as a function of the separation for different cases with J = 2k_B T, 4k_B T, p_B = 0.5p_w, p_w. Here, the centerline means the line or plane consisting of the middle points between the two charged surfaces. Fig. 9(a) indicates that for all the cases, an increase of the separation causes a decrease in the centerline potential. It is also shown that an enhancement of ion association increases the centerline potential.

FIG. 11: (Color online) For similarly charged surfaces, (a) variation of the osmotic pressure with the separation distance between the charged surfaces for ψ(x = H/2) = ψ(x = -H/2) = +0.5 V. (b) Variation of the osmotic pressure with the surface potential for different values of ion-pair association energy and a Bjerrum dipole moment. The separation distance between charged surfaces is H = 5 nm. Circles, solid line, triangles, dashed line and plus signs represent the cases having (J = -∞, without Bjerrum pairs), (J = 2k_B T, p_B = 0.5p_w), (J = 2k_B T, p_B = p_w), (J = 4k_B T, p_B = 0.5p_w), (J = 4k_B T, p_B = p_w), respectively. The solvent is water and other parameters are the same as in Fig. 2.
Fig. 9(b) depicts the electrostatic potential at the centerline between two charged surfaces as a function of the surface potential for different cases with J = 2k_B T, 4k_B T, p_B = 0.5p_w, p_w. The centerline potential increases as the surface potential is increased, while the difference in centerline potential between different cases is also enhanced with increasing surface potential.

FIG. 12: (Color online) For similarly charged surfaces, (a) variation of the osmotic pressure with the separation distance between the charged surfaces for ψ(x = H/2) = ψ(x = -H/2) = +0.5 V. (b) Variation of the osmotic pressure with the surface potential for different values of ion-pair association energy and a Bjerrum dipole moment. The separation distance between charged surfaces is H = 2 nm. Circles, solid line, triangles, dashed line and plus signs represent the cases having (J = -∞, without Bjerrum pairs), (J = 2k_B T, p_B = 0.5p_a), (J = 2k_B T, p_B = p_a), (J = 2k_B T, p_B = 2p_a), (J = 2k_B T, p_B = 2.5p_a), respectively. The solvent is ethylalcohol and other parameters are the same as in Fig. 5.
FIG. 13: (Color online) (a) Relative permittivity, ε_r, and (b) inverse screening length, κ_eff, as a function of the dimensionless bulk ionic concentration φ = 0.5n_b(V_- + V_+) for p_B > p_w. The results of Eqs. (27) and (34) are shown as a solid red curve and are compared to Eqs. (27) and (34) without Bjerrum pairs (n_B = 0, black plus signs) and to the classical Debye-Hückel theory (blue crosses). The curves are plotted for T = 298 K, J = 4k_B T, V_+ = V_- = 0.1 nm^3, V_B = 0.17 nm^3, p_B = 17 D, and ε_r = 20.
Fig. 13(a) and (b) display the relative permittivity and the inverse screening length as a function of the dimensionless ion concentration φ = 0.5n_b(V_+ + V_-) for the cases with and without Bjerrum pairs and for the Debye-Hückel theory.
[1] S. Durand-Vidal, J.-P. Simonin and P. Turq, Electrolytes at Interfaces (Kluwer Academic Publishers, 2002).
[2] G. Feng, M. Chen, S. Bi, Z.A.H. Goodwin, E.B. Postnikov, N. Brilliantov, M. Urbakh, A.A. Kornyshev, Phys. Rev. X, 2019, 9, 021024.
[3] D. Frydel, Y. Levin, J. Chem. Phys., 2018, 148, 024904.
[4] P.P. Bawol, J.H. Thimm, H. Baltruschat, ChemElectroChem, 2019, 6, 6038-6049.
[5] J. Self, N.T. Hahn, K.D. Fong, S.A. McClary, K.R. Zaradil, K.A. Persson, J. Phys. Chem. Lett., 2020, 11, 2046-2052.
[6] M. Aghaie, H. Aghaie, A. Ebrahimi, J. Mol. Liq., 2007, 135, 72-74.
[7] M. Holovko, V. Kapko, D. Henderson, D. Boda, Chem. Phys. Lett., 2001, 341, 363-368.
[8] C. Zhu, J. Yun, Q. Wang, G. Yang, App. Surf. Sci., 2018, 435, 329-337.
[9] Y. Monascal, L. Cartaya, A. Alvarez-Aular, A. Maldonado, G. Chuchani, Chem. Phys. Lett., 2018, 703, 117-123.
[10] Z.A.H. Goodwin, A.A. Kornyshev, Electrochem. Commun., 2017, 82, 129-133.
[11] Z.A.H. Goodwin, G. Feng, A.A. Kornyshev, Electrochim. Acta, 2017, 225, 190-196.
[12] N. Bjerrum, Kgl. Dan. Vidensk. Selsk. Mat. Fys. Medd., 1926, 7, 1.
[13] R.M. Fuoss, Trans. Faraday Soc., 1934, 30, 967-980.
[14] R.M. Fuoss, J. Am. Chem. Soc., 1958, 80, 5059-5061.
[15] P.W. Debye and E. Hückel, Phys. Z., 1923, 24, 185-206.
[16] M.E. Fisher and Y. Levin, Phys. Rev. Lett., 1993, 71, 3826.
[17] Y. Levin and M.E. Fisher, Physica A, 1996, 225, 164-220.
[18] A.Z. Panagiotopoulos, Fluid Phase Equilib., 1992, 76, 97-112.
[19] A.Z. Panagiotopoulos, J. Chem. Phys., 2002, 116, 3007.
[20] J.M. Romero-Enrique, G. Orkoulas, A.Z. Panagiotopoulos and M.E. Fisher, Phys. Rev. Lett., 2000, 85, 4558.
[21] M.E. Leunissen, J. Zwanikken, R. van Roij, P.M. Chaikin and A. van Blaaderen, Phys. Chem. Chem. Phys., 2007, 9, 6405-6414.
[22] J. Zwanikken and R. van Roij, J. Phys.: Condens. Matter, 2009, 21, 424102.
[23] R.M. Adar, T. Markovich and D. Andelman, J. Chem. Phys., 2017, 146, 194904.
[24] I. Borukhov, D. Andelman and H. Orland, Electrochim. Acta, 2000, 46, 221-229.
[25] A. Levy, D. Andelman and H. Orland, Phys. Rev. Lett., 2012, 108, 227801.
[26] A. Levy, D. Andelman and H. Orland, J. Chem. Phys., 2013, 139, 164909.
[27] T. Markovich, D. Andelman and R. Podgornik, Europhys. Lett., 2014, 106, 16002.
[28] T. Markovich, D. Andelman and R. Podgornik, J. Chem. Phys., 2015, 142, 044702.
[29] A.M. Smith, A.A. Lee and S. Perkin, J. Phys. Chem. Lett., 2016, 7, 2157-2163.
[30] I. Borukhov, D. Andelman and H. Orland, Phys. Rev. Lett., 1997, 79, 435-438.
[31] V. Kralj-Iglič and A. Iglič, J. Phys. II (France), 1996, 6, 477-491.
[32] A. Iglič, E. Gongadze and K. Bohinc, Bioelectrochemistry, 2010, 79, 223-227.
[33] E. Gongadze and A. Iglič, Electrochim. Acta, 2015, 178, 541-545.
[34] J.-S. Sin, S.-J. Im and K.-I. Kim, Electrochim. Acta, 2015, 153, 531-539.
[35] J.-S. Sin, K.-I. Kim and C.-S. Sin, Electrochim. Acta, 2016, 207, 237-246.
[36] J.-S. Sin and C.-S. Sin, Phys. Chem. Chem. Phys., 2016, 18, 26509-26518.
[37] J.-S. Sin, J. Chem. Phys., 2017, 147, 214702.
[38] V.B. Chu, Y. Bai, J. Lipfert, D. Herschlag and S. Doniach, Biophys. J., 2007, 93, 3202-3209.
[39] A.A. Kornyshev, J. Phys. Chem. B, 2007, 111, 5545-5557.
[40] J. Wen, S. Zhou, Z. Xu and B. Li, Phys. Rev. E, 2012, 85, 041406.
[41] A.H. Boschitsch and P.V. Danilov, J. Comput. Chem., 2012, 33, 1152-1164.
[42] A.C. Maggs and R. Podgornik, Soft Matter, 2016, 12, 1219-1229.
[43] D. Ben-Yaakov, D. Andelman, D. Harries and R. Podgornik, J. Phys. Chem. B, 2009, 113, 6001-6011.
[44] R.P. Misra, S. Das and S.K. Mitra, J. Chem. Phys., 2013, 138, 114703.
[45] E. Gongadze, A. Velikonja, S. Perutkova, P. Kramar, A. Macěk-Lebar, V. Kralj-Iglič and A. Iglič, Electrochim. Acta, 2014, 126, 42-60.
[46] A. Velikonja, P.B. Santhosh, E. Gongadze, M. Kulkarni, K. Eleršič, Š. Perutkova, V. Kralj-Iglič, N.P. Ulrih and A. Iglič, Int. J. Mol. Sci., 2013, 14, 15312-15329.
[47] E. Verwey and J. Overbeek, Theory of the Stability of Lyophobic Colloids (Elsevier, 1948).
[48] E. Gongadze and A. Iglič, Bioelectrochemistry, 2012, 87, 199-203.
[49] K.A. Dill and S. Bromberg, Molecular Driving Forces (Garland Science, New York and London, 2003).
Magnetism from 2p States in Alkaline Earth Monoxides: Trends with Varying N Impurity Concentration

6 Aug 2008

V. Pardo
Departamento de Física Aplicada, Facultad de Física, Universidad de Santiago de Compostela, Campus Sur s/n, E-15782 Santiago de Compostela, Spain, and Department of Physics, University of California, Davis, CA 95616

W. E. Pickett
Department of Physics, University of California, Davis, CA 95616
2p-based magnetic moments and magnetic coupling are studied with density functional based methods for substitutional N in the alkaline earth monoxide series MgO, CaO, SrO, BaO. The hole is rather strongly localized near the N 2− ion, being somewhat more so when strong on-site interactions are included in the calculations. Strong magnetic coupling is obtained in the itinerant electron limit but decreases strongly in the localized limit in which the Coulomb repulsion within the N 2p shell (U) is much greater than the N 2p impurity bandwidth (W). We find that this limit is appropriate for realistic N concentrations. Ordering on a simple cubic sublattice may maximize the magnetic coupling due to its high directionality.PACS numbers:
I. INTRODUCTION
After a strong research effort spent on magnetic-ion doped semiconductors as spintronics materials, 1 scientific effort has recently turned to the possibility of obtaining useful ferromagnetic (FM) order at room temperature in materials without conventional magnetic atoms with open d or f shells. Isolated defects in insulators often produce magnetic moments, which occur anytime a state in the band gap is occupied by an odd number of electrons. A common example is the P impurity in Si, where the magnetism of the material evolves with increased doping as the insulator-metal transition is approached. Such strictly localized states in insulators are magnetic regardless of their underlying atomic character or their degree of localization.
Elfimov et al. 2 proposed to use the cation vacancy -the charge conjugate of the conventional F-center state -in non-magnetic oxides such as CaO 3 as a source of magnetic moment in insulators. The resulting pair of holes, which are bound to the region for an isolated vacancy, may form a magnetic center even if it does not lie in the gap, due to the large Hund's coupling on the oxygen ions neighboring the vacancy (which can no longer be simple non-magnetic O 2− ). These states extend to a few atomic neighbors, with magnetic coupling resulting from direct exchange. Magnetic order becomes possible, and at concentrations of a few percent they suggested that conduction, and half metallic ferromagnetism, might result.
The Osaka group 4 followed by suggesting that B, C, or N substituted for O in CaO could result in local moments in the 2p states of the impurity ions. The coupling between moments was found, in calculations on random alloys, to be FM and half metallic ferromagnetism was also predicted, possibly as high as room temperature. Elfimov et al. 5 studied the specific case of N substituting for O in SrO, applying not only the conventional local spin density approximation (LSDA) but also the strongly-correlated form (LSDA+U) that should be appropriate for localized magnetic states. They also obtained the possibility of half metallic ferromagnetism at technologically relevant temperatures. Experimentally, magnetism has been observed as a surface effect in thin films of these non-magnetic oxides. 6 There are other examples of atomic p character giving rise to magnetism in solids. One example is the class of alkali hyperoxides (for example, RbO_2 [9,10] and Rb_4O_6 [11,12]). These materials are ionic, containing the spin-half O_2^{1-} ion, and display a variety of unusual magnetism-related properties. The SrN_2 compound, an ionic material containing the N_2^{2-} unit, is a related example, 13 while the SrN compound with magnetic N is predicted to be a half metallic ferromagnet. 14 Between the bulk compounds and the (nearly) isolated magnetic impurities, another promising example has been reported. At the p-type interface between LaAlO_3 and SrTiO_3, there is (from simple electron counting) 0.5 too few electrons per interface cell to fill all of the O 2p states that are filled in both bulk materials. Holes must exist in the O 2p bands at the interface, and Pentcheva and Pickett predicted that magnetic holes arise and should order at low temperature. 15 Subsequently, magnetic hysteresis has been observed at this interface. 16
These semiconductors crystallize in a rocksalt structure. Upon increasing the lattice parameter by introducing a bigger cation, there should be a tendency for the N hole state to become more localized. However, the band gap narrows in proceeding from the smallest to largest (7.2 eV for MgO, 4.0 for BaO), a trend that should cause a corresponding increase in the tendency towards delocalization of the impurity state. How these effects compete is important to determine. Another distinction is that the lowest conduction states in CaO, SrO, and BaO are d states, while MgO is very different in this regard (Mg s, O 3s, 3p). The hole state will be formed from valence orbitals, however, so the impact both of the magnitude of the band gap and the character of the conduction band must be calculated. Table II summarizes the N-N nearest neighbor distances in the different compounds for the various concentrations considered. As the concentration changes, the coordination of the impurities varies, as is provided in the table.
II. COMPUTATIONAL DETAILS
Electronic structure calculations were performed within density functional theory 17 using wien2k, 18 which utilizes an augmented plane wave plus local orbitals (APW+lo) 19 method to solve the Kohn-Sham equations. This method uses an all-electron, full-potential scheme that makes no shape approximation to the potential or the electron density. The exchange-correlation potential utilized was the Wu-Cohen version of the generalized gradient approximation 20 and strong correlation effects were introduced by means of the LSDA+U scheme 21 including an on-site effective U for the O and N p states. We have used option nldau = 1 in wien2k, i.e. the so-called "fully-localized limit", using an effective U_eff = U - J, where J is the on-site Hund's rule coupling constant, taken as J = 0. The justification for utilizing the LSDA+U scheme comes from spectroscopic measurements in oxides, which estimate large values of U (5-7 eV) for the p states. 22 This has been shown to be crucial in understanding the interfacial electronic structure of transition metal oxides, where magnetic moments can localize in the O atoms. 15 All the calculations were converged with respect to all the parameters used, to the precision necessary to support our calculations.
III. RESULTS
We begin by studying the local electronic structure of a N 2p 5 impurity in MgO. Comparing a nonmagnetic solution for the N atom and a solution with 1 µ B /N (a local moment formation in the N site), the total energy difference favors the magnetic solution by more than 100 meV/N, a value that is nearly independent of the concentration of impurities. The introduction of a value of U ef f = 5 eV produces a further stabilization of the magnetic solution with respect to the non-magnetic one (the energy difference rises above 300 meV/N).
A. Effect of atomic relaxation
Substitutional N in these monoxides will break the symmetry and may lead to some local distortion of the otherwise perfectly octahedral environment of the cation. To determine the importance of atomic relaxation we performed structural minimization for MgO 1−x N x , with x= 1/16. If sufficiently localized, the hole might occupy a p z orbital (l z = 0), leading to an elongation of the N-Mg distance along the z axis, or it could be p x ±ip y (l z = ± 1) that is unoccupied, leading to an elongation of the N-Mg distance in the plane perpendicular to the z axis. The symmetric LSDA solution shares the hole among the p orbitals equally. We have performed a structural optimization within GGA (i.e. without including strong correlation effects) which leads to an electron density describing the hole in the N atom being symmetric. The result is that Mg-N distances get modified only slightly, becoming elongated by less than 0.5%. Also, using the LSDA+U scheme for doing the structure optimization, the hole located in a p z state leads to an elongation of about 3% of the Mg-N distance along the z-axis. We have found that these small differences have little effect on the trends we study, either the localization of the magnetic moment or the magnetic coupling, hence we have neglected relaxation in the results that follow.
B. Electronic and Magnetic Trends
We have treated the N concentrations listed in Table II. For each concentration, we have assumed that the N impurities are as regularly distributed as possible (their separation is maximized). As mentioned above, one can imagine how the physics can change from MgO to BaO as the size of the cation, and the volume, increases. On one hand, greater delocalization is permitted as the band gap is reduced (by a factor 2 experimentally, by a factor 3 from our calculations for the doped compounds with U_eff = 5.5 eV). On the other hand, for a given concentration of impurities, the N-N distance becomes larger for the larger volume compounds. The former could encourage an easier propagation of the magnetic coupling throughout the crystal, while the latter would imply a smaller magnetic interaction if the size of the defect state remains the same. We first consider the U_eff = 5.5 eV electronic structure for a particular case, MgO with x = 1/16 ∼ 6%, where the density of states (DOS) for separate spin directions is shown in Fig. 1. For the minority spin, the narrow hole band lies in the gap and has a full bandwidth of W ≈ 0.3 eV. The other two N 2p minority states are occupied and lie above the O 2p bands. The separation of filled and unfilled minority N states is 2 eV, with the Hund's exchange being about 0.6 eV in the occupied N p states. The conduction bands show no appreciable spin splitting. When we include spin-orbit coupling, the hole occupies the p_x - ip_y state, the l_z = -1 orbital, satisfying Hund's rule.
To display the trends across the series, it is sufficient to look only at the minority DOS, which is plotted for each compound in Fig. 2. The band gap reduction through the series Mg→Ba is evident. The O 2p bandwidth is much larger for MgO but this has little effect on the defect state, except perhaps contributing a small additional broadening. In spite of this large band gap variation, and the change in the volume (hence the near neighbor distance), the position of the unoccupied hole state changes rather little, staying 2±0.5 eV above the highest occupied state. The variation that does occur is non-monotonic and may be due to the conduction bands that fall in energy through the series. For BaO, the narrowest band gap member, the N hole band merges into the bottom of the conduction Ba d bands.
Because the intra-atomic Coulomb interaction strength U is at least a few eV, these systems are in the regime U/W >> 1 (where W is the N 2p hole bandwidth), so hopping of the holes will be inhibited, giving insulating behavior. This is the limit in which the LSDA+U method works well. For values of U bigger than ∼1 eV, the material is insulating, whereas for smaller values (i.e. in LSDA or GGA), the N hole band intersects the upper valence bands and the band structure becomes metallic (or half metallic).
C. Sign, strength, and character of magnetic coupling
Since the N ions form a strong local magnetic moment, the next question is whether they couple ferromagnetically and what the strength of the magnetic interaction is. We have performed calculations on both FM and antiferromagnetic (AF) alignments to obtain the coupling between nearest neighbor N moments, for the different concentrations and monoxides under study. We estimate the Curie temperature by using the mean field expression from a Heisenberg model of the type H = -J Σ_{ij} S_i·S_j, with S = 1/2 (the sum is over pairs),
T_c = 2zS(S + 1)J/(3k_B),   (1)
where z is the number of nearest neighbors and J is the exchange constant. Table III shows how the magnetic coupling between N moments varies both with the concentration and with the different compound considered. The data presented in the table were obtained from a calculation without considering strong correlation effects (U = 0), but including spin-orbit effects, and also using the Wu-Cohen GGA functional. The effect of introducing U is to reduce the magnetic coupling, because it further localizes the moments. In fact, using a range of values of U up to 8 eV, we find that, for BaO and SrO, the introduction of a U as small as 1.5 eV leads to magnetic coupling too small to determine reliably from supercell energy differences. The hole states no longer overlap directly, and the materials are insulating with no means to transfer magnetic coupling except classical dipolar coupling. For MgO at the higher concentrations (x = 1/8 and x = 1/16), the magnetic coupling is much higher than for the other compounds, because of the much smaller lattice parameter, and the introduction of U reduces the size of the interaction but does not kill T_c. Putting all these LSDA data together, we can make a plot of how the magnetic interaction strength J varies in these compounds with respect to the N-N distance, neglecting all the differences between the various compounds we are considering. The result can be seen in Figure 3, which shows an exponential-like decay in the strength of the magnetic coupling as the N impurity atoms are separated. There is, remarkably, an upturn for N-N distances above 10 Å, with values above room temperature for BaO and SrO for a concentration around x ∼ 3%. For this supercell, the magnetic impurities are located on a simple cubic sublattice at lattice constant 2a. Elfimov et al. 5 have shown that the electronic coupling between N impurities in SrO is highly directional, with strong coupling along the crystalline axes (large effective p_x-p_x coupling along the x-axis, for example). This opens the possibility of reaching high temperature FM at attainable concentrations of N dopant in the limit U ∼ W.
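As a quick numerical sketch of Eq. (1), the mean-field estimate can be evaluated directly; the 10 meV exchange constant below is an assumed illustrative value (the calculated couplings are collected in Table III), while z = 4 corresponds to the x = 1/16 supercells of Table II.

def curie_temperature(J_meV, z, S=0.5):
    # Mean-field estimate, Eq. (1): T_c = 2 z S (S + 1) J / (3 k_B)
    kB_meV_per_K = 0.08617
    return 2.0 * z * S * (S + 1.0) * J_meV / (3.0 * kB_meV_per_K)

print("T_c ~", round(curie_temperature(10.0, 4)), "K")   # about 230 K for an assumed 10 meV coupling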
A double-exchange-like microscopic mechanism seems to be the best explanation of the FM coupling in these monoxides in the GGA limit, when the Fermi level lies within the impurity band. Double exchange is normally associated with carriers coupled to a background of moments, with kinetic energy favored by alignment of the moments. Electronic structure-wise, the added kinetic energy shows up as an increased bandwidth compared to antialignment. Indeed for the GGA calculations and for small values of U, the system is (half) metallic and the double exchange picture as conventionally interpreted makes sense. As U is increased beyond ∼1 eV, a gap appears, and in the large U regime the coupling is certain to be via direct exchange. However, as shown in Fig. 3, the coupling strength (as reflected in T c in this figure) does not appear to undergo any discontinuity at the metal-insulator transition. The coupling (and T c ) does however decrease rapidly as the value of U is increased. Having band states at the Fermi level leads also to a larger FM coupling. When U > U cr = 1 eV is introduced in the calculations, an insulating state is obtained and magnetic exchange coupling is reduced (see Fig. 4).
FIG. 3: Exchange coupling strength with respect to the N-N distance, considering all the LSDA data from the various compounds together. "Error" bars simply denote the range of values that were calculated for the same near neighbor distance, see Table II.
In Fig. 4 we illustrate more explicitly how the introduction of U reduces the Curie temperature. Strong correlation effects produce a stronger localization of the magnetic moments, i.e., their effects extend to fewer neighbors. The reduced overlap leads to a corresponding reduction of the magnetic interaction. Hence, in the limit U ≫ W, where this system resides for the concentrations we study, the magnetic coupling will not be strong.
IV. SUMMARY
In this paper we have studied the formation of local moments when substitutional N impurities are introduced in alkaline-earth monoxides. We have calculated the coupling for a variety of concentrations, paying particular attention to the effect of the Hubbard repulsion U (since U > W for these systems, where W is the impurity band width). The local moments that form, mainly confined to the N atoms and their nearest neighbors, would lead to room-temperature ferromagnetism in the itinerant electron limit via a double-exchange-like coupling within the impurity band. However, in the more realistic localized electron limit U ≫ W, the coupling must be due to direct exchange (local impurity state overlap) and becomes drastically reduced. Interestingly, the crossover between double-exchange coupling and direct exchange as the interaction strength is increased does not lead to any discernible anomaly in the magnetic coupling strength at the metal-insulator transition. Finally, we confirm that the impurity state is highly anisotropic, and as a result the magnetic coupling is enhanced by ordering of the magnetic N impurities on a periodic (simple cubic) sublattice.
V. ACKNOWLEDGMENTS
We have benefited from discussion on this topic with S. S. P. Parkin and G. A. Sawatzky. This project was supported by DOE grant DE-FG02-04ER46111 and through interactions with the Predictive Capability for Strongly Correlated Systems team of the Computational Materials Science Network.
FIG. 1: Total density of states for ferromagnetically aligned MgO_{1−x}N_x with x = 1/16 ≈ 6%, calculated with the LSDA+U method including spin-orbit effects, using U = 5.5 eV for the 2p states of both N and O. The majority DOS is plotted upward, the minority is plotted downward.

FIG. 2: Total density of states of the various monoxides under study for FM alignment, calculated with the LSDA+U method for U = 5.5 eV. Only the minority spin channel is shown, since it illustrates all the important features: O 2p bandwidth, narrowing of the band gap, and the position of the hole state at 2-2.5 eV above the valence band maximum.

FIG. 4: Strength of the magnetic coupling in MgO obtained from an LSDA+U calculation (including spin-orbit coupling) with various values of U, expressed in terms of the Curie temperature.
TABLE I: Lattice parameters [7] and band gaps [8] of the monoxides under study.

Compound   Lattice parameter (Å)   Band gap (eV)
MgO        4.12                    7.2
CaO        4.81                    6.2
SrO        5.16                    5.3
BaO        5.52                    4.0
TABLE II: Nearest-neighbor distances and number of nearest neighbors for the impurity at the various concentrations and monoxides under study.

Compound   Concentration   N-N distance (Å) (no. neighbors)
MgO        1/8             6.0  (12)
MgO        1/16            6.0  (4)
MgO        1/32            8.4  (6)
CaO        1/16            6.8  (4)
SrO        1/8             7.3  (12)
SrO        1/16            7.3  (4)
SrO        1/32            10.3 (6)
BaO        1/8             7.8  (12)
BaO        1/16            7.8  (4)
BaO        1/32            11.0 (6)
TABLE III: Strength of the magnetic coupling, converted into a Curie temperature, of the monoxides under study, obtained from an LSDA calculation (including spin-orbit effects) and a GGA (Wu-Cohen) calculation (U = 0).

Compound   Concentration   Tc (K) [LSDA+SO]   Tc (K) [GGA]
MgO        1/8             330                320
MgO        1/16            400                400
MgO        1/32            5                  70
CaO        1/16            72                 83
SrO        1/8             82                 66
SrO        1/16            31                 41
SrO        1/32            250                98
BaO        1/8             32                 31
BaO        1/16            11                 13
BaO        1/32            280                230
[1] I. Žutić, J. Fabian, and S. D. Sarma, Rev. Mod. Phys. 76, 323 (2004).
[2] I. S. Elfimov, S. Yunoki, and G. A. Sawatzky, Phys. Rev. Lett. 89, 216403 (2002).
[3] J. M. D. Coey, M. Venkatesan, P. Stamenov, C. B. Fitzgerald, and L. S. Dorneles, Phys. Rev. B 72, 024450 (2005).
[4] K. Kenmochi, M. Seike, K. Sato, A. Yanase, and H. Katayama-Yoshida, Jpn. J. Appl. Phys. 43, L934 (2004).
[5] I. S. Elfimov, A. Rusydi, S. I. Csiszar, Z. Hu, H. H. Hsieh, H.-J. Lin, C. T. Chen, R. Liang, and G. A. Sawatzky, Phys. Rev. Lett. 98, 137202 (2007).
[6] N. H. Hong, J. Sakai, N. Poirot, and V. Brizé, Phys. Rev. B 73, 132404 (2006).
[7] K. Kenmochi, V. A. Dinh, K. Sato, A. Yanase, and H. Katayama-Yoshida, J. Phys. Soc. Jpn. 73, 2952 (2004).
[8] A. M. Stoneham and J. Dhote, "A compilation of crystal data for halides and oxides" (University College London, London, 2002), available online from www.cmmp.ucl.ac.uk/ahh/research/crystal/homepage.htm, and references therein.
[9] M. E. Lines and M. A. Bösch, Phys. Rev. B 23, 263 (1981).
[10] M. Labhart, D. Raoux, W. Känzig, and M. A. Bösch, Phys. Rev. B 20, 53 (1979).
[11] J. Winterlik, G. H. Fecher, C. Felser, C. Muhle, and M. Jansen, J. Am. Chem. Soc. 129, 6990 (2007).
[12] J. J. Attema, G. A. de Wijs, and R. A. de Groot, J. Phys.: Condens. Matter 19, 165203 (2007).
[13] G. Auffermann, Y. Prots, and R. Kniep, Angew. Chem. Int. Edit. 40, 547 (2001).
[14] O. Volnianska and P. Boguslawski, Phys. Rev. B 77, 220403(R) (2008).
[15] R. Pentcheva and W. E. Pickett, Phys. Rev. B 74, 035112 (2006).
[16] A. Brinkman, M. Huijben, M. Van Zalk, J. Huijben, U. Zeitler, J. C. Maan, W. G. Van der Wiel, G. Rijnders, D. H. A. Blank, and H. Hilgenkamp, Nat. Mater. 6, 493 (2007).
[17] P. Hohenberg and W. Kohn, Phys. Rev. 136, B864 (1964).
[18] K. Schwarz and P. Blaha, Comp. Mat. Sci. 28, 259 (2003).
[19] E. Sjöstedt, L. Nördstrom, and D. Singh, Solid State Commun. 114, 15 (2000).
[20] Z. Wu and R. E. Cohen, Phys. Rev. B 73, 235116 (2006).
[21] A. I. Liechtenstein, V. I. Anisimov, and J. Zaanen, Phys. Rev. B 52, R5467 (1995).
[22] J. Ghijsen, L. H. Tjeng, J. van Elp, H. Eskes, J. Westerink, G. A. Sawatzky, and M. T. Czyzyk, Phys. Rev. B 38, 11322 (1988).
Supervising Nyström Methods via Negative Margin Support Vector Selection
18 May 2018
Mert Al [email protected]
Department of Electrical Engineering, Princeton University, Princeton, New Jersey, USA

Thee Chanyaswad
Department of Electrical Engineering, Princeton University, Princeton, New Jersey, USA

Sun-Yuan Kung [email protected]
Department of Electrical Engineering, Princeton University, Princeton, New Jersey, USA
The Nyström methods have been popular techniques for scalable kernel based learning. They approximate explicit, low-dimensional feature mappings for kernel functions from the pairwise comparisons with the training data. However, Nyström methods are generally applied without the supervision provided by the training labels in the classification/regression problems. This leads to pairwise comparisons with randomly chosen training samples in the model. Conversely, this work studies a supervised Nyström method that chooses the critical subsets of samples for the success of the Machine Learning model. Particularly, we select the Nyström support vectors via the negative margin criterion, and create explicit feature maps that are more suitable for the classification task on the data. Experimental results on six datasets show that, without increasing the complexity over unsupervised techniques, our method can significantly improve the classification performance achieved via kernel approximation methods and reduce the number of features needed to reach or exceed the performance of the full-dimensional kernel machines.Preprint. Work in progress.
Introduction
Kernel methods have been successful in various applications, e.g. [1][2][3][4][5]. Their main innovation is the mapping of the data onto a high-dimensional feature space, without having to compute the expansions explicitly [6,7]. This is achieved via the kernel trick, which only requires a Gram (kernel) matrix to be computed in the original feature space. Given N training samples, the kernel matrix is N × N. Hence, although this may be advantageous for small-scale applications, for large-scale learning, where N can be massive, the size of the kernel matrix quickly becomes an obstacle. Previous work has addressed this challenge primarily via kernel matrix approximation [8][9][10]. These methods lead to explicit, low-dimensional, approximate representations of the implicit, high-dimensional mappings for the data. Problems such as classification and regression can then be solved via primal domain algorithms working in the approximate feature space for the kernel, as opposed to the dual domain algorithms usually used in kernel machines.
Kernel approximation methods can be grouped under two categories; data dependent [9,[11][12][13][14][15] and data independent [10,[16][17][18]. In this work, we focus on the data dependent approach. Notably, the data dependent (Nyström) methods conceptually perform Kernel Principal Component Analysis (KPCA) with random subsets of training data to create the explicit feature mappings. As a result, they lead to decision makers, which are functions of pairwise comparisons with the training data. Yet, unlike the margin maximizing Support Vector Machines (SVMs), the so called support vectors of these models are chosen independently from the pattern recognition task.
One of the key advantages of SVM models is that they place greater emphasis on the more important samples, i.e., the support vectors [6,19,20]. Inspired by this, we propose the use of a supervised sample selection method to enhance the Nyström methods, as opposed to the traditional unsupervised Nyström variants. More specifically, we employ a two stage procedure. In the first stage, an approximate kernel classifier is trained using standard Nyström techniques. In the second stage, support vectors are chosen based on the classifier from the first stage. These are then used to extract features that are more suitable for the classification task. To the best of our knowledge, this is the first work that chooses the subsets of samples used by Nyström methods in a supervised manner.
Due to the relation of this objective to sample importance weighting for multiple Machine Learning models, we propose the negative margin criterion for the selection of the support vectors. Specifically, the negative margin criterion measures how far on the wrong side of the classification boundary a sample lies. By exploiting the ability of Nyström methods to approximate this quantity, our two stage procedure allows them to restrict the solution to a more optimal subspace, without increasing the complexity. The experimental results on six datasets demonstrate that, not only can support vector selection improve the classification performance of kernel approximation methods, but it can help exceed the performances of full-dimensional Kernel Ridge Regression and SVM as well.
Related Work
Many data independent approximations of kernel based features have been proposed. Rahimi and Recht introduced random features to approximate shift invariant kernels [10,16]. The most well-known of such techniques is Random Fourier Features. These methods were later extended for improved complexity and versatility [17,18]. Random features have desirable generalization properties [21], even though data dependent approximations were shown to exploit the structure in the data better, both theoretically and empirically [21,22].
The Standard Nyström algorithm can be viewed as the application of KPCA to a small, randomly chosen subset of training samples [8]. Various works have altered this method to achieve better approximations with less memory/computation. Zhang et al. use k-means centroids to perform KPCA, instead of a random subset of the data [11]. Kumar et al. combine multiple smaller scale KPCAs [12]. Li et al. utilize randomized SVD to speed up KPCA for the Nyström algorithms [15]. Additionally, non-uniform sampling schemes have been explored to improve the performance of Nyström [9,23,24]. These, however, require at least one pass over the whole kernel matrix, resulting in O(N^2) complexity. For linear and RBF kernels, uniform sampling has been shown to work well [25] and results in no additional complexity.
The support vector selection scheme we propose can be applied together with many Nyström variants, such as those proposed in [8,12,15]. The main difference of our approach from the previous work is that the subsets of samples used by Nyström methods are selected in a supervised manner.
Preliminaries
Our method consists of two stages, both of which utilize variants of the Nyström approximations. Therefore, we briefly discuss the variants we used here. More details about the variants other than Standard Nyström can be found in Appendix A.
Notation: We denote by K the full, N × N kernel matrix and by Φ the full, N-column data matrix in the kernel induced feature space. K̃ and Φ̃ denote the approximations of the kernel and data matrices, respectively. Similarly, K(·, ·) and K̃(·, ·) denote the kernel function and its approximation. I_n ⊂ {1, . . . , N} denotes a subset of n < N selected indices. We use MATLAB notation to refer to row and column subsets of a matrix, namely, M(I_n, :) and M(:, I_n) denote subsets of n rows and columns of M, respectively, and M(I_n, I_n) denotes an n × n dimensional submatrix of M. For a matrix M, we denote its best rank-k approximation by M_k and its Moore-Penrose inverse by M^+. ‖·‖_2 denotes the spectral norm and ‖·‖_F denotes the Frobenius norm for matrices.
The Standard Nyström algorithm [8] projects the data into a kernel feature subspace spanned by n ≪ N samples. It produces a rank-k approximation of the kernel matrix given by K̃ = C B_k^+ C^⊤, where C = K(:, I_n) and B = K(I_n, I_n). This is the same as applying the non-centered KPCA feature mapping to the training data:

Φ̃ = Σ^{-1/2} U^⊤ C^⊤,

where B_k = U Σ U^⊤ is the compact SVD of B_k.
Owing to the Representer Theorem [26,27], applying the dual formulations of certain machine learning models using K̃ is equivalent to applying their primal formulations using Φ̃. Thus, a dual solution with n pairwise comparisons can be similarly obtained via the corresponding primal domain optimization. Generally, the n data points are sampled uniformly without replacement. The computational complexity of this algorithm is O(Nnk + n^3).
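To make the mapping above concrete, the following is a minimal sketch (ours, not the authors' code) of the Standard Nyström feature map for an RBF kernel: sample n landmark points, eigendecompose B = K(I_n, I_n), and form Φ̃ = Σ^{-1/2} U^⊤ C^⊤. The helper names and the small eigenvalue floor are illustrative choices.

```python
import numpy as np

def rbf_kernel(X, Z, gamma):
    """Pairwise RBF kernel matrix between the rows of X and Z."""
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Z**2, 1)[None, :] - 2.0 * X @ Z.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def nystrom_features(X, n, k, gamma, seed=0):
    """Return (Phi, landmarks): Phi is the N x k Standard Nystrom feature matrix."""
    rng = np.random.default_rng(seed)
    N = X.shape[0]
    idx = rng.choice(N, size=n, replace=False)   # uniform sampling of I_n
    C = rbf_kernel(X, X[idx], gamma)             # N x n, equals K(:, I_n)
    B = C[idx]                                   # n x n, equals K(I_n, I_n)
    w, U = np.linalg.eigh(B)                     # eigendecomposition of the landmark block
    order = np.argsort(w)[::-1][:k]              # keep the top-k eigenpairs (B_k)
    w = np.maximum(w[order], 1e-12)              # floor tiny/negative eigenvalues
    U = U[:, order]
    Phi = C @ U / np.sqrt(w)                     # rows of Phi are the Nystrom features
    return Phi, X[idx]
```

By construction, Phi @ Phi.T reproduces the rank-k approximation C B_k^+ C^⊤ of the kernel matrix.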
Ensemble Nyström [12] performs multiple smaller dimensional KPCA mappings, instead of a single large one. This can be done by dividing the n samples into m non-overlapping subsets to compute m separate KPCAs. The resulting feature mappings are then scaled and concatenated. The computational complexity of this algorithm is O(Nnk/m + n^3/m^2).
Nyström with Randomized SVD [13,15] speeds up the SVD in the Nyström algorithms via a randomized algorithm proposed in [28]. This reduces the computational complexity of Standard Nyström with rank reduction to O(Nnk + n^2 k + k^3).
Methodology
In this section, we explain the theory underlying our method and describe the components in our two stage procedure. Our method first approximates the negative margins for all training samples by utilizing standard kernel approximation methods, then trains a classifier with kernel based features extracted by using the chosen subset of samples, which we aptly name the support vectors.
The Negative Margin Criterion
Kernel approximation methods primarily aim to create fast and accurate low rank approximations of the kernel matrix, even though approximating the kernel matrix itself is generally not an end goal. While the Nyström variants need to sample a set that represents the training data well for a good approximation, the set of support vectors often constitutes a small and unrepresentative subset of the data [29,30]. This observation motivates the selection method developed in this section.
We shall first discuss why the negative margin serves as a suitable criterion for the selection of the support vectors. As two examples, let us write the primal formulations of Soft-Margin SVM (SVM, left in (1)) and Ridge Regression, also known as Least-Squares SVM (RR, right in (1)),

SVM (left):   minimize_{w,ε,b}  (1/2)‖w‖^2 + C Σ_{i=1}^N ε_i
              subject to  y_i(w^⊤ φ(x_i) + b) ≥ 1 − ε_i,  ε_i ≥ 0,  ∀i.

RR (right):   minimize_{w,ε,b}  (1/2)‖w‖^2 + (1/2ρ) Σ_{i=1}^N ε_i^2
              subject to  y_i(w^⊤ φ(x_i) + b) = 1 − ε_i,  ∀i.                     (1)
where y_i ∈ {−1, +1}. We can then introduce the dual coefficients {α_i}_{i=1}^N and derive the following optimality conditions for both models from the Karush-Kuhn-Tucker (KKT) conditions,
w = Σ_{i=1}^N y_i α_i φ(x_i)    and    Σ_{i=1}^N y_i α_i = 0.                     (2)
Moreover, we obtain the following optimality conditions on SVM (left in (3)) and RR (right in (3)),
SVM (left):   α_i = 0        if y_i(w^⊤ φ(x_i) + b) > 1,
              0 ≤ α_i ≤ C    if y_i(w^⊤ φ(x_i) + b) = 1,
              α_i = C        if y_i(w^⊤ φ(x_i) + b) < 1.

RR (right):   α_i = (1/ρ) (1 − y_i(w^⊤ φ(x_i) + b)).                              (3)
From these conditions, it is clear that larger values of the negative margin, −y_i(w^⊤ φ(x_i) + b), lead to larger weights for the samples. For SVM, samples get 0 weights if this value is less than −1 and constant weights if it is greater than −1. For RR, the weights increase linearly with the negative margin.
The optimality conditions of SVM further imply that an optimal margin classifier can be found in the subspace spanned by the high negative margin samples. Additionally, applying the Standard Nyström algorithm to data can simply be viewed as restricting the solution of the resulting model to the subspace spanned by n chosen samples in the feature space [22,31]. Thus, by performing a Nyström approximation with high negative margin samples, it is possible to restrict the solution without losing optimality.
Algorithm 1: Nyström Kernel Ridge Regression with support vector selection

Input: Training data (X, y); model parameters ρ, n_0, k_0, n_f, k_f; and kernel parameters.
1. Compute a k_0-dimensional approximate kernel feature map Φ̃ using a Nyström variant with n_0 uniformly sampled data points.
2. Train RR using the approximate feature map Φ̃, by solving (5).
3. For i ∈ {1, . . . , N}, compute the negative margin −y_i(w_{y_i}^⊤ φ̃(x_i) + b_{y_i}), where (w_{y_i}, b_{y_i}) are the parameters of the binary classifier that separates the class y_i from the rest.
4. Select the n_f data samples that maximize the negative margin.
5. Compute the k_f-dimensional kernel feature map Φ̃_sv using a Nyström variant with the n_f selected data points.
6. Train another RR using Φ̃_sv. Turn the resulting classifier into the standard form via (6).

Output: One vs. Rest Kernel Ridge Regression classifier with n_f support vectors.
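Below is a hedged, end-to-end sketch of Algorithm 1 built on the nystrom_features and rbf_kernel helpers sketched earlier; the function names, the augmented-bias ridge solver, and the eigenvalue floor are our own illustrative choices rather than the authors' released implementation.

```python
import numpy as np

def ridge_fit(Phi, Y, rho):
    """One-vs-rest ridge regression on features Phi (N x k); returns (W, b).
    For simplicity the bias is fit by augmenting a constant feature (so it is
    mildly regularized too), a small deviation from problem (5)."""
    A = np.hstack([Phi, np.ones((Phi.shape[0], 1))])
    Wb = np.linalg.solve(A.T @ A + rho * np.eye(A.shape[1]), A.T @ Y)
    return Wb[:-1], Wb[-1]

def negative_margins(Phi, y, W, b, classes):
    """-y_i (w_{y_i}^T phi(x_i) + b_{y_i}) using each sample's own-class classifier."""
    scores = Phi @ W + b                              # N x L decision values
    col = np.searchsorted(classes, y)                 # column of each sample's own class
    return -scores[np.arange(len(y)), col]

def supervised_nystrom_rr(X, y, rho, n0, k0, nf, kf, gamma):
    classes = np.unique(y)
    Y = 2.0 * (y[:, None] == classes[None, :]) - 1.0  # +/-1 one-vs-rest targets
    # Stage 1 (steps 1-2): unsupervised Nystrom RR to approximate the margins.
    Phi0, _ = nystrom_features(X, n0, k0, gamma)
    W0, b0 = ridge_fit(Phi0, Y, rho)
    # Steps 3-4: rank samples by approximate negative margin, keep the top nf.
    sv = np.argsort(negative_margins(Phi0, y, W0, b0, classes))[::-1][:nf]
    # Stage 2 (steps 5-6): Nystrom features spanned by the selected support vectors.
    C = rbf_kernel(X, X[sv], gamma)
    w, U = np.linalg.eigh(C[sv])
    order = np.argsort(w)[::-1][:kf]
    w, U = np.maximum(w[order], 1e-12), U[:, order]
    Phi_sv = C @ U / np.sqrt(w)
    W, b = ridge_fit(Phi_sv, Y, rho)
    A_tilde = (U / np.sqrt(w)) @ W                    # collapse into the form of Eq. (6)
    return X[sv], A_tilde, b, classes
```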
Due to the symmetry of the least-squares loss, high positive margin samples may also get large RR weights in absolute value. But, since margin maximizing models reduce the pairwise comparisons by ignoring such samples, we use the negative margin as a support vector selection criterion for the RR classifier as well. This leaves us with the problem of finding such samples efficiently.
Approximating the Margins by Approximating the Kernel
To find the margin values, one would normally train a classifier with the full kernel matrix, which defeats the purpose of using low dimensional feature mappings. However, by approximating the kernel matrix, one can also approximate the margin values. In addition, the bounds on the approximations are quite tight for the Ridge Regression model, which also enjoys optimal learning bounds with Nyström features [31]. In the following, we give the bound from Cortes et al. [32].

Proposition 1. Let m(x, y) and m̃(x, y) be the margins for the sample (x, y) produced by KRR before and after the kernel approximation, respectively. Define κ > 0 such that K(x, x) ≤ κ and K̃(x, x) ≤ κ for all x ∈ X. Then the following inequality holds for all (x, y) ∈ (X, Y),
|m(x, y) − m̃(x, y)| ≤ (Nκ / ρ^2) ‖K − K̃‖_2.                                      (4)
This bound allows us to approximate the margin values at low computational cost by exploiting standard kernel approximation techniques. The quality of the margin approximation depends on the quality of the kernel approximation. Hence, in order to estimate the margin for all the samples, we can first approximate the kernel matrix using any of the methods described earlier. Afterwards, support vectors can be selected based on the approximate values of the negative margin.
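As a small self-contained illustration (ours, on synthetic data), the right-hand side of the bound (4) can be evaluated directly from the spectral norm of the approximation error; rbf_kernel and nystrom_features are the helpers sketched earlier, and the constants are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((200, 5))                       # toy data already scaled to [0, 1]
gamma, rho, kappa = 1.0, 1.0, 1.0              # for RBF kernels K(x, x) = 1, so kappa = 1
K = rbf_kernel(X, X, gamma)                    # exact kernel matrix
Phi, _ = nystrom_features(X, n=50, k=50, gamma=gamma)
K_tilde = Phi @ Phi.T                          # Nystrom approximation of K
err = np.linalg.norm(K - K_tilde, 2)           # spectral norm ||K - K_tilde||_2
bound = X.shape[0] * kappa / rho**2 * err      # right-hand side of Eq. (4)
print(f"||K - K_tilde||_2 = {err:.3e}, margin perturbation bound = {bound:.3e}")
```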
Nyström Kernel Ridge Regression Model
For the selection of support vectors and training of the final classifier, we use the Ridge Regression model, due to its provable generalization properties [21,31] and tight bounds on the approximate margin, as presented by Proposition 1 [32]. From our experience, we also found it to be robust to the choice of hyper-parameter. Thus, it makes a good choice for our application. To generalize our support vector selection method to multi-class settings, we train a One vs. Rest RR, then use the classifier that separates a sample's own class from the others to compute the negative margin.
From the approximate feature map Φ̃, the RR classifier is obtained by solving

minimize_{W,b}  ‖Φ̃^⊤ W + 1 b^⊤ − Y‖_F^2 + ρ ‖W‖_F^2                              (5)

where Y ∈ R^{N×L} is the class indicator matrix, with L being the number of classes, and 1 denotes the all-ones vector.
Since any Nyström feature map can be written as Φ̃ = A^⊤ K(I_n, :) for some A ∈ R^{n×k}, after finding the optimal W and b the resulting hypothesis can be applied directly to a test kernel matrix via

h(X_test) = K_test^⊤ A W + 1 b^⊤ = K_test^⊤ Ã + 1 b^⊤,                           (6)

where Ã = AW. This is simply a multi-class variant of the Support Vector Machine and is much cheaper to compute when L ≪ k, as Ã ∈ R^{n×L}. Notice that no matter which Nyström variant or value of k is used, K_test has size n × N_test; that is, the final classifier has O(n) complexity, which depends solely on the number of samples used to compute the feature mapping (i.e., the support vectors).
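A corresponding test-time sketch (again with our assumed names, consuming the outputs of the Algorithm 1 sketch above): once Ã = AW is formed, prediction only needs the n_f kernel evaluations against the selected support vectors, matching the O(n) test complexity of Eq. (6).

```python
import numpy as np

def predict(X_test, support_vectors, A_tilde, b, classes, gamma):
    """Evaluate h(X_test) = K_test^T A_tilde + 1 b^T and return class labels."""
    scores = rbf_kernel(X_test, support_vectors, gamma) @ A_tilde + b
    return classes[np.argmax(scores, axis=1)]

# Usage (hyper-parameter values are illustrative):
# sv_X, A_tilde, b, classes = supervised_nystrom_rr(X, y, 1e-5, 2000, 2000, 500, 500, gamma)
# y_pred = predict(X_test, sv_X, A_tilde, b, classes, gamma)
```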
Summary
Our method consists of two stages. We train an approximate kernel based classifier in the first stage.
In the second stage, we select support vectors based on the approximate negative margins and train a classifier using only pairwise comparisons with selected samples. The Nyström methods described earlier form the backbone of these stages.
The overall methodology is provided in Algorithm 1. Steps 1-2 form the first stage by training RR on features obtained from standard kernel approximation techniques. Steps 3-6 form the second stage by obtaining approximate margin values from the classifier in the first stage, selecting the support vectors, then training RR on features obtained from supervised kernel approximation. The first stage of the algorithm corresponds to training a standard, unsupervised Nyström RR model. Consequently, the novelty of our method comes from the second stage.
When n_0 ≈ n_f and k_0 ≈ k_f, steps 1-2 and 5-6 of the algorithm have similar computational costs, which depend on the Nyström variant used. Steps 3-4 add a combined computational cost of O(N k_0 + N log N), which is negligible when log N ≪ n_0 k_0. Thus, Algorithm 1 has the same overall computational complexity as RR with standard Nyström techniques.
Experiments
Experimental Setup
The datasets used in our experiments are summarized in Table 1. All the datasets, except for the Buzz data, had default train/test splits at the LibSVM [39] or UCI [40] repositories. The Buzz data was randomly split into train/test sets for each independent experiment. The training and testing sets of COD-RNA had duplicate entries, which were removed a priori. We scaled the original features to be in [0, 1] and used RBF kernels, i.e., K(x_i, x_j) = exp(−γ ‖x_i − x_j‖_2^2), for all the experiments. We performed 30 randomized trials for each experiment. The ridge ρ was set to 10^{-5}, as we found it to be a good value across all the datasets. For the Standard and Ensemble Nyström methods, we use k = n to exploit the full space spanned by the chosen subset of samples. We use m = 5 for the Ensemble Nyström. For Nyström with Randomized SVD, we use k = n/2 for the computational gain. We set the oversampling parameter of Randomized SVD to 10 and the power parameter to 2.
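A brief sketch of the preprocessing assumed here (scaling each feature to [0, 1] with training-set statistics is our reading of the setup; the helper names are ours). The scaled matrices are then fed to the RBF kernel with the γ values of Table 1.

```python
import numpy as np

def minmax_scale(X_train, X_test):
    """Scale each feature to [0, 1] using training-set minima/maxima."""
    lo, hi = X_train.min(axis=0), X_train.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)      # guard constant features
    return (X_train - lo) / span, (X_test - lo) / span

# Usage with the earlier helpers, e.g. for USPS (gamma = 0.01 from Table 1):
# Xtr, Xte = minmax_scale(Xtr_raw, Xte_raw)
# Phi, landmarks = nystrom_features(Xtr, n=1000, k=1000, gamma=0.01)
```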
We ran two sets of evaluations. (1) We evaluate the ability of Support Vector Selection to improve Standard Nyström. (2) We demonstrate the improvement of our method with other Nyström variants.
Evaluation of Support Vector Selection with Standard Nyström
We first demonstrate the success of support vector selection with varying degrees of kernel approximation. We set k_f = n_f and k_0 = n_0 in Algorithm 1, with n_0 controlling the quality of the margin approximations during support vector selection, and n_f controlling the number of support vectors in the final model. The results are provided in Figure 1, where the Standard Nyström without support vector selection is denoted by Std Nys. We include comparisons with full-dimensional Kernel SVM (KSVM) and Kernel RR (KRR), except for the Buzz dataset, whose kernel matrix size is too big for the memory. We also include the performances of Random Fourier Features (Fourier) [10].
First, the results show that the accuracy enhancement from support vector selection generally improves with the increase in the quality of the kernel matrix approximation, which agrees with Proposition 1. Furthermore, we found that even with low approximation rank (n 0 ), support vector selection can yield notable improvement. Noticeably on MNIST, with n 0 = 500, we obtain 0.28% accuracy increase over the Standard Nyström, but with n 0 = 10000, the marginal accuracy gain is at most 0.12%. Similarly for Buzz data, the difference in accuracies between n 0 = 100 and n 0 = 2000 is at most 0.15% -though n 0 = 100 already yields 0.9% increase. Hence, the results show that significant predictive performance improvement can be achieved by utilizing a small number of samples for support vector selection, namely, by exploiting very cheap approximations of the margin values.
Second, across all datasets, the Nyström with support vector selection, i.e., the supervised Nyström method, outperforms the Standard Nyström for all but a few dimensions. Particularly on COD-RNA, where the highest accuracy achieved is 95.9%, selecting only 10 and 20 support vectors with n 0 = 500 produces the accuracies of 93.4% and 95.2%, respectively. These are remarkably higher than 84.1% and 92.4%, the performances of the Standard Nyström with 10 and 20 dimensions.
Finally, the results demonstrate that with support vector selection, Nyström methods can reach or exceed the predictive performances of full-dimensional mappings at even lower dimensions. Particularly, on USPS, HAR and Letter, our method outperforms the full-dimensional KRR at 1250, 750 and 1500 dimensions, respectively, while the Standard Nyström does not. Likewise, our method reaches the KSVM performance faster on all the datasets, and outperforms KSVM on USPS, HAR and MNIST by 0.65%, 0.65% and 0.27%, using 1750, 1250 and 10000 dimensions, respectively.
Improvement Over Other Nyström Variants
In this section, we analyze the improvement of our support vector selection upon the Standard, Ensemble, and Randomized SVD variants of Nyström, denoted by Nys, ENys and RNys, respectively.
Although lower values of n_0 were shown to be viable in the previous section, we set n_0 = n_f. This is a suitable scenario for training kernel machines under memory constraints. Since the two stages of our algorithm have similar computational costs under this setting, this roughly doubles the total training time over unsupervised Nyström but does not change the overall complexity. As another comparison, we include the K-Means Nyström with the same settings as [11], denoted by KNys.
The model sizes and training times of RR classifiers with different Nyström variants are demonstrated in Figures 2 and 3, respectively. Nys, ENys, RNys and KNys are the results without support vector selection (dashed curves), whereas Nys+, ENys+ and RNys+ are the results with support vector selection (solid curves). Note that the solid curves (supervised Nyström variants) in Figure 3 show the times spent in the second stage (steps 3-6) of Algorithm 1. The first stage (steps 1-2) applies the same algorithms without support vector selection, the training times for which are shown by the dashed curves. We do not include K-Means Nyström in Figure 3, as the computational overhead from k-means led to its consistent under-performance in terms of training time. The specific observations on each dataset are as follows.
USPS: All Nyström variants are significantly improved by support vector selection for a range of memory/time complexities, before accuracy begins to saturate. Among all compared methods, Nys+ performs the best by outperforming KNys by 0.37% and Nys by 0.44% at 2078 kB model size.
HAR: All Nyström variants are significantly improved by support vector selection after a certain memory/time complexity is reached. Nys+ performs the best and outperforms KNys by 0.39% and Nys by 0.51% at 3322 kB model size.
Letter: All Nyström variants are significantly improved by support vector selection for a wide range of memory/time complexities. ENys+ performs the best and outperforms KNys and ENys by more than 0.67% at 406 kB model size.
COD-RNA: Nys is significantly improved for a range of complexities, before accuracy begins to saturate. ENys and RNys are consistently improved, but to a lesser extent. Nys+ performs the best and outperforms KNys and Nys by more than 0.19% at 21 kB model size, requiring 40% less size to reach within one standard deviation of the saturation accuracy.
MNIST: All Nyström variants are significantly improved for a wide range of complexities. ENys+ performs the best and outperforms KNys by 0.26% and ENys by 0.35% at 406 kB model size.
Buzz: All Nyström variants are significantly improved for a wide range of complexities. Nys+ performs the best and outperforms KNys by 0.83% and Nys by 0.94% at 122 kB model size.
We observe from Figure 2 that among the unsupervised variants, KNys is the best on USPS, HAR, MNIST and Buzz datasets for achieving high accuracy with the smallest model size. However, it is consistently outperformed by Nys+ after supervision is introduced.
Furthermore, Figure 3 shows that steps 1-2 and 3-6 of Algorithm 1 cost roughly the same training times when n 0 = n f , k 0 = k f , and this setting suffices to give significant improvement. Hence, Algorithm 1 can enhance Nyström variants without increasing their overall computational complexity.
Summary
Our experimental observations support the claim that choosing the right solution subspace via support vector selection can be imperative for the success of scalable kernel based learning. More importantly, they illustrate that the negative margin criterion is suitable for selecting the data samples that span a good solution subspace.
In addition, support vector selection consistently improves the performances of Standard Nyström and Nyström with randomized SVD. These are also the variants that directly project the data into a subspace spanned by the chosen samples. Ensemble Nyström introduces scaling across subspace dimensions due to the lack of orthonormality between projection directions, so it does not conform fully to the main idea behind support vector selection. Nevertheless, Ensemble Nyström is also improved on all the datasets, and the improvements are very significant on data where scaling does not have an adverse effect on the performance.
Discussion
Although our method works well for classification on the 6 datasets, there are certain ways the support vector selection strategy can be improved and extended:
• To ensure that successive support vectors add more information on datasets with highly similar/redundant samples, a dissimilarity, or orthogonality criterion can be employed as in [41], in addition to the negative margin.
• A mechanism can be added to remove the outlier support vectors before selection. A simple way to do this would be to apply a threshold on the negative margin.
• The support vector selection method can be applied to regression tasks, by replacing the negative margin with the least-squares error as the selection criterion.
We leave these modifications to the algorithm, as well as application of our method with different Machine Learning models for future work. Nonetheless, we present possible extensions with K-Means Nyström in Appendix B, and a scheme to deal with data redundancy in Appendix C.
Conclusion
We have proposed a supervised sample selection methodology for Nyström methods to improve their predictive performances. Our selection method, inspired by the dual formulations of multiple machine learning models, successfully improves the classification performance obtained from Nyström variants, and leads to better classifiers in test time in terms of both accuracy and complexity. Moreover, our method allows this improvement to be achieved at a cost no more than that of training a classifier using standard Nyström techniques.
A Nyström Variants
A.1 Ensemble Nyström
Ensemble Nyström [12] performs multiple smaller dimensional KPCA mappings, instead of a single large one. For m experts, each using non-overlapping subsets of n′ = n/m data samples, and with k′ = k/m, this algorithm produces a rank-k approximation of the kernel matrix given by

K̃ = Σ_{i=1}^m µ_i C^{(i)} (B^{(i)}_{k′})^+ (C^{(i)})^⊤ = C [blkdiag({(1/µ_i) B^{(i)}_{k′}}_{i=1}^m)]^+ C^⊤,              (7)

where {µ_i}_{i=1}^m are positive expert weights that add up to 1¹, blkdiag(·) produces a block-diagonal matrix from its inputs, C = [C^{(1)} C^{(2)} · · · C^{(m)}], C^{(i)} = K(:, I^{(i)}_{n′}), and I^{(i)}_{n′} denotes the index set of the n′ samples used by the i-th expert. This approximation is equivalent to computing m KPCAs over the non-overlapping subsets of samples, and concatenating the resulting feature mappings applied to the training data,
Φ̃ = concat({ √µ_i (Σ^{(i)})^{-1/2} (U^{(i)})^⊤ (C^{(i)})^⊤ }_{i=1}^m),            (8)

where concat(·) is row-wise concatenation, and B^{(i)}_{k′} = U^{(i)} Σ^{(i)} (U^{(i)})^⊤ is the compact SVD of B^{(i)}_{k′}.
Dividing the algorithm into m smaller KPCAs reduces its computational complexity to O(Nnk/m + n^3/m^2). Unlike the Standard Nyström (i.e., m = 1), however, the resulting feature projections will almost never be mutually orthonormal.
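A minimal sketch (ours) of the Ensemble Nyström map of Eqs. (7)-(8): m independent KPCA blocks over disjoint landmark subsets, scaled by √µ_i and concatenated; rbf_kernel is the helper sketched earlier, and the uniform weights µ_i = 1/m follow footnote 1.

```python
import numpy as np

def ensemble_nystrom_features(X, n, k, m, gamma, seed=0):
    """Concatenate m rank-(k/m) KPCA blocks built on disjoint landmark subsets."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(X.shape[0], size=n, replace=False)
    blocks = np.array_split(idx, m)              # disjoint subsets of n' = n/m landmarks
    mu = 1.0 / m                                 # uniform expert weights (see footnote 1)
    feats = []
    for I in blocks:
        C = rbf_kernel(X, X[I], gamma)           # N x n'
        w, U = np.linalg.eigh(C[I])              # eigendecomposition of B^(i)
        order = np.argsort(w)[::-1][:k // m]     # rank k' = k/m per expert
        w, U = np.maximum(w[order], 1e-12), U[:, order]
        feats.append(np.sqrt(mu) * (C @ U / np.sqrt(w)))
    return np.hstack(feats), [X[I] for I in blocks]
```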
A.2 Nyström with Randomized SVD
Nyström with Randomized SVD [13,15] speeds up the SVD in the Nyström algorithms via a randomized algorithm proposed in [28]. This reduces the computational complexity of Standard Nyström with rank reduction to O(Nnk + n^2 k + k^3). It can also be applied to speed up the individual KPCAs in Ensemble Nyström, though we apply this method only to Standard Nyström in this paper. In order to obtain a considerable speed up, k needs to be much smaller than n. Although rank reduction further speeds up classifier training, it does not affect the complexity of the final model in test time, and generally does not enhance the predictive performance.
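A sketch of this variant with a basic randomized eigendecomposition of B (range finder with oversampling and power iterations in the spirit of [28]); the routine below is a simplified stand-in rather than the exact algorithm of [13,15], and rbf_kernel is the helper sketched earlier.

```python
import numpy as np

def randomized_eigh(B, k, oversample=10, power=2, seed=0):
    """Approximate the top-k eigenpairs of a symmetric PSD matrix B."""
    rng = np.random.default_rng(seed)
    Q = rng.standard_normal((B.shape[0], k + oversample))
    for _ in range(power + 1):                   # subspace (power) iterations
        Q, _ = np.linalg.qr(B @ Q)
    w, V = np.linalg.eigh(Q.T @ B @ Q)           # small (k+p) x (k+p) problem
    order = np.argsort(w)[::-1][:k]
    return w[order], (Q @ V)[:, order]

def rsvd_nystrom_features(X, n, k, gamma, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.choice(X.shape[0], size=n, replace=False)
    C = rbf_kernel(X, X[idx], gamma)
    w, U = randomized_eigh(C[idx], k)
    w = np.maximum(w, 1e-12)
    return C @ U / np.sqrt(w), X[idx]
```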
A.3 K-Means Nyström
K-Means Nyström [11] replaces uniform sampling with k-means. The representative set of samples is chosen to be the n cluster centroids produced by k-means, which runs on a larger subset of the training data. It produces a rank-k approximation of the kernel matrix given by K̃ = C B_k^+ C^⊤. In this case, C is an N × n kernel matrix containing pairwise similarities between the training samples and the cluster centroids, while B is an n × n kernel matrix containing pairwise similarities between just the cluster centroids.
This method is the same as computing KPCA using the cluster centroids and applying the resulting feature mapping to the training data. Running a fixed number of k-means iterations introduces additional overhead, but does not change the overall complexity over the Standard Nyström algorithm.
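A short sketch (ours) of the K-Means Nyström map: cluster a random subset of the data, then use the n centroids as landmarks for the same KPCA-style projection. scikit-learn's KMeans is used purely for illustration; rbf_kernel is the helper sketched earlier.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_nystrom_features(X, n, k, gamma, subset=20000, seed=0):
    rng = np.random.default_rng(seed)
    sub = X[rng.choice(X.shape[0], size=min(subset, X.shape[0]), replace=False)]
    centroids = KMeans(n_clusters=n, n_init=3, random_state=seed).fit(sub).cluster_centers_
    C = rbf_kernel(X, centroids, gamma)              # similarities to the centroids
    B = rbf_kernel(centroids, centroids, gamma)
    w, U = np.linalg.eigh(B)
    order = np.argsort(w)[::-1][:k]
    w, U = np.maximum(w[order], 1e-12), U[:, order]
    return C @ U / np.sqrt(w), centroids
```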
B K-Means Nyström Experiments B.1 Supervising K-Means Nyström
For K-Means Nyström on large datasets, it is standard to train the k-means with a subset of the data sampled uniformly at random. We select this subset of samples using the negative margin, before training the final classifier. Then K-Means is applied to the chosen subset of samples in step 5 of Algorithm 1. This procedure significantly alters the way supervision is applied, which for other Nyström variants, is to select the support vectors directly using their approximate margin values.
In K-Means Nyström, the data is projected to a subspace spanned by the cluster centroids, which may be different from the subspace spanned by the samples themselves. Therefore, this Nyström variant is not as suitable for the application of support vector selection. Nonetheless, we provide the results for supervised K-Means Nyström here for comparison.
B.2 Experimental Setup
Similar to our experiments with Standard and Ensemble Nyström, we set k = n to exploit the full space spanned by the representative samples, in this case, the cluster centroids.
It was suggested in [11] to perform K-Means with 20000 randomly chosen samples on large datasets. Hence, we do so on Shuttle, COD-RNA, MNIST and Buzz data. For the implementation of K-Means Nyström with support vector selection (KNys+), we select 5000 samples from HAR, 10000 samples from Letter, and 20000 samples from all the other datasets as inputs to k-means. Therefore, the inputs of k-means end up being supervised, instead of the inputs of KPCA. We set n 0 = n f , meaning the same numbers of cluster centroids were used to train KPCA in the first and second stages of Algorithm 1.
B.3 Results
As demonstrated in Figure 4, obtaining k-means centroids from a chosen subset of samples only leads to improvement on 2 of the 6 datasets; Letter and MNIST. This is despite the fact that the chosen support vectors work very well on all 6 datasets, when they are not clustered by k-means. A possible explanation for this is that centroids retain high negative margins on Letter and MNIST, whereas they lose this characteristic in the kernel feature space on the other datasets. Therefore, clustering the support vectors after selection can be less effective, which prompts us to perform selection after clustering, that is, to ensure that the chosen cluster centers have high negative margins.
In the next section, we consider supervised selection being applied to the outputs instead of the inputs of the k-means. We find that this approach works well, and can be a good way to deal with data redundancy by ensuring that successive support vectors add more information to the model.
C Support Centroid Selection for K-Means Nyström
Some datasets may contain a large number of similar samples, which could be problematic for our support vector selection method. For instance, if a dataset contains duplicate entries, they will be given the same margin values, which can lead to the same projection directions being applied multiple times. While standard Nyström techniques can alleviate data redundancy to a large extent by sampling uniformly at random, our support vector selection scheme needs to be modified to work well on datasets with redundant entries.

In this section, we modify the K-Means Nyström to perform support vector selection among the cluster centroids, which is a possible way to deal with redundancy. We then show the results on 3 datasets, where this method leads to improvement, while support vector selection alone does not.
C.1 Supervised Centroid Selection
To perform centroid selection using the negative margin, the k-means centroids need to be assigned labels. This is done by a voting scheme. The label of each cluster centroid is determined by the majority vote of the members of its cluster. Afterwards, negative margin based support vector selection is applied to the cluster centroids, instead of the original training samples.
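A minimal sketch (with assumed names) of this voting scheme: each centroid inherits the majority label of its cluster, after which the negative-margin selection of Algorithm 1 can be run on the labeled centroids instead of the raw training samples.

```python
import numpy as np

def label_centroids_by_vote(labels, assignments, n_clusters):
    """assignments[i] is the cluster index of training sample i; assumes
    every cluster has at least one member."""
    centroid_labels = np.empty(n_clusters, dtype=labels.dtype)
    for c in range(n_clusters):
        members = labels[assignments == c]
        vals, counts = np.unique(members, return_counts=True)
        centroid_labels[c] = vals[np.argmax(counts)]   # majority vote
    return centroid_labels
```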
C.2 Experimental Setup
We perform the centroid selection experiments on 3 different datasets, which are summarized in Table 2. 20000 samples are used for the k-means clustering on SVHN and IJCNN, and 50000 samples are used for the k-means clustering on CovType. For K-Means Nyström with support centroid selection (KNys+), we begin with 3n_f clusters, then n_f of the centroids are selected with the negative margin criterion. To approximate the margins in the first step of Algorithm 1, we use n_0 randomly chosen centroids. The default setting n_0 = n_f is used for our experiments in this section.

C.3 Results

Figure 5 compares the accuracies of Standard Nyström (Nys), Nyström with support vector selection (Nys+), K-Means Nyström (KNys), and K-Means Nyström with support centroid selection (KNys+). The specific observations on the datasets are as follows.

SVHN: KNys+ performs the best and outperforms Nys and KNys by more than 1% above 5000 kB model size.

IJCNN: KNys+ performs the best and outperforms Nys and KNys above 200 kB model size by more than 0.05%, which is less significant. However, we note that the baseline performances on this dataset are very high, being above 99% for the most part.

CovType: KNys+ performs the best and outperforms Nys and KNys by 0.1% to 0.27% above 1000 kB model size. The improvement increases as the model size increases for the range of sizes displayed.
Support vector selection originally fails to improve the predictive accuracy of Standard Nyström on these datasets. However, thanks to the reduction in sample redundancy via k-means clustering, K-Means Nyström with support centroid selection outperforms both Standard and K-Means Nyström on all three datasets for a wide range of complexities. This is especially significant on SVHN, since K-Means Nyström normally performs worse than Standard Nyström on this data.
Reducing data redundancy via k-means comes at a computational cost, which can be significant compared to the training time of Standard Nyström. Future work will incorporate redundancy reduction into the selection procedure, which might be done at a low computational cost.
Figure 1: Number of features (n_f) vs. prediction accuracy. Accuracies of the final classifier generally increase with the initial approximation rank n_0. Supervised Nyström features outperform the Standard Nyström and Random Fourier features for various dimensions across the 6 datasets.

Figure 3: Training time vs. prediction accuracy. The supervised variant curves show the additional times spent for support vector selection and training of the final classifier (steps 3-6 in Algorithm 1).

Figure 4: Size of the classifier model vs. prediction accuracy, using K-Means Nyström and K-Means Nyström with support vector selection.

Figure 5: Size of the classifier model vs. prediction accuracy, using Standard Nyström, Nyström with support vector selection, K-Means Nyström, and K-Means Nyström with support centroid selection.
Table 1: Summary of the datasets used in the experiments; γ is the RBF kernel parameter.

Dataset        # Features   # Training   # Testing   # Classes   γ
USPS [33]      256          7291         2007        10          0.01
HAR [34]       561          7352         2947        6           0.01
Letter [35]    16           15000        5000        26          1.0
COD-RNA [36]   8            49451        97824       2           1.0
MNIST [37]     784          60000        10000       10          0.01
Buzz [38]      77           120000       20707       2           0.001
Figure 2: Size of the classifier model (memory in kB) vs. prediction accuracy (%), with panels for USPS, HAR, Letter, COD-RNA, MNIST, and Buzz. For the best accuracy/model size trade-off, the supervised Standard Nyström (Nys+) is the overall best method to use.
Figure 3: Training time (s) vs. prediction accuracy (%), with panels for USPS, HAR, Letter, COD-RNA, MNIST, and Buzz (see the Figure 3 caption above).
Table 2: Summary of the datasets used in additional experiments; γ is the RBF kernel parameter.

Dataset        # Features   # Training   # Testing   # Classes   γ
SVHN [42]      3072         73257        26032       10          0.001
IJCNN [39]     22           120000       21691       2           0.1
CovType [43]   54           500000       81012       7           1.0
¹ Computing the ensemble weights introduces additional overhead and lacks significant benefits for the task of classification; therefore, the weights can be set to 1/m.
[1] B. Schölkopf, K. Tsuda, and J.-P. Vert, Kernel methods in computational biology. Cambridge, MA, USA: MIT Press, Jul. 2004.
[2] T. Joachims, Learning to classify text using support vector machines: Methods, theory and algorithms, vol. 186. Norwell, MA, USA: Kluwer Academic Publishers, 2002.
[3] B. Schuller, G. Rigoll, and M. Lang, "Speech emotion recognition combining acoustic features and linguistic information in a hybrid support vector machine-belief network architecture," in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 1, pp. I-577-80, IEEE, May 2004.
[4] E. Osuna, R. Freund, and F. Girosit, "Training support vector machines: An application to face detection," in Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 130-136, IEEE, Jun. 1997.
[5] I. Guyon, J. Weston, S. Barnhill, and V. Vapnik, "Gene selection for cancer classification using support vector machines," Machine Learning, vol. 46, pp. 389-422, Jan. 2002.
[6] V. Vapnik, The nature of statistical learning theory. Verlag, NY, USA: Springer, 2nd ed., 2000.
[7] S.-Y. Kung, Kernel methods and machine learning. New York, NY, USA: Cambridge University Press, 2014.
[8] M. Girolami, "Orthogonal series density estimation and the kernel eigenvalue problem," Neural Computation, vol. 14, pp. 669-688, Mar. 2002.
[9] P. Drineas and M. W. Mahoney, "On the Nyström method for approximating a gram matrix for improved kernel-based learning," Journal of Machine Learning Research, vol. 6, pp. 2153-2175, Dec. 2005.
[10] A. Rahimi and B. Recht, "Random features for large-scale kernel machines," in Proc. Advances in Neural Information Processing Systems, pp. 1177-1184, Dec. 2008.
[11] K. Zhang, I. W. Tsang, and J. T. Kwok, "Improved Nyström low-rank approximation and error analysis," in Proc. International Conference on Machine Learning, pp. 1232-1239, ACM, Jul. 2008.
[12] S. Kumar, M. Mohri, and A. Talwalkar, "Ensemble Nyström method," in Proc. Advances in Neural Information Processing Systems, pp. 1060-1068, Dec. 2009.
[13] M. Li, J. T.-Y. Kwok, and B. Lü, "Making large-scale Nyström approximation possible," in Proc. International Conference on Machine Learning, p. 631, Jun. 2010.
[14] S. Si, C.-J. Hsieh, and I. Dhillon, "Memory efficient kernel approximation," in Proc. International Conference on Machine Learning, pp. 701-709, Jun. 2014.
[15] M. Li, W. Bi, J. T. Kwok, and B.-L. Lu, "Large-scale Nyström kernel matrix approximation using randomized SVD," IEEE Transactions on Neural Networks and Learning Systems, vol. 26, pp. 152-164, Jan. 2015.
[16] A. Rahimi and B. Recht, "Weighted sums of random kitchen sinks: Replacing minimization with randomization in learning," in Proc. Advances in Neural Information Processing Systems, pp. 1313-1320, Dec. 2009.
[17] Q. Le, T. Sarlós, and A. Smola, "Fastfood-approximating kernel expansions in loglinear time," in Proc. International Conference on Machine Learning, vol. 85, Jun. 2013.
[18] Z. Yang, A. Wilson, A. Smola, and L. Song, "A la carte-learning fast kernels," in Proc. Artificial Intelligence and Statistics, pp. 1098-1106, May 2015.
[19] D. Tao, X. Tang, X. Li, and X. Wu, "Asymmetric bagging and random subspace for support vector machines-based relevance feedback in image retrieval," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, pp. 1088-1099, Jul. 2006.
[20] S.-Y. Kung and M.-W. Mak, "Feature selection for self-supervised classification with applications to microarray and sequence data," IEEE Journal of Selected Topics in Signal Processing, vol. 2, pp. 297-309, Jun. 2008.
[21] A. Rudi and L. Rosasco, "Generalization properties of learning with random features," in Proc. Advances in Neural Information Processing Systems, pp. 3218-3228, Dec. 2017.
[22] T. Yang, Y.-F. Li, M. Mahdavi, R. Jin, and Z.-H. Zhou, "Nyström method vs random fourier features: A theoretical and empirical comparison," in Proc. Advances in Neural Information Processing Systems, pp. 476-484, May 2012.
[23] P. Drineas, M. Magdon-Ismail, M. W. Mahoney, and D. P. Woodruff, "Fast approximation of matrix coherence and statistical leverage," Journal of Machine Learning Research, vol. 13, pp. 3475-3506, Dec. 2012.
[24] A. Gittens and M. W. Mahoney, "Revisiting the Nyström method for improved large-scale machine learning," Journal of Machine Learning Research, vol. 17, pp. 3977-4041, Apr. 2016.
[25] S. Kumar, M. Mohri, and A. Talwalkar, "Sampling methods for the Nyström method," Journal of Machine Learning Research, vol. 13, pp. 981-1006, Apr. 2012.
[26] G. Wahba, Spline models for observational data, vol. 59. Philadelphia, PA, USA: SIAM, 1990.
[27] B. Schölkopf, R. Herbrich, and A. J. Smola, "A generalized representer theorem," in Proc. International Conference on Computational Learning Theory, pp. 416-426, Springer, Jul. 2001.
[28] N. Halko, P.-G. Martinsson, and J. A. Tropp, "Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions," SIAM Review, vol. 53, pp. 217-288, May 2011.
[29] B. Schölkopf, R. C. Williamson, A. J. Smola, J. Shawe-Taylor, and J. C. Platt, "Support vector method for novelty detection," in Proc. Advances in Neural Information Processing Systems, pp. 582-588, Dec. 2000.
[30] J. Friedman, T. Hastie, and R. Tibshirani, The elements of statistical learning, vol. 1. Verlag, NY, USA: Springer, 2nd ed., 2001.
[31] A. Rudi, R. Camoriano, and L. Rosasco, "Less is more: Nyström computational regularization," in Proc. Advances in Neural Information Processing Systems, pp. 1657-1665, Dec. 2015.
[32] C. Cortes, M. Mohri, and A. Talwalkar, "On the impact of kernel approximation on learning accuracy," in Proc. International Conference on Artificial Intelligence and Statistics, pp. 113-120, May 2010.
[33] J. J. Hull, "A database for handwritten text recognition research," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, pp. 550-554, May 1994.
[34] D. Anguita, A. Ghio, L. Oneto, X. Parra, and J. L. Reyes-Ortiz, "A public domain dataset for human activity recognition using smartphones," in Proc. European Symposium on Artificial Neural Networks, Apr. 2013.
[35] P. W. Frey and D. J. Slate, "Letter recognition using holland-style adaptive classifiers," Machine Learning, vol. 6, pp. 161-182, Mar. 1991.
Detection of non-coding rnas on the basis of predicted secondary structure formation free energy change. A V Uzilov, J M Keegan, D H Mathews, BMC Bioinformatics. 7173A. V. Uzilov, J. M. Keegan, and D. H. Mathews, "Detection of non-coding rnas on the basis of predicted secondary structure formation free energy change," BMC Bioinformatics, vol. 7, p. 173, Mar. 2006.
Gradient-based learning applied to document recognition. Y Lecun, L Bottou, Y Bengio, P Haffner, Proceedings of the IEEE. 86Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recogni- tion," Proceedings of the IEEE, vol. 86, pp. 2278-2324, Nov. 1998.
Prédiction de l'activité dans les réseaux sociaux. F Kawala, Université Grenoble AlpesPhD thesisF. Kawala, Prédiction de l'activité dans les réseaux sociaux. PhD thesis, Université Grenoble Alpes, 2015.
Libsvm: a library for support vector machines. C.-C Chang, C.-J Lin, ACM Transactions on Intelligent Systems and Technology. 227C.-C. Chang and C.-J. Lin, "Libsvm: a library for support vector machines," ACM Transactions on Intel- ligent Systems and Technology, vol. 2, p. 27, Apr. 2011.
Uci machine learning repository. K Bache, M Lichman, K. Bache and M. Lichman, "Uci machine learning repository," 2013.
Greedy spectral embedding. M Ouimet, Y Bengio, Proc. International Conference on Artificial Intelligence and Statistics. International Conference on Artificial Intelligence and StatisticsM. Ouimet and Y. Bengio, "Greedy spectral embedding.," in Proc. International Conference on Artificial Intelligence and Statistics, Jan. 2005.
Reading digits in natural images with unsupervised feature learning. Y Netzer, T Wang, A Coates, A Bissacco, B Wu, A Y Ng, Proc. NIPS Workshop on Deep Learning and Unsupervised Feature Learning. NIPS Workshop on Deep Learning and Unsupervised Feature LearningY. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng, "Reading digits in natural images with unsupervised feature learning," in Proc. NIPS Workshop on Deep Learning and Unsupervised Feature Learning, no. 2, p. 5, Dec. 2011.
Comparative accuracies of neural networks and discriminant analysis in predicting forest cover types from cartographic variables. J A Blackard, D J Dean, Proc. Southern Forestry GIS Conference. Southern Forestry GIS ConferenceJ. A. Blackard and D. J. Dean, "Comparative accuracies of neural networks and discriminant analysis in predicting forest cover types from cartographic variables," in Proc. Southern Forestry GIS Conference, pp. 189-199, Oct. 1998.
|
[] |
[
"The Shape of Polarized Gluon Distributions",
"The Shape of Polarized Gluon Distributions"
] |
[
"T Morii [email protected]††[email protected] ",
"S Tanaka ",
"T Yamanishi ",
"\nFaculty of Human Development\nDivision of Sciences for Natural Environment and Graduate School of Science and Technology\nFaculty of Human Development\nDivision of Sciences for Natural Environment\nKobe University\n657Nada, KobeJapan\n",
"\nGraduate School of Science and Technology\nKobe University\n657Nada, KobeJapan\n",
"\nKobe University\n657Nada, KobeJapan\n"
] |
[
"Faculty of Human Development\nDivision of Sciences for Natural Environment and Graduate School of Science and Technology\nFaculty of Human Development\nDivision of Sciences for Natural Environment\nKobe University\n657Nada, KobeJapan",
"Graduate School of Science and Technology\nKobe University\n657Nada, KobeJapan",
"Kobe University\n657Nada, KobeJapan"
] |
[] |
The recent high precision SMC data on polarized µp scatterings have again confirmed that very little of the proton spin is carried by quarks. To unravel the mystery of the proton spin structure, it is quite important to know the behavior of the polarized gluon distribution.By using the positivity condition of distribution functions together with the unpolarized and polarized experimental data, we restrict the x dependence of the polarized gluon distribution. †
|
10.1007/bf01553988
|
[
"https://export.arxiv.org/pdf/hep-ph/9411228v1.pdf"
] | 204,926,878 |
hep-ph/9411228
|
563ad979bb2f3afd7d60a8c9226126e61ad1036d
|
The Shape of Polarized Gluon Distributions
5 Nov 1994 October 20 1994
T Morii [email protected]††[email protected]
S Tanaka
T Yamanishi
Faculty of Human Development
Division of Sciences for Natural Environment and Graduate School of Science and Technology
Faculty of Human Development
Division of Sciences for Natural Environment
Kobe University
657Nada, KobeJapan
Graduate School of Science and Technology
Kobe University
657Nada, KobeJapan
Kobe University
657Nada, KobeJapan
The Shape of Polarized Gluon Distributions
5 Nov 1994 October 20 1994
The recent high precision SMC data on polarized µp scatterings have again confirmed that very little of the proton spin is carried by quarks. To unravel the mystery of the proton spin structure, it is quite important to know the behavior of the polarized gluon distribution.By using the positivity condition of distribution functions together with the unpolarized and polarized experimental data, we restrict the x dependence of the polarized gluon distribution. †
The SMC has recently measured the spin-dependent proton structure function g_1^p(x) more precisely, and down to the smaller-x region x = 0.006, than the previous measurements carried out by the EMC [2]. The experiment indicates that the first moment of g_1^p(x) increases by about 10% compared to the EMC result, and yet that value is still far from the value predicted by the nonrelativistic quark model and the one from the Ellis-Jaffe sum rule [3]. By combining these SMC data with the experimental data of the neutron β-decays and hyperon β-decays, the polarized strange quark density in the proton is derived as follows:

∆s = −0.12 ± 0.04 ± 0.04 .   (1)
On the other hand, Preparata, Ratcliffe and Soffer have shown that a bound on the value of ∆s can be obtained by requiring the positivity of distribution functions and assuming a reasonable behavior of the unpolarized s-quark distribution s(x) [4]. Quite contrary to the SMC result, they got

|∆s| ≤ 0.021 ± 0.001 ,   (2)
by using the s(x) derived from the νN deep-inelastic scattering experiments [5]. Furthermore, similar results were obtained by Preparata and Soffer who indicated the following bound on the polarized s-quark density [6]:
|∆s| ≤ 0.05^{+0.02}_{−0.05} ,   (3)
using CDHS [7] and WA25 data [8]. At first sight, these bounds seem to be contradictory to the SMC data of eq.(1). There might be, however, a compromising solution. If the gluons contribute to the proton spin through the U A (1) anomaly [9], the left-hand side of eq.(1) should be modified as
∆s → ∆s − (α_S/2π) ∆G ,   (4)
where ∆G denotes the polarization of gluons. Then the bound of |∆s| given by (2) and (3) turns out to be consistent with the SMC data of eq.(1) by taking rather large ∆G (≃ 5 − 6).
Namely, ∆s remains small at the cost of a large ∆G. Moreover, with this prescription quarks carry most of the proton spin, and hence one can realize the quark-parton picture naturally. Therefore it is very important to know the magnitude of ∆G and the x dependence of the polarized gluon distribution δG(x), where ∆G = ∫₀¹ δG(x) dx. So far there have been some interesting studies on the polarized gluon. In the literature, various types of polarized gluon distribution functions have been proposed: some of them have large ∆G (≃ 5−6) [10,11,12] and others have small ∆G (≲ 2−3) [11,12,13,14]. The E581/704 collaboration [15] has measured two-spin asymmetries for inclusive π⁰ production, and it has been claimed, by comparing these data with the theoretical predictions by Ramsey et al. [12], that the large ∆G should be ruled out. However, some people [16] have pointed out that the calculations significantly depend on the shape of the polarized gluon distribution functions, and hence the large ∆G is not necessarily ruled out but the shape of δG(x) is strongly constrained by the E581/704 data.
In this work, we study the x dependence of the polarized gluon distribution δG(x). In the previous papers [17,18], we have proposed a simple model of polarized distributions of quarks and gluons which reproduce the EMC experimental data well. In this model ∆s was determined to be rather small such as 0.019, which was consistent with the bound of (2) and
(3). As for the magnitude of ∆G, we can fix its value to be 5.32 from the experimental data of the integral value of g p 1 (x). However, as for the x dependence of δG(x), nobody knows the exact form of it at present: there remains a number of unknown factors in δG(x), which cannot be calculated perturbatively. Here by taking account of the plausible behavior of the distribution δG(x) near x ≈ 0 and x ≈ 1, we assume
δG(x) = G_+(x) − G_−(x) = B x^γ (1 − x)^p (1 + C x) ,   (5)
where G_+(x) and G_−(x) are the gluon distributions with helicity parallel and antiparallel to the proton helicity, respectively. We further assume for simplicity G_+(x) ≈ G_−(x) at large x and take C = 0. Then there remain two parameters, γ and p; B is determined from the normalization ∆G = 5.32. We are interested in the behavior of δG(x) under the condition of large ∆G (= 5.32) and study the allowable region of γ and p. In order to implement this, we require the positivity condition of distribution functions and utilize the recent results of several polarization experiments. As a preliminary, to examine the behavior of δG(x) in eq.(5) for various values of γ and p, we vary γ from −0.9 to 0.3 at intervals of 0.3, while we choose p independently as 5, 10, 15, 17 and 20. The results are presented in Fig.1. One can see from this figure that if one takes γ smaller, the peak of the distribution is shifted to smaller x, and if one takes smaller p, the distribution has a broader shoulder. Now, let us get into the discussion on the restriction of the x dependence of δG(x).
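As a small numerical illustration (our own sketch, assuming SciPy is available; the (γ, p) values are simply taken from the grid quoted above), the constant B follows from the normalization ∆G = 5.32 via the Euler Beta function once C = 0, since ∆G = B·B(γ+1, p+1):

```python
# Illustrative sketch (not from the original paper): fixing B from the
# normalization Delta_G = 5.32 for a few trial (gamma, p), with C = 0 so that
# Delta_G = B * Beta(gamma + 1, p + 1).
from scipy.special import beta

DELTA_G = 5.32

for gamma, p in [(-0.9, 20), (-0.6, 17), (-0.3, 15), (0.0, 5)]:
    B = DELTA_G / beta(gamma + 1.0, p + 1.0)
    print(f"gamma = {gamma:+.1f}, p = {p:2d}  ->  B = {B:10.2f}")
```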
(i) First, we consider the positivity condition of distribution functions to restrict γ and p. As for the unpolarized gluon distribution G(x), we assume
G(x) = G_+(x) + G_−(x) = A x^{−α} (1 − x)^k ,   (6)
like in the case of eq.(5). Since G_+(x) and G_−(x) are both positive, we obtain from eqs. (5) and (6)

|B x^γ (1 − x)^p| ≤ A x^{−α} (1 − x)^k .   (7)
From eq. (7) we get

|B| ≤ A x^{−(α+γ)} (1 − x)^{k−p} ,   (8)

which must hold for every x ∈ (0, 1), so that |B| is bounded by the minimum over x of the right-hand side. Using ∆G = B Γ(γ+1)Γ(p+1)/Γ(γ+p+2) and ∫₀¹ xG(x)dx = A Γ(2−α)Γ(k+1)/Γ(k+3−α), this yields

|∆G| ≤ [ Γ(γ+1) Γ(p+1) Γ(k+3−α) (α+γ+p−k)^{α+γ+p−k} ] / [ Γ(γ+p+2) Γ(k+1) Γ(2−α) (α+γ)^{α+γ} (p−k)^{p−k} ] · ∫₀¹ xG(x) dx ,   (9)
To restrict the region of γ and p from this inequality (9) with ∆G = 5.32, we need to know the value of α and k in G(x) and the integral value of xG(x) as well. As for the x dependence of G(x), using experimental data of J/ψ productions for unpolarized muon-nucleon scatterings [19,20], we have two possible types of parameterization of
G(x) at Q² ≃ M²_{J/ψ} GeV²:

Type A:  G(x) = 3.35 x^{−1} (1 − x)^{5.7} ,   (10)
Type B:  G(x) = 2.36 x^{−1.08} (1 − x)^{4.65} .   (11)
For Type A, α is taken to be 1 by considering the ordinary Pomeron P, and parametrized so as to fit the data. On the other hand, α is chosen to be 1.08 in Type B, which is recently derived from the analysis of the experimental data of the total cross section [21]. The graphs of these two distributions are given in Fig.2, where the integral values of xG(x) in eqs. (10) and (11) are both normalized to 0.5 in conformity with the experimental data. Inserting these functions into inequality (9) with ∆G = 5.32, the allowed regions of γ and p are obtained. We have examined (9) for various combinations of γ and p, and the results are given in Table 1 and Fig.3. In Fig.3, the region below the solid or dashed lines is excluded by (9). From this analysis, we conclude that a wide region of γ and p which satisfies the SMC data and the positivity condition simultaneously is allowable with respect to the polarized gluon distribution with large ∆G (= 5.32).
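The bound (9) is easy to evaluate numerically. The following sketch (our own check, not the authors' code) computes the right-hand side of (9) for Type A (α = 1, k = 5.7) and Type B (α = 1.08, k = 4.65) with ∫₀¹ xG(x)dx = 0.5, and tests whether ∆G = 5.32 is compatible; the (γ, p) samples are illustrative and are chosen so that α + γ > 0 and p > k:

```python
# Rough numerical sketch of the positivity bound (9).
from math import gamma as G   # Euler Gamma function

def bound(g, p, alpha, k, xG_moment=0.5):
    a, b = alpha + g, p - k          # both must be positive for this formula
    num = G(g + 1) * G(p + 1) * G(k + 3 - alpha) * (a + b) ** (a + b)
    den = G(g + p + 2) * G(k + 1) * G(2 - alpha) * a ** a * b ** b
    return num / den * xG_moment

for g, p in [(-0.6, 17), (-0.3, 15), (0.0, 10), (0.3, 20)]:
    bA = bound(g, p, 1.00, 5.70)
    bB = bound(g, p, 1.08, 4.65)
    print(f"gamma={g:+.1f} p={p:2d}:  bound A={bA:8.2f}  bound B={bB:8.2f}"
          f"  allowed(A)={bA >= 5.32}  allowed(B)={bB >= 5.32}")
```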
(ii) Second, to restrict further the allowable region of γ and p, we compare our model calculations with the two-spin asymmetries A π 0 LL ( (−) p p ) for inclusive π 0 -productions measured by E581/704 Collaboration using polarized proton (antiproton) beams and polarized proton targets [15]. Taking δG(x) with the combination of (γ, p) which is allowed by the criterion of positivity, we calculate numerically
A π 0 LL ( (−) p p ), where the polarized quark distributions δq i (x),
which are necessary for the calculation of cross sections for some of subprocesses, are taken from ref. [17]. The results are given in Fig.4. From this figure, some combinations of γ and p are excluded. Surviving combinations of (γ, p) are shown in Table 2. Comparing the calculations with the experimental data, we have found that xδG(x) must have a peak at a smaller x than 0.05 and has to decrease very rapidly with increasing x. In short, the experimental data are reproduced well when γ is small and p is large, though it is rather difficult to say which one is the best fitting.
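As a cross-check of this statement (our own observation, not a value quoted from the tables): with C = 0 the combination xδG(x) ∝ x^{γ+1}(1 − x)^p is maximal at

x* = (γ + 1)/(γ + p + 1),

so that (γ, p) = (−0.6, 17) gives x* ≈ 0.023 < 0.05, whereas (γ, p) = (0, 5) gives x* ≈ 0.17, illustrating why small γ and large p are favored by the data.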
(iii) Finally, we look into the spin-dependent structure function of proton g p 1 (x) [1] and that of deuteron g d 1 (x) [22]. The merit of considering these parameters is that g p 1 (x) and g d 1 (x) do not include undetermined fragmentation functions which were included in A π 0 LL ( (−) p p ). Accordingly they are more sensitive to the behavior of δG(x). For this case γ might be bounded below, while this is not the case for the former two cases. For example, for γ = −0.9 the calculated values of xg p 1 (x) seem to deviate from the data for small x regions, x < 0.005. The new SMC data [1] show a tendency for g p 1 (x) to increase for small x, x < 0.01, while the calculated values with γ = −0.9 keep decreasing for such a small x region. It is expected that if γ gets smaller, the discrepancy of g p 1 (x) between the calculated values and the experimental data would become larger. In addition, for g d 1 (x) the calculation with γ = −0.9 does not fit well to the data for 0.01 < x < 0.05. The result of calculation using our δq i (x) and δG(x) with (γ, p) surviving the criteria of cases (i) and (ii) is shown in Fig.5 and Table 3.
In summary, in the models with large ∆G (= 5.32), we have studied the shape of the polarized gluon distribution. By using the positivity condition of distribution functions together with the experimental data on the two-spin asymmetries A_{LL}^{π⁰} (for pp and p̄p collisions) and the spin-dependent structure functions g_1^p(x) and g_1^d(x), we have restricted the x dependence of δG(x) as given in eq. (5). As for the magnitude of γ, −0.6 ≲ γ ≲ −0.3 seems favorable in our analysis, and with respect to p we obtain the bound that p should be larger than 15. In other words, if γ and p are fixed in this region, for example as γ = −0.6 and p = 17, one can reproduce all existing data quite successfully. Needless to say, the ∆s of eq.(1) can be reconciled with the bound of (2) or (3) with large ∆G (= 5.32). However, at present we do not know the theoretical grounds for the origin of these values of γ and p: in the Regge terminology, the value of γ restricted above happens to be closer to the one for unpolarized valence quark distributions rather than for unpolarized gluon distributions [23], and p seems to be inconsistent with the prediction of counting rules [24]. To understand the origin of such γ and p is beyond the scope of this work and needs further investigation. Furthermore, if ∆G is so large (≃ 5 − 6), we are to have an approximate relation L^z_{q+G} ≃ −∆G from the proton spin sum rule, 1/2 = (1/2)∆Σ + ∆G + L^z_{q+G}, where (1/2)∆Σ represents the sum of the spin carried by quarks. Unfortunately, nobody knows the underlying physics of it. These are still problems to be solved even though the idea of the U_A(1) anomaly is attractive.
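For orientation (a back-of-the-envelope evaluation, not a fitted number): solving the sum rule for the orbital part gives L^z_{q+G} = 1/2 − (1/2)∆Σ − ∆G; with ∆G = 5.32 and a small quark contribution (1/2)∆Σ of order 0.1, this is L^z_{q+G} ≈ −4.9, i.e. L^z_{q+G} ≃ −∆G as stated.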
It is informative to comment on another approach that has addressed this problem.
Recently Brodsky, Burkardt and Schmidt (BBS) [13] have proposed an interesting model of the polarized gluon distribution which incorporates color coherence and counting rule at small and large x. At x ≈ 0, the color coherence argument gives δG(x)/G(x) ≈ x 3 1 y with 1 y ≃ 3, where 1 y presents the first inverse moment of the quark light-cone momentum fraction distributions in the lowest Fock state of the proton, and leads to a relation γ = −α + 1 [13,14]. Then, contrary to our result, −0.6 < ∼ γ < ∼ −0.3, they have taken γ = 0 by choosing α = 1 which is an ordinary Pomeron intercept value. In terms of Regge theory, γ = 0 can be interpreted as follows: δG(x) at x ≈ 0 is governed by the A 1 trajectory. Although the integrated value of δG(x) in the BBS model is small such as ∆G = 0.45, the model is successful in explaining the EMC data g p 1 (x), g n 1 (x) and g d 1 (x). In addition, we calculated A π 0 LL ( (−) p p ) by using the BBS model and found that the model could reproduce A π 0 LL (pp) while the predicted value of A π 0 LL (pp) slightly deviated from the data [18]. The BBS model which has small ∆G (= 0.45) seems to be an alternative to our model which has large ∆G (= 5.32), though in the BBS model the apparent inconsistency between ∆s of eq.(1) and the bound of (2) or (3) remains to be unsolved.
It is very important to know the behavior of δG(x) and δs(x) in order to understand the spin structure of the proton. However, the polarization experiments are still in their early stages and, disappointingly, the form of these functions is not yet clear. We hope they will be determined in forthcoming experiments.
Figure captions

Fig. 1: The x dependence of the spin-dependent gluon distribution function xδG(x, Q²) at Q² = 10 GeV² for various p (= 5 − 20) with (a) γ = 0.3, (b) γ = −0.6 and (c) γ = −0.9.

Fig. 2: The parametrization of the gluon distribution functions xG(x, Q²) at Q² ≈ 10 GeV². The solid (dashed) line denotes Type A (B). The data of open (closed) circles are taken from [19] ([20]).

Fig. 3: The allowed region by (9) for γ and p. The solid (dashed) line corresponds to Type A (B). The region below the lines is excluded.

Fig. 4: The produced π⁰ transverse momentum p_T dependence of A_{LL}^{π⁰} for pp and p̄p collisions for various p (= 5 − 20) with (a) γ = 0, (b) γ = −0.6 and (c) γ = −0.9. Data are taken from [15].

Fig. 5: The x dependence of xg_1^p(x) and xg_1^d(x) for various p (= 5 − 20) with (a) γ = 0, (b) γ = −0.6 and (c) γ = −0.9. The data of xg_1^p(x) (xg_1^d(x)) are taken from [1, 2] ([22]).

Table captions

Table 1: The various combinations of γ and p which we have examined. The circles denote the combinations allowed by (9) whereas the crosses denote the ones excluded by (9). The left-side (right-side) table corresponds to Type A (Type B) of G(x).

Table 2: The various combinations of γ and p which we have examined. The circles denote the combinations allowed by the A_{LL}^{π⁰} data whereas the crosses denote the ones excluded by them. The minuses denote the combinations excluded from Table 1. The left-side (right-side) table corresponds to A_{LL}^{π⁰} for pp collisions (p̄p collisions).

Table 3: The various combinations of γ and p which we have examined. The circles denote the combinations allowed by the spin-dependent structure functions whereas the crosses denote the ones excluded by them. The minuses denote the combinations excluded from Tables 1 and 2. The left-side (right-side) table corresponds to g_1^p(x) (g_1^d(x)).
[1] D. Adams et al. (SMC Collab.), Phys. Lett. B329 (1994) 399.
[2] J. Ashman et al. (EMC Collab.), Phys. Lett. B206 (1988) 364; Nucl. Phys. B328 (1989) 1.
[3] J. Ellis and R. L. Jaffe, Phys. Rev. D9 (1974) 1444; D10 (1974) 1669.
[4] G. Preparata, P. G. Ratcliffe and J. Soffer, Phys. Lett. B273 (1991) 306.
[5] M. Shaevitz (CCFR Collab.), in Proceedings of the Neutrino '90, CERN, 1990.
[6] G. Preparata and J. Soffer, Phys. Rev. Lett. 61 (1988) 1167; 62 (1989) 1213 (E).
[7] H. Abramowicz et al. (CDHS Collab.), Z. Phys. C25 (1984) 29.
[8] D. Allasia et al. (WA25 Collab.), Z. Phys. C28 (1985) 321.
[9] G. Altarelli and G. G. Ross, Phys. Lett. B212 (1988) 391; R. D. Carlitz, J. C. Collins and A. H. Mueller, Phys. Lett. B214 (1988) 229; A. V. Efremov and O. V. Teryaev, in Proceedings of the International Hadron Symposium, 1988, Bechyně, Czechoslovakia, edited by Fischer et al. (Czechoslovakian Academy of Science, Prague, 1989).
[10] G. Altarelli and W. J. Stirling, Part. World 1 (1989) 40; Z. Kunszt, Phys. Lett. B218 (1989) 243; J. Ellis, M. Karliner and C. T. Sachrajda, CERN-TH-5471/89; M. Glück, E. Reya and W. Vogelsang, Phys. Rev. D45 (1992) 2552.
[11] H. Y. Cheng and S. N. Lai, Phys. Rev. D41 (1990) 91.
[12] G. Ramsey and D. Sivers, Phys. Rev. D43 (1991) 2861.
[13] S. J. Brodsky, M. Burkardt and I. Schmidt, "Perturbative QCD constraints on the shape of polarized quark and gluon distributions", preprint 6087 (1994).
[14] T. Gehrmann and W. J. Stirling, Durham preprint DTP/94/38 (1994), to be published in Z. Phys. C.
[15] D. L. Adams et al. (FNAL E581/704 Collab.), Phys. Lett. B261 (1991) 197.
[16] W. Vogelsang and A. Weber, Phys. Rev. D45 (1992) 4069; K. Kobayakawa, T. Morii and T. Yamanishi, Z. Phys. C59 (1993) 251.
[17] K. Kobayakawa, T. Morii, S. Tanaka and T. Yamanishi, Phys. Rev. D46 (1992) 2854.
[18] T. Morii, S. Tanaka and T. Yamanishi, preprint KOBE-FHD-94-01, in Proceedings of the Particle Physics and its Future, 1994, YITP, Japan.
[19] D. Allasia et al. (NMC Collab.), Phys. Lett. B258 (1991) 493.
[20] J. Ashman et al. (EMC Collab.), Z. Phys. C56 (1992) 21.
[21] A. Donnachie and P. V. Landshoff, Nucl. Phys. B231 (1984) 189.
[22] B. Adeva et al. (SMC Collab.), Phys. Lett. B302 (1993) 533.
[23] For example, see R. L. Heimann, Nucl. Phys. B64 (1973) 429; P. Collins, An Introduction to Regge Theory and High-Energy Physics, Cambridge (1977).
[24] V. N. Gribov and L. N. Lipatov, Sov. J. Nucl. Phys. 15 (1972) 438, 675; J. F. Gunion, Phys. Rev. D10 (1974) 242; R. Blankenbecler and S. J. Brodsky, Phys. Rev. D10, (1974)
|
[] |
[
"Vaidya solution and its generalization in de Rham-Gabadadze-Tolley massive gravity",
"Vaidya solution and its generalization in de Rham-Gabadadze-Tolley massive gravity"
] |
[
"Ping Li \nShanghai United Center for Astrophysics (SUCA)\nShanghai Normal University\n100 Guilin Road200234ShanghaiChina\n",
"Xin-Zhou Li \nShanghai United Center for Astrophysics (SUCA)\nShanghai Normal University\n100 Guilin Road200234ShanghaiChina\n",
"Xiang-Hua Zhai \nShanghai United Center for Astrophysics (SUCA)\nShanghai Normal University\n100 Guilin Road200234ShanghaiChina\n"
] |
[
"Shanghai United Center for Astrophysics (SUCA)\nShanghai Normal University\n100 Guilin Road200234ShanghaiChina",
"Shanghai United Center for Astrophysics (SUCA)\nShanghai Normal University\n100 Guilin Road200234ShanghaiChina",
"Shanghai United Center for Astrophysics (SUCA)\nShanghai Normal University\n100 Guilin Road200234ShanghaiChina"
] |
[] |
We present a detailed study of the Vaidya solution and its generalization in de Rham-Gabadadze-Tolley (dRGT) theory. Since the diffeomorphism invariance can be restored with the Stückelberg fields φ a introduced, there is a new invariant I ab = g µν ∂µφ a ∂νφ b in the massive gravity, which adds to the ones usually encountered in general relativity. There is no conventional Vaidya solution if we choose unitary gauge. In this paper, we obtain three types of self-consistent ansatz with some nonunitary gauge, and find accordingly the Vaidya, generalized Vaidya and furry Vaidya solution. As by-products, we obtain a series of furry black hole. The Vaidya solution and its generalization in dRGT massive gravity describe the black holes with a variable horizon.
|
10.1103/physrevd.94.124022
|
[
"https://arxiv.org/pdf/1612.00543v1.pdf"
] | 119,088,296 |
1612.00543
|
05bb18ee22c80d7f13f958a95d9a7c90a874d77b
|
Vaidya solution and its generalization in de Rham-Gabadadze-Tolley massive gravity
2 Dec 2016
Ping Li
Shanghai United Center for Astrophysics (SUCA)
Shanghai Normal University
100 Guilin Road200234ShanghaiChina
Xin-Zhou Li
Shanghai United Center for Astrophysics (SUCA)
Shanghai Normal University
100 Guilin Road200234ShanghaiChina
Xiang-Hua Zhai
Shanghai United Center for Astrophysics (SUCA)
Shanghai Normal University
100 Guilin Road200234ShanghaiChina
Vaidya solution and its generalization in de Rham-Gabadadze-Tolley massive gravity
2 Dec 2016. PACS numbers: 04.50.Kd, 14.70.Kv. Keywords: massive gravity, exact solution, radiation coordinate, variable horizon
We present a detailed study of the Vaidya solution and its generalization in de Rham-Gabadadze-Tolley (dRGT) theory. Since the diffeomorphism invariance can be restored with the Stückelberg fields φ a introduced, there is a new invariant I ab = g µν ∂µφ a ∂νφ b in the massive gravity, which adds to the ones usually encountered in general relativity. There is no conventional Vaidya solution if we choose unitary gauge. In this paper, we obtain three types of self-consistent ansatz with some nonunitary gauge, and find accordingly the Vaidya, generalized Vaidya and furry Vaidya solution. As by-products, we obtain a series of furry black hole. The Vaidya solution and its generalization in dRGT massive gravity describe the black holes with a variable horizon.
I. INTRODUCTION
It is a significant question whether general relativity (GR) is a solitary theory from both the theoretical and phenomenological sides. One of the modifying gravity theories is the massive deformation of GR. A comprehensive review of massive gravity can be found in [1]. We can divide the massive gravity theories into two varieties: the Lorentz invariant type (LI) and the Lorentz breaking type (LB). Though for many years it was certain that the theory of LI massive gravity always contains the Boulware-Deser (BD) ghosts [2], a kind of nonlinear extension was recently constructed by de Rham, Gabadadze and Tolley (dRGT) [3][4][5][6][7]. In GR, the spherically symmetric vacuum solution to the Einstein equation is a benchmark, and its massive deformation also plays a crucial role in LI and LB theories. A detailed study of the spherically symmetric solutions is presented in LB massive gravity [8], in which we obtain a serviceable formula of the solution to the functional differential equation with spherical symmetry. Using this expression, we give some analytical examples and their phenomenological applications. We present also a detailed study of the black hole solutions in dRGT theory [9]. Since the diffeomorphism invariance can be restored with the Stückelberg fields φ^a introduced, there is a new invariant I^{ab} = g^{µν}∂_µφ^a ∂_νφ^b in the massive gravity, which adds to the ones usually encountered in GR. In the unitary gauge φ^a = x^µδ^a_µ, any inverse metric g^{µν} that has a divergence, including the coordinate singularity in GR, would exhibit a singularity in the invariant I^{ab}. Therefore, there is no conventional Schwarzschild metric if one selects unitary gauge. In Ref. [9], we obtain a self-consistent ansatz in the nonunitary gauge, and find that there are seven solutions including the Schwarzschild solution, Reissner-Nordström solution and five other ones. Furthermore, these solutions may become candidates for black holes in dRGT.
The symmetric tensor field h µν ≡ g µν − η µν is the gravitational analogue to the Proca field in the massive electrodynamics, describing all five modes of the massive graviton. With the four Stückelberg fields introduced [10] and the Minkowski metric replaced by the covariant tensor ∂ µ φ a ∂ ν φ b η ab , the diffeomorphism invariance can be restored, then the symmetric tensor H µν describes the covariantized metric perturbation. In the unitary gauge, H µν reduces to h µν . There is a new basic invariant I ab = g µν ∂ µ φ a ∂ ν φ b in the massive gravity in addition to the ones usually encountered in GR since the existence of the four scalar fields φ a . In the unitary gauge, we have I ab = g µν δ a µ δ b ν . It is obvious that I ab will exhibit a singularity if g µν has any divergence including the coordinate singularity for the unitary gauge. De Rham and his colleagues [11] have pointed out that one would expect the singularities in I ab to be a problem for fluctuations around classical solutions exhibiting it. For this reason, they propose that the solution come true only if I ab is nonsingular. In this paper, we continue to use this conservative rule.
As a corollary of the above point of view, there is no conventional Schwarzschild metric of massive gravity in unitary gauge, which gives rise to the following paradox. According to the vainshtein mechanism [12], this solution of massive gravity should approximate the one of GR better and better when we increase the mass of the source. That is to say, this black hole of massive gravity near its horizon should be very similar to that of GR. However, this metric would be singular at the horizon according to the argument above. The Vaidya solution [13,14] is a nonstatic generalization of the Schwarzschild metric in GR. Obviously, there is no conventional Vaidya solution of massive gravity in the unitary gauge. Whether or not there is the conventional Vaidya solution in dRGT with two free parameters is one of the questions that motivates this paper. To find new Vaidya-type solution is another motivation.
Vaidya [13,14] solved Einstein's equations for a spher-ically symmtric radiating nonrotating body with the energy-momentum tensor of radiation T (rad) µν = ρk µ k ν , where k µ is a null vector directed radially outward and ρ is defined to be the energy density of the radiation as measured locally by an observer with 4-velocity v µ , that is to say, ρ = v µ v ν T (rad) µν . In this work, we study the Vaidya solution and its generalization in dRGT, where two parameters are freely chosen. Furthermore, we release ourselves from the limitation of the unitary gauge φ a = x µ δ a µ , and the Stückelberg field φ a is taken as a "hedgehog" configuration φ i = φ(u, r) x i r [9] and φ 0 = h(u, r), where u is the retarded time [14]. We find a class of Vaidya solutions in dRGT. On the obtained solutions, the singularities in the invariant I ab are absent except for the physical singularity r = 0, so that these solutions may be regarded as candidates for the dRGT black holes embraced by the radiation.
The paper is organized as follows: Sec. II gives a brief review of dRGT theory [6]. In Sec. III, we present three types of self-consistent ansatz with some nonunitary gauge. In Sec. IV, we find the Vaidya solution and a solution of furry black hole under the ansatz I, and in Sec. V the generalized Vaidya solution and the extended solution of furry black hole are found under the ansatz II. The generalized Vaidya solutions are studied under the ansatz III in Sec. VI. The results are summarized and discussed in Sec. VII.
II. THE MODIFIED EINSTEIN EQUATIONS IN DRGT THEORY
The gravitational action is
S = (M²_pl/2) ∫ d⁴x √−g [ R + m² U(g_{µν}, φ^a) ],   (1)

where R is the Ricci scalar, and U is a potential for the graviton which modifies the gravitational sector. The potential is composed of three parts,

U(g_{µν}, φ^a) = U₂ + α₃ U₃ + α₄ U₄,   (2)

where α₃ and α₄ are dimensionless parameters, and

U₂ = [K]² − [K²],
U₃ = [K]³ − 3[K][K²] + 2[K³],   (3)
U₄ = [K]⁴ − 6[K]²[K²] + 8[K][K³] + 3[K²]² − 6[K⁴].

Here the square brackets denote the traces, i.e., [K] = K^µ_µ, and

K^µ_ν = δ^µ_ν − √( g^{µα} ∂_αφ^a ∂_νφ^b η_{ab} ) ≡ δ^µ_ν − (√Σ)^µ_ν,   (4)
where the matrix square root is √ Σ µ α √ Σ α ν = Σ µ ν , g µν is the physical metric, η ab is the reference metric and φ a are the Stückelberg scalars introduced to restore general covariance [15].
Variation of the action with respect to the metric leads to the modified Einstein equations
G_{µν} − m² T^{(K)}_{µν} = (1/M²_pl) T^{(rad)}_{µν},   (5)

where

T^{(K)}_{µν} = (1/√−g) δ(√−g U)/δg^{µν}.   (6)
From (4), we have
(K^n)^µ_ν = δ^µ_ν + Σ_{k=1}^{n} (−1)^k \binom{n}{k} (Σ^{k/2})^µ_ν.   (7)
Thus, [K n ] can be written as follows,
[K] = 4 − [√Σ],
[K²] = 4 − 2[√Σ] + [Σ],
[K³] = 4 − 3[√Σ] + 3[Σ] − [Σ^{3/2}],
[K⁴] = 4 − 4[√Σ] + 6[Σ] − 4[Σ^{3/2}] + [Σ²].   (8)
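As a quick numerical illustration (our own sketch; the matrix below is just a generic positive-definite 4 × 4 matrix rather than the specific tensor of eq. (4)), the first two trace relations in (8) can be checked with NumPy:

```python
# Sanity check of [K^2] = 4 - 2[sqrt(Sigma)] + [Sigma] with K = I - sqrt(Sigma).
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
Sigma = A @ A.T + 4.0 * np.eye(4)            # symmetric positive definite

w, V = np.linalg.eigh(Sigma)
sqrt_Sigma = V @ np.diag(np.sqrt(w)) @ V.T    # principal square root
K = np.eye(4) - sqrt_Sigma

lhs = np.trace(K @ K)
rhs = 4.0 - 2.0 * np.trace(sqrt_Sigma) + np.trace(Sigma)
print(abs(lhs - rhs) < 1e-10)                 # True
```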
The symmetric tensor H µν describes the covariantized metric perturbation, which reduces to h µν in the unitary gauge. Therefore, it is natural to split φ a into two parts: φ a = x a − π a and π a = 0 in the unitary gauge. It is useful that we adopt the following decomposition in the nonunitary gauge,
π a = mA a + ∂ a π Λ 3 ,(9)
where A a describe the helicity ±1, and π is the longitudinal mode of the graviton in the decoupling limit [11]. Moreover, M pl → ∞ and m → 0 in the decoupling limit [15], while Λ 3 ≡ M pl m 2 is held fixed. This limit represents the approximation in which the energy scale E is much greater than the graviton mass scale.
III. A SELF-CONSISTENT SPHERICALLY SYMMETRIC ANSATZ
A. The metric corresponding to the radiation coordinates
The front of a gravitational wave (just like that of an electromagnetic wave) provides a unique surface Σ. Such a null hypersurface Σ is described by the equation x 0 = 0 in the radiation coordinate system. The parametric lines of the other coordinates x i (i = 1, 2, 3) will be situated in Σ. Thus, there exists a family of noninteracting null hypersurfaces which are described by x 0 = constant in this coordinate system. We note that there is a congruence of null geodesics on any null hypersurface x 0 = constant, which can be used to define a second coordinate x 1 . Therefore, we should take this congruence as the parametric lines of x 1 . In other words, we have x 2 = constant and x 3 = constant in addition to x 0 = constant on each one of the null geodesics of the congruence. Explicitly, the normal vector of surface Σ, and these geodesics are the parametric lines of x 1 , so we have g µν = δ µ 1 , namely,
g^{µν} = ( 0, 1, 0, 0 ; 1, g^{11}, g^{12}, g^{13} ; 0, g^{21}, g^{22}, g^{23} ; 0, g^{31}, g^{32}, g^{33} ),   (10)

and therefore

g_{µν} = ( g_{00}, 1, g_{02}, g_{03} ; 1, 0, 0, 0 ; g_{20}, 0, g_{22}, g_{23} ; g_{30}, 0, g_{32}, g_{33} ).   (11)

In the spherically symmetric case, the radiation coordinates x^µ are usually denoted (u, r, θ, φ), where r is the usual radial coordinate and θ, φ are generalized polar angles. Accordingly, the general form of the covariant components of the metric is

g_{µν} = ( b²(u,r), 1, 0, 0 ; 1, 0, 0, 0 ; 0, 0, −r², 0 ; 0, 0, 0, −r² sin²θ ),   (12)

and therefore

g^{µν} = ( 0, 1, 0, 0 ; 1, −b²(u,r), 0, 0 ; 0, 0, −r^{−2}, 0 ; 0, 0, 0, −r^{−2} csc²θ ).   (13)
For the static line element

ds² = b²(r) du² + 2 du dr − r² dΩ²,

where u can be interpreted as a retarded time coordinate, and

u = t − ∫_{r₀}^{r} dr/b²(r).   (14)

Hence we obtain for the null hypersurfaces

t − ∫_{r₀}^{r} dr/b²(r) = constant.   (15)

In the case of the Schwarzschild solution,

ds² = (1 − r_s/r) du² + 2 du dr − r² dΩ²,   (16)

and

u = t − r − r_s ln(r − r_s),   (17)

where r_s is the Schwarzschild radius. In particular, for r_s = 0 we have u = t − r, and the Minkowski metric

ds² = du² + 2 du dr − r² dΩ².   (18)
From (12) we have the Christoffel symbols of the second kind,
Γ^0_{00} = −bb′,  Γ^0_{22} = r,  Γ^0_{33} = r sin²θ,  Γ^1_{00} = bḃ + b³b′,  Γ^1_{01} = Γ^1_{10} = bb′,  Γ^1_{22} = −b²r,  Γ^1_{33} = −b²r sin²θ,  Γ^2_{21} = Γ^2_{12} = r^{−1},  Γ^2_{33} = −sinθ cosθ,  Γ^3_{31} = Γ^3_{13} = r^{−1},  Γ^3_{32} = Γ^3_{23} = cotθ.   (19)
All other symbols vanish. The Ricci tensor in radiation coordinates is consequently given by
R_{00} = b²b′² + b³b′′ + (2/r)(bḃ + b³b′),  R_{22} = R_{33}/sin²θ = −b² − 2bb′r + 1,  R_{01} = R_{10} = b′² + bb′′ + 2bb′/r,   (20)

where ḃ = ∂b/∂u, b′ = ∂b/∂r, and all other components are zero. A straightforward calculation then shows that the Ricci scalar is given by

R = 2[ b′² + bb′′ + 4bb′/r + (b² − 1)/r² ].   (21)

The nonvanishing components of the mixed Einstein tensor G^ν_µ are then given in the following:

G^0_0 = G^1_1 = −2bb′/r − (b² − 1)/r²,   G^2_2 = G^3_3 = −( b′² + bb′′ + 2bb′/r ).   (22)
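For readers who want to reproduce these components, here is a small SymPy sketch of ours (conventions for the sign of the Ricci tensor vary between references, so individual components may differ from (20) by an overall sign; the Einstein tensor combinations are what matter for the field equations):

```python
# Symbolic cross-check of the curvature of the metric (12),
# ds^2 = b(u,r)^2 du^2 + 2 du dr - r^2 dOmega^2.
import sympy as sp

u, r, th, ph = sp.symbols('u r theta phi')
x = [u, r, th, ph]
b = sp.Function('b')(u, r)

g = sp.Matrix([[b**2, 1, 0, 0],
               [1,    0, 0, 0],
               [0,    0, -r**2, 0],
               [0,    0, 0, -r**2 * sp.sin(th)**2]])
ginv = g.inv()

def christoffel(a, i, j):
    # Gamma^a_{ij} = (1/2) g^{ad} (d_j g_{di} + d_i g_{dj} - d_d g_{ij})
    return sp.simplify(sum(ginv[a, d] * (sp.diff(g[d, i], x[j]) + sp.diff(g[d, j], x[i])
                                          - sp.diff(g[i, j], x[d])) for d in range(4)) / 2)

Gam = [[[christoffel(a, i, j) for j in range(4)] for i in range(4)] for a in range(4)]

def ricci(i, j):
    expr = 0
    for a in range(4):
        expr += sp.diff(Gam[a][i][j], x[a]) - sp.diff(Gam[a][i][a], x[j])
        for d in range(4):
            expr += Gam[a][a][d] * Gam[d][i][j] - Gam[a][j][d] * Gam[d][i][a]
    return sp.simplify(expr)

Ric = sp.Matrix(4, 4, ricci)
Rscal = sp.simplify(sum(ginv[i, j] * Ric[i, j] for i in range(4) for j in range(4)))
Gmix = sp.simplify(ginv * Ric - sp.Rational(1, 2) * Rscal * sp.eye(4))   # G^mu_nu

print(sp.simplify(Gmix[0, 0]))   # compare with G^0_0 in (22)
print(sp.simplify(Gmix[2, 2]))   # compare with G^2_2 in (22)
print(sp.simplify(Rscal))        # compare with (21)
```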
B. The ansatz for Stückelberg field
We consider the general form of spherically symmetric ansatz for Stückelberg field as follows
φ⁰ = h(u, r),   φ^i = φ(u, r) x^i/r.   (23)

The ansatz (23) contains two additional functions h(u, r) and φ(u, r), which reduces to unitary gauge only if h(u, r) = u + ∫_{r₀}^{r} dr/b²(r) and φ(u, r) = r in the static case. The self-consistency of ansatz (23) imposes restrictions on h(u, r) and φ(u, r). Under the ansatz (23), the matrix Σ = (Σ^µ_ν) takes the form

Σ = ( ḣh′ − φ̇φ′,  h′² − φ′²,  0,  0 ;
      (ḣ² − φ̇²) − b²(ḣh′ − φ̇φ′),  (ḣh′ − φ̇φ′) − b²(h′² − φ′²),  0,  0 ;
      0,  0,  φ²/r²,  0 ;
      0,  0,  0,  φ²/r² ),   (24)
where dots and primes denote derivatives with respect to u and r, respectively.
For a 2 × 2 matrix M, the Cayley-Hamilton theorem tells us that
[M] M = M² + (det M) I₂,   (25)

where I₂ is the 2 × 2 identity matrix. We define Σ₂ as the upper left-hand 2 × 2 submatrix of Σ and use det(M^n) = (det M)^n to find the square root of Σ₂,

√Σ₂ = (1/[√Σ₂]) ( Σ₂ + √(det Σ₂) I₂ ),   (26)

where

det Σ₂ = (ḣφ′ − h′φ̇)²,   (27)

and

[√Σ₂]² = [Σ₂] + 2√(det Σ₂).   (28)

Using (26)-(28), we obtain the recursion formula

Σ₂^{(k+1)/2} = [√Σ₂] Σ₂^{k/2} − √(det Σ₂) Σ₂^{(k−1)/2},   (k ≥ 1),   (29)

and Σ₂⁰ ≡ I₂. Thus, we have

Σ^{k/2} = ( Σ₂^{k/2},  0 ;  0,  (φ/r)^k I₂ ),   (30)

and

[K^n] = 4 + Σ_{k=1}^{n} (−1)^k \binom{n}{k} ( [Σ₂^{k/2}] + 2 (φ/r)^k ).   (31)
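As an illustration (our own numerical sketch, not from the paper), the square-root formula (26) together with (28) can be verified for a generic 2 × 2 matrix with positive eigenvalues using NumPy:

```python
# Check of sqrt(M) = (M + sqrt(det M) I) / [sqrt(M)], with
# [sqrt(M)] = sqrt([M] + 2 sqrt(det M)), valid when both eigenvalues are positive.
import numpy as np

rng = np.random.default_rng(1)
B = rng.normal(size=(2, 2))
M = B @ B.T + np.eye(2)                        # positive-definite test matrix

trace_sqrt = np.sqrt(np.trace(M) + 2.0 * np.sqrt(np.linalg.det(M)))
sqrt_M = (M + np.sqrt(np.linalg.det(M)) * np.eye(2)) / trace_sqrt

print(np.allclose(sqrt_M @ sqrt_M, M))         # True
```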
From (6), (8) and (31), we obtain the non-zero components of T (K)µ ν as follows
T (K)0 0 = 1 − 2φ r [ Σ 2 ] + 2φ r + det Σ 2 − φ 2 r 2 + 4 [ √ Σ 2 ] φ r − 1 ḣ h ′ −φφ ′ + det Σ 2 + 3α 3 φ 2 r 2 − 1 [ Σ 2 ] + 2(1 + det Σ 2 ) 1 − φ r − 2 [ √ Σ 2 ] φ r − 1 2 (ḣh ′ −φφ ′ + det Σ 2 ) − 12α 4 φ r − 1 2 [ Σ 2 ] − (1 + det Σ 2 ) ,(32)T (K)0 1 = 2 [ √ Σ 2 ] (h ′2 − φ ′2 ) φ r − 1 2 − 3α 3 φ r − 1 ,(33)T (K)1 0 = 2 [ √ Σ 2 ] (ḣ 2 −φ 2 + b 2 (φφ ′ −ḣh ′ )) φ r − 1 2 − 3α 3 φ r − 1 ,(34)T (K)1 1 = T (K)0 0 + 2 [ √ Σ 2 ] b 2 (φ ′2 − h ′2 ) φ r − 1 2 − 3α 3 φ r − 1 ,(35)T (K)2 2 = [ Σ 2 ] − 2φ r − det Σ 2 + φ 2 r 2 − 3α 3 φ r − 1 2 ( Σ 2 − 2) − 12α 4 φ r − 1 2 [ Σ 2 ] − (1 + det Σ 2 ) ,(36)T (K)3 3 = T (K)2 2 .(37)
From the modified Einstein equation (5) in vacuum, we require that T^{(K)0}_1 and T^{(K)1}_0 vanish, which is a self-consistent requisition for the ansatz (23). Therefore, the self-consistent ansatz can be classified into three types as follows.

Ansatz I:
ds² = b²(u, r) du² + 2 du dr − r² dΩ²,   φ⁰ = h(u, r),   φ^i = x^i;   (38)

Ansatz II:
ds² = b²(u, r) du² + 2 du dr − r² dΩ²,   φ⁰ = h(u, r),   φ^i = (2/(3α₃) + 1) x^i;   (39)

Ansatz III:
ds² = b²(u, r) du² + 2 du dr − r² dΩ²,   φ⁰ = h(u, r),   φ^i = h(u, r) x^i/r.   (40)
It is easy to verify that T^{(K)0}_1 and T^{(K)1}_0 vanish under all three types. On the other hand, the energy-momentum tensor of a radiating field T^{(rad)}_{µν} can be described in the geometrical-optics form [16]

T^{(rad)}_{µν} = −(2/r²) q(u) δ⁰_µ δ⁰_ν.   (41)

Combining now (13)–(42), the modified Einstein equation with and without the radiating field can be rewritten as

(b²)′/r + (b² − 1)/r² = m² T^{(K)0}_0,   (43)
(b²)′′/2 + (b²)′/r = m² T^{(K)2}_2.   (44)

There is a mathematical identity,

[ (b²)′/r + (b² − 1)/r² ]′ = (2/r) { [ (b²)′′/2 + (b²)′/r ] − [ (b²)′/r + (b² − 1)/r² ] },   (45)

which is the key to the analytical solution. Combining (43), (44) and (45), we obtain

(T^{(K)0}_0)′ = (2/r) ( T^{(K)2}_2 − T^{(K)0}_0 ),   (46)
which is a necessary condition of T (K)µ ν . In general, T (K)0 0 and T (K)2 2 are functions of b 2 (u, r) and h(u, r) under three types of ansatz. Under some suitable boundary conditions, there is always a numerical solution to the system composed of two equations (43) and (46) with two unknown functions. However, the motivation of our work is to find possible exact solutions, so we will settle these types one by one.
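As a small aside (our own verification sketch, assuming SymPy is available), the identity (45), and hence the consistency condition (46), can be checked symbolically:

```python
# Symbolic verification of the identity (45).
import sympy as sp

r = sp.symbols('r', positive=True)
b2 = sp.Function('b2')(r)          # stands for b^2(u, r) at fixed u

E0 = sp.diff(b2, r) / r + (b2 - 1) / r**2          # left-hand side of (43)
E2 = sp.diff(b2, r, 2) / 2 + sp.diff(b2, r) / r    # left-hand side of (44)

identity = sp.simplify(sp.diff(E0, r) - 2 / r * (E2 - E0))
print(identity)                                     # 0
```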
IV. SOLUTIONS UNDER THE ANSATZ I
In this section, we present a detailed study of solutions under the self-consistent ansatz I in dRGT with two free parameters α 3 and α 4 . The obtained solutions are free of singularities except for the conventional one appearing in GR (for instance, the singularity r = 0 in the spherically symmetric solutions).
For the ansatz I, (32) and (36) can be reduced to

T^{(K)0}_0 = −T^{(K)2}_2 = −[√Σ₂] + √(det Σ₂) + 1,   (47)

where

det Σ₂ = ḣ²,   (48)

and

[√Σ₂]² = 2ḣ(h′ + 1) − b²(h′² − 1).   (49)

Thus, (46) becomes

(T^{(K)0}_0)′ = −(4/r) T^{(K)0}_0,   (50)

which is a separable equation, and

T^{(K)0}_0 = S(u)/r⁴.   (51)

Substituting (51) into (43), we have

r(b²)′ + (b² − 1) = m² S(u)/r²,   (52)

and

b² = 1 − r_s(u)/r − m² S(u)/r².   (53)
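A quick symbolic check of ours (using SymPy; not part of the original derivation) that the profile (53) indeed solves the ODE (52) for arbitrary r_s(u) and S(u):

```python
import sympy as sp

u, r, m = sp.symbols('u r m', positive=True)
rs = sp.Function('r_s')(u)
S = sp.Function('S')(u)

b2 = 1 - rs / r - m**2 * S / r**2                  # ansatz (53)
residual = sp.simplify(r * sp.diff(b2, r) + (b2 - 1) - m**2 * S / r**2)
print(residual)                                     # 0
```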
On the other hand, we have the equation of h(u, r) as follows
−[ 2ḣ(h′ + 1) − b²(h′² − 1) ]^{1/2} + ḣ + 1 = S(u)/r⁴,   (54)
from which the function h(u, r) can be determined. There exist two cases that (54) degenerates and becomes an ordinary differential equation: (i)h ′2 = 1 and S(u) = 0; (ii)ḣ, S(u) and r s (u) are all constants. In reality, case (i) corresponds with the Vaidya solution [14] and case (ii) correlates closely with the solution of furry black hole [9].
A. The Vaidya solution in dRGT
For the case of h′² = 1 and S(u) = 0, (54) is reduced to

ḣ − 2ḣ^{1/2} + 1 = 0,   for h′ = 1,   (55)

or

ḣ + 1 = 0,   for h′ = −1.   (56)
Thus, we obtain
h = ±(u + r).(57)
In the meantime, (53) becomes
b 2 = 1 − r s (u) r .(58)
Substituting (58) into (21), we have the Ricci scalar R = 0. Since the Ricci scalar and T (K) µν vanish, the modified Einstein equation may also read as
R µν = − 2 r 2 q(u)δ 0 µ δ 0 ν .(59)
From (20), we have
R µν = −ṙ s (u) r 2 δ 0 µ δ 0 ν ,(60)
and
q(u) = dṙ s (u) du .(61)
Finally, the Vaidya solution can be written as
b 2 = 1 − r s (u) r , φ 0 = ±(u + r), φ i = x i .(62)
Due to the existence of the Stückelberg field, there is a new basic invariant I ab = g µν ∂ µ φ a ∂ ν φ b in the massive gravity in addition to the ones usually encountered in GR. de Rham and his colleagues have pointed out that the solution comes true only if I ab is nonsingular [11]. For the Vaidya solution (62), we have
I 00 = 2 − b 2 , I 0i = (2 − b 2 )n i , I ij = (n i n j − δ ij ) 1 r 2 − b 2 n i n j .(63)
B. Furry black hole
For the case of ḣ = 0 and S(u) = S, r_s(u) = r_s (S and r_s are constants), (54) is reduced to

(1 − h′²) b² = (S/r⁴ − 1)²,   (64)

and

h = ± ∫ [ 1 − (S/r⁴ − 1)²/b² ]^{1/2} dr,   (65)

where b²(r) = 1 − r_s/r − m² S/r².
This solution is also free of singularities except for the conventional one appearing in GR. In fact, we have
I 00 = −b 2 + S r 4 − 1 2 , I 0i = ∓b 2 1 − 1 b 2 S r 4 − 1 2 1 2 n i , I ij = (n i n j − δ ij ) 1 r 2 − b 2 n i n j .(66)
Using the coordinate transformation (14), we obtain the furry black hole solution in the Schwarzschild coordinate
ds 2 = 1 − r s r − m 2 S r 2 dt 2 − 1 − r s r − m 2 S r 2 −1 dr 2 − r 2 dΩ 2 , φ 0 = ± S 2 r 6 − S( 2 r 2 − m 2 ) + r s r m 2 S + r s r − r 2 1 2 dr, φ i = x i .(67)
V. SOLUTIONS UNDER THE ANSATZ II
In this section, we find out a generalized Vaidya solution and extended furry black holes under self-consistent ansatz II in dRGT with two free parameters α 3 and α 4 .
For the ansatz II, (32) and (36) can be reduced to
T (K)0 0 = 3 − 16α 4 3α 2 3 ([ Σ 2 ]− det Σ 2 )+ 4 9α 2 3 (12α 4 −1)−3,(68)
and
T (K)2 2 = T (K)0 0 −2([ Σ 2 ]− det Σ 2 )− 8 3α 3 [ Σ 2 ]+2 2 3α 3 + 1 2 ,(69)where det Σ 2 = 2 3α 3 + 1 2ḣ 2 ,(70)
and
[ Σ 2 ] 2 = 2ḣh ′ − b 2 h ′2 − 2 3α 3 + 1 2 + 2 det Σ 2 .(71)T (K)2 2 − T (K)0 0 = − λ + 2 2 (T (K)0 0 + 3µ),(72)
where λ and µ are undetermined constants. Using (46) and (72), we obtain
T (K)0 0 = −3Λ, S(u) r λ+2 − 3µ, for λ = −2, λ = −2,(73)
where Λ and S(µ) are integral constants. Substituting (73) into (43), we have
r(b 2 ) ′ +(b 2 −1) = 3m 2 Λr 2 , − m 2 S(u) r λ + 3m 2 µr 2 , for λ = −2, λ = −2,(74)
and subsequently the solution as follows
b 2 = 1 − rs(u) r + m 2 Λr 2 , 1 − rs(u) r + m 2 S(u) ln r r + m 2 µr 2 , 1 − rs(u) r + m 2 S(u) (λ−1)r λ + m 2 µr 2 , for λ = −2, λ = 1, λ = 1, −2.
(75) In the case of λ = −2, the resulting solution is corresponding to generalized Vaidya solution, and the case of λ = −2 correlates closely with the solution of extended furry black hole [9], as we will see in the following.
A. Generalized Vaidya solution
For the h(u, r) = ± 2 3α3 + 1 (u + r) and S(u) = 0, we have
T (K)0 0 = T (K)2 2 = − 48α 2 3 − 64α 4 27α 4 3 ,(76)
which corresponds to the case of λ = −2. Substituting (75) into (20) and (21), we have the Ricci scalar R = 12m 2 Λ and the Einstein tensor
G µν = − ṙ s (u) r 2 + 3m 2 Λb 2 δ 0 µ δ 0 ν .(77)
As a result, we have an expression of r s (u),
q(u) = dr s (u) du .(78)
Finally, the generalized Vaidya solution can be written as
b 2 = 1 − r s (u) r + (48α 2 3 − 64α 4 )m 2 r 2 81α 4 3 , φ 0 = ± 2 3α 3 + 1 (u + r), φ i = x i .(79)
B. Extended furry black holes
For the case ofḣ = 0 and S(u) = S, r s (u) = r s ( S and r s are constants), (75) is rewritten as b 2 = 1 − rs r + m 2 S ln r r + m 2 µr 2 , 1 − rs r + m 2 S (λ−1)r λ + m 2 µr 2 , for λ = 1, λ = 1, −2.
(80) From (73), we obtain the equation of h(u, r) as follows
3 − 16α 4 3α 2 3 2 3α 3 + 1 2 − h ′2 b 2 1 2 + 4 9α 2 3 (12α 4 − 1) − 3 = S r λ+2 − 3µ.
(81) Therefore, we have
h = ± 2 3α 3 + 1 2 − S 2 3 − 16α4 3α3 2 b 2 r 2λ+4 1 2 dr,(82)
and
µ = 27α 2 3 − 48α 4 + 4 27α 2 3 .(83)
This solution is also free of singularities except for the conventional one appearing in GR. In fact, a straightforward calculation then shows that I ab are given by
I 00 = −b 2 2 3α 3 + 1 2 − S 2 3 − 16α4 3α3 2 r 2λ+4 , I 0i = ∓b 2 2 3α 3 + 1 2 3α 3 + 1 2 + S 2 3 − 16α4 3α3 2 b 2 r 2λ+4 1 2 n i , I ij = 2 3α 3 + 1 2 ((1 − b 2 )n i n j − δ ij ).(84)
Using the coordinate transformation (14), we obtain the furry black hole solutions in the Schwarzschild coordinate: (i) for the case of λ = 1,
ds 2 = 1 − r s r + m 2 S ln r r + m 2 µr 2 dt 2 − 1 − r s r + m 2 S ln r r + m 2 µr 2 −1 dr 2 − r 2 dΩ 2 , φ 0 = ± 3 + 2 α3 − 16α4 3α3 − 32α4 9α 2 3 b 2 r 6 − S 2 1 2 3 − 16α4 3α3 br 3 dr, φ i = 2 3α 3 + 1 x i ;(85)
and (ii) for the case of λ = 1, −2,
ds 2 = 1 − r s r + m 2 S (λ − 1)r λ + m 2 µr 2 dt 2 − 1 − r s r + m 2 S (λ − 1)r λ + m 2 µr 2 −1 dr 2 − r 2 dΩ 2 , φ 0 = ± 3 + 2 α3 − 16α4 3α3 − 32α4 9α 2 3 b 2 r 2λ+4 − S 2 1 2 3 − 16α4 3α3 br λ+2 dr, φ i = 2 3α 3 + 1 x i .(86)
VI. A FURRY VAIDYA SOLUTION UNDER THE ANSATZ III
In this section, we find out some generalized Vaidya solutions under self-consistent ansatz III in dRGT with 12α 4 = 1 + 3α 3 + 9α 2 3 and α 3 = 0. Let us suppose further
h(u, r) = S r ξ ,(87)
where ξ and S are constants, then (32) and (36) can be reduced to
T (K)0 0 = 1 (ξ + 1) 2 − 2(ξ − 1)S 2 r 2ξ+2 + 8ξS r ξ+1 + ξ 2 − 4ξ − 1 ,(88)
and solution (58) can be rewritten as ds 2 = 1 − r s (u) r dt 2 − 2 (r s (0) − r s (u)) r − r s (0) dtdr − r(r − 2r s (0) + r s (u)) (r − r s (0)) 2 dr 2 − r 2 dΩ 2 ,
so the event horizon is
r s (u) = r s (0) + u 0 q(u)du,(97)
and r s (u) is a variable horizon. In reality, the radius of generalized Vaidya solutions are changeable, not only event horizon but also cosmological one. For all solutions, the singularities in the invariant I ab are absent. In fact, the invariant I ab can be explicitly expressed as
I 00 = 2ḣh ′ − b 2 h ′2 , I 0i = (2ḣ − b 2 h ′ )φ ′ n i , I ij = (2φφ ′ − b 2 φ ′2 + φ 2 r 2 )n i n j − φ 2 r 2 δ ij ,(98)
where (n 1 , n 2 , n 3 ) = (sin θ cos φ, sin θ sin φ, cos θ). Obvi-ously, the singularities in the invariant I ab are absent except for the physical singularity r = 0 in GR, so that these solutions of massive gravity may be regarded as candidates for the black hole in dRGT.
In addition, one may be anxious that the scalar perturbations on these backgrounds are infinitely strongly coupled in light of the result of Ref. [17]. It is found that the de Sitter background has infinitely strongly coupled fluctuations in the decoupling limit for the parameters chosen as 9α 2 3 + 3α 3 − 12α 4 + 1 = 0 [17]. Under the ansatz I and II, we have π 0 = −h(r) and π i = (1 − β)x i for the furry black holes. From (9), we obtain the vector mode A 0 = − Λ 3 m h(r) and A i = 0 which are different from those studied in [17]. However, we have to meet this question under the ansatz III unless we consider asymptotically flat background. Finally, we can also discuss the Kerr solution using our method developed in this work and will do so in a forthcoming paper.
ACKNOWLEDGMENTSFrom (43), (44), (88) and(89), we obtain the exact solutions as followswhere S and ξ are constants andandAs the generalized Vaidya solutions, there is still a re-, the solution (91) is asymptotically flat. In the case of q(u) = 0, we obtain new furry black holes from (90)-(92).VII. CONCLUSION AND DISCUSSIONIn GR, the Vaidya solution is a nonstatic generalization of the Schwartzschild metric and has some unique features, and its massive deformation also plays an interesting role in dRGT. In this work, we have developed a study of the Vaidya solution and its generalization in dRGT if the Stückeberg fields are taken as some selfconsistent ansatz. Under the ansatz I, we obtain the Vaidya solution in dRGT. Under the ansatz II and III, we obtain the Vaidya-de Sitter and the furry Vaidya so-lution, respectively. As by-products, we obtain a series of the furry black holes.The Vaidya solution and its generalization in dRGT massive gravity describe the black holes with a variable horizon. For the metricwe take Schwarzschild coordinatethen (93) can be rewritten asThere is an infinite red-shift surface in b 2 (u, r) = 0, which corresponds to the event horizon. Especially, the Vaidya
[1] C. de Rham, Living Rev. Relativity 17, 7 (2014).
[2] D. G. Boulware and S. Deser, Phys. Rev. D 6, 3368 (1972).
[3] G. Gabadadze, Phys. Lett. B 681, 89 (2009).
[4] C. de Rham, Phys. Lett. B 688, 137 (2010).
[5] C. de Rham and G. Gabadadze, Phys. Rev. D 82, 044020 (2010).
[6] C. de Rham, G. Gabadadze, and A. J. Tolley, Phys. Rev. Lett. 106, 231101 (2011).
[7] H. Zhang and X. Z. Li, Phys. Rev. D 93, 124039 (2016).
[8] P. Li, X. Z. Li, and P. Xi, Class. Quantum Grav. 33, 115004 (2016).
[9] P. Li, X. Z. Li, and P. Xi, Phys. Rev. D 93, 064040 (2016).
[10] K. Hinterbichler, Rev. Mod. Phys. 84, 671 (2012).
[11] L. Berezhiani, G. Chkareuli, C. de Rham, G. Gabadadze, and A. J. Tolley, Phys. Rev. D 85, 044024 (2012).
[12] A. I. Vainshtein, Phys. Lett. B 39, 393 (1972).
[13] P. C. Vaidya, Curr. Sci. 12, 183 (1943).
[14] P. C. Vaidya, Nature 171, 260 (1953).
[15] N. Arkani-Hamed, H. Georgi, and M. D. Schwartz, Ann. Phys. (Amsterdam) 305, 96 (2003).
[16] M. Carmeli and M. Kaye, Ann. Phys. (N. Y.) 103, 97 (1977).
[17] C. de Rham, G. Gabadadze, L. Heisenberg, and D. Pirtskhalava, Phys. Rev. D 83, 103516 (2011).
|
[] |
[
"An on-line Integrated Bookkeeping: electronic run log book and Meta-Data Repository for ATLAS",
"An on-line Integrated Bookkeeping: electronic run log book and Meta-Data Repository for ATLAS"
] |
[
"M Barczyc \nCFNUL/FCUL\nNIKHEF\nAmsterdamNetherlands\n",
"D Burckhart-Chromek \nCFNUL/FCUL\nNIKHEF\nAmsterdamNetherlands\n",
"M Caprini \nCFNUL/FCUL\nNIKHEF\nAmsterdamNetherlands\n",
"J Da \nCFNUL/FCUL\nNIKHEF\nAmsterdamNetherlands\n",
"Silva Conceicao \nCFNUL/FCUL\nNIKHEF\nAmsterdamNetherlands\n",
"M Dobson \nCFNUL/FCUL\nNIKHEF\nAmsterdamNetherlands\n",
"J Flammer \nCFNUL/FCUL\nNIKHEF\nAmsterdamNetherlands\n",
"R Jones \nCFNUL/FCUL\nNIKHEF\nAmsterdamNetherlands\n",
"A Kazarov \nCFNUL/FCUL\nNIKHEF\nAmsterdamNetherlands\n",
"S Kolos \nCFNUL/FCUL\nNIKHEF\nAmsterdamNetherlands\n",
"D Liko \nCFNUL/FCUL\nNIKHEF\nAmsterdamNetherlands\n",
"L Mapelli \nCFNUL/FCUL\nNIKHEF\nAmsterdamNetherlands\n",
"I Soloviev Cern \nCFNUL/FCUL\nNIKHEF\nAmsterdamNetherlands\n",
"Geneva \nCFNUL/FCUL\nNIKHEF\nAmsterdamNetherlands\n",
"Switzerland R Hart \nCFNUL/FCUL\nNIKHEF\nAmsterdamNetherlands\n",
"A Amorim \nUniversidade de Lisboa\nPortugal\n",
"D Klose \nUniversidade de Lisboa\nPortugal\n",
"J Lima \nUniversidade de Lisboa\nPortugal\n",
"L Lucio \nUniversidade de Lisboa\nPortugal\n",
"L Pedro \nUniversidade de Lisboa\nPortugal\n",
"H Wolters \nUCP Figueira da Foz\nPortugal\n",
"E Badescu \nNIPNE\nBucharestRomania\n",
"I Alexandrov \nPNPI\nGatchinaRussian Federation\n",
"V Kotov \nPNPI\nGatchinaRussian Federation\n",
"M Mineev Jinr \nPNPI\nGatchinaRussian Federation\n",
"Russian Dubna \nPNPI\nGatchinaRussian Federation\n",
"Yu Federation \nPNPI\nGatchinaRussian Federation\n",
"Ryabov \nPNPI\nGatchinaRussian Federation\n"
] |
[
"CFNUL/FCUL\nNIKHEF\nAmsterdamNetherlands",
"CFNUL/FCUL\nNIKHEF\nAmsterdamNetherlands",
"CFNUL/FCUL\nNIKHEF\nAmsterdamNetherlands",
"CFNUL/FCUL\nNIKHEF\nAmsterdamNetherlands",
"CFNUL/FCUL\nNIKHEF\nAmsterdamNetherlands",
"CFNUL/FCUL\nNIKHEF\nAmsterdamNetherlands",
"CFNUL/FCUL\nNIKHEF\nAmsterdamNetherlands",
"CFNUL/FCUL\nNIKHEF\nAmsterdamNetherlands",
"CFNUL/FCUL\nNIKHEF\nAmsterdamNetherlands",
"CFNUL/FCUL\nNIKHEF\nAmsterdamNetherlands",
"CFNUL/FCUL\nNIKHEF\nAmsterdamNetherlands",
"CFNUL/FCUL\nNIKHEF\nAmsterdamNetherlands",
"CFNUL/FCUL\nNIKHEF\nAmsterdamNetherlands",
"CFNUL/FCUL\nNIKHEF\nAmsterdamNetherlands",
"CFNUL/FCUL\nNIKHEF\nAmsterdamNetherlands",
"Universidade de Lisboa\nPortugal",
"Universidade de Lisboa\nPortugal",
"Universidade de Lisboa\nPortugal",
"Universidade de Lisboa\nPortugal",
"Universidade de Lisboa\nPortugal",
"UCP Figueira da Foz\nPortugal",
"NIPNE\nBucharestRomania",
"PNPI\nGatchinaRussian Federation",
"PNPI\nGatchinaRussian Federation",
"PNPI\nGatchinaRussian Federation",
"PNPI\nGatchinaRussian Federation",
"PNPI\nGatchinaRussian Federation",
"PNPI\nGatchinaRussian Federation"
] |
[] |
In the context of the ATLAS experiment there is growing evidence of the importance of different kinds of Meta-data including all the important details of the detector and data acquisition that are vital for the analysis of the acquired data. The Online BookKeeper (OBK) is a component of ATLAS online software that stores all information collected while running the experiment, including the Meta-data associated with the event acquisition, triggering and storage. The facilities for acquisition of control data within the on-line software framework, together with a full functional Web interface, make the OBK a powerful tool containing all information needed for event analysis, including an electronic log book.In this paper we explain how OBK plays a role as one of the main collectors and managers of Meta-data produced on-line, and we'll also focus on the Web facilities already available. The usage of the web interface as an electronic run logbook is also explained, together with the future extensions.We describe the technology used in OBK development and how we arrived at the present level explaining the previous experience with various DBMS technologies. The extensive performance evaluations that have been performed and the usage in the production environment of the ATLAS test beams are also analysed.
| null |
[
"https://arxiv.org/pdf/cs/0306081v1.pdf"
] | 11,686,923 |
cs/0306081
|
cf6154ebede2f14250c9cca4b73ea338186c1c34
|
An on-line Integrated Bookkeeping: electronic run log book and Meta-Data Repository for ATLAS
CHEP03 March 24-28. 2003
M Barczyc
CFNUL/FCUL
NIKHEF
AmsterdamNetherlands
D Burckhart-Chromek
CFNUL/FCUL
NIKHEF
AmsterdamNetherlands
M Caprini
CFNUL/FCUL
NIKHEF
AmsterdamNetherlands
J Da
CFNUL/FCUL
NIKHEF
AmsterdamNetherlands
Silva Conceicao
CFNUL/FCUL
NIKHEF
AmsterdamNetherlands
M Dobson
CFNUL/FCUL
NIKHEF
AmsterdamNetherlands
J Flammer
CFNUL/FCUL
NIKHEF
AmsterdamNetherlands
R Jones
CFNUL/FCUL
NIKHEF
AmsterdamNetherlands
A Kazarov
CFNUL/FCUL
NIKHEF
AmsterdamNetherlands
S Kolos
CFNUL/FCUL
NIKHEF
AmsterdamNetherlands
D Liko
CFNUL/FCUL
NIKHEF
AmsterdamNetherlands
L Mapelli
CFNUL/FCUL
NIKHEF
AmsterdamNetherlands
I Soloviev Cern
CFNUL/FCUL
NIKHEF
AmsterdamNetherlands
Geneva
CFNUL/FCUL
NIKHEF
AmsterdamNetherlands
Switzerland R Hart
CFNUL/FCUL
NIKHEF
AmsterdamNetherlands
A Amorim
Universidade de Lisboa
Portugal
D Klose
Universidade de Lisboa
Portugal
J Lima
Universidade de Lisboa
Portugal
L Lucio
Universidade de Lisboa
Portugal
L Pedro
Universidade de Lisboa
Portugal
H Wolters
UCP Figueira da Foz
Portugal
E Badescu
NIPNE
BucharestRomania
I Alexandrov
PNPI
GatchinaRussian Federation
V Kotov
PNPI
GatchinaRussian Federation
M Mineev Jinr
PNPI
GatchinaRussian Federation
Russian Dubna
PNPI
GatchinaRussian Federation
Yu Federation
PNPI
GatchinaRussian Federation
Ryabov
PNPI
GatchinaRussian Federation
An on-line Integrated Bookkeeping: electronic run log book and Meta-Data Repository for ATLAS
La Jolla, California 1CHEP03 March 24-28. 2003
In the context of the ATLAS experiment there is growing evidence of the importance of different kinds of Meta-data including all the important details of the detector and data acquisition that are vital for the analysis of the acquired data. The Online BookKeeper (OBK) is a component of ATLAS online software that stores all information collected while running the experiment, including the Meta-data associated with the event acquisition, triggering and storage. The facilities for acquisition of control data within the on-line software framework, together with a full functional Web interface, make the OBK a powerful tool containing all information needed for event analysis, including an electronic log book.In this paper we explain how OBK plays a role as one of the main collectors and managers of Meta-data produced on-line, and we'll also focus on the Web facilities already available. The usage of the web interface as an electronic run logbook is also explained, together with the future extensions.We describe the technology used in OBK development and how we arrived at the present level explaining the previous experience with various DBMS technologies. The extensive performance evaluations that have been performed and the usage in the production environment of the ATLAS test beams are also analysed.
PNPI, Gatchina, Russian Federation
In the context of the ATLAS experiment there is growing evidence of the importance of different kinds of Meta-data including all the important details of the detector and data acquisition that are vital for the analysis of the acquired data. The Online BookKeeper (OBK) is a component of ATLAS online software that stores all information collected while running the experiment, including the Meta-data associated with the event acquisition, triggering and storage. The facilities for acquisition of control data within the on-line software framework, together with a full functional Web interface, make the OBK a powerful tool containing all information needed for event analysis, including an electronic log book.
In this paper we explain how OBK plays a role as one of the main collectors and managers of Meta-data produced on-line, and we'll also focus on the Web facilities already available. The usage of the web interface as an electronic run logbook is also explained, together with the future extensions.
We describe the technology used in OBK development and how we arrived at the present level explaining the previous experience with various DBMS technologies. The extensive performance evaluations that have been performed and the usage in the production environment of the ATLAS test beams are also analysed.
I. INTRODUCTION
Experiments in High Energy Physics (HEP) are becoming increasingly more complex. The construction of a new particle accelerator and associated detectors is a technological challenge that also encompasses the development of an associated software system.
The Large Hadron Collider (LHC), currently being built at the European Organization for Nuclear Research (CERN), will have four different detectors installed. The ATLAS [1] (A Toroidal LHC ApparatuS) detector at the LHC will require a complex trigger system. This trigger will have to reduce the original 40MHz of p-p interaction rate to a manageable 100Hz for storage. The total mass storage, including raw, reconstructed, simulated and calibration data, exceeds 1 PetaByte per year. The three major types of data to be stored by ATLAS are:
• raw event data, data collected by the detector resulting from the particle collisions (events)
• conditions and Meta-data, which includes calibration constants, run conditions, accelerator conditions, trigger settings, detector configuration, and Detector Control System (DCS) conditions that determine the conditions under which every physics event occurred. These conditions are stored with an interval of validity (typically time or run number) and retrieved using time (or run number) as a key [2]. Also the DAQ status and time evolution of the Configuration Database is included in this type of data.
• reconstructed data, corresponding to objects with physical meaning (e.g. electrons, tracks, etc.) that are the result of applying software algorithms to the raw data, taking into consideration the conditions under which the raw data was taken.
An important part of the conditions database data is associated with the Trigger and Data Acquisition System [3] (TDAQ). Through the TDAQ system flow very different types of data (e.g. calibration and alignment information, configuration databases information) [4] that appear as conditions data for storage. Some types of this data, such as the accelerator's beam parameters, detector configuration, test-beam table position, are recorded by a specific component of the TDAQ system, the Online Bookkeeper (OBK). The OBK is important as a source of data for the conditions database, as an entry point to analysis jobs on the raw data and as a debug resource for the DAQ system. For this reason, OBK can be qualified as one of the biggest collectors and managers of Meta-data produced online for the ATLAS experiment. The aim of the OBK is to store information describing the data acquired by the DAQ and to provide offline access to this information [5]. It is also a powerful tool for users in the control room, who can use it as a Run log book to attach their comments or other types of support information.
II. OBK AND THE ONLINE SOFTWARE
This section will provide an overview of how OBK works in the Online Software framework. As OBK is a software package of the Online Software system for the ATLAS TDAQ, the architecture of both the Online Software and OBK will be briefly described.
A. The Online Software architecture
The role of the Online Software is to provide configuration, control and monitoring services to the other TDAQ systems. It does not include the processing and transportation of physics data. All packages of the Online Software create a framework generic enough to allow supervision of many distinct data taking configurations. From the architectural point of view there are three different groups of components: Configuration, Control and Monitoring. Table I shows these main packages and their components.
B. The OBK architecture
The OBK is part of the Configuration group. A first prototype was developed 4 years ago and since then it has evolved significantly. Figure 1 shows a simplified diagram of OBK together with the other Online packages that OBK interacts with, namely the Information Service (IS), the Message Reporting System (MRS) and the Configuration Databases (Conf. DB). Both IS and MRS use the IPC (Inter Process Communication) package, a CORBA implementation, as messaging backbone [7]. MRS provides the facility which allows all software components in the ATLAS DAQ system and related processes to report error messages to other components of the distributed TDAQ system. IS provides the possibility for inter-application information exchange in the distributed environment. These three components of the Online SW are the providers of information to OBK databases. The data stored in OBK databases will be automatically available worldwide. This process begins with the data acquisition process in the distributed environment and ends up with the final users that will query the database for the most relevant information about each data taking period. Figure 1 also shows schematically a general architecture of OBK. The OBK acquisition software subscribes to relevant MRS and IS servers in order to receive MRS/IS messages exchanged between components via CORBA/IIOP callbacks. The information is then stored using a per-run philosophy. The information stored includes the date/time of each run, the basic physics parameters, the status of a run (whether it was successfully completed or not), etc.
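As a rough sketch of the kind of per-run record OBK accumulates from its MRS/IS subscriptions, the following snippet (our own illustration, not the actual OBK code, which is written in C++) uses field names taken from the run headers and search fields mentioned later in the text (StartDate, EndDate, RunStatus, TriggerType, BeamType, ...); types and structure are assumptions:

from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class RunRecord:
    partition: str
    run_number: int
    start_date: datetime
    end_date: Optional[datetime] = None
    run_status: str = "Unknown"                # e.g. "Good" / "Bad"
    num_events: int = 0
    max_events: int = 0
    trigger_type: str = ""                     # e.g. "Cosmic", "Calibration", "Physics"
    beam_type: str = ""                        # e.g. "Muons", "Electrons"
    detector_mask: int = 0
    mrs_messages: List[str] = field(default_factory=list)   # MRS messages received via callbacks
    is_values: List[dict] = field(default_factory=list)     # IS information published during the run
    comments: List[str] = field(default_factory=list)       # log-book entries added later by users

run = RunRecord(partition="TestBeam2002", run_number=1234,
                start_date=datetime(2002, 9, 1, 10, 0), trigger_type="Physics")
run.mrs_messages.append("Start of Run")
run.run_status = "Good"
print(run.run_number, run.run_status)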
The data is then made available for offline usage. The users dispose of a set of tools, like the Web interface and the C++ API, which can be used to interact with OBK databases. They can retrieve data for a particular Run, or a set of Runs, or append new relevant information if needed. The web browser can display all the information of a Run including all stored IS information, and with the C++ API there is also the facility to search for instances of a particular IS class parameter or to cycle over all the run headers (StartDate, EndDate, TriggerType, etc.) in a partition.
III. PROTOTYPING WITH OBK
During the last 4 years, three different OBK prototypes were implemented. The aim was both to learn from the experience while trying to implement a package that meets the requirements [6], and to use different Data Base Management Systems (DBMS). This provided a better understanding of which DBMS best fulfills the OBK needs inside the Online Software framework. With this multi-technology approach we're gaining technological expertise about different DBMS technologies and are then able to make a solid recommendation for a technological and design solution for a production bookkeeper tool. This prototyping approach seems to achieve very good results in a long-term project like this one because each of the new OBK prototypes provides more functionality and performance gains over its predecessor. All of the prototypes use C++ as the programming language and the same basic architecture but different DBMS for the persistent storage.
A. Objectivity/DB based
At the beginning, the LHC experiments selected Objectivity/DB for the official DBMS persistent storage. An OBK implementation using this DBMS was then the first natural choice. This prototype was designed to take full advantage of the pure Object Oriented (OO) model of the DBMS. The model used with Objectivity/DB is organized in federations, databases, containers and objects; OBK was able to map the data coming from the TDAQ system into a repository structured in such a way.
The OBK Objectivity/DB based prototype was used in 2000 testbeams with success. For data retrieval a web browser was also developed. This allows users to get in 'touch' with the data in a very natural fashion. Since 2001, this prototype was abandoned and is no longer maintained.
B. OKS based
The second prototype implemented uses OKS as a persistency mechanism. OKS is a C++ based, in-memory persistent object manager developed as a package inside the Online Software [8]. OKS is also OO but not as sophisticated as Objectivity/DB. The data files used for storage are created as XML files. They are stored in the file system, which can be local, AFS or NFS. The access to the data is done by reading the files directly and not through a centralized server. The usage of XML files to store data is a very interesting feature because they are human readable and also highly portable.
The main reasons that led to this implementation were the fact that OKS is Open Source software (which makes it usable in any place without any licensing problems) and also that this DBMS is lighter and more oriented to systems with very high demands in terms of performance than Objectivity/DB. There are of course some disadvantages of using OKS, mainly the lack of features that other DBMS provide, like transactions.
The persistent object schema of this prototype is very similar to the first one because it is also OO featured and the intrinsic data storage philosophy can be very similar. A web browser with the same approach as the one from Objectivity is also provided.
This prototype provides extended features when compared to the first one, such as more programs to control the databases and a full-featured C++ API. All its new improvements were used by the users in the 2001 test beams at CERN. It behaved well and acquired several Megabytes of data to the local file system of a machine and later on AFS. Despite the good behavior of OBK, the problem of data dispersion and consistency soon arose, because this prototype uses several XML files to store information about each Run.
C. MySQL based
This is the only OBK prototype that uses a Relational DBMS. MySQL [9] is a well known, fast and reliable Open Source DBMS. It started a new phase in the OBK development: the phase of the relational model. The decision to implement a package such as OBK using MySQL was driven by the power of its underlying SQL engine and also by the desire to try a relational approach to OBK databases. Technically, a new database schema was implemented to allow mapping of data coming mainly from OO sources, forcing us to completely redesign the internal structure of OBK. We have achieved a mapping between an OO and a Relational schema that is suitable for OBK needs and started to use it.
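The paper does not reproduce the actual MySQL schema, but a plausible relational mapping of the per-run data described above might look as follows. SQLite is used here purely as a stand-in for MySQL so that the sketch runs without a database server; table and column names are our own assumptions:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE runs (
    run_number    INTEGER PRIMARY KEY,
    partition     TEXT,
    start_date    TEXT,
    end_date      TEXT,
    run_status    TEXT,       -- 'Good' / 'Bad'
    num_events    INTEGER,
    max_events    INTEGER,
    trigger_type  TEXT,       -- 'Cosmic', 'Calibration', 'Physics'
    beam_type     TEXT,
    detector_mask INTEGER
);
CREATE TABLE mrs_messages (
    run_number INTEGER REFERENCES runs(run_number),
    time       TEXT,
    severity   TEXT,
    text       TEXT
);
CREATE TABLE comments (
    run_number INTEGER REFERENCES runs(run_number),
    author     TEXT,
    time       TEXT,
    text       TEXT,
    attachment BLOB            -- optional attached file
);
""")
conn.execute("INSERT INTO runs (run_number, partition, run_status, trigger_type) "
             "VALUES (?, ?, ?, ?)", (1234, "TestBeam2002", "Good", "Physics"))
# the kind of search the web interface performs (run status, trigger type, dates, ...)
rows = conn.execute("SELECT run_number, partition FROM runs "
                    "WHERE run_status = 'Good' AND trigger_type = 'Physics'").fetchall()
print(rows)   # [(1234, 'TestBeam2002')]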
This prototype provides all the features of the previous ones, plus further enhancements regarding the users' needs and also the very important aspect of performance. In this implementation the concept of a log book was also introduced, successfully deployed and used (see section IV B for details).
The MySQL implementation was used successfully in the 2002 test beam. It recorded more than 1 GByte of data, including data coming from the DCS.
IV. OBK INTERFACES FOR DATA RETRIEVAL
OBK provides a set of interfaces that allow users to interact with the databases. Some tools coded in C++, which make some tasks very easy to execute, are also available. In this section the focus will be on the C++ interfaces (the Query API) and on the Web interface used to store information directly related to the users.
A. C++ Query API
Both the OKS and MySQL prototypes are distributed with a C++ Query API. This API exposes methods that allow data retrieval in a very user-oriented approach. The API encapsulates all the necessary mechanisms to get the correct values from the OBK databases. The users do not need any special skill (like how to perform SQL queries) regarding the database schema. This also helps to preserve the integrity of the database, because it does not allow users to manipulate it directly when, for example, adding new information related to a Run.
B. Web Browser
The latest version of the OBK browser includes several functionalities making it a powerful tool. It provides not only the functionality of displaying the data coming from the OBK data acquisition process just after it was collected through the Online System, but also the possibility of behaving like a Run Log Book. It was built with an excellent searching mechanism oriented to the final users, which will be the physicists. Figure 2 shows all the options of this search mechanism. There is a set of options on which the users can base their selection criteria, such as:
• RunStatus: Good/Bad
• MaxEvents: the maximum number of events of the Run
• Start Date/End Date: the start and end of a Run; this allows mapping the OBK per-run data onto a time interval
• TriggerType: Cosmic, Calibration, Physics
• BeamType: Muons, Electrons, etc.
Sorting options are also included. The OBK data display in this browser was driven by the need for a clear and very user-friendly interface where the data would be easily accessible to the users. After the selection of all the criteria that the user wants to meet, the result will appear in another web page similar to the one in Figure 3. The result presents to the users some relevant information about each Run: the Partition to which each Run belongs; the Run Number; the Start and End Date of the Run; the Run Status; the Number of Events; the Maximum Number of Events; the Trigger Type, Detector Mask and Beam Type. It is also possible to display more specific information, like for example which messages from the MRS were transferred between the various components of the Online Software. This is accessible by following the link provided for each Run. Figure 4 shows a typical page generated when selecting the option to display more detailed information on a particular Run. Through this new page it is possible to browse information including the messages from the MRS, from the IS, and the users' comments and attached files. There are two different approaches to storing comments in OBK databases:
• using binary programs provided for both online and offline comments
• using the web browser
Adding comments through the web browser also gives the option of attaching any type of file to a comment. This allows users to add information that they think might be relevant to the Run. Afterwards everyone can see the comments and their respective attached files. Some types of files are supported and will immediately be displayed in a different window. File types that are not supported will have to be downloaded and the appropriate program must be used to open them. The OBK browser itself also provides administrative tools, for example to create databases with the correct structure for OBK, and gives a set of options for user management: authentication, permissions, etc.
V. TESTING
For the evaluation of the various prototypes the focus was to analyse the different functionalities, how easy it was to map the data coming from the Online System, and how complex the code of each one became. On top of these issues, a set of scalability and performance tests was addressed. One other objective of these tests was to evaluate whether OBK can handle all the information produced in the final system.
Figure 6 shows the time to store a typical IS message as a function of the number of OBK data acquisition programs running simultaneously. This test was one of the tests performed in the scalability context regarding the final system.
A comparison between the three prototypes was also addressed. This test was performed with a typical Start of Run message coming from the MRS. The time for this test is not only the time spent to store the message itself but also that of all operations that this implies. This includes the creation of a new Run in the database with all associated operations, like the creation of new files (in the OKS prototype) or containers (in the case of the Objectivity/DB prototype). More details about these tests can be found in [11]. The MySQL prototype proved to be the fastest one, while the Objectivity/DB one is the slowest. It was clear from the test that there is a dependency on the Run number of the time spent to store a message of this kind in the case of both the Objectivity/DB and OKS prototypes. The slope of the OKS line is less than that of Objectivity/DB, but when a transaction becomes committed, the Objectivity/DB prototype gets better. We attribute these results not only to the evolution in the design from prototype to prototype but also to the fact that MySQL provides a faster engine that makes the time to store these messages negligible when compared with the other prototypes. Some other performance results are displayed in Tables II and III. The results presented for the Start of Run, End of Run and Comment represent the minimum and the maximum values observed during the tests. Tests were performed up to a maximum of 500 Runs. For the IS time it is the mean time to store an IS message from OBK, because it was observed that there was no significant growth. More information about these tests can be found in the OBK test report [10]. This document includes a detailed description of the test procedure and other results such as functionality tests.
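As an illustration of the kind of measurement reported in Tables II and III, the following toy harness (ours; it uses an in-memory SQLite table rather than the real OKS, Objectivity/DB or MySQL back ends) times a simplified "Start of Run" insertion as the number of stored Runs grows:

import sqlite3, time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE runs (run_number INTEGER PRIMARY KEY, start_date TEXT)")

def start_of_run(run_number):
    # time the creation of a new Run record, including the commit
    t0 = time.perf_counter()
    conn.execute("INSERT INTO runs VALUES (?, datetime('now'))", (run_number,))
    conn.commit()
    return (time.perf_counter() - t0) * 1e3       # milliseconds

timings = [start_of_run(n) for n in range(1, 501)]   # up to 500 Runs, as in the tests
print("first run: %.3f ms   500th run: %.3f ms" % (timings[0], timings[-1]))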
VI. SUMMARY AND FUTURE WORK
In this paper we presented a general overview of our experience of using different DBMSs in prototyping OBK in the ATLAS Online Software framework. Since the beginning of the project we tried to understand the problem of bookkeeping for the ATLAS experiment. For that reason, OBK evolved and it now provides some tools that can be qualified as Run log book tools. The last version of OBK which is the most performant and robust one is the result of this experience. But OBK work is still in progress. Included in the list of future improvements for OBK are:
• to make it DBMS independent in order to have a more general and dynamic architecture;
• create a set of new tools to extend the actual existing ones for the log book approach;
FIG. 1: Generic OBK architecture.

FIG. 2: OBK browser - search mechanism.

FIG. 3: OBK browser - result of the search.

FIG. 4: OBK browser - more information about a particular Run.

FIG. 5: OBK browser - adding a comment.

FIG. 6: OBK tests - scalability test performed on the MySQL implementation.
TABLE I: Online Software packages and their components.

Configuration
    Configuration Databases
    Online Bookkeeper
Control
    Run Control
    DAQ Supervisor
    Process Manager
    Resource Manager
    Integrated Graphical User Interface
Monitoring
    Information Service
    Message Reporting System
    Online Histogramming
    Event Monitoring
TABLE II: Time (in milliseconds) for Start of Run, End of Run, Comment and a typical IS message, for the OBK/OKS prototype.

Platform        Start Run   End Run   Comment   IS
Linux/egcs1.1   77-259      48-245    0,3       2,7
Linux/gcc2.96   47-196      29-184    0,3       1,9

TABLE III: Time (in milliseconds) for Start of Run, End of Run, Comment and a typical IS message, for the OBK/MySQL prototype.

Platform        Start Run   End Run   Comment   IS
Linux/egcs1.1   0,004       0,010     0,002     0,011
Linux/gcc2.96   0,018       0,021     0,004     0,020
Acknowledgments

The authors wish to thank Mario Monteiro from Softcontrol [12], who is the main developer of the OBK MySQL implementation web browser.
[1] ATLAS Collaboration, Technical Proposal for a General-Purpose pp Experiment at the LHC collider at CERN, CERN/LHCC/94-43, 1994.
[2] S. Paoli, Conditions DB, User Requirements and Analysis document, 2000. http://wwwdb.web.cern.ch/wwwdb/objectivity/docs/conditionsdb/urd/urd0.6.pdf
[3] ATLAS Collaboration, ATLAS High-Level Triggers, DAQ and DCS Technical Proposal, CERN/LHCC/2000-17, March 2000.
[4] A. Amorim et al., Requirements on the Conditions Database interface to T/DAQ, ATLAS TDAQ/DCS Online Software - Lisbon Group, 2003.
[5] A. Amorim and H. Wolters, Design of the Run Bookkeeper System for the ATLAS DAQ prototype-1, ATLAS TDAQ Online Software, 1997.
[6] A. Amorim, L. Lucio, L. Pedro, H. Wolters and A. Ribeiro, Online Bookkeeper Requirements, ATLAS TDAQ Online Software, 2002.
[7] A. Amorim et al., Use of CORBA in the ATLAS DAQ Prototype, IEEE Transactions on Nuclear Science, vol. 45, no. 4, August 1998.
[8] R. Jones, L. Mapelli, Y. Ryabov and I. Soloviev, The OKS Persistent In-memory Object Manager, Xth IEEE Real Time 1997 Conference. http://atddoc.cern.ch/Atlas/Conferences/RTI97/Bob/paper146/paper-146.html
[9] MySQL web page: http://www.mysql.com
[10] L. Lucio, L. Pedro and A. Amorim, (Unit) Test Report of the Online Book-keeper (OBK) for the Atlas DAQ Online Software, ATLAS TDAQ Online Software, November 2002. http://atddoc.cern.ch/Atlas/Notes/177/OBKTestReport.pdf
[11] I. Alexandrov et al., Experience using different DBMSs in prototyping a Book-keeper for ATLAS' DAQ software, Proceedings of CHEP'01, Beijing, China, 2001, pp. 248-251.
[12] Softcontrol Web: http://www.softcontrolweb.com
|
[] |
[] |
[
"Branko Dragovich †e-mail:[email protected] \nInstitute of Physics\nP.O. Box 5711001Belgrade, DubnaSerbia and Montenegro, Russia\n"
] |
[
"Institute of Physics\nP.O. Box 5711001Belgrade, DubnaSerbia and Montenegro, Russia"
] |
[] |
A brief review of a superanalysis over real and p-adic superspaces is presented. Adelic superspace is introduced and an adelic superanalysis, which contains real and p-adic superanalysis, is initiated.
| null |
[
"https://export.arxiv.org/pdf/hep-th/0401044v1.pdf"
] | 18,629,180 |
hep-th/0401044
|
3d864719515e1ead6875c2faad46dbcf6bf36277
|
8 Jan 2004 24-29 July 2003,
Branko Dragovich †e-mail:[email protected]
Institute of Physics
P.O. Box 5711001Belgrade, DubnaSerbia and Montenegro, Russia
8 Jan 2004 24-29 July 2003,Some p-Adic Aspects of Superanalysis * * Based on the talk presented at the International Workshop Supersymmetries and Quantum Symmetries,
A brief review of a superanalysis over real and p-adic superspaces is presented. Adelic superspace is introduced and an adelic superanalysis, which contains real and p-adic superanalysis, is initiated.
Introduction
Supersymmetry plays a very important role in the construction of new fundamental models of high energy physics beyond the Standard Model. Especially, it is significant in the formulation of String/M-theory, which is presently the best candidate for the unification of matter and interactions. A supersymmetry transformation can be regarded as a transformation in a superspace, which is an ordinary spacetime extended by some anticommuting (odd) coordinates. Spacetime in M-theory is eleven-dimensional with the Planck length as the fundamental one. According to the well-known uncertainty relation
∆x ≥ ℓ_0 = √(ħG/c³) ≈ 10^{−33} cm,        (1)
one cannot measure distances smaller than the Planck length ℓ_0. Since the derivation of (1) is based on the general assumption that real numbers and archimedean geometry are valid at all scales, it means that the usual approach breaks down and cannot be extended beyond the Planck scale without an adequate modification which contains non-archimedean geometry. The very natural modification is to use an adelic approach, since it contains the real and p-adic numbers which make up all possible completions of the rational numbers. As a result it follows that one has to consider possible relations between adelic and supersymmetry structures. In this report we review some aspects of p-adic superanalysis and introduce adelic superanalysis, which is a basis for the investigation of the corresponding p-adic and adelic supersymmetric models.
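A quick numerical check of the Planck length appearing in (1), using standard values of the constants:

# numerical evaluation of l_0 = sqrt(hbar*G/c^3)
hbar = 1.054571817e-34   # J s
G    = 6.67430e-11       # m^3 kg^-1 s^-2
c    = 2.99792458e8      # m/s
l0 = (hbar * G / c**3) ** 0.5
print(l0 * 100)          # ~1.6e-33 cm, consistent with (1)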
2 Some basic properties of p-adic numbers and adeles
Let us first recall that numerical experimental results belong to the field of rational numbers Q. On the Q one can introduce the usual absolute value | · | ∞ and p-adic absolute value | · | p for each prime number p. Completion of Q with respect to | · | ∞ gives the field of real numbers Q ∞ ≡ R. If we replace | · | ∞ by | · | p then completion of Q yields a new number field known as the field of p-adic numbers Q p . Consequently, Q is dense in R as well as in Q p for every p. R has archimedean metric d ∞ (x, y) = |x − y| ∞ and Q p has non-archimedean metric
(ultrametric) d p (x, y) = |x − y| p , i.e. d p (x, y) ≤ max{d p (x, z), d p (z, y)}.
It is worth pointing out that R and Q_p exhaust all possibilities to get number fields by completion of Q. Any p-adic number x ∈ Q_p can be presented in the unique way x = p^ν ∑_{k=0}^{∞} a_k p^k, where ν ∈ Z, a_k ∈ {0, 1, ..., p − 1}, which resembles the representation of a real number y = ±10^µ ∑_{k=0}^{−∞} b_k 10^k, µ ∈ Z, b_k ∈ {0, 1, ..., 9}, but with expansion in the opposite direction. There are two main types of functions with p-adic argument: p-adic valued and real-valued (or complex-valued). The reader who is not familiar with p-adic numbers and their functions can see, e.g. [1].
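The following small Python sketch (our own illustration, not from the paper) computes the p-adic absolute value and the first digits a_k of the canonical expansion described above, for rational numbers:

from fractions import Fraction

def padic_valuation(x, p):
    """nu such that x = p^nu * (p-adic unit), for a nonzero rational x."""
    x = Fraction(x)
    nu, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p; nu += 1
    while den % p == 0:
        den //= p; nu -= 1
    return nu

def padic_abs(x, p):
    x = Fraction(x)
    return Fraction(0) if x == 0 else Fraction(1, p) ** padic_valuation(x, p)

def padic_digits(x, p, ndigits=8):
    """First digits a_0, a_1, ... of a p-adic integer x (rational with |x|_p <= 1)."""
    x, digits = Fraction(x), []
    for _ in range(ndigits):
        a = next(a for a in range(p) if (x - a).numerator % p == 0)   # a congruent to x mod p
        digits.append(a)
        x = (x - a) / p
    return digits

print(padic_abs(Fraction(45), 3))        # |45|_3 = 1/9, since 45 = 3^2 * 5
print(padic_digits(Fraction(-1), 5))     # -1 = 4 + 4*5 + 4*5^2 + ... in Q_5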
To regard simultaneously real and p-adic properties of rational numbers and their completions one uses concept of adeles. An adele x (see, e.g. [2]) is an infinite sequence x = (x ∞ , x 2 , · · · , x p , · · ·), where x ∞ ∈ R and x p ∈ Q p with the restriction that for all but a finite set S of primes p one has x p ∈ Z p = {x ∈ Q p : |x| p ≤ 1} = {x ∈ Q p : x = a 0 + a 1 p + a 2 p 2 + · · ·}. Componentwise addition and multiplication endow a ring structure to the set of adeles A. A can be defined as
A = ∪_S A_S,   A_S = R × ∏_{p∈S} Q_p × ∏_{p∉S} Z_p.        (2)
Q is naturally embedded in A. Ring A is also a locally compact topological space. Important functions on A are related to mappings f : A → A and ϕ : A → R (C).
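To make the restricted-product condition concrete: for a rational number x one has |x|_p ≤ 1 (i.e. x ∈ Z_p) for all but finitely many primes p, and the well-known product formula |x|_∞ ∏_p |x|_p = 1 holds. A short self-contained check (ours):

from fractions import Fraction
from sympy import primerange

def vp(n, p):                      # p-adic valuation of a nonzero integer n
    n, k = abs(n), 0
    while n % p == 0:
        n //= p; k += 1
    return k

def abs_p(x, p):                   # p-adic absolute value of a nonzero rational x
    return Fraction(p) ** (vp(x.denominator, p) - vp(x.numerator, p))

x = Fraction(-50, 27)
# adele condition: |x|_p <= 1 for all but finitely many p
print([p for p in primerange(2, 100) if abs_p(x, p) > 1])     # [3]
# product formula: |x|_inf * prod_p |x|_p = 1 (only primes dividing 50 or 27 contribute)
prod = Fraction(abs(x.numerator), x.denominator)              # |x|_inf
for p in (2, 3, 5):
    prod *= abs_p(x, p)
print(prod)                                                   # 1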
Elements of p-adic and adelic string theory
A notion of p-adic string and the hypothesis on the existence of non-archimedean geometry at the Planck scale were introduced by Volovich [3] and have been investigated by many researchers (reviews of an early period are in [1] and [4]). Very successful p-adic analogues of the Veneziano and Virasoro-Shapiro amplitudes were proposed in [5] as the corresponding Gel'fand-Graev [2] beta functions. Using this approach, Freund and Witten obtained [6] an attractive adelic formula A_∞(a, b) ∏_p A_p(a, b) = 1, which states that the product of the crossing symmetric Veneziano (or Virasoro-Shapiro) amplitude and all its p-adic counterparts equals unity (or a definite constant). This gives the possibility to consider an ordinary four-point function, which is rather complicated, as an infinite product of its inverse p-adic analogues, which have simpler forms. The ordinary crossing symmetric Veneziano amplitude can be defined in a few equivalent ways and its integral form is
A_∞(a, b) = ∫_R |x|_∞^{a−1} |1 − x|_∞^{b−1} dx,        (3)
where it is taken ħ = 1, T = 1/π, and a = −α(s) = −1 − s/2, b = −α(t), c = −α(u), with the conditions s + t + u = −8 and a + b + c = 1.
According to [5] p-adic Veneziano amplitude is a simple p-adic counterpart of (3), i.e.
A_p(a, b) = ∫_{Q_p} |x|_p^{a−1} |1 − x|_p^{b−1} dx,        (4)
where now x ∈ Q p . In both (3) and (4) kinematical variables a, b, c are real or complex-valued parameters. Thus in (4) only string world-sheet parameter x is treated as p-adic variable, and all other quantities maintain their usual real values. Unfortunately, there is a problem to extend the above product formula to the higher-point functions. Some possibilities to construct p-adic superstring amplitudes are considered in [7] (see also [8], [9], and [10]). A recent interest in p-adic string theory has been mainly related to an extension of adelic quantum mechanics [11] and p-adic path integrals to string amplitudes [12]. An effective nonlinear p-adic string theory (see, e.g. [4]) with an infinite number of space and time derivatives has been recently of a great interest in the context of the tachyon condensation [13]. It is also worth mentioning successful formulation and development of p-adic and adelic quantum cosmology (see [14] and references therein) which demonstrate discreteness of minisuperspace with the Planck length ℓ 0 as the elementary one.
Elements of p-adic and adelic superanalysis
Here some elements of real and p-adic superanalysis are first presented, along the approach introduced by Vladimirov and Volovich [15] and elaborated by Khrennikov [16]. Then I shall generalize this approach to adelic superanalysis.
Let
Λ(Q_v) = Λ_0(Q_v) ⊕ Λ_1(Q_v) be a Z_2-graded vector space over Q_v (v = ∞, 2, 3, ..., p, ...), where elements a ∈ Λ_0(Q_v) and b ∈ Λ_1(Q_v) have even (p(a) = 0) and odd (p(b) = 1) parities. Such a space Λ(Q_v) is called a v-adic (real and p-adic) superalgebra if it is endowed with an associative algebra structure with unity and parity multiplication p(ab) ≡ p(a) + p(b) (mod 2). The supercommutator is defined in the usual way: [a, b} = a b − (−1)^{p(a)p(b)} b a. The superalgebra Λ(Q_v) is called (super)commutative if [a, b} = 0 for any a ∈ Λ_0(Q_v) and b ∈ Λ_1(Q_v). As illustrative examples of commutative superalgebras one can consider finite-dimensional v-adic Grassmann algebras G(Q_v : η_1, η_2, ..., η_m), whose dimension is 2^m and whose generators η_1, η_2, ..., η_m satisfy the anticommutation relations η_i η_j + η_j η_i = 0. The role of the norm necessary to build analysis on a commutative superalgebra Λ(Q_v) is played by the absolute value |·|_∞ in the real case and by the p-adic norm |·|_p in the p-adic cases.
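A minimal computational sketch (ours) of such a finite-dimensional Grassmann algebra, checking the anticommutation relations η_i η_j + η_j η_i = 0, the nilpotency η_i² = 0, and the 2^m-dimensional basis; coefficients are taken rational purely for illustration:

from fractions import Fraction
from itertools import combinations

def mul_monomials(a, b):
    # Multiply two Grassmann monomials given as strictly increasing index tuples.
    # Returns (sign, merged tuple), or (0, None) if a generator repeats (eta_i^2 = 0).
    seq = list(a) + list(b)
    if len(set(seq)) != len(seq):
        return 0, None
    sign = 1
    for i in range(len(seq)):                  # bubble sort; one swap = one factor (-1)
        for j in range(len(seq) - 1 - i):
            if seq[j] > seq[j + 1]:
                seq[j], seq[j + 1] = seq[j + 1], seq[j]
                sign = -sign
    return sign, tuple(seq)

def mul(x, y):
    # Algebra elements are dicts {monomial tuple: rational coefficient}.
    out = {}
    for ma, ca in x.items():
        for mb, cb in y.items():
            s, m = mul_monomials(ma, mb)
            if s:
                out[m] = out.get(m, Fraction(0)) + s * ca * cb
    return {m: c for m, c in out.items() if c != 0}

m = 3
eta = [{(i,): Fraction(1)} for i in range(m)]          # generators eta_0, ..., eta_{m-1}
ab, ba = mul(eta[0], eta[1]), mul(eta[1], eta[0])
assert all(ab[k] + ba.get(k, 0) == 0 for k in ab)      # eta_i eta_j + eta_j eta_i = 0
assert mul(eta[2], eta[2]) == {}                       # nilpotency
basis = [c for k in range(m + 1) for c in combinations(range(m), k)]
print(len(basis))                                      # 2**m = 8 basis monomials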
Let Λ(Q v ) be a fixed commutative v-adic superalgebra. v-Adic superspace of dimension (n, m) over Λ(Q v ) is
Q^{n,m}_{Λ(Q_v)} = Λ_0^n(Q_v) × Λ_1^m(Q_v)        (5)
and it is an extension of the standard v-adic space. In the sequel we will mainly have in mind that Λ_0(Q_v) = Q_v, or that Q_v is replaced by Q_v(√τ), where √τ ∉ Q_v.
Then our v-adic (i.e. real and p-adic) superspace can be defined as Q^{n,m}_{Λ(Q_v)} = Q_v^n × Λ_1^m(Q_v), whose points are X^{(v)} = (X_1^{(v)}, X_2^{(v)}, ..., X_n^{(v)}, X_{n+1}^{(v)}, ..., X_{n+m}^{(v)}) = (x_1^{(v)}, x_2^{(v)}, ..., x_n^{(v)}, θ_1^{(v)}, ..., θ_m^{(v)}) = (x^{(v)}, θ^{(v)}), where the coordinates x_1^{(v)}, x_2^{(v)}, ..., x_n^{(v)} are commuting, p(x_i^{(v)}) = 0, and θ_1^{(v)}, θ_2^{(v)}, ..., θ_m^{(v)} are anticommuting (Grassmann), p(θ_j^{(v)}) = 1. Since the supercommutator [X_i^{(v)}, X_j^{(v)}} = X_i^{(v)} X_j^{(v)} − (−1)^{p(X_i^{(v)}) p(X_j^{(v)})} X_j^{(v)} X_i^{(v)} = 0, the coordinates X_i^{(v)} (i = 1, 2, ..., n+m) are called supercommuting. A norm of X^{(v)} can be defined as ||X^{(v)}|| = max{ |x_i^{(v)}|_v , |θ_j^{(v)}|_v }. In the sequel, to decrease the number of indices, we often omit them when they are understood from the context.
One can define functions F_v(X) on open subsets of the superspace Q^{n,m}_{Λ(Q_v)}, as well as their continuity and differentiability (for some details, see [15] and [16]). One has to distinguish the left and the right partial derivatives ∂_L F_v / ∂θ_j and ∂_R F_v / ∂θ_j. It is worth noting that derivatives of p-adic valued functions of p-adic arguments are formally the same as those for real functions of real arguments. Integral calculus for p-adic valued functions is more subtle than in the real case, since there is no p-adic valued Lebesgue measure [17]. One can use antiderivatives, but one has to take care about pseudoconstants, which are some exotic functions with zero derivatives. However, for analytic functions one can well define definite integrals using the corresponding antiderivatives [1]. Integration with anticommuting variables is introduced axiomatically, requiring linearity and translation invariance in both the real and p-adic cases. In particular, one obtains the following two indefinite integrals: ∫ dθ_j^{(v)} = 0 and ∫ θ_j^{(v)} dθ_j^{(v)} = 1.
When Q_v^n corresponds to an n-dimensional spacetime, functions F_v(x, θ) on the superspace Q^{n,m}_{Λ(Q_v)} are called v-adic superfields. Due to the fact that there is only a finite number of non-zero products of anticommuting variables, expansions of F_v(x, θ) over θ_j (j = 1, 2, ..., m) are finite, i.e. there are 2^m terms in the corresponding Taylor expansion. Description of supersymmetric models by superfields is very compact and elegant [18].
We can now turn to adelic superanalysis. It is natural to define the corresponding Z 2 -graded vector space over A as
Λ(A) = ∪_S Λ_S,   Λ_S = Λ(R) × ∏_{p∈S} Λ(Q_p) × ∏_{p∉S} Λ(Z_p),        (6)
where Λ(Z p ) = Λ 0 (Z p ) ⊕ Λ 1 (Z p ) is a graded vector space over the ring of padic integers Z p and S is a finite set of primes p. Graded vector space (6) becomes adelic superalgebra by requiring that Λ(R) , Λ(Q p ) , Λ(Z p ) are superalgebras. Adelic supercommutator may be regarded as a collection of real and all p-adic supercommutators. Thus adelic superalgebra (6) is commutative. An example of commutative adelic superalgebra is the following adelic Grassmann algebra:
G(A : η_1, η_2, ..., η_m) = ∪_S G_S(η_1, η_2, ..., η_m),
G_S(η_1, ..., η_m) = G(R : η_1, ..., η_m) × ∏_{p∈S} G(Q_p : η_1, ..., η_m) × ∏_{p∉S} G(Z_p : η_1, ..., η_m).        (7)
Adelic superspace of dimension (n, m) has the form
A^{n,m}_{Λ(A)} = ∪_S A^{n,m}_{Λ(A),S},   A^{n,m}_{Λ(A),S} = R^{n,m}_{Λ(R)} × ∏_{p∈S} Q^{n,m}_{Λ(Q_p)} × ∏_{p∉S} Z^{n,m}_{Λ(Z_p)},        (8)
where Z^{n,m}_{Λ(Z_p)} is the (n, m)-dimensional p-adic superspace over the superalgebra Λ(Z_p). Closer to supersymmetric models is the superspace A^{n,m}_{Λ(A)} = ∪_S A^{n,m}_{Λ(A),S}, where
A^{n,m}_{Λ(A),S} = (R^n × Λ_1^m(R)) × ∏_{p∈S} (Q_p^n × Λ_1^m(Q_p)) × ∏_{p∉S} (Z_p^n × Λ_1^m(Z_p)).        (9)
Points of the adelic superspace have the coordinate form X = (X^{(∞)}, X^{(2)}, ..., X^{(p)}, ...), where for all but a finite set S of primes one has ||X^{(p)}|| = max_i |X_i^{(p)}|_p ≤ 1. The corresponding adelic valued functions (superfields) must satisfy the adelic structure, i.e. F(X) = (F_∞, F_2, ..., F_p, ...) with the condition |F_p|_p ≤ 1 for all but a finite set S of primes. In the spirit of this approach one can continue to build adelic superanalysis.
Concluding remarks
In this report some elements of a p-adic and adelic generalization of superanalysis over the real numbers are presented. We have been restricted to superanalysis over the field of p-adic numbers Q_p and the corresponding ring of adeles A. It is worth noting that algebraic extensions of Q_p give many more possibilities than in the real case, where there is only one extension, i.e. the field of complex numbers C = R(√−1). In fact, there is at least one τ ∈ Q_p such that Q_p(τ^{1/n}) ≠ Q_p for a fixed p and any integer n ≥ 2. Note that there are three and seven distinct p-adic quadratic extensions Q_p(√τ) if p ≠ 2 and p = 2, respectively. Using these quadratic extensions, supersymmetric quantum mechanics with p-adic valued functions is constructed by Khrennikov [16]. The above approach to adelic superanalysis may be easily generalized to the case when R → C, Q_p → Q_p(√τ), Z_p → Z_p(√τ). The algebraically closed and ultrametrically complete analogue of C is C_p, which is an infinite-dimensional vector space. Thus p-adic algebraic extensions offer an enormously rich and very challenging field of research in analysis as well as in superanalysis. It is also very desirable to find a formulation of superanalysis which would be a basis for a supersymmetric generalization of complex-valued p-adic and adelic quantum mechanics [11] as well as of related quantum field theory and Superstring/M-theory [12].
Acknowledgements The work on this paper was supported in part by the Serbian Ministry of Science, Technologies and Development under contract No 1426 and by RFFI grant 02-01-01084 .
[1] V.S. Vladimirov, I.V. Volovich and E.I. Zelenov, p-Adic Analysis and Mathematical Physics, World Scientific, Singapore, 1994.
[2] I.M. Gel'fand, M.I. Graev and I.I. Piatetskii-Shapiro, Representation Theory and Automorphic Functions (in Russian), Nauka, Moscow, 1966.
[3] I.V. Volovich, p-Adic string, Class. Quantum Grav. 4 (1987) L83-L87.
[4] L. Brekke and P.G.O. Freund, p-Adic Numbers in Physics, Phys. Rep. 233 (1993) 1-63.
[5] P.G.O. Freund and M. Olson, Non-Archimedean strings, Phys. Lett. B 199 (1987) 186-190.
[6] P.G.O. Freund and E. Witten, Adelic string amplitudes, Phys. Lett. B 199 (1987) 191-194.
[7] I.Ya. Aref'eva, B.G. Dragovich and I.V. Volovich, p-Adic Superstring, Phys. Lett. B 214 (1988) 339-349.
[8] L. Brekke, P.G.O. Freund, M. Olson and E. Witten, Non-archimedean string dynamics, Nucl. Phys. B 302 (1988) 365-402.
[9] Ph. Ruelle, E. Thiran, D. Werstegen and J. Weyers, Adelic String and Superstring Amplitudes, Mod. Phys. Lett. A 4 (1989) 1745-1752.
[10] V.S. Vladimirov, Adelic Formulas for Gamma and Beta Functions of One-Class Quadratic Fields: Applications to 4-Particle Scattering String Amplitudes, Proc. Steklov Math. Institute 228 (2000) 67-80.
[11] B. Dragovich, Adelic Model of Harmonic Oscillator, Theor. Math. Phys. 101 (1994) 1404-1412; Adelic Harmonic Oscillator, Int. J. Mod. Phys. A 10 (1995) 2349-2365.
[12] B. Dragovich, On Adelic Strings, hep-th/0005200; On p-Adic and Adelic Generalization of Quantum Field Theory, Nucl. Phys. B (Proc. Suppl.) 102, 103 (2001) 150-155; p-Adic and Adelic Quantum Mechanics, hep-th/0312046.
[13] D. Ghoshal and A. Sen, Tachyon Condensation and Brane Descent Relations in p-Adic String Theory, Nucl. Phys. B 584 (2000) 300-312.
[14] G.S. Djordjević, B. Dragovich, Lj. Nešić and I.V. Volovich, p-Adic and Adelic Minisuperspace Quantum Cosmology, Int. J. Mod. Phys. A 17 (2002) 1413-1433.
[15] V.S. Vladimirov and I.V. Volovich, Superanalysis. I. Differential Calculus, Teor. Mat. Fizika 59 (1984) 3-27; Superanalysis. II. Integral Calculus, Teor. Mat. Fizika 60 (1984) 169-198.
[16] A.Yu. Khrennikov, Superanalysis (in Russian), Nauka, Moscow, 1997.
[17] W.H. Schikhof, Ultrametric Calculus: an introduction to p-adic analysis, Cambridge U.P., Cambridge, 1984.
[18] J. Wess and J. Bagger, Supersymmetry and Supergravity, Princeton Univ. Press, Princeton, 1983.
|
[] |
[
"Impact of Pointing Errors on the Performance of Mixed RF/FSO Dual-Hop Transmission Systems",
"Impact of Pointing Errors on the Performance of Mixed RF/FSO Dual-Hop Transmission Systems"
] |
[
"Ieee ",
"Communications Letters ",
"X Vol ",
"No ",
"Xxx Xx "
] |
[] |
[] |
In this work, the performance analysis of a dualhop relay transmission system composed of asymmetric radiofrequency (RF)/free-space optical (FSO) links with pointing errors is presented. More specifically, we build on the system model presented in [1] to derive new exact closed-form expressions for the cumulative distribution function, probability density function, moment generating function, and moments of the end-to-end signal-to-noise ratio in terms of the Meijer's G function. We then capitalize on these results to offer new exact closed-form expressions for the higher-order amount of fading, average error rate for binary and M -ary modulation schemes, and the ergodic capacity, all in terms of Meijer's G functions. Our new analytical results were also verified via computer-based Monte-Carlo simulation results.Index Terms-Asymmetric dual-hop relay system, pointing errors, mixed RF/FSO systems.I. INTRODUCTIONI N recent times, free-space optical (FSO) or optical wireless communication systems have gained an increasing interest due to its various characteristics including higher bandwidth and higher capacity compared to the traditional radio frequency (RF) communication systems. In addition, FSO links are license-free and hence are cost-effective relative to the traditional RF links. These features of FSO communication systems potentially enable solving the issues that the RF communication systems face due to the expensive and scarce spectrum [2]-[6]. However, the atmospheric turbulence may lead to a significant degradation in the performance of the FSO communication systems[2]. Additionally, thermal expansion, dynamic wind loads, and weak earthquakes result in the building sway phenomenon that causes vibration of the transmitter beam leading to a misalignment between transmitter and receiver known as pointing error. These pointing errors may lead to significant performance degradation and are a serious issue in urban areas, where the FSO equipments are placed on high-rise buildings [7]-[9].On the other hand, relaying technology has gained enormous attention for quite a while now since it not only provides wider and energy-efficient coverage but also increased capacity in the wireless communication systems. As such many efforts have been made to study the relay system performance under various fading conditions [10]-[13]. These independent studies consider symmetric channel conditions i.e. the links at the hops are similar in terms of the fading distributions though it is more practical to experience different/asymmetric link
|
10.1109/wcl.2013.042313.130138
|
[
"https://arxiv.org/pdf/1302.4225v1.pdf"
] | 7,945,440 |
1302.4225
|
b1b419ebc7ce5fb8500347db8053cb402576b031
|
Impact of Pointing Errors on the Performance of Mixed RF/FSO Dual-Hop Transmission Systems
2013 1
Ieee
Communications Letters
X Vol
No
Xxx Xx
Impact of Pointing Errors on the Performance of Mixed RF/FSO Dual-Hop Transmission Systems
2013 1arXiv:1302.4225v1 [cs.IT]
In this work, the performance analysis of a dualhop relay transmission system composed of asymmetric radiofrequency (RF)/free-space optical (FSO) links with pointing errors is presented. More specifically, we build on the system model presented in [1] to derive new exact closed-form expressions for the cumulative distribution function, probability density function, moment generating function, and moments of the end-to-end signal-to-noise ratio in terms of the Meijer's G function. We then capitalize on these results to offer new exact closed-form expressions for the higher-order amount of fading, average error rate for binary and M -ary modulation schemes, and the ergodic capacity, all in terms of Meijer's G functions. Our new analytical results were also verified via computer-based Monte-Carlo simulation results.Index Terms-Asymmetric dual-hop relay system, pointing errors, mixed RF/FSO systems.I. INTRODUCTIONI N recent times, free-space optical (FSO) or optical wireless communication systems have gained an increasing interest due to its various characteristics including higher bandwidth and higher capacity compared to the traditional radio frequency (RF) communication systems. In addition, FSO links are license-free and hence are cost-effective relative to the traditional RF links. These features of FSO communication systems potentially enable solving the issues that the RF communication systems face due to the expensive and scarce spectrum [2]-[6]. However, the atmospheric turbulence may lead to a significant degradation in the performance of the FSO communication systems[2]. Additionally, thermal expansion, dynamic wind loads, and weak earthquakes result in the building sway phenomenon that causes vibration of the transmitter beam leading to a misalignment between transmitter and receiver known as pointing error. These pointing errors may lead to significant performance degradation and are a serious issue in urban areas, where the FSO equipments are placed on high-rise buildings [7]-[9].On the other hand, relaying technology has gained enormous attention for quite a while now since it not only provides wider and energy-efficient coverage but also increased capacity in the wireless communication systems. As such many efforts have been made to study the relay system performance under various fading conditions [10]-[13]. These independent studies consider symmetric channel conditions i.e. the links at the hops are similar in terms of the fading distributions though it is more practical to experience different/asymmetric link
conditions at different hops i.e. each link may differ in the channel conditions from the other link [1], [14]- [17]. This is due to the fact that the signals on each hop are transmitted either via different communications systems or the signals might have to commute through physically different paths. For instance, as proposed in [1], a relaying system based on both FSO as well as RF characteristics can be expected to be more adaptive and constitute an effective communication system in a real-life environment.
The model utilized in our work is similar to the one presented in [1]. Although [1] lacks the motivation behind such a model, we understand and proceed with such a model based on the following explanation. Considering an uplink scenario, besides all the advantages of FSO over RF, what very much motivates this work is the concept of multiplexing, i.e. we can multiplex users with RF-only capability into a single FSO link. This comes with the reasoning that there exists a connectivity gap between the backbone network and the last-mile access network, and hence this last-mile connectivity can be delivered via high-speed FSO links [18]. For instance, in developing countries there might not be much of a fiber optic infrastructure, and increasing its reach and bandwidth to the last mile would require a huge amount of economic resources to dig up the current brown-field. It will be much better to simply install FSO transmitters and detectors on the high-rise buildings and cover the last mile by having the users with RF capability communicate via their respective RF bands, letting the rest be taken care of by the FSO links to get the traffic through to the backbone, as can be observed from Fig. 1. This multiplexing feature will avoid the bottleneck situation for the system capacity and in fact be a faster option relative to traditional RF-RF communications, with multiple RF signals being sent through a single FSO link at once. Hence, there are two outstanding features, among others, of this system that make it very advantageous over the current traditional system. Firstly, the maximum possible number of RF messages can be aggregated into a single FSO link, thereby utilizing the system to the maximum possible capacity. Simultaneously, the system benefits from another feature of having the RF link always available irrespective of FSO transmission, since the RF and the FSO operate on completely different sets of frequencies, allowing for no interference between them at any instant. Therefore, the RF frequency bands can be utilized by other possible devices/users around in range to their benefit while the FSO link is still under operation. Above all, having FSO will avoid any sort of interference also due to its point-to-point transmission feature, unlike RF where the transmission is a broadcast leading to possible interference. In Fig. 1
This FSO transmitter/detector can communicate with similar devices on other high-rise buildings and ultimately hop to the backbone. Additionally, to increase the spectral efficiency of such a system, one can study the effect on system performance of selecting the N best users to be multiplexed. Hence, owing to space limitations, this manuscript tackles only the simplest possible scenario/special case of such a heterogeneous system and studies its statistical characteristics and ultimately its performance measures, with on-going work addressing the issues mentioned above and/or shown in Fig. 1 and beyond. However, the results presented in [1] were derived under the assumption of no pointing errors in the FSO link and were limited to the cumulative distribution function (CDF)/outage probability (OP). In this work, we build on the model presented in [1] to study the impact of pointing errors on the performance of asymmetric RF/FSO dual-hop transmission systems with fixed gain relays. In particular, we derive the CDF, probability density function (PDF), moment generating function (MGF), and moments of the end-to-end signal-to-noise ratio (SNR) of such systems. We then apply this statistical characterization of the SNR to derive closed-form expressions for the higher-order amount of fading (AF), the average bit-error rate (BER) of binary modulation schemes, the average symbol error rate (SER) of M-ary amplitude modulation (M-AM), M-ary phase shift keying (M-PSK) and M-ary quadrature amplitude modulation (M-QAM), and the ergodic capacity, in terms of Meijer's G functions.
II. CHANNEL AND SYSTEM MODELS
We employ the same model as was employed in [1]; hence, the end-to-end SNR is given by $\gamma = \gamma_1\gamma_2/(\gamma_2 + C)$, where $\gamma_1$ represents the SNR of the RF hop (i.e. the S-R link), $\gamma_2$ represents the SNR of the FSO hop (i.e. the R-D link), and $C$ is a fixed relay gain [1], [10], [19].
The RF link (i.e. the S-R link) is assumed to follow Rayleigh fading, so its SNR follows an exponential distribution with PDF $f_{\gamma_1}(\gamma_1) = (1/\bar{\gamma}_1)\exp(-\gamma_1/\bar{\gamma}_1)$, where $\bar{\gamma}_1$ is the average SNR of the S-R link [19]. On the other hand, the FSO link (i.e. the R-D link) is assumed to experience Gamma-Gamma fading with pointing error impairments, whose SNR PDF under intensity modulation/direct detection (IM/DD) is given by [8, Eq. (12)], [9, Eq. (20)] and can be expressed in a simpler form by utilizing [20, Eq. (6.2.4)] as
$$f_{\gamma_2}(\gamma_2) = \frac{\xi^2}{2\,\gamma_2\,\Gamma(\alpha)\Gamma(\beta)}\; G^{3,0}_{1,3}\!\left(\alpha\beta\sqrt{\frac{\gamma_2}{\bar{\gamma}_2}}\;\middle|\;\begin{matrix}\xi^2+1\\ \xi^2,\ \alpha,\ \beta\end{matrix}\right), \qquad (1)$$
where $\bar{\gamma}_2$ is the average SNR of the R-D link, $\alpha$ and $\beta$ are the fading parameters related to the atmospheric turbulence conditions [3]-[5] (lower values of $\alpha$ and $\beta$ indicating more severe atmospheric turbulence), $\xi$ is the ratio between the equivalent beam radius at the receiver and the pointing error displacement standard deviation (jitter) at the receiver [8], [9], $\Gamma(\cdot)$ is the Gamma function as defined in [21, Eq. (8.310)], and $G(\cdot)$ is the Meijer's G function as defined in [21, Eq. (9.301)].
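Before deriving the closed-form statistics, it can help to have a quick Monte Carlo picture of this channel model. The sketch below is illustrative and not part of the original analysis: it draws the RF-hop SNR from an exponential distribution and builds the FSO-hop SNR from a Gamma-Gamma irradiance multiplied by a pointing-error factor sampled as $U^{1/\xi^2}$ (a commonly used model, assumed here), with the average SNR $\bar{\gamma}_2$ simply enforced empirically; all parameter values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_end_to_end_snr(n, gbar1=10.0, gbar2=10.0, C=0.6,
                          alpha=2.1, beta=3.5, xi=1.1):
    """Monte Carlo sketch of gamma = gamma1*gamma2/(gamma2 + C)."""
    g1 = rng.exponential(gbar1, n)                 # Rayleigh-faded RF hop -> exponential SNR
    Ia = rng.gamma(alpha, 1.0 / alpha, n)          # unit-mean Gamma variate
    Ib = rng.gamma(beta, 1.0 / beta, n)            # unit-mean Gamma variate
    hp = rng.uniform(0.0, 1.0, n) ** (1.0 / xi**2) # assumed pointing-error factor (A0 = 1)
    I = Ia * Ib * hp                               # Gamma-Gamma irradiance with pointing error
    g2 = I**2                                      # IM/DD: SNR proportional to irradiance squared
    g2 *= gbar2 / g2.mean()                        # normalise so the sample mean equals gbar2
    return g1 * g2 / (g2 + C)

snr = sample_end_to_end_snr(200_000)
print("empirical mean end-to-end SNR:", snr.mean())
print("empirical outage prob. Pr[gamma < 1]:", np.mean(snr < 1.0))
```

Such samples can later be used to sanity-check the closed-form CDF, error-rate and capacity expressions derived below.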
III. CLOSED-FORM STATISTICAL CHARACTERISTICS
A. Cumulative Distribution Function
The CDF is given by [10]
$$F_\gamma(\gamma) = \Pr\!\left[\frac{\gamma_1\gamma_2}{\gamma_2 + C} < \gamma\right], \qquad (2)$$
which can be written as
$$F_\gamma(\gamma) = \int_0^{\infty} \Pr\!\left[\frac{\gamma_1\gamma_2}{\gamma_2 + C} < \gamma \,\middle|\, \gamma_2\right] f_{\gamma_2}(\gamma_2)\, d\gamma_2 = 1 - \frac{\alpha\beta\,\xi^2}{2\sqrt{\bar{\gamma}_2}\,\Gamma(\alpha)\Gamma(\beta)}\, e^{-\gamma/\bar{\gamma}_1} \int_0^{\infty} \frac{1}{\sqrt{\gamma_2}}\, e^{-\gamma C/(\gamma_2 \bar{\gamma}_1)}\; G^{3,0}_{1,3}\!\left(\alpha\beta\sqrt{\frac{\gamma_2}{\bar{\gamma}_2}}\;\middle|\;\begin{matrix}\xi^2\\ \xi^2-1,\ \alpha-1,\ \beta-1\end{matrix}\right) d\gamma_2. \qquad (3)$$
Carrying out the integration, the CDF is obtained in exact closed form as
$$F_\gamma(\gamma) = 1 - A_1\, e^{-\gamma/\bar{\gamma}_1}\; G^{6,0}_{1,6}\!\left(\frac{B\,\gamma}{\bar{\gamma}_1}\;\middle|\;\begin{matrix}\kappa_1\\ \kappa_2\end{matrix}\right), \qquad (4)$$
where
$\kappa_1 = \frac{\xi^2}{2}+1$, $\kappa_2 = \frac{\xi^2}{2},\ \frac{\alpha}{2},\ \frac{\alpha+1}{2},\ \frac{\beta}{2},\ \frac{\beta+1}{2},\ 0$, $A_1 = \frac{\xi^2\, 2^{\alpha+\beta}}{8\pi\,\Gamma(\alpha)\Gamma(\beta)}$, and $B = \frac{(\alpha\beta)^2 C}{16\,\bar{\gamma}_2}$.
For the non-pointing errors case, when $\xi \to \infty$, it can be easily shown that the CDF in (4) converges to
$$F_\gamma(\gamma) = 1 - A_2\, e^{-\gamma/\bar{\gamma}_1}\; G^{5,0}_{0,5}\!\left(\frac{B\,\gamma}{\bar{\gamma}_1}\;\middle|\;\begin{matrix}-\\ \kappa_3\end{matrix}\right), \qquad (5)$$
where $\kappa_3 = \frac{\alpha}{2},\ \frac{\alpha+1}{2},\ \frac{\beta}{2},\ \frac{\beta+1}{2},\ 0$, and $A_2 = \lim_{\xi\to\infty} 2A_1/\xi^2 = \frac{2^{\alpha+\beta}}{4\pi\,\Gamma(\alpha)\Gamma(\beta)}$.
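As a quick numerical sanity check (not part of the original derivation), the closed-form CDF (4) can be evaluated directly with mpmath's Meijer G implementation, using $A_1$, $B$, $\kappa_1$ and $\kappa_2$ exactly as defined above; the channel parameter values below are only examples.

```python
import math
import mpmath as mp

def cdf_eq4(g, gbar1=10.0, gbar2=10.0, C=0.6, alpha=2.1, beta=3.5, xi=1.1):
    """Evaluate the closed-form CDF (4) at SNR threshold g (illustrative parameters)."""
    A1 = xi**2 * 2**(alpha + beta) / (8 * math.pi * mp.gamma(alpha) * mp.gamma(beta))
    B = (alpha * beta)**2 * C / (16 * gbar2)
    k1 = xi**2 / 2 + 1
    k2 = [xi**2 / 2, alpha / 2, (alpha + 1) / 2, beta / 2, (beta + 1) / 2, 0]
    G = mp.meijerg([[], [k1]], [k2, []], B * g / gbar1)   # G^{6,0}_{1,6}
    return 1 - A1 * mp.exp(-g / gbar1) * G

for g in (0.5, 1.0, 5.0, 20.0):
    print(g, cdf_eq4(g))
```

Comparing these values against the empirical CDF of the Monte Carlo samples sketched in Section II provides a simple consistency check of (4).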
B. Probability Density Function
Differentiating (4) with respect to γ, using the product rule then utilizing [22,Eq. (07.34.20.0001.01)], we obtain after some algebraic manipulations the PDF in exact closed-form in terms of Meijer's G functions as
$$f_\gamma(\gamma) = \frac{A_1}{2\gamma\bar{\gamma}_1}\, e^{-\gamma/\bar{\gamma}_1} \left[\, 2\bar{\gamma}_1\, G^{5,0}_{0,5}\!\left(\frac{B\gamma}{\bar{\gamma}_1}\;\middle|\;\begin{matrix}-\\ \kappa_3\end{matrix}\right) + \left(2\gamma - \xi^2\bar{\gamma}_1\right) G^{6,0}_{1,6}\!\left(\frac{B\gamma}{\bar{\gamma}_1}\;\middle|\;\begin{matrix}\kappa_1\\ \kappa_2\end{matrix}\right) \right]. \qquad (6)$$
For the non-pointing errors case, when ξ → ∞, it can be easily shown that the PDF in (6) converges to
$$f_\gamma(\gamma) = \frac{A_2}{16\bar{\gamma}_1\bar{\gamma}_2}\, e^{-\gamma/\bar{\gamma}_1} \left[\, 16\bar{\gamma}_2\, G^{5,0}_{0,5}\!\left(\frac{B\gamma}{\bar{\gamma}_1}\;\middle|\;\begin{matrix}-\\ \kappa_3\end{matrix}\right) + (\alpha\beta)^2 C\; G^{5,0}_{0,5}\!\left(\frac{B\gamma}{\bar{\gamma}_1}\;\middle|\;\begin{matrix}-\\ \kappa_4\end{matrix}\right) \right], \qquad (7)$$
where $\kappa_4 = \frac{\alpha}{2}-1,\ \frac{\alpha-1}{2},\ \frac{\beta}{2}-1,\ \frac{\beta-1}{2},\ 0$.
C. Moment Generating Function
The MGF, defined as $M_\gamma(s) \triangleq \mathbb{E}[e^{-\gamma s}]$, can be expressed in terms of the CDF as $M_\gamma(s) = s\int_0^\infty e^{-\gamma s} F_\gamma(\gamma)\, d\gamma$. Placing (4) into this equation and utilizing [21, Eq. (7.813.1)], we get after some manipulations the MGF of $\gamma$ as
$$M_\gamma(s) = 1 - \frac{s\, A_1}{s + 1/\bar{\gamma}_1}\; G^{6,1}_{2,6}\!\left(\frac{B}{s\bar{\gamma}_1 + 1}\;\middle|\;\begin{matrix}0,\ \kappa_1\\ \kappa_2\end{matrix}\right). \qquad (8)$$
When ξ → ∞ (i.e. non-pointing error case), the MGF in (8) can be easily shown to converge to
$$M_\gamma(s) = 1 - \frac{s\, A_2}{s + 1/\bar{\gamma}_1}\; G^{5,1}_{1,5}\!\left(\frac{B}{s\bar{\gamma}_1 + 1}\;\middle|\;\begin{matrix}0\\ \kappa_3\end{matrix}\right). \qquad (9)$$
D. Moments
The moments, defined as $\mathbb{E}[\gamma^n]$, can be expressed in terms of the complementary CDF (CCDF) $F^c_\gamma(\gamma) = 1 - F_\gamma(\gamma)$ as $\mathbb{E}[\gamma^n] = n\int_0^\infty \gamma^{n-1} F^c_\gamma(\gamma)\, d\gamma$. Now, placing (4) into this equation, the moments are obtained as
$$\mathbb{E}[\gamma^n] = n\, A_1\, \bar{\gamma}_1^{\,n}\; G^{6,1}_{2,6}\!\left(B\;\middle|\;\begin{matrix}1-n,\ \kappa_1\\ \kappa_2\end{matrix}\right). \qquad (10)$$
When ξ → ∞, the moments in (10) can be easily shown to converge to
$$\mathbb{E}[\gamma^n] = n\, A_2\, \bar{\gamma}_1^{\,n}\; G^{5,1}_{1,5}\!\left(B\;\middle|\;\begin{matrix}1-n\\ \kappa_3\end{matrix}\right). \qquad (11)$$
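For illustration, the moments (10), and the amount of fading that follows from them in the next section, can be evaluated numerically in the same way; the sketch below assumes the same illustrative parameter values as before.

```python
import math
import mpmath as mp

def moment_eq10(n, gbar1=10.0, gbar2=10.0, C=0.6, alpha=2.1, beta=3.5, xi=1.1):
    """n-th moment of the end-to-end SNR from eq. (10) (illustrative parameters)."""
    A1 = xi**2 * 2**(alpha + beta) / (8 * math.pi * mp.gamma(alpha) * mp.gamma(beta))
    B = (alpha * beta)**2 * C / (16 * gbar2)
    k1 = xi**2 / 2 + 1
    k2 = [xi**2 / 2, alpha / 2, (alpha + 1) / 2, beta / 2, (beta + 1) / 2, 0]
    return n * A1 * gbar1**n * mp.meijerg([[1 - n], [k1]], [k2, []], B)  # G^{6,1}_{2,6}

m1, m2 = moment_eq10(1), moment_eq10(2)
print("mean end-to-end SNR:", m1)
print("classical amount of fading AF = E[g^2]/E[g]^2 - 1:", m2 / m1**2 - 1)
```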
IV. APPLICATIONS TO THE PERFORMANCE OF ASYMMETRIC RF/FSO RELAY TRANSMISSION SYSTEMS
A. Higher-Order Amount of Fading
The AF is an important measure for the performance of a wireless communication system as it can be utilized to parameterize the distribution of the SNR of the received signal. In particular, the $n$-th order AF for the instantaneous SNR $\gamma$ is defined as $\mathrm{AF}^{(n)}_\gamma = \mathbb{E}[\gamma^n]/\mathbb{E}[\gamma]^n - 1$ [24]. Now, utilizing this definition by substituting (10) into it, we get the $n$-th order AF as
$$\mathrm{AF}^{(n)}_\gamma = n\, A_1^{\,1-n}\; G^{6,1}_{2,6}\!\left(B\;\middle|\;\begin{matrix}1-n,\ \kappa_1\\ \kappa_2\end{matrix}\right) \left[ G^{6,1}_{2,6}\!\left(B\;\middle|\;\begin{matrix}0,\ \kappa_1\\ \kappa_2\end{matrix}\right) \right]^{-n} - 1. \qquad (12)$$
For $n = 2$, as a special case, we get the classical AF [25] as
$$\mathrm{AF} = \mathrm{AF}^{(2)}_\gamma = 2\, A_1^{-1}\; G^{6,1}_{2,6}\!\left(B\;\middle|\;\begin{matrix}-1,\ \kappa_1\\ \kappa_2\end{matrix}\right) \left[ G^{6,1}_{2,6}\!\left(B\;\middle|\;\begin{matrix}0,\ \kappa_1\\ \kappa_2\end{matrix}\right) \right]^{-2} - 1. \qquad (13)$$
For the non-pointing errors case, when $\xi \to \infty$, it can be easily shown that the $n$-th order AF in (12) converges to
$$\mathrm{AF}^{(n)}_\gamma = n\, A_2^{\,1-n}\; G^{5,1}_{1,5}\!\left(B\;\middle|\;\begin{matrix}1-n\\ \kappa_3\end{matrix}\right) \left[ G^{5,1}_{1,5}\!\left(B\;\middle|\;\begin{matrix}0\\ \kappa_3\end{matrix}\right) \right]^{-n} - 1. \qquad (14)$$
For $n = 2$, as a special case, we get the classical AF [25] for the non-pointing errors case as
$$\mathrm{AF} = \mathrm{AF}^{(2)}_\gamma = 2\, A_2^{-1}\; G^{5,1}_{1,5}\!\left(B\;\middle|\;\begin{matrix}-1\\ \kappa_3\end{matrix}\right) \left[ G^{5,1}_{1,5}\!\left(B\;\middle|\;\begin{matrix}0\\ \kappa_3\end{matrix}\right) \right]^{-2} - 1. \qquad (15)$$
B. Error Probability
1) Average BER: Substituting (4) into [26, Eq. (12)] and utilizing [21, Eq. (7.813.1)], we get the average BER $\bar{P}_b$ of a variety of binary modulations as
$$\bar{P}_b = \frac{1}{2} - \frac{A_1\, q^{p}}{2\,\Gamma(p)\,(q + 1/\bar{\gamma}_1)^{p}}\; G^{6,1}_{2,6}\!\left(\frac{B}{q\bar{\gamma}_1 + 1}\;\middle|\;\begin{matrix}1-p,\ \kappa_1\\ \kappa_2\end{matrix}\right), \qquad (16)$$
where the parameters p and q account for different modulation schemes. For an extensive list of modulation schemes represented by these parameters, one may look into [26]- [29] or refer to Table I. For the non-pointing errors case, when ξ → ∞, the BER in (16) can be easily shown to converge to
$$\bar{P}_b = \frac{1}{2} - \frac{A_2\, q^{p}}{2\,\Gamma(p)\,(q + 1/\bar{\gamma}_1)^{p}}\; G^{5,1}_{1,5}\!\left(\frac{B}{q\bar{\gamma}_1 + 1}\;\middle|\;\begin{matrix}1-p\\ \kappa_3\end{matrix}\right). \qquad (17)$$
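A short numerical sketch of (16) for the modulation parameter pairs (p, q) of Table I is given below; the channel parameters are placeholders and the expression is used exactly as reconstructed above.

```python
import math
import mpmath as mp

def avg_ber_eq16(p, q, gbar1=10.0, gbar2=10.0, C=0.6, alpha=2.1, beta=3.5, xi=1.1):
    """Average BER of eq. (16); (p, q) as in Table I, e.g. CBPSK = (0.5, 1)."""
    A1 = xi**2 * 2**(alpha + beta) / (8 * math.pi * mp.gamma(alpha) * mp.gamma(beta))
    B = (alpha * beta)**2 * C / (16 * gbar2)
    k1 = xi**2 / 2 + 1
    k2 = [xi**2 / 2, alpha / 2, (alpha + 1) / 2, beta / 2, (beta + 1) / 2, 0]
    G = mp.meijerg([[1 - p], [k1]], [k2, []], B / (q * gbar1 + 1))   # G^{6,1}_{2,6}
    return 0.5 - A1 * q**p / (2 * mp.gamma(p) * (q + 1 / gbar1)**p) * G

for name, (p, q) in {"CBFSK": (0.5, 0.5), "CBPSK": (0.5, 1),
                     "NBFSK": (1, 0.5), "DBPSK": (1, 1)}.items():
    print(name, avg_ber_eq16(p, q))
```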
2) Average SER: In [30], the conditional SER has been presented in a desirable form and utilized to obtain the average SER of M-AM, M-PSK, and M-QAM. For example, for M-PSK the average SER $\bar{P}_s$ over generalized fading channels is given by [30, Eq. (41)]. Similarly, for M-AM and M-QAM, the average SER $\bar{P}_s$ over generalized fading channels is given by [30, Eq. (45)] and [30, Eq. (48)], respectively. On substituting (8) into [30, Eq. (41)], [30, Eq. (45)], and [30, Eq. (48)], we can get the SER of M-PSK, M-AM, and M-QAM, as shown below:
$$\bar{P}_s = \frac{M-1}{M} + \frac{A_1}{\pi} \int_0^{\frac{(M-1)\pi}{M}} \frac{\sin^2(\pi/M)/\sin^2\phi}{1/\bar{\gamma}_1 - \sin^2(\pi/M)/\sin^2\phi}\; G^{6,1}_{2,6}\!\left(\frac{B}{1 - \bar{\gamma}_1\sin^2(\pi/M)/\sin^2\phi}\;\middle|\;\begin{matrix}0,\ \kappa_1\\ \kappa_2\end{matrix}\right) d\phi, \qquad (18)$$
$$\bar{P}_s = \frac{M-1}{M} + \frac{2 A_1 (M-1)}{M\pi} \int_0^{\pi/2} \frac{3/[(M^2-1)\sin^2\phi]}{1/\bar{\gamma}_1 - 3/[(M^2-1)\sin^2\phi]}\; G^{6,1}_{2,6}\!\left(\frac{B}{1 - 3\bar{\gamma}_1/[(M^2-1)\sin^2\phi]}\;\middle|\;\begin{matrix}0,\ \kappa_1\\ \kappa_2\end{matrix}\right) d\phi, \qquad (19)$$
and
$$\begin{aligned} \bar{P}_s ={}& 2\left(1 - \tfrac{1}{\sqrt{M}}\right) - \left(1 - \tfrac{1}{\sqrt{M}}\right)^{2} \\ &+ \frac{4 A_1}{\pi}\left(1 - \tfrac{1}{\sqrt{M}}\right) \int_0^{\pi/2} \frac{3/[2(M-1)\sin^2\phi]}{1/\bar{\gamma}_1 - 3/[2(M-1)\sin^2\phi]}\; G^{6,1}_{2,6}\!\left(\frac{B}{1 - 3\bar{\gamma}_1/[2(M-1)\sin^2\phi]}\;\middle|\;\begin{matrix}0,\ \kappa_1\\ \kappa_2\end{matrix}\right) d\phi \\ &- \frac{4 A_1}{\pi}\left(1 - \tfrac{1}{\sqrt{M}}\right)^{2} \int_0^{\pi/4} \frac{3/[2(M-1)\sin^2\phi]}{1/\bar{\gamma}_1 - 3/[2(M-1)\sin^2\phi]}\; G^{6,1}_{2,6}\!\left(\frac{B}{1 - 3\bar{\gamma}_1/[2(M-1)\sin^2\phi]}\;\middle|\;\begin{matrix}0,\ \kappa_1\\ \kappa_2\end{matrix}\right) d\phi, \qquad (20)\end{aligned}$$
respectively. The analytical SER performance expressions obtained in (18), (19), and (20) are exact and can be estimated accurately and easily by utilizing the Gauss-Chebyshev Quadrature (GCQ) formula [31, Eq. (25.4.39)], which converges rapidly and requires only a few terms for an accurate result [11].
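As a numerical illustration of this remark, the M-PSK average SER can be obtained by combining the closed-form MGF (8) with the standard MGF-based SER integral of [30] and evaluating the finite integral with Gauss-Chebyshev nodes. The sketch below does exactly that; the number of nodes and the channel parameters are arbitrary choices made for illustration only.

```python
import math
import mpmath as mp

def mgf_eq8(s, gbar1=10.0, gbar2=10.0, C=0.6, alpha=2.1, beta=3.5, xi=1.1):
    """Closed-form MGF of eq. (8) (illustrative parameters)."""
    A1 = xi**2 * 2**(alpha + beta) / (8 * math.pi * mp.gamma(alpha) * mp.gamma(beta))
    B = (alpha * beta)**2 * C / (16 * gbar2)
    k1 = xi**2 / 2 + 1
    k2 = [xi**2 / 2, alpha / 2, (alpha + 1) / 2, beta / 2, (beta + 1) / 2, 0]
    G = mp.meijerg([[0], [k1]], [k2, []], B / (s * gbar1 + 1))   # G^{6,1}_{2,6}
    return 1 - s * A1 / (s + 1 / gbar1) * G

def ser_mpsk(M=8, n_nodes=40, **kw):
    """MGF-based average SER of M-PSK, P_s = (1/pi) * int_0^{(M-1)pi/M} M(g/sin^2(phi)) dphi,
    evaluated with Gauss-Chebyshev quadrature of the first kind."""
    g = math.sin(math.pi / M) ** 2
    upper = (M - 1) * math.pi / M
    total = mp.mpf(0)
    for k in range(1, n_nodes + 1):
        t = math.cos((2 * k - 1) * math.pi / (2 * n_nodes))          # node in (-1, 1)
        phi = 0.5 * upper * (t + 1)                                   # map to (0, upper)
        w = (math.pi / n_nodes) * math.sqrt(1 - t**2) * 0.5 * upper   # GCQ weight x Jacobian
        total += w * mgf_eq8(g / math.sin(phi) ** 2, **kw)
    return total / math.pi

print("8-PSK average SER:", ser_mpsk(M=8))
```

A few tens of nodes are typically sufficient, consistent with the rapid convergence noted above.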
C. Ergodic Capacity
The ergodic channel capacity $\bar{C}$, defined as $\bar{C} \triangleq \mathbb{E}[\log_2(1+\gamma)]$, can be expressed in terms of the CCDF of $\gamma$ as $\bar{C} = \frac{1}{\ln 2}\int_0^\infty (1+\gamma)^{-1} F^c_\gamma(\gamma)\, d\gamma$ [32, Eq. (15)]. Utilizing this equation by exploiting the identity [33, p. 152] $(1+\gamma)^{-1} = G^{1,1}_{1,1}\!\left(\gamma\,\middle|\,\begin{matrix}0\\ 0\end{matrix}\right)$, placing (4) in it, and using the integral identity [26, Eq. (20)], the ergodic capacity can be expressed in terms of the extended generalized bivariate Meijer's G function (EGBMGF) (see [26] and references therein) as
$$\bar{C} = \frac{A_1}{\bar{\gamma}_1 \ln 2}\; G^{1,0:\,1,1:\,6,0}_{1,0:\,1,1:\,1,6}\!\left(\begin{matrix}1\\ -\end{matrix}\,\middle|\,\begin{matrix}0\\ 0\end{matrix}\,\middle|\,\begin{matrix}\kappa_1\\ \kappa_2\end{matrix}\,\middle|\,\bar{\gamma}_1,\, B\right). \qquad (21)$$
For the non-pointing errors case, when $\xi \to \infty$, the ergodic capacity in (21) can be easily shown to converge to
$$\bar{C} = \frac{A_2}{\bar{\gamma}_1 \ln 2}\; G^{1,0:\,1,1:\,5,0}_{1,0:\,1,1:\,0,5}\!\left(\begin{matrix}1\\ -\end{matrix}\,\middle|\,\begin{matrix}0\\ 0\end{matrix}\,\middle|\,\begin{matrix}-\\ \kappa_3\end{matrix}\,\middle|\,\bar{\gamma}_1,\, B\right). \qquad (22)$$
The expressions in (21) and (22) can be easily and efficiently evaluated by utilizing the MATHEMATICA® implementation of the EGBMGF given in [26, Table II].
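If an EGBMGF implementation is not at hand, the ergodic capacity can also be obtained by numerically integrating the closed-form CCDF, which is often the simplest way to cross-check (21); the sketch below (with illustrative parameters) does this with mpmath.

```python
import math
import mpmath as mp

def ccdf_eq4(g, gbar1=10.0, gbar2=10.0, C=0.6, alpha=2.1, beta=3.5, xi=1.1):
    """CCDF implied by the closed-form CDF (4) (illustrative parameters)."""
    A1 = xi**2 * 2**(alpha + beta) / (8 * math.pi * mp.gamma(alpha) * mp.gamma(beta))
    B = (alpha * beta)**2 * C / (16 * gbar2)
    k1 = xi**2 / 2 + 1
    k2 = [xi**2 / 2, alpha / 2, (alpha + 1) / 2, beta / 2, (beta + 1) / 2, 0]
    return A1 * mp.exp(-g / gbar1) * mp.meijerg([[], [k1]], [k2, []], B * g / gbar1)

# Ergodic capacity C = (1/ln 2) * int_0^inf F_c(gamma) / (1 + gamma) dgamma
capacity = mp.quad(lambda g: ccdf_eq4(g) / (1 + g), [0, mp.inf]) / mp.log(2)
print("ergodic capacity [bit/s/Hz]:", capacity)
```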
V. RESULTS AND DISCUSSION
The average BER performance of different digital binary modulation schemes is presented in Fig. 2, based on the values of p and q given in Table I. We can observe from Fig. 2 that the simulation results provide a perfect match to the analytical results obtained in this work.
It can be seen from Fig. 2 that, as expected, CBPSK outperforms NBFSK. The effect of pointing errors can also be observed in Fig. 2: as the pointing error becomes more severe (i.e. as the value of ξ decreases), the BER deteriorates, and vice versa. It can also be shown that as the atmospheric turbulence conditions become more severe, i.e. as the values of α and β drop, the BER deteriorates, and vice versa. Similar results can be observed for other binary modulation schemes and other values of α, β, C, and ξ.
Similarly, in Fig. 3, as the atmospheric turbulence conditions get severe, the ergodic capacity starts decreasing (i.e. the higher the values of α and β, the higher will be the ergodic capacity). Also, the effect of pointing error can be observed in Fig. 3. Note that as the value of ξ increases (i.e. the effect of pointing error decreases) the ergodic capacity decreases.
VI. CONCLUDING REMARKS
We derived novel exact closed-form expressions for the CDF, the PDF, the MGF, and the moments of an asymmetric dual-hop relay transmission system composed of both RF and FSO environments with pointing errors in terms of Meijer's G functions. Further, we derived analytical expressions for various performance metrics of an asymmetric dual-hop RF/FSO relay transmission system with pointing errors including the higher-order AF, error rate of a variety of modulation schemes, and the ergodic capacity in terms of Meijer's G functions. In addition, this work presents simulation examples to validate and illustrate the mathematical formulation developed in this work and to show the effect of the atmospheric turbulence and pointing error conditions severity and unbalance on the system performance.
Fig. 2. Average BER of different binary modulation schemes showing the impact of pointing errors (varying ξ) with fading parameters α = 2.1, β = 3.5, and C = 0.6.
Fig. 3. Effect of pointing errors (varying ξ) on the ergodic capacity with varying fading parameters α and β, and C = 0.6 (plotted versus the Signal-to-Noise Ratio (SNR) per hop, in dB).
Fig. 1. System model block diagram of an asymmetric mixed RF/FSO dual-hop transmission system.
TABLE I
BER PARAMETERS OF BINARY MODULATIONS
Modulation | p | q
Coherent Binary Frequency Shift Keying (CBFSK) | 0.5 | 0.5
Coherent Binary Phase Shift Keying (CBPSK) | 0.5 | 1
Non-Coherent Binary Frequency Shift Keying (NBFSK) | 1 | 0.5
Differential Binary Phase Shift Keying (DBPSK) | 1 | 1
REFERENCES
[1] E. Lee, J. Park, D. Han, and G. Yoon, "Performance analysis of the asymmetric dual-hop relay transmission with mixed RF/FSO links," IEEE Photonics Technology Letters, vol. 23, no. 21, pp. 1642-1644, Nov. 2011.
[2] L. C. Andrews, R. L. Phillips, and C. Y. Hopen, Laser Beam Scintillation with Applications. Bellingham, WA: SPIE, 2001.
[3] K. P. Peppas and C. K. Datsikas, "Average symbol error probability of general-order rectangular quadrature amplitude modulation of optical wireless communication systems over atmospheric turbulence channels," IEEE/OSA Journal of Optical Communications and Networking, vol. 2, no. 2, pp. 102-110, Feb. 2010.
[4] W. O. Popoola and Z. Ghassemlooy, "BPSK subcarrier intensity modulated free-space optical communications in atmospheric turbulence," IEEE/OSA Journal of Lightwave Technology, vol. 27, no. 8, pp. 967-973, Apr. 2009.
[5] J. Park, E. Lee, and G. Yoon, "Average bit error rate of the Alamouti scheme in Gamma-Gamma fading channels," IEEE Photonics Technology Letters, vol. 23, no. 4, pp. 269-271, Feb. 2011.
[6] M. Safari and M. Uysal, "Relay-assisted free-space optical communication," IEEE Transactions on Wireless Communications, vol. 7, no. 12, pp. 5441-5449, Dec. 2008.
[7] H. G. Sandalidis, T. A. Tsiftsis, G. K. Karagiannidis, and M. Uysal, "BER performance of FSO links over strong atmospheric turbulence channels with pointing errors," IEEE Communications Letters, vol. 12, no. 1, pp. 44-46, Jan. 2008.
[8] H. G. Sandalidis, T. A. Tsiftsis, and G. K. Karagiannidis, "Optical wireless communications with heterodyne detection over turbulence channels with pointing errors," Journal of Lightwave Technology, vol. 27, no. 20, pp. 4440-4445, Oct. 2009.
[9] W. Gappmair, "Further results on the capacity of free-space optical channels in turbulent atmosphere," IET Communications, vol. 5, no. 9, pp. 1262-1267, Jun. 2011.
[10] M. O. Hasna and M.-S. Alouini, "A performance study of dual-hop transmissions with fixed gain relays," IEEE Transactions on Wireless Communications, vol. 3, no. 6, pp. 1963-1968, Nov. 2004.
[11] F. Yilmaz, O. Kucur, and M.-S. Alouini, "A novel framework on exact average symbol error probabilities of multihop transmission over amplify-and-forward relay fading channels," in Proceedings of the 7th International Symposium on Wireless Communication Systems (ISWCS 2010), York, U.K., Nov. 2010, pp. 546-550.
[12] Y. Zhu, Y. Xin, and P.-Y. Kam, "Outage probability of Rician fading relay channels," IEEE Transactions on Vehicular Technology, vol. 57, no. 4, pp. 2648-2652, Jul. 2008.
[13] S. N. Datta, S. Chakrabarti, and R. Roy, "Error analysis of non coherent FSK with variable gain relaying in dual-hop Nakagami-m relay fading channel," in Proceedings of the 2010 International Conference on Signal Processing and Communications (SPCOM 2010), Bangalore, India, Jul. 2010, pp. 1-5.
[14] H. A. Suraweera, R. H. Y. Louie, Y. Li, G. K. Karagiannidis, and B. Vucetic, "Two hop amplify-and-forward transmission in mixed Rayleigh and Rician fading channels," IEEE Communication Letters, vol. 13, no. 4, pp. 227-229, Apr. 2009.
[15] A. K. Gurung, F. S. Al-Qahtani, Z. M. Hussain, and H. Alnuweiri, "Performance analysis of amplify-forward relay in mixed Nakagami-m and Rician fading channels," in Proceedings of the 2010 International Conference on Advanced Technologies for Communications (ATC 2010), Ho Chi Minh City, Vietnam, Oct. 2010, pp. 321-326.
[16] F. Yilmaz and M.-S. Alouini, "Product of the powers of generalized Nakagami-m variates and performance of cascaded fading channels," in Proceedings of the IEEE Global Telecommunications Conference (GLOBECOM 2009), Honolulu, Hawaii, US, Nov.-Dec. 2009, pp. 1-8.
[17] C. K. Datsikas, K. P. Peppas, N. C. Sagias, and G. S. Tombras, "Serial free-space optical relaying communications over Gamma-Gamma atmospheric turbulence channels," IEEE/OSA Journal of Optical Communications and Networking, vol. 2, no. 8, pp. 576-586, Aug. 2010.
[18] N. Saquib, M. S. R. Sakib, A. Saha, and M. Hussain, "Free space optical connectivity for last mile solution in Bangladesh," in Proceedings of the 2nd International Conference on Education Technology and Computer (ICETC '10), Shanghai, China, Jun. 2010, pp. 484-487.
[19] M. K. Simon and M.-S. Alouini, Digital Communication over Fading Channels, 2nd ed. Hoboken, New Jersey, USA: IEEE/John Wiley & Sons, Inc., 2005.
[20] M. D. Springer, The Algebra of Random Variables. New York: Wiley, Apr. 1979.
[21] I. S. Gradshteyn and I. M. Ryzhik, Table of Integrals, Series and Products. New York: Academic Press, 2000.
[22] Wolfram Research, Inc., Mathematica, Version 8.0. Champaign, Illinois: Wolfram Research, Inc., 2010.
[23] V. S. Adamchik and O. I. Marichev, "The algorithm for calculating integrals of hypergeometric type functions and its realization in REDUCE system," in Proceedings of the International Symposium on Symbolic and Algebraic Computation (ISSAC '90), New York, USA, 1990, pp. 212-224.
[24] F. Yilmaz and M.-S. Alouini, "Novel asymptotic results on the high-order statistics of the channel capacity over generalized fading channels," in Proceedings of the IEEE 13th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC 2012), Cesme, Turkey, Jun. 2012, pp. 389-393.
[25] U. Charash, "Reception through Nakagami fading multipath channels with random delays," IEEE Transactions on Communications, vol. 27, no. 4, pp. 657-670, Apr. 1979.
[26] I. S. Ansari, S. Al-Ahmadi, F. Yilmaz, M.-S. Alouini, and H. Yanikomeroglu, "A new formula for the BER of binary modulations with dual-branch selection over generalized-K composite fading channels," IEEE Transactions on Communications, vol. 59, no. 10, pp. 2654-2658, Oct. 2011.
[27] N. C. Sagias, D. A. Zogas, and G. K. Kariaginnidis, "Selection diversity receivers over nonidentical Weibull fading channels," IEEE Transactions on Vehicular Technology, vol. 54, no. 6, pp. 2146-2151, Nov. 2005.
[28] A. H. Wojnar, "Unknown bounds on performance in Nakagami channels," IEEE Transactions on Communications, vol. 34, no. 1, pp. 22-24, Jan. 1986.
[29] I. S. Ansari, F. Yilmaz, and M.-S. Alouini, "On the sum of Gamma random variates with application to the performance of maximal ratio combining over Nakagami-m fading channels," in Proceedings of the IEEE 13th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC 2012), Cesme, Turkey, Jun. 2012, pp. 394-398.
[30] M.-S. Alouini and A. J. Goldsmith, "A unified approach for calculating error rates of linearly modulated signals over generalized fading channels," IEEE Transactions on Communications, vol. 47, no. 9, pp. 1324-1334, Sep. 1999.
[31] M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions, 10th ed. New York: Dover, Dec. 1972.
[32] A. Annamalai, R. C. Palat, and J. Matyjas, "Estimating ergodic capacity of cooperative analog relaying under different adaptive source transmission techniques," in Proceedings of the 2010 IEEE Sarnoff Symposium, Princeton, NJ, Apr. 2010, pp. 1-5.
[33] A. M. Mathai and R. K. Saxena, The H-Function with Applications in Statistics and Other Disciplines. New York: Wiley Eastern, 1978.
Feasibility at the LHC, FCC-he and CLIC for sensitivity estimates on anomalous τ-lepton couplings

A. Gutiérrez-Rodríguez (Facultad de Física, Universidad Autónoma de Zacatecas, Apartado Postal C-580, 98060 Zacatecas, México), M. Köksal (Department of Optical Engineering, Sivas Cumhuriyet University, 58140 Sivas, Turkey), A. A. Billur (Department of Physics, Sivas Cumhuriyet University, 58140 Sivas, Turkey), and M. A. Hernández-Ruíz (Unidad Académica de Ciencias Químicas, Universidad Autónoma de Zacatecas, Apartado Postal C-585, 98060 Zacatecas, México)

(Dated: March 12, 2019; arXiv:1903.04135)

PACS numbers: 13.40.Em, 14.60.Fg. Keywords: Electric and Magnetic Moments, Taus, LHC, FCC-he, CLIC.

Abstract: In this paper, we present detailed studies on the feasibility at pp, e⁻p and e⁺e⁻ colliders for model-independent sensitivity estimates on the total cross-section and on the anomalous τ⁺τ⁻γ interaction through the tau pair production channels pp → pττγp, e⁻p → e⁻ττγp and e⁺e⁻ → e⁺ττγe⁻ at the γ*γ* → τ⁺τ⁻γ mode. Measurements of the anomalous couplings of the τ-lepton, ã_τ and d̃_τ, provide an excellent opportunity to probe extensions of the Standard Model. We estimate the sensitivity at the 95% Confidence Level, and we consider that the τ-lepton decays leptonically or semi-leptonically. We find that, of the three considered colliders, the future CLIC at high energy and high luminosity should provide the best sensitivity on the dipole moments of the τ-lepton, ã_τ = [−0.00128, 0.00105] and |d̃_τ(ecm)| = 6.4394 × 10⁻¹⁸, which shows a potential advantage compared to those from the LHC and FCC-he.
I. INTRODUCTION
The study of the τ-lepton by the ATLAS and CMS Collaborations [1,2] at the Large Hadron Collider (LHC) has developed significantly in recent years, to the point where they have a very active physics program. Furthermore, the existence of a M_H = 125.18 ± 0.16 GeV [3] scalar boson [4][5][6][7], established by the ATLAS [8] and CMS [9] experiments, has made it possible to complete the Standard Model (SM) of particle physics, that is, the theory that describes the particles of matter we know and their interactions.
However, there are fundamental problems the SM does not address, such as dark matter, dark energy, the hierarchy problem, neutrino masses, and the asymmetry between matter and antimatter. These problems demand the construction of new machines that operate at much higher energy than the LHC, with cleaner environments, and that allow exploring other components of the universe. For these and other reasons, the High Energy Physics community faces the challenge of discovering what the universe is made of in its entirety.
There are several proposals to build new, powerful high-energy and high-luminosity hadron-hadron (pp), lepton-hadron (e − p) and lepton-lepton (e + e − ) colliders in the future at CERN for the post LHC era that will open up new horizons in the field of fundamental physics.
The future ep colliders, such as the Large Hadron Electron Collider (LHeC) [10][11][12] and the Future Circular Collider Hadron Electron (FCC-he) [11][12][13][14][15][16][17], are a hybrid between the pp and e⁺e⁻ colliders, and they will complement the physics program of the LHC. These colliders have the peculiarity that they can be installed at a much lower cost than that of a pp collider. Furthermore, they provide invaluable information on the Higgs and top sectors, as well as on other heavy particles such as the τ-lepton. The FCC-he study puts great emphasis on the scenarios of high-intensity and high-energy frontier colliders. Such colliders, with their high precision and high energy, could extend the search for new particles and interactions well beyond the LHC. In addition, in comparison with the LHC, the FCC-he has the advantage of providing a clean environment with small background contributions from QCD strong interactions. The future e⁺e⁻ collider, the Compact Linear Collider (CLIC) [18], although with a much lower center-of-mass energy than the pp colliders, is ideal for precision measurements due to very low backgrounds.
In this paper we base our study on three phenomenological analyses for finding physics beyond the Standard Model (BSM) at present and future colliders, in order to compare the electromagnetic properties of the τ-lepton. We consider pp collisions at the LHC with 13 and 14 TeV and luminosities of 10, 30, 50, 100 and 200 fb⁻¹.
TABLE I (fragment). OPAL: |Re(d_τ (10⁻¹⁶ ecm))| < 3.7 at 95% C.L. [22]; ARGUS: |Re(d_τ (10⁻¹⁶ ecm))| < 4.6 and |Im(d_τ (10⁻¹⁶ ecm))| < 1.8 at 95% C.L. [24].
The theoretical prediction on the MDM of the τ -lepton in the SM is well known with several digits [19]:
$$\mathrm{SM}: \quad a_\tau = 0.00117721(5), \qquad (1)$$
while the DELPHI [20], L3 [21], OPAL [22], BELLE [23] and ARGUS [24] Collaborations report the current experimental bounds on the MDM and the EDM in Table I.
The best experimental results on the MDM and the EDM are reported by the DELPHI and BELLE collaborations using the following processes e + e − → e + γ * γ * e − → e + e − τ + τ − and e + e − → τ + τ − , respectively. The EDM of the τ -lepton, is a very sensitive probe for CP violation induced by new CP phases BSM [25][26][27]. It is worth mentioning that the current Particle Data Group limit was obtained by DELPHI Collaboration [20] using data from the total cross-section e + e − → e + γ * γ * e − → e + e − τ + τ − at LEP2.
The MDM and EDM of the τ-lepton allow a stringent test of new physics and have been deeply investigated by many authors; see Refs. [19, …] for a summary of the sensitivities achievable on the anomalous dipole moments of the τ-lepton in different contexts.
A direct comparison between Eq. (1) and the results given in Table I clearly shows that experiment is far from determining the anomaly of the MDM of the τ-lepton in the SM. It is therefore of great interest to investigate and propose model-independent mechanisms to probe the dipole moments of the τ-lepton with the parameters of the present and future colliders, i.e. the LHC, FCC-he and CLIC, rendering such an investigation both very interesting and timely.
The outline of the paper is organized as follows: In Section II, we introduce the τ -lepton effective electromagnetic interactions. In Section III, we show sensitivity estimates on the total cross-section and the τ -lepton MDM and EDM through pp → pττ γp at the LHC, e − p → e − ττ γp at the FCC-he and e + e − → e + ττ γe − at the CLIC. Finally, we present our conclusions in Section IV.
II. THE EFFECTIVE LAGRANGIAN FOR τ -LEPTON ELECTROMAGNETIC
DIPOLE MOMENTS
Following Refs. [35,56,57], we analyze in a model-independent manner the total cross-section and the electromagnetic dipole moments of the τ-lepton through the channels pp → pττγp at the LHC, e⁻p → e⁻ττγp at the FCC-he and e⁺e⁻ → e⁺ττγe⁻ at the CLIC using the effective Lagrangian description. This approach is appropriate for describing possible new physics effects. In this context, all the heavy degrees of freedom are integrated out, leading to effective interactions among the SM particle spectrum.
Furthermore, this is justified due to the fact that the related observables have not shown any significant deviation from the SM predictions so far. Thus, below we describe the effective Lagrangian we use with potential deviations from the SM for the anomalous τ + τ − γ coupling and fix the notation:
$$\mathcal{L}_{\rm eff} = \mathcal{L}_{\rm SM} + \sum_n \frac{\alpha_n}{\Lambda^2}\, O^{(6)}_n + {\rm h.c.}, \qquad (2)$$
where, L ef f is the effective Lagrangian which contains a series of higher-dimensional operators built with the SM fields, L SM is the renormalizable SM Lagrangian, Λ is the mass scale at which new physics expected to be observed, α n are dimensionless coefficients and O (6) n represents the dimension-six gauge-invariant operator.
A. τ + τ − γ vertex form factors
The most general structure consistent with Lorentz and electromagnetic gauge invariance for the τ⁺τ⁻γ vertex, describing the interaction of an on-shell photon γ with two on-shell fermions τ⁺τ⁻, can be written in terms of four form factors [19,[58][59][60][61]:
$$\Gamma^\alpha_\tau = e\, F_1(q^2)\,\gamma^\alpha + \frac{ie}{2m_\tau}\, F_2(q^2)\,\sigma^{\alpha\mu}q_\mu + \frac{e}{2m_\tau}\, F_3(q^2)\,\sigma^{\alpha\mu}q_\mu\gamma_5 + e\, F_4(q^2)\,\gamma_5\left(\gamma^\alpha - \frac{2 q^\alpha m_\tau}{q^2}\right). \qquad (3)$$
In this expression, q is the four-momentum of the photon, e and m τ are the charge of the electron and the mass of the τ -lepton. Since the two leptons are on-shell the form factors
Q τ = F 1 (0).(4)
ii) F 2 (0) defines the anomalous MDM:
a τ = F 2 (0).(5)
iii) F 3 (0) describes the EDM:
d τ = e 2m τ F 3 (0).(6)
iv) F 4 (0) is the Anapole form factor:
F A = − F 4 q 2 .(7)
It is worth mentioning that in the SM at tree level, F 1 = 1 and F 2 = F 3 = F 4 = 0. In addition, should be noted that the F 2 term behaves under C and P like the SM one, while the F 3 term violates CP.
B. Gauge-invariant operators of dimension six for τ -lepton dipole moments
Theoretically, experimentally and phenomenologically most of the τ -lepton anomalous electromagnetic vertices involve off-shell τ -leptons. In our study, one of the τ -leptons is off-shell and measured quantity is not directly a τ and d τ . For this reason deviations of the τ -lepton dipole moments from the SM values are examined in a model-independent way using the effective Lagrangian formalism. This formalism is defined by high-dimensional operators which lead to anomalous τ + τ − γ coupling. For our study, we apply the dimensionsix effective operators that contribute to the MDM and EDM [62][63][64][65] of the τ -lepton:
L ef f = 1 Λ 2 C 33 LW Q 33 LW + C 33 LB Q 33 LB + h.c. ,(8)
where
Q 33 LW = l τ σ µν τ R σ I ϕW I µν ,(9)Q 33 LB = l τ σ µν τ R ϕB µν .(10)
Here ℓ τ is the tau leptonic doublet and ϕ is the Higgs doublet, while B µν and W I µν are the U(1) Y and SU(2) L gauge field strength tensors.
After electroweak symmetry breaking from the effective Lagrangian given by Eq. (8), the Higgs gets a vacuum expectation value υ = 246 GeV and the corresponding CP even κ and CP oddκ observables are obtained:
κ = 2m τ e √ 2υ Λ 2 Re cos θ W C 33 LB − sin θ W C 33 LW ,(11)κ = 2m τ e √ 2υ Λ 2 Im cos θ W C 33 LB − sin θ W C 33 LW ,(12)
where, as usual sin θ W (cos θ W ) is the sine (cosine) of the weak mixing angle.
The effective Lagrangian given by Eq. (8) gives additional contributions to the electromagnetic moments of the τ -lepton, which usually are expressed in terms of the parameters a τ andd τ . They can be described in terms of κ andκ as follows:
a τ = κ,(13)d τ = e 2m τκ .(14)
III. THE TOTAL CROSS-SECTIONS IN pp, e − p AND e + e − COLLIDERS As we mentioned above, the τ -lepton anomalous couplings offer an interesting window to physics BSM. Furthermore, usually the current and future colliders probing the feasibility of measured the anomalous couplings that are enhanced for higher values of the particle mass, making the τ -lepton the ideal candidate among the leptons to observe these new couplings.
We point out that the total cross-section for the channels pp → pττ γp at the LHC, e − p → e − ττ γp at the FCC-he and e + e − → e + ττ γe − at the CLIC are large enough to allow for a study of the anomalous electromagnetic couplings of the τ -lepton. The schematic diagram corresponding to these processes is given in Fig. 1, and the subprocess γ * γ * → ττ γ can be produced via the set of Feynman diagrams depicted in Fig. 2.
It must be noticed that, unlike direct processes e + e − → τ + τ − [24,66], [22,58] and H → τ + τ − γ [67], the two-photon processes γ * γ * → τ + τ − γ offers several advantages to study the electromagnetic tau couplings at the LHC, FCC-he and CLIC. The characteristics that distinguish them from the direct processes are mainly:
e + e − → τ + τ − γ [21], Z → τ + τ − γ
1) High sensitivity onã τ andd τ . 2) Increase of the cross-section for high energies and high luminosity. 3) They are extremely clean reactions because there is no interference with weak interactions as they are purely quantum electrodynamics (QED) reactions. 4)
The photon-photon fusion processes are free from the uncertainties originated by possible anomalous Zγγ couplings. 5) Since the photons in the initial state are almost real and the invariant mass of the tau-pairs is very small, we expect the effects of unknown form-factors to be negligible. 6) Furthermore, a very important feature is that the present and future colliders such as LHC, FCC-he and CLIC can produce very hard photons at high luminosity in the Equivalent Photon Approximation (EPA) of high energy pp, e − p and e + e − beams, with which the final state photon identification has the advantage to determine the tau pair identification.
The main theoretical tool of our study for sensitivity estimates on the total cross-section of the processes pp → pττ γp, e − p → e − ττ γp and e + e − → e + ττ γe − and on the anomalous τ -lepton couplings, is the EPA. In the literature this approach is commonly referred to as the Weizsacker-Williams Approximation (WWA) [68,69]. In general, EPA is a standard semiclassical alternative to the Feynman rules for calculation of the electromagnetic interaction cross sections. This approximation has many advantages. It helps to obtain crude numerical estimates through simple formulas. Furthermore, this approach may principally ease the experimental analysis because it gives an opportunity one to directly achieve a rough crosssection for γ * γ * → X subprocess through the research of the reaction pp (e − p, e + e − ) →
pp (e − p, e + e − )X, where X symbolizes objects generated in the final state. The essence of the EPA is as follows, photons emitted from incoming charged particles which have very low virtuality are scattered at very small angles from the beam pipe and because the emitted quasi-real photons have a low Q 2 virtuality, these are almost real.
It is worth mentioning that the exclusive two-photon processes can be distinguished from fully inelastic processes by the following experimental signatures: after of the elastic emission of a photon, incoming charged particles (electron or proton) are scattered with a small angle and escapes detection from the central detectors. This generate a missing energy signature called forward large-rapidity gap, in the corresponding forward region of the central detector [70]. This method have been observed experimentally at the LEP, Tevatron and LHC [71][72][73][74][75][76][77].
Also, another experimental signature can be implemented by forward particle tagging.
These detectors are to tag the electrons and protons with some energy fraction loss. One of the well known applications of the forward detectors is the high energy photon induced interaction with exclusive two lepton final states. Two almost real photons emitted by charged particles beams interact each other to produce two leptons γ * γ * → ℓ − ℓ + . Deflected particles and their energy loss will be detected by the forward detectors mentioned above but leptons will go to central detector. Produced lepton pairs have very small backgrounds [78]. Use of very forward detectors in conjunction with central detectors with a precise synchronization, can efficiently reduce backgrounds from pile-up events [78][79][80][81].
CMS and TOTEM Collaborations at the LHC began these measurements using forward detectors between the CMS interaction point and detectors in the TOTEM area about 210 m away on both sides of interaction point [86]. However, LHeC and CLIC have a program of forward physics with extra detectors located in a region between a few tens up to several hundreds of metres from the interaction point [11,87].
A. Benchmark parameters, selected cuts and χ 2 fitting
In this work, to evaluate the total cross-section σ(pp → pττ γp), σ(e − p → e − ττ γp) and
σ(e + e − → e + ττ γe − ) and to probe the dipole momentsã τ andd τ , we examine the potential of LHC, FCC-he and CLIC based γ * γ * colliders with the main parameters given in Table II. Furthermore, in order to suppress the backgrounds and optimize the signal sensitivity, we impose for our study the following kinematic basic acceptance cuts for τ + τ − γ events at the LHC, FCC-he and CLIC:
p γ t > 20 GeV, |η γ | < 2.5, p τ + ,τ − t > 20 GeV, |η τ + ,τ − | < 2.5, ∆R(τ − , γ) > 0.4, ∆R(τ + , τ − ) > 0.4, ∆R(τ + , γ) > 0.4.(15)
Here the cuts given by Eq. (15) are applied to the photon transverse momentum p γ t , to the photon pseudorapidity η γ , which reduces the contamination from other particles misidentified as photons, to the tau transverse momentum p τ − ,τ + t for the final state particles, to the tau pseudorapidity η τ which reduces the contamination from other particles misidentified as tau and to ∆R(τ − , γ), ∆R(τ − , τ + ) and ∆R(τ + , γ) which give the separation of the final state particles. It is fundamental that we apply these cuts to reduce the background and to optimize the signal sensitivity to the particles of the τ + τ − γ final state. Tau identification efficiency depends of a specific process, background processes, some kinematic parameters and luminosity. For the processes examined, investigations of tau identification have not been examined yet for LHC, FCC-he and CLIC detectors. In this case, identification efficiency can be detected as a function of transverse momentum and rapidity of the τ -lepton. We have considered the following cuts for the selection of the τ -lepton as used in many studies [67,88]
p τ + ,τ − t > 20 GeV , |η τ + ,τ − | < 2.5.
The above cuts on the τ -leptons ensure that their decay products are collimated which allows their momenta to be reconstructed reasonably accurately, despite the unmeasured energy going into neutrinos [89].
Another important element in our study is the level or degree of sensitivity of our results.
In this sense, to estimate the 95% Confidence Level (C.L.) sensitivity on the parametersã τ andd τ , a χ 2 fitting is performed. The χ 2 distribution [51,90] is defined by Thus, we assume that the branching ratio of the τ -lepton pair in the final state to be BR(Pure-leptonic) = 0.123 or BR(Semi-leptonic) = 0.46 [3].
χ 2 (ã τ ,d τ ) = σ SM − σ BSM ( √ s,ã τ ,d τ ) σ SM (δ st ) 2 + (δ sys ) 2 2 ,(16)
On the other hand, it should be noted that in all the processes considered in this article, the total cross-section of the pp → pττ γp, e − p → e − ττ γp and e + e − → e + ττ γe − signals are computed using the CalcHEP package [91], which can computate the Feynman diagrams, integrate over multiparticle phase space, and simulate events.
B. The total cross-section of the pp → pγ * γ * p → pτ + τ − γp signal at LHC In the EPA, the quasireal photons emitted from both proton beams collide with each other and produce the subprocess γ * γ * → τ − τ + γ. The spectrum of photon emitted by proton can be written as follows [91,92]:
f γ * p (x) = α πE p {[1 − x][ϕ( Q 2 max Q 2 0 ) − ϕ( Q 2 min Q 2 0 )],(17)
where x = E γ * p /E p and Q 2 max is maximum virtuality of the photon. The minimum value of the Q 2 min is given by
Q 2 min = m 2 p x 2 1 − x .(18)
The function ϕ is given by
ϕ(θ) = (1 + ay) −In(1 + 1 θ ) + 3 k=1 1 k(1 + θ) k + y(1 − b) 4θ(1 + θ) 3 +c(1 + y 4 ) In 1 − b + θ 1 + θ + 3 k=1 b k k(1 + θ) k .(19)
with y =
x 2 (1 − x) ,(20)a = 1 + µ 2 p 4 + 4m 2 p Q 2 0 ≈ 7.16,(21)b = 1 − 4m 2 p Q 2 0 ≈ −3.96,(22)c = µ 2 p − 1 b 4 ≈ 0.028.(23)
Therefore, in the EPA the total cross-section of the pp → pγ * γ * p → pτ + τ − γp signal is given by
σ pp→pγ * γ * p→pτ + τ − γp = f γ * p (x)f γ * p (x)dσ γ * γ * →τ + τ − γ dE 1 dE 2 .(24)
With all the elements considered in subsection A, that is to say the CalcHEP package, selected cuts, χ 2 fitting and with 13 and 14 TeV at the LHC, the determination of the total cross-section in terms of the anomalous parameters κ andκ, translate in the following results:
i) For √ s = 13 T eV : σ(κ) = 1.28 × 10 7 κ 6 + 2.61 × 10 3 κ 5 + 4.71 × 10 3 κ 4 + 1.99κ 3
+ 1.39κ 2 + 2.34 × 10 −4 κ + 1.03 × 10 −4 (pb),(25)
σ(κ) = 1.28 × 10 7κ6 + 4.71 × 10 3κ4 + 1.39κ 2 + 1.03 × 10 −4 (pb).
ii) For √ s = 14 T eV :
σ(κ) = 1.78 × 10 7 κ 6 + 3.52 × 10 3 κ 5 + 5.18 × 10 3 κ 4 + 2.21κ 3 + 1.50κ 2 + 2.40 × 10 −4 κ + 1.05 × 10 −4 (pb),
σ(κ) = 1.78 × 10 7κ6 + 5.18 × 10 3κ4 + 1.50κ 2 + 1.05 × 10 −4 (pb).
In these expressions the independent terms of κ andκ correspond to the cross-section of the SM, that is κ =κ = 0. In the next section, the calculated cross-sections in Eqs. (25)- (28) are used to sensitivity estimates on the anomalous MDM and EDM of the τ -lepton.
C. The total cross-section of the e − p → e − γ * γ * p → e − τ + τ − γp signal at FCC-he
To determine the total cross-section of the e − p → e − γ * γ * p → e − τ + τ − γp signal at FCChe, we must take into account that in the EPA approach, the spectrum of first photon emitted by electron is given by [91,92]:
f γ * e (x 1 ) = α πE e {[ 1 − x 1 + x 2 1 /2 x 1 ]log( Q 2 max Q 2 min ) − m 2 e x 1 Q 2 min (1 − Q 2 min Q 2 max ) − 1 x 1 [1 − x 1 2 ] 2 log( x 2 1 E 2 e + Q 2 max x 2 1 E 2 e + Q 2 min )},(29)
where x 1 = E γ * e /E e and Q 2 max is maximum virtuality of the photon. The minimum value of the Q 2 min is given by
Q 2 min = m 2 e x 2 1 1 − x 1 .(30)
For the spectrum f γ * p (x 2 ) of the second photon emitted by proton we consider the expression given by Eq. (17). Therefore, the total cross-section of the reaction e − p → e − γ * γ * p → e − τ + τ − γp is obtained from
σ e − p→e − γ * γ * p→e − τ + τ − γp = f γ * e (x 1 )f γ * p (x 2 )dσ γ * γ * →τ + τ − γ dx 1 dx 2 .(31)
We have performed a global fit (and apply the cuts given in Eq. (15)), as a function of the two independent anomalous couplings κ andκ, with 7.07 and 10 TeV at FCC-he to the following studied observables:
i) For √ s = 7.07 T eV :
σ(κ) = 2.85 × 10 7 κ 6 + 2.33 × 10 3 κ 5 + 1.87 × 10 4 κ 4 + 11.14κ 3 + 7.30κ 2 + 1.65 × 10 −3 κ + 6.09 × 10 −4 (pb),
σ(κ) = 2.85 × 10 7κ6 + 1.87 × 10 4κ4 + 7.30κ 2 + 6.09 × 10 −4 (pb).
ii) For √ s = 10 T eV : σ(κ) = 2.10 × 10 8 κ 6 + 3.26 × 10 4 κ 5 + 6.40 × 10 4 κ 4 + 13.85κ 3 + 12.11κ 2 + 2.46 × 10 −3 κ + 8.50 × 10 −4 (pb), (34) σ(κ) = 2.10 × 10 8κ6 + 6.40 × 10 4κ4 + 12.11κ 2 + 8.50 × 10 −4 (pb).
D. The total cross-section of the e + e − → e + γ * γ * e − → e + τ + τ − γe − signal at CLIC
The total cross-section for the elementary e + e − → e + γ * γ * e − → e + τ + τ − γe − processes at CLIC is determined in the context of EPA, where the quasi-real photons emitted from both lepton beams collide with each other and produce the subprocess γ * γ * → τ + τ − γ.
The form of the spectrum in two-photon collision energy f γ * (x) is a very important ingredient in the EPA. In this approach, the photon energy spectrum is given by Eqs. (29) and (30).
The elementary γ * γ * → τ + τ − γ process participates as a subprocess in the main process e + e − → e + γ * γ * e − → e + τ + τ − γe − , and the total cross-section is given by
σ e + e − →e + γ * γ * →e + τ + τ − γe − = f γ * e − (x)f γ * e + (x)dσ γ * γ * →τ + τ − γ dE 1 dE 2 .(36)
We presented results for the dependence of the total cross-section of the process γ * γ * → τ + τ − γ on κ andκ. We consider the following cases at CLIC:
i) For √ s = 1.5 T eV : σ(κ) = 2.09 × 10 7 κ 6 + 5.09 × 10 4 κ 5 + 5.86 × 10 4 κ 4 + 63.89κ 3 + 60κ 2 + 2.10 × 10 −2 κ + 6.9 × 10 −3 (pb) (37) σ(κ) = 2.09 × 10 7κ6 + 5.86 × 10 4κ4 + 60κ 2 + 6.9 × 10 −3 (pb). (38) ii) For Table III for √ s and L as in Eq. (42):
a τ = (−0.0067, 0.0065), 95% C.L.,
|d τ | = 3.692 × 10 −17 ecm, 95% C.L..(42)
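A minimal numerical sketch of how intervals of this kind follow from the χ² criterion of Eq. (16) is given below: it takes the fitted cross-section polynomial for √s = 14 TeV, assumes purely statistical errors (δ_sys = 0), L = 200 fb⁻¹ with the semi-leptonic branching ratio, and solves χ²(κ) = 3.84 for the one-parameter 95% C.L. interval. These numerical choices are assumptions made for illustration only and will not reproduce the tabulated values exactly.

```python
import numpy as np
from scipy.optimize import brentq

# Fitted sigma(kappa) in pb for pp -> p tau tau gamma p at sqrt(s) = 14 TeV (Eq. (27))
def sigma(kappa):
    return (1.78e7 * kappa**6 + 3.52e3 * kappa**5 + 5.18e3 * kappa**4
            + 2.21 * kappa**3 + 1.50 * kappa**2 + 2.40e-4 * kappa + 1.05e-4)

sigma_sm = sigma(0.0)                 # SM cross-section (kappa = kappa_tilde = 0)
L, BR = 200e3, 0.46                   # 200 fb^-1 expressed in pb^-1, semi-leptonic BR
n_sm = L * sigma_sm * BR              # expected number of SM events
delta_st = 1.0 / np.sqrt(n_sm)        # statistical error only (delta_sys = 0 assumed)

def chi2(kappa):
    return ((sigma_sm - sigma(kappa)) / (sigma_sm * delta_st)) ** 2

# One-parameter 95% C.L. corresponds to chi^2 = 3.84
kappa_hi = brentq(lambda k: chi2(k) - 3.84, 1e-6, 0.1)
kappa_lo = brentq(lambda k: chi2(k) - 3.84, -0.1, -1e-6)
print(f"95% C.L. interval on kappa (= a_tau tilde): [{kappa_lo:.5f}, {kappa_hi:.5f}]")
```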
Our results are an order of magnitude better than the best existing limit for the τ -lepton anomalous MDM and EDM comes from the process e + e − → e + e − τ + τ − as measured by DELPHI Collaboration [20] at LEP2 (see Table I), as well as of the study of e + e − → τ + τ − by BELLE Collaboration [23] (see Table I).
We next consider the sensibility estimated for the anomalous observablesã τ andd τ ,
considering different values of √ s and L at 95% C.L.. We consider both cases: pure-leptonic and semi-leptonic. Our results for these cases are shown in Table III, where the semi-leptonic case provides more sensitive results onã τ andd τ .
B. Sensitivity on the dipole moments of the τ -lepton from e − p → e − ττ γp at FCC-he
We now turn our attention to the associated production of a photon with a τ -lepton pair, via the e − p → e − τ + τ − γp signal, as is show in Figs. 9-12. The motivation to study this process is simple and already mentioned above, the gauge invariance of the effective Lagrangian relates the dipole couplings of the τ -lepton to couplings involving the photon. At the same time a similar study of the total cross-section as a function of τ -lepton dipole couplings κ andκ are realized. Our results show that the total cross-section depends significantly on κ andκ, in addition to √ s. We find that the difference with respect to the SM is of the order of O(10 3 − 10 5 ), which is several orders of magnitude best than the result of the SM.
Here, it was studied using data collected by the DELPHI experiment at LEP2 during the years 1997-2000. The corresponding integrated luminosity is 650 pb −1 . However, the corresponding integrated luminosity related to BELLE is 29.5 f b −1 .
The comparison with the limits of the present DELPHI and BELLE Collaborations with the corresponding ones obtained by the FCC-he on the anomalous couplings searches, indicates that the sensitivity estimates of the FCC-he at 95% C.L are still stronger that for both experiments.
In Table IV, we list the 95% C.L. sensitivity estimates on the observablesã τ andd τ , based on di-tau production cross-section via the process e − p → e − ττ γp at FCC-he. At present, DELPHI and BELLE experimental measurements on tau pair production e + e − τ + τ − and τ + τ − give the most stringent bounds onã τ andd τ [20,23]. However, note that our sensitivity estimates onã τ andd τ are about ten times better than those for DELPHI and
with √ s = 10 T eV and L = 1000 f b −1 .
C. Sensitivity on the dipole moments of the τ -lepton from e + e − → e + ττ γe − at CLIC Before beginning with the study of the sensitivity on the dipole moments of the τ -lepton through the process e + e − → e + ττ γe − at CLIC, it should be noted that experimentally, the processes that involving single-photon in the final state τ + τ − γ can potentially distinguish from background associated with the process under consideration. Besides, the anomalous τ + τ − γ coupling can be analyzed through the process e + e − → τ + τ − at the linear colliders.
This process receives contributions from both anomalous τ + τ − γ and τ + τ − Z couplings. But, the subprocess γ * γ * → τ + τ − γ isolate τ + τ − γ coupling which provides the possibility to analyze the τ + τ − γ coupling separately from the τ + τ − Z coupling. Generally, anomalous parametersã τ andd τ tend to increase the cross-section for the subprocess γ * γ * → τ + τ − γ, especially for photons with high energy which are well isolated from the decay products of the taus [21]. Furthermore, the single-photon in the final state has the advantage of being identified with high efficiency and purity.
To assess future CLIC sensitivity to the dipole moments, as well as for the total crosssection from searches for the e + e − → e + ττ γe − signal, we perform several figures, as well as a table that illustrates the sensitivity on the dipole moments.
In Figs Table I. What is more, there is also a significant improvement in the cross-section
where the results obtained in Eq. (45) are for √ s = 3 T eV and L = 3000 f b −1 .
Our final results are summarised in Table V below and agree with the experimental determinations of the τ -lepton dipole moments which are given in Table I for the DELPHI, L3, OPAL, BELLE and ARGUS Collaborations. From Table V, our best sensitivity projected correspond to:ã τ = (−0.00128, 0.00105), 95% C.L., |d τ | = 6.439 × 10 −18 ecm, 95% C.L.., (46) and the results obtained in Eq. (46) are for √ s = 3 T eV and L = 3000 f b −1 at CLIC.
It is worth mentioning that, the above sensitivity estimates are completely modelindependent and no assumption has been made on the anomalous couplings in the effective Lagrangian given by (8). For the sake of comparison with published data for the DELPHI, L3, OPAL, BELLE and ARGUS Collaborations [20][21][22][23][24], we have presented the limits that can be found by considering separately only operatorã τ or only operatord τ in Eqs. (8).
V. CONCLUSIONS
The sensitivity estimates of the τ + τ − γ vertex at the LHC, FCC-he and CLIC at CERN are discussed in this paper. We propose to measure this vertex in the pp → pττ γp, e − p → e − ττ γp and e + e − → e + ττ γe − channels at the γ * γ * → τ + τ − γ mode. Furthermore, to the total cross-section measurement with the EPA, χ 2 method provides powerful tools to probe the anomalous structure of the τ + τ − γ coupling. Additionally, in order to select the events we implementing the standard isolation cuts, compatibly with the detector resolution expected at LHC, FCC-he and CLIC to reduce the background and to optimize the signal sensitivity.
A very important aspect in our study and worth mentioning is the following, in most of with the large amount of data collected at current and future colliders can constrain BSM much better than before. In summary, the future CLIC at high energy and high luminosity should provide the best sensitivity on the MDM and EDM of the τ -lepton, and shows a potential advantage compared to those from LHC and FCC-he. Pure-leptonic Semi-leptonic Fig. 19, but for √ s = 3 T eV .
100, 200 fb⁻¹. Another scenario is the FCC-he with √s = 7.07, 10 TeV and L = 100, 300, 500, 700, 1000 fb⁻¹. The CLIC at CERN is another option, for which √s = 1.5, 3 TeV and luminosities of 100, 300, 500, 1000, 1500, 2000, 3000 fb⁻¹ have been assumed. With a large amount of data and collisions at the TeV scale, the LHC, FCC-he and CLIC provide excellent opportunities for model-independent sensitivity estimates on the total cross-sections of the production channels pp → pττγp, e−p → e−ττγp and e+e− → e+ττγe−, as well as on the Magnetic Dipole Moment (MDM) and Electric Dipole Moment (EDM) of the τ-lepton, ã_τ and d_τ.
The form factors F_1,2,3,4(q²) appearing in Eq. (3) are functions of q² and m_τ² only, and have the following interpretations for q² = 0. i) F_1(0) parameterizes the vector part of the electromagnetic current and is identified with the electric charge:
with σ_BSM(√s, ã_τ, d_τ) being the total cross-section incorporating contributions from the SM and from new physics, δ_st = 1/√N_SM the statistical error, and δ_sys the systematic error. The number of events is given by N_SM = L_int × σ_SM × BR, where L_int is the integrated luminosity of the pp, e−p and e+e− colliders. The τ-lepton decays into an electron and two neutrinos almost 17.8% of the time, and into a muon and two neutrinos 17.4% of the time; in the remaining 64.8% of cases, it decays into hadrons and a neutrino.
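The χ² function itself is not reproduced in this fragment. Purely as an illustrative sketch, the snippet below (Python) assumes the standard one-parameter form χ² = [(σ_SM − σ_BSM)/(σ_SM δ)]² with δ = (δ_st² + δ_sys²)^(1/2), reuses the κ̃ parametrization of Eq. (40) below as a stand-in for σ_BSM, and takes the 64.8% hadronic branching fraction quoted above; the luminosity and the neglect of δ_sys are placeholder choices, so the printed number only indicates the order of magnitude of a 95% C.L. limit.

```python
import numpy as np
from scipy.optimize import brentq

def chi2(kappa, sigma_bsm, sigma_sm, lumi_fb, br, delta_sys=0.0):
    """Assumed one-parameter chi^2: ((sigma_SM - sigma_BSM)/(sigma_SM*delta))^2.
    Cross-sections in pb, luminosity in fb^-1."""
    n_sm = lumi_fb * 1.0e3 * sigma_sm * br          # expected SM events (1 pb = 1e3 fb)
    delta_st = 1.0 / np.sqrt(n_sm)                  # statistical error
    delta = np.sqrt(delta_st**2 + delta_sys**2)
    return ((sigma_sm - sigma_bsm(kappa)) / (sigma_sm * delta))**2

# stand-in parametrization: Eq. (40), sigma(kappa_tilde) in pb
sigma = lambda kt: 4.22e8*kt**6 + 2.89e5*kt**4 + 1.22e2*kt**2 + 9.74e-3
sigma_sm = sigma(0.0)

# 95% C.L. for one parameter corresponds to chi^2 = 3.84
limit = brentq(lambda kt: chi2(kt, sigma, sigma_sm, lumi_fb=3000.0, br=0.648) - 3.84,
               1.0e-6, 1.0)
print(f"illustrative 95% C.L. limit: |kappa_tilde| < {limit:.2e}")
```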
… 22 × 10² κ² + 2.79 × 10⁻² κ + 9.74 × 10⁻³ (pb),   (39)

σ(κ̃) = 4.22 × 10⁸ κ̃⁶ + 2.89 × 10⁵ κ̃⁴ + 1.22 × 10² κ̃² + 9.74 × 10⁻³ (pb).   (40)

In the next section, the calculated cross-sections in Eqs. (25)-(28), (32)-(35) and (37)-(40) are used for sensitivity estimates on the anomalous MDM and EDM of the τ-lepton.

IV. SENSITIVITY ESTIMATES ON THE DIPOLE MOMENTS OF THE τ-LEPTON AT THE LHC, FCC-HE AND CLIC

A. Sensitivity on the dipole moments of the τ-lepton from pp → pτ+τ−γp at the LHC

In this subsection, phenomenological projections on the total cross-section and on the dipole moments κ and κ̃ of the τ-lepton through the pp → pτ+τ−γp signal at the LHC are presented. For our numerical analysis we start from the expressions given by Eqs. (25)-(28), from which we obtain the total cross-section plots of Figs. 3-6. These four figures represent the same observable, expressed in terms of the different anomalous parameters κ, κ̃ and (κ, κ̃), respectively. From these figures, a strong dependence of the total cross-section on the anomalous parameters κ and κ̃, as well as on the center-of-mass energy of the LHC, is clearly observed. Furthermore, a direct comparison between the SM results, that is, those with κ = κ̃ = 0 (see Eqs. (25)-(28)), and the corresponding ones obtained in Figs. 3-6 shows a large difference, of the order of O(10³−10⁴), in the total cross-section. To estimate the sensitivity of the LHC to the anomalous couplings κ and κ̃, we consider √s = 13 and 14 TeV and integrated luminosities L = 10, 50, 200 fb⁻¹. To this effect, in Figs. 7 and 8 we use Eqs. (25)-(28) to illustrate the region of parameter space allowed at 95% C.L.. The best sensitivities estimated from Figs. 7 and 8, taken one coupling at a time, are obtained for √s = 14 TeV and L = 200 fb⁻¹. These results are consistent with those reported in
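For orientation only, the following short sketch evaluates the κ̃ parametrization of Eq. (40) at a few arbitrary coupling values and reports the enhancement over the SM (κ̃ → 0) cross-section; the O(10³−10⁴) enhancements quoted above for the LHC refer to Eqs. (25)-(28), which are not reproduced here.

```python
# sigma(kappa_tilde) in pb, from Eq. (40); the kappa_tilde values below are illustrative only
def sigma_eq40(kt):
    return 4.22e8*kt**6 + 2.89e5*kt**4 + 1.22e2*kt**2 + 9.74e-3

sigma_sm = sigma_eq40(0.0)
for kt in (0.001, 0.01, 0.05, 0.1):
    print(f"kappa_tilde = {kt:5.3f}: sigma = {sigma_eq40(kt):.3e} pb, "
          f"sigma/sigma_SM = {sigma_eq40(kt)/sigma_sm:.1f}")
```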
Figs. 13 and 14 show the sensitivity contour bands in the plane of κ̃ vs κ for the FCC-he with center-of-mass energies √s = 7.07, 10 TeV and luminosities L = 100, 500, 1000 fb⁻¹. The sensitivity estimates at 95% C.L. on the anomalous parameters are found to be: κ = (−0.0035, 0.0025), 95% C.L., κ̃ = (−0.0025, 0.0030), 95% C.L..

BELLE Collaborations, corroborating the impact of the e−p → e−ττγp signal, in addition to the parameters of the FCC-he: ã_τ = (−0.00265, 0.00246), 95% C.L., |d_τ| = 1.437 × 10⁻¹⁷ ecm, 95% C.L..
FIG. 3: The total cross-sections of the process pp → pττγp as a function of κ for center-of-mass energies of √s = 13, 14 TeV at the LHC.
FIG. 4: Same as in Fig. 3, but for κ̃.
FIG. 5: The total cross-sections of the process pp → pττγp as a function of κ and κ̃ for a center-of-mass energy of √s = 13 TeV at the LHC.
FIG. 6: Same as in Fig. 5, but for a center-of-mass energy of √s = 14 TeV.
FIG. 7: Sensitivity contours at the 95% C.L. in the κ-κ̃ plane through the process pp → pττγp for √s = 13 TeV at the LHC.
FIG. 8: Same as in Fig. 7, but for √s = 14 TeV.
FIG. 9: The total cross-sections of the process e−p → e−ττγp as a function of κ for center-of-mass energies of √s = 7.07, 10 TeV at the FCC-he.
FIG. 10: Same as in Fig. 9, but for κ̃.
FIG. 11: The total cross-sections of the process e−p → e−ττγp as a function of κ and κ̃ for a center-of-mass energy of √s = 7.07 TeV at the FCC-he.
FIG. 12: Same as in Fig. 11, but for √s = 10 TeV.
FIG. 13: Sensitivity contours at the 95% C.L. in the κ-κ̃ plane through the process e−p → e−ττγp for √s = 7.07 TeV at the FCC-he.
FIG. 14: Same as in Fig. 13, but for √s = 10 TeV.
FIG. 15: The total cross-sections of the process e+e− → e+ττγe− as a function of κ for center-of-mass energies of √s = 1.5, 3 TeV at the CLIC.
FIG. 16: Same as in Fig. 15, but for κ̃.
FIG. 17: The total cross-sections of the process e+e− → e+ττγe− as a function of κ and κ̃ for a center-of-mass energy of √s = 1.5 TeV at the CLIC.
FIG. 18: Same as in Fig. 17, but for √s = 3 TeV.
FIG. 19: Sensitivity contours at the 95% C.L. in the κ-κ̃ plane through the process e+e− → e+ττγe− for √s = 1.5 TeV at the CLIC.
FIG. 20: Same as in Fig. 19, but for √s = 3 TeV.
TABLE I: Experimental results for the magnetic and electric dipole moments of the τ-lepton.

Collaboration | Best present experimental bound on a_τ | C.L. | Reference
DELPHI | −0.052 < a_τ < 0.013 | 95% | [20]
L3 | −0.052 < a_τ < 0.058 | 95% | [21]
OPAL | −0.068 < a_τ < 0.065 | 95% | [22]

Collaboration | Best present experimental bound on d_τ | C.L. | Reference
BELLE | −2.2 < Re(d_τ) (10⁻¹⁷ ecm) < 4.5; −2.5 < Im(d_τ) (10⁻¹⁷ ecm) < 0.8 | 95% | [23]
DELPHI | −0.22 < d_τ (10⁻¹⁶ ecm) < 0.45 | 95% | [20]
L3 | |Re(d_τ)| (10⁻¹⁶ ecm) < 3.1 | 95% | [21]
TABLE II: Benchmark parameters of the LHC, FCC-he and CLIC based γ*γ* colliders [11, 13, 17, 18, 82-85].

LHC | √s (TeV) | L (fb⁻¹)
Phase I | 7, 8 | 10, 20, 30, 40, 50
Phase II | 13 | 10, 30, 50, 100, 200
Phase III | 14 | 10, 30, 50, 100, 200, 300, 3000

FCC-he | √s (TeV) | L (fb⁻¹)
Phase I | 3.5 | 20, 50, 100, 300, 500
Phase II | 7.07 | 100, 300, 500, 700, 1000
Phase III | 10 | 100, 300, 500, 700, 1000

CLIC | √s (TeV) | L (fb⁻¹)
Phase I | 0.350 | 10, 50, 100, 200, 500
Phase II | 1.4 | 10, 50, 100, 200, 500, 1000, 1500
Phase III | 3 | 10, 100, 500, 1000, 2000, 3000
TABLE III: Model-independent sensitivity estimate for the ã_τ magnetic moment and the d_τ electric dipole moment through the process pp → pτ+τ−γp at the LHC.

√s = 13 TeV, 95% C.L.
L (fb⁻¹) | Pure-leptonic ã_τ | Pure-leptonic |d_τ (ecm)| | Semi-leptonic ã_τ | Semi-leptonic |d_τ (ecm)|
10 | [−0.02051, 0.02038] | 1.1361×10⁻¹⁶ | [−0.01406, 0.01391] | 7.7757×10⁻¹⁷
TABLE IV: Model-independent sensitivity estimate for the ã_τ magnetic moment and the d_τ electric dipole moment through the process e−p → e−τ+τ−γp at the FCC-he.

√s = 7.07 TeV, 95% C.L.
L (fb⁻¹) | Pure-leptonic ã_τ | Pure-leptonic |d_τ (ecm)| | Semi-leptonic ã_τ | Semi-leptonic |d_τ (ecm)|
100 | [−0.00505, 0.00470] | 2.7222×10⁻¹⁷ | [−0.00370, 0.00335] |
TABLE V: Model-independent sensitivity estimate for the ã_τ magnetic moment and the d_τ electric dipole moment through the process e−e+ → e+τ+τ−γe− at the CLIC.

L (fb⁻¹) | Pure-leptonic ã_τ | Pure-leptonic |d_τ (ecm)| | Semi-leptonic ã_τ | Semi-leptonic |d_τ (ecm)|
50 | [−0.01488, 0.01473] | 8.2331×10⁻¹⁷ | [−0.00952, 0.00935] | 5.2471×10⁻¹⁷
100 | [−0.01139, 0.01122] | 6.2865×10⁻¹⁷ | [−0.00845, 0.00828] |
10 | [−0.01959, 0.01948] | | |
30 | [−0.01449, 0.01436] | | |
50 | [−0.01424, 0.01411] | | |
ATLAS Collaboration, Reconstruction, Energy Calibration, and Identification of Hadronically Decaying Tau Leptons in the ATLAS Experiment for Run-2 of the LHC, ATL-PHYS-PUB-2015-045 (2015), http://cds.cern.ch/record/2064383.
Georges Aad, et al. [ATLAS Collaboration], Eur. Phys. J. C75, 303 (2015).
. M Tanabashi, Phys. Rev. D. 9830001Particle Data GroupM. Tanabashi, et al., [Particle Data Group], Phys. Rev. D 98, 030001 (2018).
. F Englert, R Brout, Phys. Rev. Lett. 13321F. Englert and R. Brout, Phys. Rev. Lett. 13, 321 (1964).
. P W Higgs, Phys. Lett. 12132P. W. Higgs, Phys. Lett. 12, 132 (1964).
. P W Higgs, Phys. Rev. Lett. 13508P. W. Higgs, Phys. Rev. Lett. 13, 508 (1964).
G. S. Guralnik, C. R. Hagen and T. W. B. Kibble, Phys. Rev. Lett. 13, 585 (1964).
. G Aad, ATLAS CollaborationPhys. Lett. 7161G. Aad, et al., [ATLAS Collaboration], Phys. Lett. B716, 1 (2012).
. S Chatrchyan, CMS CollaborationPhys. Lett. 71630S. Chatrchyan, et al., [CMS Collaboration], Phys. Lett. B716, 30 (2012).
M Klein, arXiv:0908.2877Proceedings, 17th International Workshop on Deep-Inelastic Scattering and Related Subjects. 17th International Workshop on Deep-Inelastic Scattering and Related SubjectsMadrid, Spainhep-exM. Klein, in Proceedings, 17th International Workshop on Deep-Inelastic Scattering and Re- lated Subjects (DIS 2009): Madrid, Spain, April 26-30, 2009 (2009) arXiv:0908.2877 [hep-ex].
. J L Fernandez, J. Phys. 3975001LHeC Study GroupJ. L. Abelleira Fernandez, et al. [LHeC Study Group], J. Phys. G39, 075001 (2012).
. O Bruening, M Klein, Mod. Phys. Lett. 281330011O. Bruening and M. Klein, Mod. Phys. Lett. A28, 1330011 (2013).
. Oliver Brüning, John Jowett, Max Klein, Dario Pellegrini, Daniel Schulte, Frank Zimmermann, V1.0Oliver Brüning, John Jowett, Max Klein, Dario Pellegrini, Daniel Schulte and Frank Zimmermann, EDMS 17979910 FCC-ACC-RPT-0012, V1.0, 6 April, 2017. https://fcc.web.cern.ch/Documents/FCCheBaselineParameters.pdf.
. J L A Fernandez, arXiv:1211.5102LHeC Study GroupJ. L. A. Fernandez, et al., [LHeC Study Group], arXiv:1211.5102.
. J L A Fernandez, arXiv:1211.4831J. L. A. Fernandez, et al., arXiv:1211.4831.
. Huan-Yu, Ren-You Bi, Xing-Gang Zhang, Wen-Gan Wu, Xiao-Zhou Ma, Samuel Li, Owusu, Phys. Rev. 9574020Huan-Yu, Bi, Ren-You Zhang, Xing-Gang Wu, Wen-Gan Ma, Xiao-Zhou Li and Samuel Owusu, Phys. Rev. D95, 074020 (2017).
. Y C Acar, A N Akay, S Beser, H Karadeniz, U Kaya, B B Oner, S Sultansoy, Nuclear Inst. and Methods in Physics Research. 87147Y. C. Acar, A. N. Akay, S. Beser, H. Karadeniz, U. Kaya, B. B. Oner, S. Sultansoy, Nuclear Inst. and Methods in Physics Research A871, 47 (2017).
. H Abramowicz, Eur. Phys. J. 77475H. Abramowicz, et al., Eur. Phys. J. C77, 475 (2017).
. S Eidelman, M Passera, Mod. Phys. Lett. 22159S. Eidelman and M. Passera, Mod. Phys. Lett. A22, 159 (2007).
. J Abdallah, DELPHI CollaborationEur. Phys. J. 35159J. Abdallah, et al., [DELPHI Collaboration], Eur. Phys. J. C35, 159 (2004).
. M Acciarri, L3 CollaborationPhys. Lett. 434169M. Acciarri, et al. [L3 Collaboration], Phys. Lett. B434, 169 (1998).
. K Ackerstaff, OPAL CollaborationPhys. Lett. 431188K. Ackerstaff, et al. [OPAL Collaboration], Phys. Lett. B431, 188 (1998).
. K Inami, BELLE CollaborationPhys. Lett. 55116K. Inami, et al., [BELLE Collaboration], Phys. Lett. B551, 16 (2003).
. H Albrecht, ARGUS CollaborationPhys. Lett. 48537H. Albrecht, et al., [ARGUS Collaboration], Phys. Lett. B485, 37 (2000).
. N Yamanaka, Int. J. Mod. Phys. 261730002N. Yamanaka, Int. J. Mod. Phys. E26, 1730002 (2017).
. N Yamanaka, B Sahoo, N Yoshinaga, T Sato, K Asahi, B Das, Eur. Phys. J. A53. 54N. Yamanaka, B. Sahoo, N. Yoshinaga, T. Sato, K. Asahi, and B. Das, Eur. Phys. J. A53, 54 (2017).
. J Engel, M J Ramsey-Musolf, U Van Kolck, Prog. Part. Nucl. Phys. 7121J. Engel, M. J. Ramsey-Musolf, and U. van Kolck, Prog. Part. Nucl. Phys. 71, 21 (2013).
. W Bernreuther, A Brandenburg, P Overmann, Erratum: Phys. Lett. 391425Phys. Lett.W. Bernreuther, A. Brandenburg and P. Overmann, Phys. Lett. B391, 413 (1997), Erratum: Phys. Lett. B412, 425 (1997).
. E O Iltan, Eur. Phys. J. 44411E. O. Iltan, Eur. Phys. J. C44, 411 (2005).
. B Dutta, R N Mohapatra, Phys. Rev. 68113008B. Dutta, R. N. Mohapatra, Phys. Rev. D68, 113008 (2003).
. E Iltan, Phys. Rev. 6413013E. Iltan, Phys. Rev. D64, 013013 (2001).
. E Iltan, JHEP. 065305E. Iltan, JHEP 065, 0305 (2003).
. E Iltan, JHEP. 040418E. Iltan, JHEP 0404, 018 (2004).
. L Tabares, O A Sampayo, Phys. Rev. 6553012L. Tabares, O. A. Sampayo, Phys. Rev. D65, 053012 (2002).
. S Eidelman, D Epifanov, M Fael, L Mercolli, M Passera, JHEP. 1603140S. Eidelman, D. Epifanov, M. Fael, L. Mercolli, M. Passera, JHEP 1603, 140 (2016).
. M , arXiv:1809.01963hep-phM. Köksal, arXiv:1809.01963 [hep-ph].
. M A Arroyo-Ureña, Eur. Phys. J. 77227M. A. Arroyo-Ureña, et al., Eur. Phys. J. C77, 227 (2017).
. M A Arroyo-Ureña, Int. J. Mod. Phys. 321750195M. A. Arroyo-Ureña, et al., Int. J. Mod. Phys. A32, 1750195 (2017).
. Xin Chen, arXiv:1803.00501hep-phXin Chen, et al., arXiv:1803.00501 [hep-ph].
. Antonio Pich, Prog. Part. Nucl. Phys. 75Antonio Pich, Prog. Part. Nucl. Phys. 75, 41-85 (2014).
. S Atag, E Gurkanli, JHEP. 1606118S. Atag and E. Gurkanli, JHEP 1606, 118 (2016).
. Lucas Taylor, Nucl. Phys. Proc. Suppl. 76237Lucas Taylor, Nucl. Phys. Proc. Suppl. 76, 237 (1999).
. M Passera, Nucl. Phys. Proc. Suppl. 169213M. Passera, Nucl. Phys. Proc. Suppl. 169, 213 (2007).
. M Passera, Phys. Rev. 7513002M. Passera, Phys. Rev. D75, 013002 (2007).
. J Bernabeu, G A González-Sprinberg, J Papavassiliou, J Vidal, Nucl. Phys. 790160J. Bernabeu, G. A. González-Sprinberg, J. Papavassiliou, J. Vidal, Nucl. Phys. B790, 160 (2008).
. Y Özgüven, S C Inan, A A Billur, M Köksal, M K Bahar, Nucl. Phys. 923475Y.Özgüven, S. C. Inan, A. A. Billur, M. Köksal, M. K. Bahar, Nucl. Phys. B923, 475 (2017).
. A Gutiérrez-Rodríguez, M A Hernández-Ruíz, L N Luis-Noriega, Mod. Phys. Lett. 192227A. Gutiérrez-Rodríguez, M. A. Hernández-Ruíz and L.N. Luis-Noriega, Mod. Phys. Lett. A19, 2227 (2004).
. A Gutiérrez-Rodríguez, M A Hernández-Ruíz, M A Pérez, Int. J. Mod. Phys. 223493A. Gutiérrez-Rodríguez, M. A. Hernández-Ruíz and M. A. Pérez, Int. J. Mod. Phys. A22, 3493 (2007).
. A Gutiérrez-Rodríguez, Mod. Phys. Lett. 25703A. Gutiérrez-Rodríguez, Mod. Phys. Lett. A25, 703 (2010).
. A Gutiérrez-Rodríguez, M A Hernández-Ruíz, C P Castañeda-Almanza, J. Phys. 4035001A. Gutiérrez-Rodríguez, M. A. Hernández-Ruíz, C. P. Castañeda-Almanza, J. Phys. G40, 035001 (2013).
. A A Billur, M , Phys. Rev. 8937301A. A. Billur, M. Köksal, Phys. Rev. D89, 037301 (2014).
. W Bernreuther, O Nachtmann, P Overmann, Phys. Rev. 4878W. Bernreuther, O. Nachtmann, P. Overmann, Phys. Rev. D48, 78 (1993).
. M Köksal, A A Billur, A Gutiérrez-Rodríguez, M A Hernández-Ruíz, Phys. Rev. 9815017M. Köksal, A. A. Billur, A. Gutiérrez-Rodríguez and M. A. Hernández-Ruíz, Phys. Rev. D98, 015017 (2018).
. J I Aranda, D Espinosa-Gómez, J Montaño, B Quezadas-Vivian, F Ramírez-Zavaleta, E S Tututi, Phys. Rev. 98116003J. I. Aranda, D. Espinosa-Gómez, J. Montaño, B. Quezadas-Vivian, F. Ramírez-Zavaleta, E. S. Tututi, Phys. Rev. D98, 116003 (2018).
. A S Fomin, A Yu, A Korchin, S Stocchi, P Barsuk, Robbe, arXiv:1810.06699hep-phA. S. Fomin, A. Yu. Korchin, A. Stocchi, S. Barsuk and P. Robbe, arXiv:1810.06699 [hep-ph].
. R Escribano, E Massó, Phys. Lett. 301419R. Escribano and E. Massó, Phys. Lett. B301, 419 (1993).
. R Escribano, E Massó, Nulc. Phys. 42919R. Escribano and E. Massó, Nulc. Phys. B429, 19 (1994).
. J A Grifols, A Méndez, Phys. Lett. 255611J. A. Grifols and A. Méndez, Phys. Lett. B255, 611 (1991);
Erratum ibid. 259512Erratum ibid. B259, 512 (1991).
. R Escribano, E Massó, Phys. Lett. 395369R. Escribano and E. Massó, Phys. Lett. B395, 369 (1997).
. C Giunti, A Studenikin, Phys. Atom. Nucl. 722089C. Giunti and A. Studenikin, Phys. Atom. Nucl. 72, 2089 (2009).
. C Giunti, A Studenkin, Rev. Mod. Phys. 87531C. Giunti and A. Studenkin, Rev. Mod. Phys. 87, 531 (2015).
. W Buchmuller, D Wyler, Nucl. Phys. 268621W. Buchmuller and D. Wyler, Nucl. Phys. B268, 621 (1986).
. B Grzadkowski, M Iskrzynski, M Misiak, J Rosiek, JHEP. 1085B. Grzadkowski, M. Iskrzynski, M. Misiak, and J. Rosiek, JHEP 10, 085 (2010).
Electromagnetic dipole moments of fermions. M Fael, PhD. ThesisM. Fael, Electromagnetic dipole moments of fermions, PhD. Thesis, (2014).
. S Eidelman, D Epifanov, M Fael, L Mercolli, M Passera, JHEP. 1603140S. Eidelman, D. Epifanov, M. Fael, L. Mercolli and M. Passera, JHEP 1603, 140 (2016).
. F Del Aguila, M Sher, Phys. Lett. 252116F. del Aguila and M. Sher, Phys. Lett. B252, 116 (1990).
. I Galon, A Rajaraman, R Riley, Tim M P Tait, JHEP. 1612111I. Galon, A. Rajaraman, R. Riley, and Tim M. P. Tait, JHEP 1612, 111 (2016).
. C Weizsacker, Z. Phys. 88612C. von Weizsacker, Z. Phys. 88, 612 (1934).
. E Williams, Phys. Rev. 45729E. Williams, Phys. Rev. 45, 729 (1934).
. M Albrow, CMS CollaborationarXiv:0811.0120JINST. 410001hep-exM. Albrow, et al., [CMS Collaboration], JINST 4, P10001 (2009); arXiv:0811.0120 [hep-ex].
. A Abulencia, CDF CollaborationPhys. Rev. Lett. 98112001A. Abulencia, et al., [CDF Collaboration], Phys. Rev. Lett. 98, 112001 (2007).
. T Aaltonen, CDF CollaborationPhys. Rev. Lett. 102222002T. Aaltonen, et al., [CDF Collaboration], Phys. Rev. Lett. 102, 222002 (2009).
. T Aaltonen, CDF CollaborationPhys. Rev. Lett. 102242001T. Aaltonen, et al., [CDF Collaboration], Phys. Rev. Lett. 102, 242001 (2009).
. S Chatrchyan, CMS CollaborationJHEP. 120152S. Chatrchyan, et al., [CMS Collaboration], JHEP 1201, 052 (2012).
. S Chatrchyan, CMS CollaborationJHEP. 121180S. Chatrchyan, et al., [CMS Collaboration], JHEP 1211, 080 (2012).
. V M Abazov, D0 CollaborationPhys. Rev. 8812005V. M. Abazov, et al., [D0 Collaboration], Phys. Rev. D88, 012005 (2013).
. S Chatrchyan, CMS CollaborationJHEP. 07116S. Chatrchyan, et al., [CMS Collaboration], JHEP 07, 116 (2013).
. M G Albrow, T D Coughlin, J R Forshaw, Prog. Part. Nucl. Phys. 65149M. G. Albrow, T. D. Coughlin and J. R. Forshaw, Prog. Part. Nucl. Phys. 65, 149 (2010).
. M G Albrow, FP420 R and D CollaborationJINST. 410001M. G. Albrow, et al., [FP420 R and D Collaboration], JINST 4, T10001 (2009);
. M Tasevsky, Nucl. Phys. Proc. 187Suppl. 179-180M. Tasevsky, Nucl. Phys. Proc. Suppl. 179-180, 187 (2008).
. M Tasevsky, arXiv:1407.8332hep-phM. Tasevsky, arXiv:1407.8332 [hep-ph].
Technical Proposal for the Phase-II Upgrade of the CMS Detector. CERN-LHCC- 2015-010. LHCC-P-008GenevaCERNTech. Rep.Technical Proposal for the Phase-II Upgrade of the CMS Detector, Tech. Rep. CERN-LHCC- 2015-010. LHCC-P-008, CERN, Geneva, Jun 2015.
CERN-LHCC-2015-019CMS Phase II Upgrade Scope Document. GenevaCERNLHCC-G-165CMS Phase II Upgrade Scope Document, Tech. Rep. CERN-LHCC-2015-019. LHCC-G-165, CERN, Geneva, Sep 2015.
. Thomas Barber, Ulrich Parzefall, Nuclear Instruments and Methods in Physics Research. 730191Thomas Barber, Ulrich Parzefall, Nuclear Instruments and Methods in Physics Research A730, 191 (2013).
. A M Sirunyan, CMS and TOTEM CollaborationsJHEP. 1807153A. M. Sirunyan, et al., [CMS and TOTEM Collaborations], JHEP 1807, 153 (2018).
P N Burrows, CERN-2018-005-MThe Compact Linear e + e − Collider (CLIC) -2018 Summary Report. Geneva2CERNThe Compact Linear e + e − Collider (CLIC) -2018 Summary Report, edited by P. N. Burrows, et al.. CERN Yellow Report: Monograph Vol. 2/2018, CERN-2018-005-M (CERN, Geneva, 2018).
. S Atag, A A Billur, JHEP. 101160S. Atag and A.A. Billur, JHEP 1011, 060 (2010).
. J N Howard, A Rajaraman, R Riley, Tim M P Tait, arXiv:1810.09570v1J. N. Howard, A. Rajaraman, R. Riley, and Tim M. P. Tait, arXiv:1810.09570v1.
. M Köksal, A A Billur, A Gutiérrez-Rodríguez, Adv. High Energy Phys. 20176738409M. Köksal, A. A. Billur and A. Gutiérrez-Rodríguez, Adv. High Energy Phys. 2017, 6738409 (2017).
. A Belyaev, N D Christensen, A Pukhov, Comput. Phys. Commun. 1841729A. Belyaev, N. D. Christensen and A. Pukhov, Comput. Phys. Commun. 184, 1729 (2013).
. V M Budnev, I F Ginzburg, G V Meledin, V G Serbo, Phys. Rep. 15181V. M. Budnev, I. F. Ginzburg, G. V. Meledin and V. G. Serbo, Phys. Rep. 15, 181 (1975).
FIG. 1: A schematic diagram for the processes pp(e−p, e+e−) → pττγp(e−ττγp, e+ττγe−).
|
[] |
[
"HAUMEA'S SHAPE, COMPOSITION, AND INTERNAL STRUCTURE",
"HAUMEA'S SHAPE, COMPOSITION, AND INTERNAL STRUCTURE"
] |
[
"E T Dunham ",
"S J Desch \nSchool of Earth and Space Exploration\nArizona State University\n\n",
"L Probst \nSchool of Earth and Space Exploration\nArizona State University\n\n\nSan Francisco University High School\n\n\nDunham, E. T. et al\n\n"
] |
[
"School of Earth and Space Exploration\nArizona State University\n",
"School of Earth and Space Exploration\nArizona State University\n",
"San Francisco University High School\n",
"Dunham, E. T. et al\n"
] |
[] |
We have calculated the figure of equilibrium of a rapidly rotating, differentiated body to determine the shape, structure, and composition of the dwarf planet Haumea. Previous studies of Haumea's light curve have suggested Haumea is a uniform triaxial ellipsoid consistent with a Jacobi ellipsoid with axes ≈ 960 × 774 × 513 km, and bulk density ≈ 2600 kg m −3 . In contrast, observations of a recent stellar occultation by Haumea indicate its axes are ≈ 1161 × 852 × 523 km and its bulk density ≈ 1885 kg m −3 ; these results suggest that Haumea cannot be a fluid in hydrostatic equilibrium and must be partially supported by interparticle forces. We have written a code to reconcile these contradictory results and to determine if Haumea is in fact a fluid in hydrostatic equilibrium. The code calculates the equilibrium shape, density, and ice crust thickness of a differentiated Haumea after imposing (semi-) axes lengths a and b. We find Haumea is consistent with a differentiated triaxial ellipsoid fluid in hydrostatic equilibrium with axes of best fit a = 1050 km, b = 840 km, and c = 537 km. This solution for Haumea has ρ avg = 2018 kg m −3 , ρ core = 2680 kg m −3 , and core axes a c = 883 km, b c = 723 km, and c c = 470 km, which equates to an ice mantle comprising ∼ 17% of Haumea's volume and ranging from 67 to 167 km in thickness. The thick ice crust we infer allows for Haumea's collisional family to represent only a small fraction of Haumea's pre-collisional ice crust. For a wide range of parameters, the core density we calculate for Haumea suggests that today the core is composed of hydrated silicates and likely underwent serpentinization in the past.
|
10.3847/1538-4357/ab13b3
|
[
"https://arxiv.org/pdf/1904.00522v1.pdf"
] | 90,262,114 |
1904.00522
|
a6c60419b6d3c6804a6b088b9e6888f0b317556f
|
HAUMEA'S SHAPE, COMPOSITION, AND INTERNAL STRUCTURE
April 2, 2019 1 Apr 2019
E T Dunham
S J Desch
School of Earth and Space Exploration
Arizona State University
L Probst
School of Earth and Space Exploration
Arizona State University
San Francisco University High School
Dunham, E. T. et al
HAUMEA'S SHAPE, COMPOSITION, AND INTERNAL STRUCTURE
April 2, 2019. Draft version, typeset using LaTeX twocolumn style in AASTeX61. Keywords: Kuiper belt objects: individual (Haumea); planets and satellites: interiors; planets and satellites: composition; planets and satellites: formation
We have calculated the figure of equilibrium of a rapidly rotating, differentiated body to determine the shape, structure, and composition of the dwarf planet Haumea. Previous studies of Haumea's light curve have suggested Haumea is a uniform triaxial ellipsoid consistent with a Jacobi ellipsoid with axes ≈ 960 × 774 × 513 km, and bulk density ≈ 2600 kg m −3 . In contrast, observations of a recent stellar occultation by Haumea indicate its axes are ≈ 1161 × 852 × 523 km and its bulk density ≈ 1885 kg m −3 ; these results suggest that Haumea cannot be a fluid in hydrostatic equilibrium and must be partially supported by interparticle forces. We have written a code to reconcile these contradictory results and to determine if Haumea is in fact a fluid in hydrostatic equilibrium. The code calculates the equilibrium shape, density, and ice crust thickness of a differentiated Haumea after imposing (semi-) axes lengths a and b. We find Haumea is consistent with a differentiated triaxial ellipsoid fluid in hydrostatic equilibrium with axes of best fit a = 1050 km, b = 840 km, and c = 537 km. This solution for Haumea has ρ avg = 2018 kg m −3 , ρ core = 2680 kg m −3 , and core axes a c = 883 km, b c = 723 km, and c c = 470 km, which equates to an ice mantle comprising ∼ 17% of Haumea's volume and ranging from 67 to 167 km in thickness. The thick ice crust we infer allows for Haumea's collisional family to represent only a small fraction of Haumea's pre-collisional ice crust. For a wide range of parameters, the core density we calculate for Haumea suggests that today the core is composed of hydrated silicates and likely underwent serpentinization in the past.
INTRODUCTION
The Kuiper Belt Object (KBO) and dwarf planet Haumea is one of the most intriguing and puzzling objects in the outer Solar System. Haumea orbits beyond Pluto, with a semi-major axis of 43.2 AU, and is currently near its aphelion distance ≈ 51.5 AU, but is relatively bright at magnitude V = 17.3, due to its large size and icy surface. Haumea's mean radius is estimated to be ≈ 720 km (Lockwood et al. 2014) to ≈ 795 km (Ortiz et al. 2017), and its reflectance spectra indicate that Haumea's surface is uniformly covered by close to 100% water ice (Trujillo et al. 2007; Pinilla-Alonso et al. 2009). Haumea is the third-brightest KBO, after the dwarf planets Pluto and Makemake. Haumea has two small satellites, Hi'iaka and Namaka, which enable a determination of its mass, M_H = 4.006 × 10²¹ kg (Ragozzine & Brown 2009); it is the third or fourth most massive known KBO (after Pluto, Eris, and possibly Makemake). Despite its large size, it is a rapid rotator; from its light curve, Haumea's rotation period is found to be 3.91531 ± 0.00005 hours (Lellouch et al. 2010). This means Haumea is the fastest-rotating KBO (Sheppard & Jewitt 2002), and is in fact the fastest-rotating large (> 100 km) object in the Solar System (Rabinowitz et al. 2006). Haumea is also associated with a collisional family and is known to have a ring (Ortiz et al. 2017). Based on its rapid rotation and its collisional family, Haumea is inferred to have suffered a large collision > 3 Gyr ago, the age indicated by the orbital dispersion of the family members (Volk & Malhotra 2012).
Haumea is larger than other dwarf planets such as Ceres (radius 473 km), or satellites such as Dione (radius 561 km) or Ariel (radius 579 km), all of which are nominally round. Despite this, Haumea exhibits a reflectance light curve with a very large peak-to-trough amplitude: ∆m ≈ 0.28 in 2005 (Rabinowitz et al. 2006), ∆m = 0.29 in 2007 (Lacerda et al. 2008), and ∆m = 0.32 in 2009 (Lockwood et al. 2014). Since Haumea's surface is spectrally uniform, such an extreme change in brightness can only be attributed to a difference in the area presented to the observer. Haumea has been modeled as a triaxial ellipsoid with axes a > b > c, the c-axis being aligned with the rotation axis. In that case, the change in brightness from peak to trough would be given by

∆m = 2.5 [ log₁₀(a/b) − log₁₀(r₁/r₂) ],   (1)

where

r₁ = (a² cos²φ + c² sin²φ)^(1/2)   (2)

and

r₂ = (b² cos²φ + c² sin²φ)^(1/2),   (3)
φ being the angle between the rotation axis and the line of sight (Binzel et al. 1989). If φ = 0°, then ∆m = 0, because the same a × b ellipse would be presented to the observer. Instead, ∆m is maximized when φ = 90°, because then the ellipse presented to the observer would vary between a × c and b × c. In that case, ∆m = 2.5 log₁₀(a/b) and the axis ratio is related directly to ∆m. Assuming φ = 90° in 2009, when ∆m = 0.32, one would derive b/a = 0.75. Taking into account the scattering properties of an icy surface, Lockwood et al. (2014) refined this to b/a = 0.80 ± 0.01. Thus, Haumea is distinctly non-spherical, and is not even axisymmetric. Haumea appears to be unique among large Solar System objects in having such a distinctly non-axisymmetric, triaxial ellipsoid shape. Based on its rapid rotation (angular velocity ω = 4.457 × 10⁻⁴ s⁻¹), Haumea is inferred to have assumed a particular shape known as a Jacobi ellipsoid. This is a class of equilibrium shapes assumed by (shearless) fluids in hydrostatic equilibrium when they rotate faster than a certain threshold (Chandrasekhar 1969, 1987). For a body with angular velocity ω and uniform density ρ, the axis ratios b/a and c/a of the ellipsoid are completely determined, and the axis ratios are single-valued functions of ω²/(πGρ). For a Jacobi ellipsoid with b/a = 0.806 and Haumea's rotation rate, the density must be ρ = 2580 kg m⁻³, and c/a = 0.520. Assuming a semi-axis of a = 960 km then yields b = 774 km and c = 499 km, so that (4π/3)abcρ exactly matches Haumea's mass. The mean radius of Haumea would be 718 km. Moreover, the cross-sectional area of Haumea would then imply a surface albedo p_V ≈ 0.71−0.84 (Rabinowitz et al. 2006; Lacerda & Jewitt 2006; Lellouch et al. 2010; Lockwood et al. 2014), consistent with the albedo of a water ice surface.
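As a quick numerical check of Eqs. (1)-(3), the short sketch below (Python; not from the paper) evaluates ∆m for the Jacobi-ellipsoid axes quoted above at two viewing geometries.

```python
import numpy as np

def delta_m(a, b, c, phi_deg):
    """Peak-to-trough light-curve amplitude of a triaxial ellipsoid, Eqs. (1)-(3)."""
    phi = np.radians(phi_deg)
    r1 = np.sqrt(a**2*np.cos(phi)**2 + c**2*np.sin(phi)**2)
    r2 = np.sqrt(b**2*np.cos(phi)**2 + c**2*np.sin(phi)**2)
    return 2.5*(np.log10(a/b) - np.log10(r1/r2))

# Jacobi-ellipsoid axes quoted in the text (km)
print(delta_m(960.0, 774.0, 499.0, 90.0))   # equator-on: 2.5*log10(a/b), about 0.23 for b/a = 0.806
print(delta_m(960.0, 774.0, 499.0, 0.0))    # pole-on: 0, the same a x b ellipse at all phases
```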
To explain Haumea's icy surface and ρ = 2580 kg m⁻³, one would have to assume that the interior of Haumea was close to 2600 kg m⁻³ in density (an interior of hydrated silicates; Desch & Turner 2015) while its surface was a very thin ice layer (§2). This structure implies that Haumea suffered a giant collision in its past that may have stripped its ice mantle. For these reasons, the above axes and axis ratios were strongly favored in the literature. Other groups derived similar axes and bulk densities (Rabinowitz et al. 2006; Lacerda et al. 2008; Lellouch et al. 2010). This model was upended by the observations of Ortiz et al. (2017) following the occultation of an 18th magnitude star by Haumea in January 2017. The shadow of Haumea traced out an ellipse, as expected for the shadow of a triaxial ellipsoid; but the (semi-)axes of the shadow ellipse were much larger than expected: b = 569 ± 13 km by a = 852 ± 2 km. Ortiz et al. (2017) used the shadow axes and other assumptions to derive the axes of Haumea to be a = 1161 ± 30 km, b = 852 ± 4 km, and c = 513 ± 16 km.
This new shape causes Haumea to look significantly different than previous models: the mean radius of Haumea is larger at 798 km, the albedo is a smaller p V ≈ 0.51 (and would require a darkening agent in addition to water ice), and the bulk density a lower 1885 ± 80 kg m −3 . Moreover, the axis ratio c/a ≈ 0.44 is significantly lower than previous estimates ≈ 0.52, and is inconsistent with a Jacobi ellipsoid or a fluid in hydrostatic equilibrium. Ortiz et al. (2017) point out the possibility that shear stresses may be supported on Haumea by granular interparticle forces (Holsapple 2001).
In either case, Haumea is likely to have a rocky core surrounded by ice, but no analytical solution exists for the figure of equilibrium of a rapidly rotating, differentiated body. Therefore it is not known whether or not Haumea is a fluid in hydrostatic equilibrium. In this paper we attempt to reconcile the existing data from Haumea's light curve and occultation shadow, with the goal of deriving its true shape and internal structure. Besides its axes, important quantities to constrain are the ice fraction on Haumea today, and the size, shape, and density of its core. A central question we can solve using these quantities is whether Haumea is a fluid in hydrostatic equilibrium or demands granular physics to support it against shear stresses. In addition, we can use the core density and ice fraction to constrain the geochemical evolution of Haumea and, by extension, other KBOs, as well as models of the origin of Haumea's collisional family.
In §2, we examine whether it is possible for Haumea to be a differentiated (rocky core, icy mantle) body with a Jacobi ellipsoid shape. We show that only homogeneous bodies are consistent with a Jacobi ellipsoid shape. In §3 we describe a code we have written to calculate the equilibrium figure of a rapidly rotating, differentiated body. In §4 we present the results, showing that entire families of solutions exist that allow Haumea to be a differentiated body in hydrostatic equilibrium. Some of these solutions appear consistent with observations of Haumea, particularly ≈ 1050 × 840 × 537 km with bulk density ≈ 2018 kg m −3 . In §5 we discuss the implications of this solution for Haumea's structure, for the collision that created the collisional family, and for the astrobiological potential of Haumea.
CAN A DIFFERENTIATED HAUMEA HAVE JACOBI ELLIPSOID AXES?
Observations have suggested that Haumea has axes consistent with a Jacobi ellipsoid: for example, Lockwood et al. (2014) inferred axes of a = 960 km, b = 770 km, and c = 495 km (yielding axis ratios b/a = 0.802, c/a = 0.516) and a uniform density ≈ 2614 kg m⁻³. These axes are within 1% of the Jacobi ellipsoid solution: a Jacobi ellipsoid with Haumea's mass, rotation rate, and a = 960 km has axes b = 774.2 km and c = 498.8 km (yielding axis ratios b/a = 0.806 and c/a = 0.520), and uniform density 2580 kg m⁻³. Uniform density is a central assumption of the Jacobi ellipsoid solution, and yet Haumea is manifestly not uniform in density. Its reflectance spectra robustly show the existence of a uniform water-ice surface (Trujillo et al. 2007; Pinilla-Alonso et al. 2009). The density of this ice, which is ice Ih at 40 K, is ≈ 921 kg m⁻³ (Desch et al. 2009), much lower than Haumea's mean density. Haumea, therefore, is certainly differentiated. A basic question, then, is whether a differentiated Haumea can be consistent with axes that match a Jacobi ellipsoid.
To answer this question, we have calculated the gravitational potential of a differentiated Haumea, as described in Probst (2015). We model Haumea as two nested, aligned triaxial ellipsoids. We assume Haumea's outer surface is an ellipsoid with axes a = 960 km, b = 770 km, and c = 495 km, and we allow its core to have arbitrary density ρ core , and arbitrary axis ratios p c = b c /a c and q c = c c /a c . For a given density ρ core and axis ratios p c and q c , the core axis a c is chosen so that the mass of the core plus the mass of the ice mantle, with density ρ ice = 921 kg m −3 , equal the mass of Haumea. We then calculate whether the surface and the core-mantle boundary (CMB) are equipotential surfaces.
An equilibrium solution must have the equipotential surfaces coincident with the surface and CMB, or else vortical flows will be generated. In the absence of external and viscous forces and large internal flows, the vorticity ω obeys

Dω/Dt = (1/ρ²) ∇ρ × ∇P,   (4)

where P is the pressure and ρ is the density. If the gradients of ρ and P are misaligned by an angle Θ, then vorticity is generated at a rate ∼ (P/ρ)/R² (sin Θ), where R is comparable to the mean radius. In a time τ, the vortical flows will circulate at rates comparable to the rotation rate, ω_H, where τ ∼ (ω_H ρ R²/P)(sin Θ)⁻¹.
Assuming ω_H = 4.46 × 10⁻⁴ s⁻¹, ρ = 921 kg m⁻³, P = 18 MPa, and R ∼ 725 km, the timescale is τ ∼ 4/ sin Θ years. Even a small mismatch, with Θ ∼ 1°, would lead to significant vortical flows within hundreds of years.

Figure 1. The "fit angle" Θ on the surface (a) and the core-mantle boundary (b), as functions of the core axis ratios p_c = b_c/a_c and q_c = c_c/a_c (note that p_c ≥ q_c). A core density of 2700 kg m⁻³ has been assumed. The core axis ratios that minimize the surface fit angle and are most consistent with equilibrium are p_c ≈ 0.80 and q_c ≈ 0.51 (denoted by the white star in b). The core axis ratios that minimize the fit angle on the CMB are p_c ≈ 0.82 and q_c ≈ 0.55 (denoted by the yellow star in b). On either surface the fit angle is at least a few degrees, and it is not possible to minimize Θ on both surfaces with the same core ratios.
Equilibrium solutions demand Θ = 0°, i.e., that ∇ρ and ∇P are parallel. In hydrostatic equilibrium, the net force is

F = −∇P + ρ g_eff,   (5)
where g_eff measures the acceleration due to gravity as well as centrifugal support due to Haumea's rotation. If the net force is zero, then it must be the case that ∇ρ is parallel to g_eff. In other words, the gradient in the effective gravitational potential must be parallel to the gradient in density, and surfaces of density discontinuity must be equipotential surfaces. We solve for the effective gravitational potential by discretizing an octant of Haumea on a Cartesian grid with 60 evenly spaced zones along each axis. If a rectangular zone is entirely inside the triaxial ellipsoid defined by the core, its density is set to ρ_core; if it is entirely outside the triaxial ellipsoid defined by the surface, its density is set to zero; and if it is entirely between these two ellipsoids, its density is set to ρ_ice. For zones straddling the core and ice mantle, or straddling the ice mantle and the exterior, the density is found using a Monte Carlo method. An array of about 100 points on the surface is then generated, by generating a grid of N_θ ≈ 10 angles θ from 0 to π radians, and of N_φ ≈ 10 angles φ from 0 to 2π radians. The points on the surface are defined by x* = a sin θ cos φ, y* = b sin θ sin φ, z* = c cos θ. At each point we find the vector normal to the surface, n = (2x*/a²) ê_x + (2y*/b²) ê_y + (2z*/c²) ê_z, as well as the gravitational acceleration g, found by summing the gravitational acceleration vectors from each zone's contribution. To this we add an additional contribution due to centrifugal support:

g_eff = g + ω² x* ê_x + ω² y* ê_y.   (6)
Once these vectors are found at each of the surface points defined by θ and φ, we find the following quantity by summing over all points:

M = [1/(N_θ N_φ)] Σ_θ Σ_φ (n · g_eff)/(|n| |g_eff|).   (7)

We also define the "fit angle," the mean angular deviation between the surface normal and the effective gravitational field:

Θ = cos⁻¹ M.   (8)
If the equipotential surfaces are coincident with the surface, then M = 1 and Θ = 0 • . In a similar fashion we define an identical metric M on the core-mantle boundary as well.
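To make the metric of Eqs. (7)-(8) concrete, here is a small illustrative sketch (not the authors' code): it samples surface points of a triaxial ellipsoid, forms the surface normals, and evaluates the mean fit angle Θ for a supplied effective-gravity function. The point-mass-plus-centrifugal field below is only a placeholder for the zone-summed gravity described in the text, and the outward normal is compared with −g_eff so that perfect alignment gives M = 1 and Θ = 0°.

```python
import numpy as np

G = 6.674e-11            # m^3 kg^-1 s^-2
M_H = 4.006e21           # kg, Haumea's mass
OMEGA = 4.457e-4         # s^-1, Haumea's spin angular velocity

def g_eff_pointmass(p):
    """Placeholder effective gravity (Eq. 6): central point mass plus centrifugal term.
    The paper instead sums the gravitational contribution of every grid zone."""
    r = np.linalg.norm(p)
    g = -G * M_H * p / r**3          # inward gravitational acceleration
    g[:2] += OMEGA**2 * p[:2]        # centrifugal support in the equatorial plane
    return g

def fit_angle(a, b, c, g_eff, n_theta=10, n_phi=10):
    """Mean misalignment angle Theta between surface normals and g_eff (Eqs. 7-8)."""
    thetas = np.linspace(0.05, np.pi - 0.05, n_theta)     # avoid the poles
    phis = np.linspace(0.0, 2.0*np.pi, n_phi, endpoint=False)
    cosines = []
    for th in thetas:
        for ph in phis:
            p = np.array([a*np.sin(th)*np.cos(ph), b*np.sin(th)*np.sin(ph), c*np.cos(th)])
            n = np.array([2*p[0]/a**2, 2*p[1]/b**2, 2*p[2]/c**2])   # outward surface normal
            g = -g_eff(p)            # compare with -g_eff so alignment gives cos = +1
            cosines.append(np.dot(n, g) / (np.linalg.norm(n)*np.linalg.norm(g)))
    return np.degrees(np.arccos(np.mean(cosines)))

# Example: axes close to the Jacobi solution of Section 2, in metres
print(fit_angle(960.0e3, 770.0e3, 495.0e3, g_eff_pointmass))
```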
In Figure 1, we plot the fit angle on the surface and core-mantle boundary of Haumea, as a function of the assumed core axis ratios p_c and q_c. A core density of 2700 kg m⁻³ is assumed. The core axis ratios that minimize Θ on the outer surface are p_c = b_c/a_c ≈ 0.80 and q_c = c_c/a_c ≈ 0.51. For this combination, Θ ≈ 1°. Meanwhile, the core axis ratios that minimize Θ on the core-mantle boundary are p_c ≈ 0.82 and q_c ≈ 0.55. For this combination, Θ ≈ 2°. Significantly, the parameters that minimize Θ on the outer surface are not those that minimize Θ on the core-mantle boundary. Given the change in Θ with changes in p_c and q_c, either Θ on the surface or on the core-mantle boundary must be at least a few degrees. Figure 1 shows numerous patches of higher-than-expected angles. These patches are caused by the random manner in which the grid cells straddling the CMB are populated. The coarseness of the grid leads the code to create a bumpiness in the CMB surface, which then upwardly skews the calculation of the average fit angle in places. This is because where the surface is bumpy, the surface normal vector can differ significantly from the gravitational acceleration vector. The patches become more numerous, but smaller in magnitude, with increasing numerical resolution.
We have repeated the analysis for other core densities of 3000 kg m −3 and 3300 kg m −3 . In those cases the discrepancy between what parameters p c and q c minimize Θ on the surface vs. what parameters minimize it on the CMB grows even larger. If Haumea has axes a = 960 km, b = 770 km, and c = 495 km, and is divided into a rocky core and icy mantle, the only way to maintain hydrostatic equilibrium on the surface and core-mantle boundary is for the core density to be as close as possible to the inferred bulk density of Haumea, ≈ 2600 kg m −3 , and for the core and surface axis ratios to converge. The solution is naturally driven to one of uniform density. In this case, the core comprises over 96% of the mass of Haumea, and the ice thickness is < 10 km on the a and b axes, and < 5 km on the c axis. Even for this case, though, the effective equipotential surfaces fail to coincide with the surface or core-mantle boundary, by several degrees. This suggests that the only way for Haumea to have axes consistent with a Jacobi ellipsoid is for it to have essentially no ice mantle, less than a few km thick.
These investigations reveal two facts. First, a Haumea divided into a rocky core and icy mantle cannot have axes equal to those of a Jacobi ellipsoid. This lends some support to the finding by Ortiz et al. (2017) that Haumea's axes deviate significantly from a Jacobi ellipsoid's. Second, if Haumea cannot conform to a Jacobi ellipsoid, then it is not possible to use analytical formulas to describe its shape, and a more powerful technique must be used to derive its internal structure.
METHODS
To calculate the shape of a rapidly rotating Haumea with a rocky core and icy mantle, we have written a code, named kyushu, that calculates the internal structure and figure of equilibrium of a differentiated body undergoing rapid, uniform rotation. Our algorithm is adapted from that of Hachisu (1986a,b;hereafter H86a,H86b), for calculating the structure of stars orbiting each other in binary systems, as follows.
The Hachisu (1986a,b) algorithm relies on using a governing equation derived from the Bernoulli equation, at each location on a three-dimensional grid:
ρ −1 dP + Φ − Ω 2 r ⊥ dr ⊥ = C,(9)
where the first term is the enthalpy, H, the second term is the gravitational potential energy, the third term is the rotational energy, and C is a constant. Here r ⊥ is the distance from the rotation axis. The grid is defined in spherical polar coordinates, the variables being distance from the origin, r, the cosine of the polar angle, µ, and the azimuthal angle, φ. A discretized grid of r i , with i = 1, 2, ...N r is defined, with r uniformly spaced between r 1 = 0 and r N r = R. Likewise, a discretized grid of µ j , with j = 1, 2, ...N µ is defined, with µ uniformly spaced between µ 1 = 0 and µ N φ = 1, and a discretized grid of φ k , with k = 1, 2, ...N φ , is defined, with φ uniformly spaced between φ 1 = 0 and φ N φ = π/2. Symmetries across the equatorial plane and n = 2 symmetry about the polar axis are assumed. Quantities in the above equation, including density ρ ijk , gravitational potential Φ ijk , etc., are defined on the intersections of grid lines. Typical values in our calculation are R = 1300 km, N r = 391, N µ = 33, and N φ = 33, meaning that quantities are calculated at 33 × 33 × 391 = 425, 799 locations.
The enthalpy term can be calculated if the density structure and equation of state are provided. For example, if P = Kρ γ (as for an adiabatic gas), then H = (γ)/(γ − 1)P/ρ, and is immediately known as a function of the local pressure and density. For planetary materials such as olivine, clays, or water ice, it would be more appropriate to use a Vinet (Vinet et al. 1987) or Birch-Murnaghan (Birch 1947) equation of state, with the bulk modulus and the pressure derivative of the bulk modulus specified. This equation of state can then be integrated to yield the enthalpy. The recent paper by Price & Rogers (2019) provides formulas for this. For our purposes, we neglect the self-compression of the planetary materials inside Haumea. The bulk moduli of planetary materials are typically 10s of GPa, while the maximum pressure inside Haumea is < 0.4 GPa, so self-compression can be ignored. For ease of calculation we therefore assume uniform densities in the ice mantle and in the rocky core, and compute the enthalpy accordingly.
The gravitational potential term is found by numerical integration of an expansion of the gravitational potential in spherical harmonics, using equations 2, 3, 33, 34, 35 and 36 of H86b, using n = 2 symmetry, and typically N_l = 16 terms in the expansion. The rotational term is defined to be Ω² Ψ, where Ψ = −r_⊥²/2, r_⊥ again being the distance from the axis. The terms H, Φ and Ω² Ψ all spatially vary, but their sum is a constant C at all locations. The Hachisu algorithm exploits this fact by fixing two spatial points "A" and "B" to be on the boundary of the body. Point "A" lies at r = r_A, µ = 0 (in the equatorial plane), φ = 0, or x = r (1 − µ²)^(1/2) cos φ = r_A, y = r (1 − µ²)^(1/2) sin φ = 0, z = r µ = 0. Point "B" lies at r = r_B, µ = 0 (in the equatorial plane), φ = π/2, or x = 0, y = r_B, z = 0. For a triaxial ellipsoid (x/a)² + (y/b)² + (z/c)² = 1, these points correspond to r_A = a and r_B = b. At these two locations, H = 0, and the Hachisu algorithm then solves for the only two values of C and Ω that allow H = 0 at both these boundary points. With C and Ω defined, H is found at all locations, and the enthalpy integral H = ∫ ρ⁻¹ dP is inverted to find the density ρ at each location. Locations with H < 0 are assigned zero density. The farthest point with non-zero density along the z axis can be equated with c of a triaxial ellipsoid, although of course the shape need not necessarily be a triaxial ellipsoid. After adjusting the density everywhere, the code then recalculates the gravitational potential and performs the same integrations, solving iteratively until the values of Ω and the densities ρ at all locations converge. Because the densities and the volume of the body are changed with each iteration, the mass of the object is a varying output of the model.
We apply the H86a,b algorithms as part of a larger iterative procedure that introduces two new variables to the equation of state: P_cmb and ρ_core. We assume that at locations within the body with pressures P < P_cmb, ρ = ρ_ice ≡ 921 kg m⁻³. At higher pressures P > P_cmb, we assume ρ = ρ_core. This divides the body into a core and an icy mantle, each with distinct densities, with the pressure equal to a uniform value P_cmb everywhere on the core-mantle boundary. For P > P_cmb, H = P_cmb/ρ_ice + (P − P_cmb)/ρ_core. This form of the equation of state ignores self-compression, as is appropriate in bodies of Haumea's size made of materials like water ice, olivine, or hydrated silicates. The bulk modulus of water ice is 9.2 GPa (Shaw 1986), far higher than the likely pressures in the ice shell, < 20 MPa (section 4). Likewise, the bulk modulus of olivine is 126 GPa (Núñez-Valdez et al. 2013), and that of the hydrated silicate clay antigorite is 65 GPa (Capitani & Stixrude 2012), far higher than the maximum pressure inside Haumea (< 300 MPa). We therefore expect self-compression to change the densities by < 1%, and we are justified in assuming uniform densities. This equation of state allows a very simple inversion to find the density: for H < P_cmb/ρ_ice, the density is simply ρ_ice, and for H > P_cmb/ρ_ice, ρ = ρ_core.
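The two-density equation of state and its inversion can be written compactly; the following sketch (illustrative, not the kyushu source) encodes the enthalpy H(P) and the inverse map ρ(H) described above, using the P_cmb and ρ_core values of the "typical" solution quoted later purely as example inputs.

```python
RHO_ICE = 921.0          # kg m^-3, assumed ice density

def enthalpy(P, P_cmb, rho_core):
    """H = integral of dP/rho for the two-layer (ice mantle + uniform core) EOS."""
    if P <= P_cmb:
        return P / RHO_ICE
    return P_cmb / RHO_ICE + (P - P_cmb) / rho_core

def density_from_enthalpy(H, P_cmb, rho_core):
    """Inverse map applied after each Hachisu iteration: H < 0 -> vacuum,
    H below the CMB value -> ice, otherwise core material."""
    if H < 0.0:
        return 0.0
    if H < P_cmb / RHO_ICE:
        return RHO_ICE
    return rho_core

# quick consistency check with example numbers (P_cmb = 30.4 MPa, rho_core = 2680 kg/m^3)
P_cmb, rho_core = 30.4e6, 2680.0
for P in (1.0e6, 30.4e6, 200.0e6):
    H = enthalpy(P, P_cmb, rho_core)
    print(P, H, density_from_enthalpy(H, P_cmb, rho_core))
```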
With this definition, we iterate as follows to find P_cmb and ρ_core. In each application of the Hachisu algorithms, we initialize with a density distribution with ρ = 0 for (x/a)² + (y/b)² + (z/c)² > 1, ρ = ρ_ice for (x/a)² + (y/b)² + (z/c)² < 1, but ρ = ρ_core for (x/a)² + (y/b)² + (z/c)² < ξ². That is, we define Haumea's surface to be a triaxial ellipsoid, and its core to be a similar triaxial ellipsoid with aligned axes, smaller in size by a factor of ξ. The value of ξ is chosen so that the total mass of the configuration equals M_H, the mass of Haumea:

ξ = [ (3M_H/(4πabc) − ρ_ice) / (ρ_rock − ρ_ice) ]^(1/3).   (10)
The value of c is unknown and an output of the code, so we initialize our configuration with c = b. We apply the Hachisu algorithms several times. First we define P cmb ≈ 0 MPa and define ρ core = 2700 kg m −3 , apply the Hachisu algorithms, and calculate the mass of the body, M . Because this will not match Haumea's mass, M H , we multiply ρ core by a factor M H /M , and reapply the Hachisu algorithms. We do this until we have found an equilibrium configuration with Haumea's mass. Outputs of the code include ρ core and Ω, and in general Ω will be smaller than Haumea's true angular frequency Ω H = 2π/P rot for P cmb = 0 MPa (P rot is the rotational period). We then repeat the procedure, finding an equilibrium configuration with Haumea's mass, having P cmb = 40 MPa. An output of the code will be a different ρ core and a different Ω, which in general will be > Ω H . If the solution is bracketed, we use standard bisection techniques to find P cmb that yields an equilibrium configuration consistent not just with Haumea's mass, but with its period as well. Thus, inputs of the code include r A = a and r B = b, and Haumea's mass M H and angular velocity Ω H . The outputs of the code include the density ρ at all locations, the calculated mass M (which should comply with M = M H ), the angular frequency Ω (which should equal 2π/P rot ), and the values of c and P cmb and ρ core .
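Schematically, the outer iteration described above (rescale the core density to hit Haumea's mass, then bisect on P_cmb until the spin matches the observed period) might be organized as follows. Here `solver` stands in for the inner Hachisu-type self-consistent-field step, which is not reproduced, so this is a structural sketch rather than working numerics.

```python
import numpy as np

M_H = 4.006e21                              # kg, target mass
OMEGA_H = 2.0*np.pi / (3.9155*3600.0)       # s^-1, target spin (3.9155 hr period)

def converge_mass(solver, a, b, P_cmb, rho_core=2700.0, tol=1e-4, max_iter=50):
    """Rescale rho_core until the equilibrium model returned by `solver`
    (mass, omega, model) matches Haumea's mass."""
    for _ in range(max_iter):
        mass, omega, model = solver(a, b, P_cmb, rho_core)
        if abs(mass/M_H - 1.0) < tol:
            return rho_core, omega, model
        rho_core *= M_H / mass
    raise RuntimeError("core density did not converge")

def solve_spin(solver, a, b, P_lo=0.0, P_hi=40.0e6, tol=1e-4, max_iter=60):
    """Bisect on the core-mantle-boundary pressure until the spin matches OMEGA_H."""
    for _ in range(max_iter):
        P_mid = 0.5*(P_lo + P_hi)
        rho_core, omega, model = converge_mass(solver, a, b, P_mid)
        if abs(omega/OMEGA_H - 1.0) < tol:
            return P_mid, rho_core, model
        if omega < OMEGA_H:      # thin-ice (low P_cmb) models spin too slowly
            P_lo = P_mid
        else:                    # thick-ice (high P_cmb) models spin too fast
            P_hi = P_mid
    raise RuntimeError("P_cmb did not converge for these axes")
```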
To benchmark the kyushu code, we ensure that it reproduces a Jacobi ellipsoid when a homogeneous density is assumed. A Jacobi ellipsoid with Haumea's mass and rotation period, and an axis a = 960 km, would have axes b = 774.2 km, c = 498.8 km, and uniform density 2579.7 kg m⁻³: this is similar to, but does not exactly equal, the solution favored by Lockwood et al. (2014), who fit Haumea's light curve assuming a = 960 km, b = 770 km, c = 495 km, and a uniform density of 2600 kg m⁻³. Running kyushu and assuming axes of a = 960 km and b = 774.2 km (b/a = 0.806), the code finds an acceptable solution after about 7 bracketing iterations. The mass matches Haumea's mass to within 0.03% and the rotation period to within 0.04%. The solution found is one with a very low value of P_cmb = 0.42 MPa, so that the body is essentially uniform in density, with an ice layer < 5 km in thickness (the resolution of the code). The interior of the body has uniform density 2580.4 kg m⁻³, and the short axis has length c = 499.5 km (c/a = 0.520). The density and c-axis match a uniform-density Jacobi ellipsoid to within 0.03% and 0.14%, respectively. The code is therefore capable of finding the analytical solution of a uniform-density Jacobi ellipsoid, if the imposed a and b axes are consistent with such a solution.
RESULTS
We have run the kyushu code for 30 different combinations of a and b axes, with a varying from 950 km to 1075 km in increments of ∼25 km, and b varying from 800 km to 900 km in increments of ∼25 km. In comparing a subset of these runs with runs performed with 10 km increments, we found that convergence of key outputs (axes, densities) is 0.3% or less, so we consider 25 km to be numerically converged. We find families of solutions that can conform to Haumea's mass (M = 4.006 × 10 21 kg) and rotation period (P rot = 3.9155 hr). Output quantities include the core density, the average (bulk) density, the outer c axis, the shape of the core-mantle boundary (CMB), and the thickness of the ice layer above the core.
In Figure 2 we plot the following quantities as functions of imposed a and b axes: the average (bulk) density, ρ_avg; the (semi-)axis c; and the thickness of the ice layer along the a, b and c axes. The density of the ice layer was imposed to be 921 kg m⁻³. In Figure 3 we plot as functions of a and b the following: the core density, ρ_core; the pressure at the core-mantle boundary, P_cmb; and the semi-axes a_c, b_c and c_c of the core. Entire families of solutions are found across the range of a and b that we explored, with the exception of simultaneous combinations of large a and large b. The input a × b combinations of 1025 × 900, 1050 × 900, 1075 × 900, and 1075 × 800 did not yield solutions because, when initialized with these parameters, kyushu was not able to converge at Haumea's mass and rotation rate. Simultaneously imposing large a and b yields a low bulk density ρ_avg and a large parameter ω²/(πGρ_avg) that cannot yield a c axis consistent with Haumea's mass.
Across the explored parameter space that yielded solutions, the average density of Haumea ranges from 1905 to 2495 kg m −3 . As might be expected, the average (bulk) density of Haumea is equally sensitive to both a and b, being inversely proportional to the volume and therefore to the product ab. As an example solution, we take a = 1050 km and b = 840 km, for which the average density is 2018 kg m −3 .
For allowed solutions, the shortest (semi-)axis c ranges from 504 to 546 km. The c axis is equally sensitive to both a and b, being large when the product ab is large. For the case with a = 1050 km and b = 840 km, we find c = 537 km.
Likewise, the thickness of the ice layer increases with increasing a and b (and therefore c). The ice is always thickest along the a axis, ranging from 15 to 210 km across the explored range; it is intermediate in thickness along the b axis, ranging from 15 to 150 km; and it is thinnest along the c axis, ranging from 5 to 80 km. For the case with a = 1050 km and b = 840 km, we find ice thicknesses of 167 km, 117 km, and 67 km along the a, b, and c axes.
Across the explored parameter space that yielded solutions, the density of Haumea's core ranges from 2560 to 2740 kg m −3 . Core density is much more sensitive to the a axis than the b axis, and tends to be greater when a is greater. For the case with a = 1050 km and b = 840 km, we find a core density 2680 kg m −3 .
Across the explored range, the pressure at the core mantle boundary ranges from 3.2 to 36.6 MPa, with the lowest values corresponding to the thinnest ice layers and smallest values of a and b. For the case with a = 1050 km and b = 840 km, P cmb = 30.4 MPa. Finally, the size of the core ranges considerably. In general, the core approximates a triaxial ellipsoid, with the longest axis parallel to the a axis. For the extreme case with a = 950 km, b = 800 km, and c = 504 km (a small Haumea), we find core axes of a c = 935 km, b c = 785 km, and c c = 499 km. For this configuration, b c /a c = 0.840 and c c /a c = 0.534, very close to the axis ratios for the surface, b/a = 0.842 and c/a = 0.531. The mean size of the core, relative to the mean size of the surface, is ξ = 0.991. That is, the ice layer thickness is only ∼ 1% of the radius of Haumea, and comprises 1.5% of Haumea's volume. For the opposite extreme case of a large Haumea, with a = 1050 km, b = 875 km, and c = 546 km, we find core axes of a c = 840 km, b c = 725 km, and c c = 466 km. For this configuration, b c /a c = 0.86 and c c /a c = 0.55, close to the axis ratios for the surface, b/a = 0.83 and c/a = 0.52. The mean size of the core, relative to the mean size of the surface, is ξ = 0.888, meaning the ice layer thickness is 11% the radius of Haumea and comprises 22% of its volume. For the case we consider typical, with a = 1050 km and b = 840 km, the core semi-axes are ≈ 883 × 723 × 470 km, which yields b c /a c = 0.82 and c c /a c = 0.53. The mean size of the core, relative to the mean size of the surface, is ξ = 0.909, and the ice layer comprises 17.2% of Haumea's volume.
In general, for larger assumed sizes of Haumea, the core becomes somewhat denser as Haumea itself becomes lower in density. The core takes up a smaller fraction of the volume of Haumea. While the core remains roughly similar in shape to the ellipsoid defined by the surface, there is a tendency for the core to become slightly more spherical as Haumea's assumed size increases.
To quantitatively test if Haumea's core and surface are both triaxial ellipsoids in our typical case, we calculated the maximum deviation from 1 of (x/a_c)^2 + (y/b_c)^2 + (z/c_c)^2, where x, y, and z are computed for each angle combination θ and φ at the radius defined by r = (r(ir_core − 1) + r(ir_core))/2. Here ir_core is the index of the first radial zone outside the core at that θ and φ. We find that a triaxial ellipsoid shape is consistent with the core to within 1.5% and the surface to within 0.5%. Both are within the code's resolution error. It is remarkable that the core-mantle boundary solution is driven to the shape of a triaxial ellipsoid. This justifies the assumption in §2 that the surface and core would be triaxial ellipsoids if Haumea is differentiated.

Figure 3. Contour plots of quantities in the equilibrium shape models of a differentiated Haumea, as functions of imposed axes a and b. The panels depict: the density of the core, ρcore; the pressure at the core-mantle boundary, P cmb , in MPa; the a (semi-)axis of the core; the b (semi-)axis of the core; and the c (semi-)axis of the core. As in Figure 2, solutions were not found for simultaneous combinations of large a and large b, but otherwise entire families of fluid hydrostatic equilibrium solutions exist.
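A minimal sketch of this ellipsoid-fit test is given below in Python. It is not the kyushu code itself: the angular grid and the exactly ellipsoidal test surface are stand-ins, and the snippet only illustrates how the maximum deviation of a sampled surface from (x/a)^2 + (y/b)^2 + (z/c)^2 = 1 can be evaluated.

```python
import numpy as np

def max_ellipsoid_deviation(theta, phi, r, a, b, c):
    """Maximum |(x/a)^2 + (y/b)^2 + (z/c)^2 - 1| for a surface sampled at
    colatitude theta, azimuth phi and radius r (arrays of equal shape)."""
    x = r * np.sin(theta) * np.cos(phi)
    y = r * np.sin(theta) * np.sin(phi)
    z = r * np.cos(theta)
    f = (x / a) ** 2 + (y / b) ** 2 + (z / c) ** 2
    return np.max(np.abs(f - 1.0))

# Stand-in grid and surface: an exact ellipsoid with the typical-case core axes.
theta, phi = np.meshgrid(np.linspace(0.01, np.pi - 0.01, 90),
                         np.linspace(0.0, 2.0 * np.pi, 180), indexing="ij")
a_c, b_c, c_c = 883.0, 723.0, 470.0   # km
r_surface = 1.0 / np.sqrt((np.sin(theta) * np.cos(phi) / a_c) ** 2 +
                          (np.sin(theta) * np.sin(phi) / b_c) ** 2 +
                          (np.cos(theta) / c_c) ** 2)
# ~1e-16 here; the kyushu core-mantle boundary gives <= 1.5% by the same measure.
print(max_ellipsoid_deviation(theta, phi, r_surface, a_c, b_c, c_c))
```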
Reconciling the light curve and occultation datasets
Under the assumption of uniform density, there is only one Jacobi ellipsoid solution that can match Haumea's mass and rotation rate, and its inferred b/a axis ratio. Once M, ω and b/a are specified, the average density ρ avg is fixed, which determines c/a and all the axes. In contrast, our modeling demonstrates that once the assumption of uniform density is dropped, a wide range of solutions exists with different semi-axes a and b, and even with the same b/a axis ratios. These would make linear cuts from the lower left to the upper right through the contour plots of Figures 2 and 3, and would yield a range of ρ avg and other properties. This additional freedom suggests it may be possible to have a differentiated Haumea be a fluid in hydrostatic equilibrium, and simultaneously fit the shadow observed by Ortiz et al. (2017) during the occultation.
We find one solution, the example case considered above, to be quite favorable. This solution has outer semi-axes a = 1050 km, b = 840 km, and c = 537 km. The core-mantle boundary is defined to lie at P c = 30.4 MPa, and this surface is well approximated by a triaxial ellipsoid with semi-axes a c = 883 km, b c = 723 km, and c c = 470 km. The core density is ρ core = 2680 kg m −3 , and the average density of Haumea in this case is ρ avg = 2018 kg m −3 . The ice mantle in this case comprises 17.2% of Haumea's volume and ranges in thickness from 170 km on the a axis, to 120 km on the b axis, to 71 km on the c axis. The albedo is p V ≈ 0.66, slightly lower than the range 0.71 − 0.84 estimated by previous studies, but higher than the value ≈ 0.51 calculated by Ortiz et al. (2017). Likewise, the axes and average density we favor are intermediate between the previous solutions assuming a Jacobi ellipsoid with ρ avg = 2600 kg m −3 , and the Ortiz et al. (2017) case with ρ avg = 1885 kg m −3 .
The projection of this triaxial ellipsoid onto the Earth (its shadow) is a complicated function that depends on its orientation relative to the line of sight. It is more difficult to invert the shadow axes to find the axes of Haumea's surface, but it is possible. We outline our methods in the Appendix.
If the tilt of Haumea's rotation axis out of the plane of the sky is ι = 13.8° and the rotational phase were ψ = 0°, we concur with Ortiz et al. (2017) that only a triaxial ellipsoid with axes 1161 × 852 × 513 km would be consistent with shadow axes 852 × 569 km. However, small changes in Haumea's rotational phase have a large impact on the shadow size. We find that if Haumea's rotational phase during the occultation were ψ = 13.3°, then the shadow axes would be a = 853.1 km and b = 576.8 km, consistent with the observations by Ortiz et al. (2017) of a = 852 ± 2 km and b = 569 ± 13 km. Ortiz et al. (2017) favored ψ = 0°, but inspection of their light curve (their Extended Data Figure 6) shows that the rotational phase at minimum brightness was at least 0.04 (14.4°), and would not be inconsistent with a value of 0.06 (21.6°), relative to the phase of 0.00 at the time of the occultation. Finally we note that the axis ratio b/a = 0.80 for this case is the same as previous estimates (b/a ≈ 0.80; Lockwood et al. (2014)), and yields a light curve with ∆m ≈ 0.23 during the epoch of the occultation. This approximates the actual light curve amplitude of ∆m = 0.26 observed by Lockwood et al. (2014).
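The shadow axes quoted here are easy to reproduce numerically. The Python sketch below is an independent cross-check (not the code used for the paper): it projects the ellipsoid shape matrix onto the plane of the sky defined by the Appendix geometry and reads the shadow semi-axes off the eigenvalues of the resulting 2 × 2 matrix. With the favored axes 1050 × 840 × 537 km, ψ ≈ 13.3° and a pole tilt of ≈ 13.8°, it returns roughly 853 × 577 km, consistent with the values above to within rounding of the assumed angles.

```python
import numpy as np

def shadow_axes(a, b, c, psi_deg, phi_deg):
    """Semi-axes of the sky-plane shadow of a triaxial ellipsoid (a > b > c),
    for rotational phase psi and angle phi between the pole and the line of sight."""
    psi, phi = np.radians(psi_deg), np.radians(phi_deg)
    e1 = np.array([-np.sin(psi), np.cos(psi), 0.0])                      # sky-plane basis
    e2 = np.array([np.cos(psi) * np.cos(phi),
                   np.sin(psi) * np.cos(phi), -np.sin(phi)])
    B = np.column_stack([e1, e2])             # 3 x 2
    S = np.diag([a ** 2, b ** 2, c ** 2])     # ellipsoid shape matrix
    M = B.T @ S @ B                           # projected 2 x 2 shape matrix
    lam = np.linalg.eigvalsh(M)
    return float(np.sqrt(lam[1])), float(np.sqrt(lam[0]))   # (long, short)

# Favored solution; pole tilt ~13.8 deg out of the sky plane -> phi ~76.2 deg
print(shadow_axes(1050.0, 840.0, 537.0, psi_deg=13.3, phi_deg=76.2))
```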
A much more extensive parameter study must be undertaken to simultaneously fit all the data. Astrometry of Haumea's moons can better constrain the moons' orbital poles and, if Hi'iaka's orbit is aligned with Haumea's equator, Haumea's rotational pole and ι. More and consistent analyses of the now 15 years of light curve data, especially considering different reflectance functions, can better constrain the rotational phase ψ during the occultation. Further exploration of parameter space may yield a shape for Haumea that is exactly consistent with the light curve data and the occultation shadow. As shown here, though, Haumea can be a fluid in hydrostatic equilibrium and can conform to the occultation shadow.
Aqueous alteration of Haumea's core and its astrobiological potential
A large range of axes a (from 950 to 1075 km) and b (from 800 to 900 km) is consistent with bodies with Haumea's mass and rotation rate that are fluid configurations in hydrostatic equilibrium. Our favored solution with a = 1050 km and b = 840 km has a mass fraction of ice of 17.2%, but this value could range from 1% to 22% across the range we explored. Across this range, however, the allowable core density varies only slightly, from ρ core = 2560 kg m −3 to 2740 kg m −3 , deviating by only a few percent from our favored density of ρ core = 2680 kg m −3 . This is very close to the average density previously inferred for Haumea, but this appears to be coincidental. A robust result of our analysis is that Haumea's core has a density ≈ 2600 kg m −3 , overlain by an ice mantle.
Comparison of the density of Haumea's core to other planetary materials provides strong clues to Haumea's history. Grain densities of ordinary and enstatite chondrites are typically > 3600 kg m −3 , and their bulk densities typically ≈ 3300 kg m −3 because of ∼ 10% porosity (Consolmagno et al. 2008; Wilkison et al. 2003). Carbonaceous chondrites are marked by lower grain densities, averaging 3400 kg m −3 (ranging from 2400 − 5700 kg m −3 depending on the type of chondrite), higher porosities of ≈ 15 − 35%, and bulk densities closer to 2000 kg m −3 (Macke et al. 2011; Consolmagno et al. 2008). The difference is that carbonaceous chondrites are largely composed of products of aqueous alteration. In fact, the more oxidized groups of carbonaceous chondrites have higher porosity (Macke et al. 2011). Hydrated silicates typically have densities in this range. Clays such as montmorillonite, kaolinite, illite, and mica typically have densities between 2600 kg m −3 and 2940 kg m −3 (Osipov 2012). This strongly suggests that Haumea's core is composed of hydrated silicates, and that Haumea's core was aqueously altered in its past.
Serpentinization is the process by which silicates typical of the dust in the solar nebula react with water on an asteroid or planet, producing new phyllosilicate minerals. An archetypal reaction converts olivine plus water into chrysotile, magnetite, silica, and hydrogen gas. In this reaction, 1 kg of olivine with fayalite content typical of carbonaceous chondrites may react with 0.129 kg of water to produce 0.826 kg of chrysotile, 0.281 kg of magnetite, 0.020 kg of silica, and 0.002 kg of hydrogen gas, which escapes the system. The total density of olivine (density 3589 kg m −3 ) plus ice (density 921 kg m −3 ) before the reaction is 2697 kg m −3 . After the reaction, the mixture of chrysotile (density 2503 kg m −3 ) plus magnetite (density 5150 kg m −3 ) plus silica (density 2620 kg m −3 ) has a total density of 2874 kg m −3 (Coleman 1971). Including a 10% porosity typical of carbonaceous chondrites, the density of the aqueously altered system would be 2612 kg m −3 , remarkably close to the inferred density of Haumea's core.
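The density bookkeeping in this paragraph is a simple mass-volume balance, reproduced in the Python sketch below using the grain densities quoted above. The final porosity step assumes the 10% porosity is applied as a bulk-density reduction of the form ρ/(1 + 0.10), which is our reading of how the quoted 2612 kg m −3 was obtained.

```python
def mixture_density(masses_kg, densities):
    """Grain density of a mixture: total mass over total solid volume."""
    volume = sum(m / rho for m, rho in zip(masses_kg, densities))
    return sum(masses_kg) / volume

# Before serpentinization: 1 kg olivine + 0.129 kg water ice
rho_before = mixture_density([1.0, 0.129], [3589.0, 921.0])              # ~2697 kg m^-3

# After: chrysotile + magnetite + silica (the 0.002 kg of H2 escapes)
rho_after = mixture_density([0.826, 0.281, 0.020],
                            [2503.0, 5150.0, 2620.0])                    # ~2873-2874 kg m^-3

rho_bulk = rho_after / 1.10   # assumed form of the 10% porosity correction
print(round(rho_before), round(rho_after), round(rho_bulk))              # 2697 2873 2612
```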
If Haumea's core underwent significant aqueous alteration, some of this material may have dissolved in the water and ultimately found its way into the ice mantle of Haumea. In fits to Haumea's reflectance spectrum, Pinilla-Alonso et al. (2009) found the most probable surface composition was an intimate mixture of half crystalline and half amorphous water ice, with other components comprising < 10% of the surface; but similar modeling by Trujillo et al. (2007) found that Haumea's surface is best fit by a mixture of roughly 81% crystalline water ice and 19% kaolinite. Kaolinite was added to the fit to provide a spectrally neutral but blueish absorber; few other planetary materials contribute to the reflectance spectrum in this way. Kaolinite is a common clay mineral [Al 2 Si 2 O 5 (OH) 4 ] very similar in structure to chrysotile, produced by weathering of aluminum silicate minerals like feldspars.
A variety of phyllosilicates have been observed by the Dawn mission on the surface of Ceres (Ammannito et al. 2016), strongly suggesting aqueous alteration of silicates within a porous, permeable core or a convecting mudball (Bland et al. 2006;Travis 2017). If kaolinite can be confirmed in Haumea's mantle, this would provide strong support for the aqueous alteration of Haumea's core.
Preliminary modeling by Desch & Neveu (2015) suggests that aqueous alteration of Haumea's core is a very likely outcome. Haumea, or its pre-collision progenitor, could have been differentiated into a rocky olivine core and icy mantle. Desch & Neveu (2015) show that many factors can lead to cracking of a rocky core on small bodies. Microcracking by thermal expansion mismatch of mineral grains, or by thermal expansion of pore water during heating (as the core heats by radioactive decay over the first < 1 Gyr), will almost certainly introduce microfractures. These would be widened by chemical reactions, water pressurization, and similar processes, leading to macrofractures. Cracks can heal by ductile flow of rock, but the rate is highly sensitive to temperature; below about 750 K, healing of cracks takes longer than the age of the Solar System. Therefore it is highly likely that hydrothermal circulation of water through a cracked core would ensue. Thermal modeling by Desch & Neveu (2015) suggests Haumea's interior could be effectively fully convective, allowing water and olivine to fully react and produce phyllosilicates. Circulation of water would also help cool the core, preventing temperatures from exceeding 750 K, ensuring that fractures remain open and that the hydrated silicates would not dehydrate. Liquid water is predicted to have existed for ∼ 10 8 yr, although further geochemical modeling is needed to test proposed scenarios for Haumea's structure and evolution.
A long (∼ 10 8 yr) duration of aqueous alteration suggests a period of habitability within Haumea. To develop and survive, life as we know it requires water and a long-lasting environment with little temperature variability (Davis & McKay 1996). With central temperatures approaching 750 K, and surface temperatures near 40 K, a large fraction of Haumea's interior would have had intermediate temperatures consistent with liquid water (Castillo-Rogez & Lunine 2012). The origin of life is also thought to require a substrate to protect and localize biochemical reactions. Clays such as montmorillonite can act as this substrate because they can bind substantial water, and are soft and delaminate easily. Clays can also promote the assembly of RNA from nucleosides, and can stimulate micelles to form vesicles (Travis 2017). The interior of Haumea may have at one point resembled regions beneath the seafloor experiencing hydrothermal circulation. These regions are conducive to life: Czaja et al. (2016) discovered archaea anaerobically metabolizing H 2 S in such environments, and other studies have confirmed that microbes exist deep in fractures of hot environments (Jannasch & Mottl 1985).
Implications for the mass of the collisional family
An ongoing mystery is why Haumea's collisional family contains so little ice. The total masses of Hi'iaka and Namaka, plus 2002 TX 300 and the other collisional family members, amount to about 2.4% of Haumea's mass (Vilenius et al. 2018). This is much smaller than the amount of ice that has been presumed to have been ejected. As described in §2, if Haumea really were a Jacobi ellipsoid with uniform density ≈ 2600 kg m −3 , it would have to have a very thin ice layer comprising perhaps only ≈ 4% of Haumea's mass. This is much lower than the mass fraction of ice in typical KBOs. If the KBO has bulk density ρ 0 , the mass fraction of ice would be f ice = (ρ ice /ρ 0 ) × (ρ rock − ρ 0 )/(ρ rock − ρ ice ). A typical KBO may form from a mixture of pure olivine with 10% porosity and density ρ rock = 3300 kg m −3 , and non-porous ice with density ρ ice = 921 kg m −3 . The ρ 0 in such a KBO could range from 1500 kg m −3 to 2500 kg m −3 , which equates to f ice ranging from 46% to 12%. If Haumea was comparable to these end-member cases, it would need to lose 91% and 67% of its ice, respectively, to end up with a post-collisional ice fraction of 4%. It is difficult to explain why Haumea would lose 91% of its ice instead of 100%. Also, neither of these scenarios matches the 2.4% of ice thought to be ejected, which is also difficult to reconcile. This discrepancy is ameliorated by our results. Our modeling of Haumea's structure shows that it may have retained a significant fraction of ice. Across the parameter space we explored, Haumea's present-day bulk density varies from 1900 kg m −3 to 2500 kg m −3 (core density 2550 kg m −3 to 2750 kg m −3 ), which corresponds to f ice ranging from 1% to 22%. The lower end of this range is unlikely from the standpoint of the occultation data. We favor that today Haumea has a high ice fraction: f ice = 17% is our favored case.
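The ice fractions in this paragraph follow directly from the mass-fraction relation above; the Python sketch below reproduces them. The last step assumes the fraction of ice lost is approximated simply as (f_initial − f_final)/f_initial, which appears to be the convention behind the quoted 91% and 67%.

```python
RHO_ICE = 921.0     # kg m^-3, non-porous ice
RHO_ROCK = 3300.0   # kg m^-3, olivine with 10% porosity

def ice_mass_fraction(rho_bulk, rho_rock=RHO_ROCK, rho_ice=RHO_ICE):
    """Mass fraction of ice in a two-component rock/ice body of bulk density rho_bulk."""
    return (rho_ice / rho_bulk) * (rho_rock - rho_bulk) / (rho_rock - rho_ice)

for rho0 in (1500.0, 2500.0):
    f0 = ice_mass_fraction(rho0)
    lost = (f0 - 0.04) / f0          # simple approximation for the fraction of ice removed
    print(f"rho0 = {rho0:.0f}: f_ice = {f0:.0%}, ice lost to reach 4% = {lost:.0%}")
# -> f_ice of ~46% and ~12%, with ~91% and ~68% of the ice lost (67% in the text, rounding)
```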
In addition to this argument, our model suggests that Haumea underwent serpentinization, meaning the core experienced pervasive aqueous alteration. This process would reduce the fraction of ice below that with which Haumea started. As an example, if Haumea initially had a density ρ 0 = 2500 kg m −3 , like that of Eris, and an original ρ rock = 3300 kg m −3 , it started with f ice = 34%. Serpentinization would have then consumed ice into the rocky core, lowering the core density to ρ rock = 2612 kg m −3 , which would alter Haumea's ice fraction to 23%. So, if the collision ejected 2.4% of the ice, Haumea's ice fraction today would be f ice ∼ 20%. This estimate is within the range of ice fractions we predict from our parameter study.
In conclusion, our modeling suggests both that Haumea may today retain a significant fraction of its original ice, and that some of the ice may have been lost to serpentinization of the core. Both of these factors imply that less ice needs to have been ejected for Haumea to have its present-day, observed ice fraction, possibly explaining the low total mass of the collisional family.
CONCLUSIONS
This paper presents numerical modeling designed to test three questions about the KBO Haumea: 1) Is Haumea a Jacobi ellipsoid? If it is differentiated, what is Haumea's shape? 2) Is Haumea a fluid in hydrostatic equilibrium? 3) Can Haumea's occultation and light curve data be reconciled? We aimed to address these questions with the goal of understanding the composition and structure of Haumea to learn about its collisional history and evolution.
We have written a code kyushu based on the algorithms of Hachisu (H86a,b) to calculate the internal structure of a rapidly rotating differentiated body based on input parameters such as the semi-axes a and b. Although we did not explore all parameter space, Haumea appears to be best approximated as a differentiated triaxial ellipsoid body in hydrostatic equilibrium with axes a = 1050 km, b = 840 km, and c = 537 km. This shape fits the Ortiz et al. (2017) occultation shadow and is close to the light curve data. With this shape, Haumea has core axes a c = 883 km, b c = 723 km, c c = 470 km, ρ avg = 2018 kg m −3 , and ρ core = 2680 kg m −3 , which equates to an ice mantle comprising ∼ 17% of Haumea's mass and ranging from 71 to 170 km in thickness. Haumea's albedo is p v ∼ 0.66 in this case.
In contrast to previous studies (Lockwood et al. 2014; Rabinowitz et al. 2006), our results suggest that Haumea's ice crust amounts to a significant portion of the body. Due to the thickness of the ice, Haumea's core has a relatively high density, indicating that the composition of the core is a hydrated silicate (the closest match is kaolinite). For the core to be hydrated, a long period (∼ 10 8 yr) of serpentinization must have occurred, during which regions of the core were potentially habitable. The thick ice crust also suggests that Haumea's collisional family (icy objects a few percent the mass of Haumea) was produced from only a small portion of the ice Haumea started with, before Haumea suffered the collision. Insights into this type of mantle-stripping collision could be applicable to modeling the metal-rich, fast-rotating triaxial ellipsoid 16 Psyche, the focus of the upcoming NASA Psyche mission (Elkins-Tanton et al. 2016).
As this study continues, we would like to expand the parameter space to obtain more precise results. We can explore how Haumea would change in shape or composition if we use different ice densities, porosities, and angles/orientations to better match the shadow, in addition to matching the light curve amplitude and phase more precisely and using an appropriate equation of state to include the compressibility of materials. Haumea is a unique and interesting body worthy of study for its own sake, but understanding Haumea can provide insights into fundamental processes, such as subsurface oceans and aqueous alteration on small bodies and the dynamics of mantle-stripping collisions, acting across the Solar System.
Points A and B refer to the long and intermediate axes of the body in the equatorial plane, with r_A = a and r_B = b.
Figure 2. Contour plots of quantities in the equilibrium shape models of a differentiated Haumea, as functions of imposed axes a and b. The panels depict: the average (bulk) density of Haumea, ρavg; the c (semi-)axis; the thickness of the ice layer along the a axis; the thickness of the ice layer along the b axis; and the thickness of the ice layer along the c axis. Solutions were not found for simultaneous combinations of large a and large b, but otherwise entire families of fluid hydrostatic equilibrium solutions exist.
We thank Darin Ragozzine and Sarah Sonnett for helpful discussions about the collisional family and Haumea's light curve. We thank Steve Schwartz and Viranga Perera for useful discussions about how to model Haumea using smoothed particle hydrodynamics codes. We thank Leslie Rogers and Ellen Price for introducing us to the Hachisu (H86a,b) algorithm and for general discussions about how they implemented the Hachisu algorithm for exoplanets. We gratefully acknowledge partial support by the NASA Solar Systems Workings Program.

APPENDIX

Here we derive the formulas needed to calculate the axes of Haumea's shadow as it occults a star. We assume Haumea's surface is a triaxial ellipsoid with long axis along the x direction, with axes a > b > c, defined by those points that satisfy f(x, y, z) = x²/a² + y²/b² + z²/c² = 1. We assume the star lies in a direction ê_LOS = cos ψ sin φ ê_x + sin ψ sin φ ê_y + cos φ ê_z. Here φ is the angle between the line of sight (from us through Haumea to the star) and Haumea's pole (along the z axis). We can define two unit vectors in the plane of the sky: ê_1 = −sin ψ ê_x + cos ψ ê_y, and ê_2 = ê_1 × ê_LOS = +cos ψ cos φ ê_x + sin ψ cos φ ê_y − sin φ ê_z. Haumea's limb is the locus of those points, defined by r, such that the line of sight is tangent to the surface, or perpendicular to the normal, so that ∇f · ê_LOS = 0. All of these points satisfy a relation that is linear in x, y and z, which defines a plane inclined to the sky. The intersection of the plane with the ellipsoid defines an ellipse, and the projection of this ellipse onto the plane of the sky, Haumea's shadow, also is an ellipse. We project the points on Haumea's limb onto the plane of the sky by recasting r in the coordinate system using ê_1, ê_2, and ê_LOS: r = (r·ê_1) ê_1 + (r·ê_2) ê_2 + (r·ê_LOS) ê_LOS = s ê_1 + t ê_2 + u ê_LOS, with s = r·ê_1 = −x sin ψ + y cos ψ and t = r·ê_2 = x cos ψ cos φ + y sin ψ cos φ − z sin φ. All the points on the limb have z related to x and y as above, so the boundary of the shadow, which equals the projection of the limb onto the plane of the sky, is defined by a quadratic relation of the form P s² + Q s t + R t² = 1. Inverting, we find x, y and z in terms of s and t for points along the limb: x = (1/∆)[−sin ψ (1 + (c²/b²) tan²φ) s + cos ψ t/cos φ], y = (1/∆)[+cos ψ (1 + (c²/a²) tan²φ) s + sin ψ t/cos φ], where ∆ = cos²ψ (1 + (c²/a²) tan²φ) + sin²ψ (1 + (c²/b²) tan²φ). These also define an ellipse, rotated in the s-t plane. After rotating the ellipse in the plane of the sky by an angle θ, defined by tan 2θ = Q/(R − P), we find it has axes a′ and b′ defined by 1/(a′)² = P cos²θ + R sin²θ − Q sin θ cos θ and 1/(b′)² = P sin²θ + R cos²θ + Q sin θ cos θ. We have written a simple code that takes a, b, and c, and ψ and φ as inputs, and solves for θ and then the semi-axes a′ and b′ of Haumea's shadow. One end-member case is φ = 0°, in which Haumea's pole is pointed toward the star; we derive θ = −ψ and, regardless of ψ, Haumea's shadow has axes a′ = b and b′ = a. Another end-member case is φ = 90°, in which case the line of sight to the star is parallel to Haumea's equator. The shadow will have b′ = c regardless of ψ, and the other axis will be a′ = ab [cos²ψ/a² + sin²ψ/b²]^{1/2}, in which case a′ = b if ψ = 0° (looking along the long a axis), or a′ = a if ψ = 90° (looking along the b axis). One more end-member case is ψ = 0° but arbitrary φ, in which case a′ = b and b′ = a cos φ [1 + (c²/a²) tan²φ]^{1/2}. This is the case considered by Ortiz et al. (2017). Assuming a = 1161 km, b = 852 km, c = 513 km, ψ = 0° and φ = 76.3° (a tilt of Haumea's pole with respect to the plane of the sky of 13.7°), we find a′ = 852 km and b′ = 584 km, similar to the solution found by Ortiz et al. (2017).
Ammannito, E., Desanctis, M. C., Ciarniello, M., et al. 2016, Science, 353
Binzel, R. P., Gehrels, T., & Matthews, M. S. 1989, Asteroids II (University of Arizona Press), 1258
Birch, F. 1947, Physical Review, 71, 809
Bland, P., Zolensky, M., Benedix, G., & Sephton, M. 2006, Meteorites and the Early Solar System II, 853
Brown, M. E., Barkume, K. M., Blake, G. A., et al. 2006, The Astronomical Journal, 133, 284
Brown, M. E., Barkume, K. M., Ragozzine, D., & Schaller, E. L. 2007, Nature, 446, 294
Brown, M. E., & Schaller, E. L. 2007, Science, 316, 1585
Capitani, G. C., & Stixrude, L. 2012, American Mineralogist, 97, 1177
Castillo-Rogez, J. C., & Lunine, J. 2012, in Frontiers of Astrobiology, ed. C. Impey, J. Lunine, & J. Funes (Cambridge University Press), 201
Coleman, R. G. 1971, Bulletin of the Geological Society of America, 82, 897
Consolmagno, G. J., Britt, D. T., & Macke, R. J. 2008, Chemie der Erde, 68, 1
Czaja, A. D., Beukes, N. J., & Osterhout, J. T. 2016, 44, 983
Davis, W. L., & McKay, C. P. 1996, Origins of Life and Evolution of the Biosphere, 26, 61
Desch, S., & Neveu, M. 2015, Lunar and Planetary Science Conference, #2082
Desch, S. J., Cook, J. C., Doggett, T., & Porter, S. B. 2009, Icarus, 202, 694
Desch, S. J., & Turner, N. J. 2015, The Astrophysical Journal, 811, 156
Elkins-Tanton, L. T., Asphaug, E., Bell, J., et al. 2016, Lunar and Planetary Science Conference, #1631
Hachisu, I. 1986a, The Astrophysical Journal Supplement Series, 61, 479
Hachisu, I. 1986b, The Astrophysical Journal Supplement Series, 62, 461
Holsapple, K. A. 2001, Icarus, 154, 432
Jannasch, H., & Mottl, M. 1985, Science, 717
Lacerda, P., & Jewitt, D. 2006, The Astronomical Journal, 133, 13
Lacerda, P., Jewitt, D., & Peixinho, N. 2008, The Astronomical Journal, 135, 1749
Lellouch, E., Kiss, C., Santos-Sanz, P., et al. 2010, Astronomy and Astrophysics, 518, L147
Lockwood, A. C., Brown, M. E., & Stansberry, J. 2014, Earth, Moon, and Planets, 111, 127
Macke, R. J., Consolmagno, G. J., & Britt, D. T. 2011, Meteoritics and Planetary Science, 46, 1842
Núñez-Valdez, M., Wu, Z., Yu, Y. G., & Wentzcovitch, R. M. 2013, Geophysical Research Letters, 40, 290
Ortiz, J. L., Santos-Sanz, P., Sicardy, B., et al. 2017, Nature, 550, 219
Osipov, V. I. 2012, Soil Mechanics and Foundation Engineering, 48, 8
Pinilla-Alonso, N., Brunetto, R., Licandro, J., et al. 2009, Astronomy and Astrophysics, 496, 547
Price, E. M., & Rogers, L. A. 2019, The Astrophysical Journal, Submitted
Probst, L. 2015, Arizona State University Masters Thesis
Rabinowitz, D. L., Barkume, K. M., Brown, M. E., et al. 2006, The Astrophysical Journal, 639, 1238
Ragozzine, D., & Brown, M. E. 2009, The Astronomical Journal, 137, 4766
Shaw, G. H. 1986, The Journal of Chemical Physics, 84, 5862
Sheppard, S. S., & Jewitt, D. C. 2002, The Astronomical Journal, 124, 1757
Travis, B. 2017, Astrobiology Science Conference, #3620
Trujillo, C. A., Brown, M. E., Barkume, K. M., Schaller, E. L., & Rabinowitz, D. L. 2007, The Astrophysical Journal, 655, 1172
Vilenius, E., Stansberry, J., Muller, T., et al. 2018, Astronomy & Astrophysics, 136, 1
Vinet, P., Ferrante, J., Rose, J. H., & Smith, J. R. 1987, Geophysical Research Letters, 92, 9319
Volk, K., & Malhotra, R. 2012, Icarus, 221, 106
Wilkison, S. L., McCoy, T. J., McCamant, J. E., Robinson, M. S., & Britt, D. 2003, Meteoritics & Planetary Science, 38, 1533
|
[] |
[
"Steady-State Neutronic Analysis of Converting the UK CONSORT Reactor for ADS Experiments",
"Steady-State Neutronic Analysis of Converting the UK CONSORT Reactor for ADS Experiments"
] |
[
"Hywel Owen \nSchool of Physics and Astronomy\nCockcroft Accelerator Group\nUniversity of Manchester\nM13 9PLManchesterUK\n",
"Matthew Gill \nSchool of Physics and Astronomy\nNuclear Physics Group\nUniversity of Manchester\nM13 9PLManchesterUK\n",
"Trevor Chambers \nImperial College London\nSW7 2AZLondonUK\n"
] |
[
"School of Physics and Astronomy\nCockcroft Accelerator Group\nUniversity of Manchester\nM13 9PLManchesterUK",
"School of Physics and Astronomy\nNuclear Physics Group\nUniversity of Manchester\nM13 9PLManchesterUK",
"Imperial College London\nSW7 2AZLondonUK"
] |
[] |
CONSORT is the UK's last remaining civilian research reactor, and its present core is soon to be removed. This study examines the feasibility of re-using the reactor facility for accelerator-driven systems research by replacing the fuel and installing a spallation neutron target driven by an external proton accelerator. MCNP5/MCNPX were used to model alternative, high-density fuels and their coupling to the neutrons generated by 230 MeV protons from a cyclotron striking a solid tungsten spallation target side-on to the core. Low-enriched U 3 Si 2 and U-9Mo were considered as candidates, with only U-9Mo found to be feasible in the compact core; fuel element size and arrangement were kept the same as the original core layout to minimise thermal hydraulic and other changes. Reactor thermal power up to 2.5 kW is predicted for a k ef f of 0.995, large enough to carry out reactor kinetic experiments.
|
10.1016/j.anucene.2011.08.019
|
[
"https://arxiv.org/pdf/1107.0287v1.pdf"
] | 119,122,447 |
1107.0287
|
11a31f74b66b5f074cf3aac1918d110cad45b8fb
|
Steady-State Neutronic Analysis of Converting the UK CONSORT Reactor for ADS Experiments
1 Jul 2011
Hywel Owen
School of Physics and Astronomy
Cockcroft Accelerator Group
University of Manchester
M13 9PLManchesterUK
Matthew Gill
School of Physics and Astronomy
Nuclear Physics Group
University of Manchester
M13 9PLManchesterUK
Trevor Chambers
Imperial College London
SW7 2AZLondonUK
Steady-State Neutronic Analysis of Converting the UK CONSORT Reactor for ADS Experiments
1 Jul 2011arXiv:1107.0287v1 [physics.acc-ph]ADSNeutronicsAnalysis
CONSORT is the UK's last remaining civilian research reactor, and its present core is soon to be removed. This study examines the feasibility of re-using the reactor facility for accelerator-driven systems research by replacing the fuel and installing a spallation neutron target driven by an external proton accelerator. MCNP5/MCNPX were used to model alternative, high-density fuels and their coupling to the neutrons generated by 230 MeV protons from a cyclotron striking a solid tungsten spallation target side-on to the core. Low-enriched U 3 Si 2 and U-9Mo were considered as candidates, with only U-9Mo found to be feasible in the compact core; fuel element size and arrangement were kept the same as the original core layout to minimise thermal hydraulic and other changes. Reactor thermal power up to 2.5 kW is predicted for a k ef f of 0.995, large enough to carry out reactor kinetic experiments.
breeding (particularly for Thorium-based fuels) and for actinide management to reduce long-lived waste (Wilson (1976), Bowman (1992), Daniel and Petrov (1996), Nifenecker et al. (1999)). The so-called 'Energy Amplifier' has been proposed as a combination of all these features in a subcritical lead-cooled fast reactor with solid fuel elements, in which fast neutrons are generated in a single central spallation target and delivered to a core-blanket arrangement within which actinide burning is achieved via slow crossing of absorption resonances during small-lethargy scatters (Carminati et al. (1993), Rubbia et al. (1995)).
Whilst system ADS designs such as MYRRHA at SCK-CEN are well-progressed (Abderrahim et al. (2001)), and several key components such as the central lead spallation target have been proven in scaled-down prototypes (Groeschel et al. (2004)), there is as yet little practical experience with accelerator-reactor coupling experiments. The first true accelerator-coupled experiment has only recently been carried out at KURRI with a very low current (1 nA) of protons into the KUCA A core (Shiroya et al. (2000), Shiroya et al. (2001), Shiroya et al. (2002), Tanigaki et al. (2004)), and there have been several ADS studies incorporating D-D or D-T neutron generators, particularly the MUSE (Soule et al. (2004), Billebaud et al. (2007)), GUINEVERE (Mercatali et al. (2010), Uyttenhove et al. (2011)) and YALINA experiments (Persson et al. (2005)). Also, there are the well-developed TRADE studies using the Rome TRIGA reactor (Naberejnev et al. (2003)). However, there has been as yet no demonstration of ADS operation at intermediate powers between these initial experiments at only a few W(th) power, and the 100-3000 MW(th) powers envisaged for full ADS operation. In this paper we consider how to produce an intermediate power output of up to 3 kW(th) in an existing light water research reactor, using a commercial cyclotron as a proton driver for a spallation target. This would enable studies of reactor kinetics relevant to high power fast systems (Naberejnev et al. (2003), Herrera-Martínez (2004), Herrera-Martinez et al. (2007)) and the development of neutron diagnostic methods required to safely operate at these powers.
Conceptual ADS designs often consider reactor powers of up to 1 GW(e) output, in which a 1 GeV proton beam drives spallation efficiently in a lead target, with ∼20 neutrons per proton (n/p) being a typical production ratio (Lone and Wong (1995)). Safety considerations mean that subcritical k ef f values between 0.95 and 0.98 are typically considered (Nifenecker et al. (1999)), which combined with the reactor output implies beam powers in excess of 5 MW. Whilst designs for proton linacs exist that deliver such powers (in particular the designs considered for the European Spallation Source, see for example Lengeler (1998) and Lindroos et al. (2011)), to date there has been no demonstration of the reliability required for ADS operation, which is limited both by worries about the robustness of the target and core under rapid thermal cycling (see for example Takei et al. (2009)), and about the economic feasibility of power interruptions from a nuclear power plant based on ADS (Steer et al. (2011)).
The realisation that reliable, high power accelerators are a limiting factor in constructing a viable ADS power plant has resulted in three responses. The first response has been the significant research into replacement technologies for high power linacs, in particular the use of Fixed-Field, Alternating Gradient (FFAG) accelerators as a way of overcoming the energy limitation of cyclotrons (Tanigaki et al. (2004)). The recent success of the proof-of-principle EMMA accelerator (Barlow et al. (2010)) indicates that this non-scaling variant of the FFAG could potentially enable high-power (10 MW) low-cost protons at the required 1 GeV energy, but we should also note that researchers involved with the DAEDALUS neutrino project (Alonso (2010), Conrad and Shaevitz (2010)) are examining novel cyclotron designs (Calanna et al. (2011)) as an alternative method to meet the twin requirements of power and reliability, and which may also be applied to ADS (Kim et al. (2001)). Even with lower-cost cyclotron or FFAG designs, space charge effects in the accelerated proton bunches limit the beam current and the required high reliability is considered difficult to achieve (Craddock and Symon (2010)). Multiply-redundant designs have therefore been considered in which three accelerators deliver protons to independent targets within the ADS core (Takizuka (1998), Broeders and Broeders (2000)), the idea being that a failure in one accelerator can be made up for by increasing the power of the other two.
The second response is to consider operation at higher k ef f values, perhaps as high as k ef f = 0.998, which alleviates the beam power requirements but of course raises questions over whether the reactor can remain safe kinetically, or as fuel burnup proceeds. Aker/Jacobs have proposed the 'ADTR' as a potential design (Fuller and Ashworth (2010)), but the safety will be determined by whether the k ef f can be measured reliably at full power (using effectively a source-jerk method). However, even when operating close to k ef f = 1 the envisaged power is still well over the c. 1 MW level where costs become high and reliability becomes difficult to achieve.
The third response is to lower the output power of the reactor from c. 1 GW(e) to values of 100 MW(e) or less. Whilst this alleviates the power requirement from a single accelerator, it does not solve the lack of reliability. But, if k ef f can be increased in such a reactor to values closer to unity, there is the possibility to use multiple, 'off-the-shelf' accelerators (such as high power superconducting electron linacs driving photofission via a gamma-producing Bremsstrahlung target) that can deliver the required neutron flux in the core (Diamond (1999)). The use of multiple driver accelerators provides the redundancy required to achieve high reliability, but at a sufficiently low k ef f the number of required accelerators and their overall wall-plug power may rise to a level where there is no net energy gain in the complete ADS system. Again, a very high k ef f close to one will probably be required. With present technology it is also contested whether providing a particular electrical capacity with multiple small modular reactors (SMRs) can compete with the traditional large LWR plants, but there is considerable current interest in replacing the economies of scale of PWRs with the economies of mass-production that may be offered by SMRs (IAEA (2007)).
The multiple-target accelerator-driven SMR (ADSMR) may hold some potential, and the use of multiple targets in addition may allow selective control of burnup without needing to shuffle fuel elements, perhaps also facilitating a 'sealed battery' operation which could help with proliferation resistance. But, as indicated above, there are a number of issues that should be addressed before such a scheme can be validated. The main questions are: can a high k ef f ADS system be safely operated, using suitable monitoring of the subcriticality level; can multiple accelerators provide useful modification of the flux profile in a reactor core, over and above the efficiency advantages offered by burnable poisons. Fuel utilisation may be improved by the use of accelerators, but their additional cost should be considered.
To help answer the above questions, we have considered a modification of the existing CONSORT reactor to incorporate a spallation target driven by an external moderate-power (∼ 1µA current) accelerator, which will enable an increase in the available external neutron flux at the core by several orders of magnitude compared to previous ADS experiments.
The CONSORT Reactor
CONSORT is a low-flux, 100 kW(th) civilian pool-type light-water research reactor that has been operated by Imperial College London since 1965 (Grant, 1965); it is now the only remaining operating civilian research reactor in the UK, and there are plans to decommission the present core over the next few years. The present core consists of an optimised arrangement of 24 fuel elements (each approximately 3" square in cross-section) of roll-bonded U-Al MTR type. Three types of element are presently used: MARK I, II and III. In MARK I/II there are 12 curved plates per element, whilst in MARK III elements there are 16 flat plates; there is typically a 4 mm water gap between each plate. The reactor contains four control rods: 3 'coarse' Cadmium rods clad in stainless steel (one used as a safety rod); and one 'fine' Stainless Steel rod. The complete core assembly is approximately 400 x 400 x 600 mm, is located in an Aluminium reactor vessel approximately 6 m deep, and is surrounded by a concrete enclosure with graphite reflectors with several penetrations to enable insertion of samples for neutron irradiation (see Figure 1). Of particular interest for potential ADS studies are the central ∼ 25 mm ICIS irradiation tube, and the three larger side-on tubes that penetrate into the reactor vessel to lie quite close to one side of the core (see Figure 2). We have considered the re-use of as much of the present facility as possible (buildings, shielding and vessel) to allow an ADS experiment at modest cost. We examined replacement fuel with the same disposition as the present core arrangement so that thermal management in the core would be expected to be similar; whilst the control rod mechanisms would have to be replaced, we envisage them to retain their original arrangement. Although the central ICIS irradiation tube can deliver protons to the centre of the core (similar to the scheme proposed for TRADE in Naberejnev et al. (2003)), we consider that the narrow ∼ 25 mm diameter and the long (6 m) throw to the outside of the reactor vessel make beam transport to a target difficult. Also, the small reactor vessel diameter means there is insufficient space, both between the control mechanisms and above the reactor (1.94 m to the bottom of the fuel-handling crane), to insert a beam transport system; the associated shielding at high level in the reactor building is not considered to be practicable. Instead we propose the use of the three side-on irradiation tubes which penetrate close to the core face, which allow for a much larger c. 100 mm-diameter target to be used. We envisage a beam transport/target plug that may be conveniently inserted initially into the central tube, the upstream beam transport connecting to a proton source to be located in an adjacent, non-nuclear-licensed, building.
This arrangement does not require any substantive changes to the existing building and reactor infrastructure, and as such should be simpler to license. The overall arrangement is similar to the side-on coupling adopted at KURRI/KUCA (Shahbunder et al. (2010)), and we believe that the shielding arrangement from proton source to reactor can be fitted within the existing building.
Fuel Choice and Core Modelling
To enable use of the current core geometry, we considered U 3 Si 2 and U-9Mo as candidate fuels, using a similar fuel meat/Al cladding to the present core and adopting the same plate arrangement and separation (see Figure 3). Modelling of the core was performed using MCNP5 (LANL (2003)), and the spallation and core-target coupling using MCNPX (v2.6.0) (Pelowitz (2008)); both codes were used with ENDF/B-VII.0 cross-section data (Chadwick et al. (2006)), and spallation reactions were modelled using the Bertini intra-nuclear cascade method (see discussion below), which we believe is sufficiently accurate for the present study. The core and control rods, water moderator, Aluminium vessel, Graphite reflector, irradiation tubes and simplified support mechanisms were included in the model, which was found in previous studies to give k ef f estimates sufficiently close to measured values (Jiang et al. (2006)).
It is thought that the lowest feasible volume of Aluminium in the fuel meat is 55% when U 3 Si 2 is used as a fuel dispersant (Keiser et al. (2003)). Even using 19.9% enrichment, it is not possible to obtain sufficient reactivity in the core unless the Aluminium volume fraction is unfeasibly low (see Figure 4). U 3 Si 2 is therefore unsuitable as a fuel. U-9Mo has a high Uranium density and has been shown to have good performance under irradiation even at high burnup (Snelgrove (1997)). Tests of U-9Mo (under the RERTR programme) have been carried out at Idaho National Laboratory using the same thickness plates as the ones we consider here (Wight et al. (2008)). In the present CONSORT application the fuel temperature and burnup will remain low, so fuel-cladding interaction, swelling and failure are not thought to pose any significant problems, although they must be considered in more detail (Van den Berghe et al. (2010)). We varied Uranium enrichment and determined the required percentage of Aluminium in the fuel meat to give criticality with the control rods in their present operational position: a single coarse rod and the fine rod half-way in (30 cm), and the safety rod and other coarse rod fully withdrawn. There is a wide range of feasible enrichments when using U-9Mo; for later results we have chosen an Aluminium volume fraction of 75%, corresponding to a fuel enrichment near the maximum of 19.5%: a summary of the proposed new core properties is given in Table 1. The variation of reactivity when withdrawing each control rod is shown in Figure 6, and indicates that the core behaviour with U-9Mo fuel is sufficiently similar to that of the original U-Al fuel. The change in reactivity is also consistent with a different model of the core in MCNP developed separately by Jiang et al. (2006). We therefore have confidence that a new core can, in principle, be operated similarly to the present core.
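The composition search described here is, in effect, a one-dimensional root find on k ef f as a function of the Aluminium volume fraction. The Python sketch below shows the shape of such a search; the function keff_of_al_fraction is only a smooth placeholder standing in for an MCNP5 criticality calculation, so its numerical output has no physical meaning.

```python
def keff_of_al_fraction(al_frac):
    """Placeholder for a k_eff estimate at a given Al volume fraction in the fuel meat.
    In practice this would wrap an MCNP5 KCODE run; here it is an arbitrary,
    monotonically decreasing stand-in used only to exercise the search."""
    return 1.20 - 0.30 * al_frac

def find_al_fraction(target_keff=1.0, lo=0.55, hi=0.95, tol=1e-4):
    """Bisection for the Al fraction that gives the target k_eff (assumes a monotonic trend)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if keff_of_al_fraction(mid) > target_keff:
            lo = mid            # still too reactive: more Al needed
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(find_al_fraction())   # ~0.667 for the placeholder trend; no physical meaning
```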
Accelerator Proton Source
A number of options are possible for generating neutrons in the core. Spallation becomes very efficient at higher proton energies above a few hundred MeV and, depending upon the (high-Z) target material and dimensions, a broad optimum exists around 1 GeV incident proton energy where there is an optimal trade-off between the energy expended to accelerate protons and the number of neutrons generated per proton. A 1 GeV proton beam generates typically over 20 neutrons per proton (n/p) from a lead target of sufficient length and radius (Lone and Wong (1995), Hilscher et al. (1998)). However, whilst this efficiency is required to generate the copious number of neutrons for a high-power, low k ef f subcritical system, for a lower-power demonstrator it is much more cost-effective to reduce the proton energy. At very low energies nuclear reactions such as p on 7 Li (Kononov et al. (2006), Liskien and Paulsen (1975)) or p on 9 Be (Guzek et al. (1998)) may be used with high-current (several mA), low energy (several MeV) electrostatic or radiofrequency quadrupole (RFQ) proton sources to give neutron fluxes in excess of those delivered by D-T generators, and with better reliability and lifespan. Higher proton energies between 100-250 MeV, deliverable by commercial cyclotrons at currents up to 1 µA, allow spallation reactions to be used, albeit with rather poor efficiencies ∼ 1 n/p. In general terms, larger proton currents are available at lower energies, but this must be traded against the conversion efficiency to neutrons at those lower energies and the greater target power for a given output neutron flux.
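The trade-off described above reduces to a simple conversion between the desired neutron output, the per-proton yield, and the beam current (and hence the beam power deposited in the target). The Python sketch below illustrates the conversion; the yield values used are illustrative assumptions, not the tabulated ones.

```python
E_CHARGE = 1.602e-19   # C per proton

def required_current_uA(neutron_rate, yield_n_per_p):
    """Proton current (uA) needed for a given neutron output (n/s) and yield (n/p)."""
    return neutron_rate * E_CHARGE / yield_n_per_p * 1.0e6

def beam_power_W(current_uA, proton_energy_MeV):
    """Beam power on target: current times proton energy (in volts)."""
    return current_uA * 1.0e-6 * proton_energy_MeV * 1.0e6

target_rate = 1.125e13   # n/s, the output assumed for CONSORT
for energy_MeV, yield_np in [(230.0, 1.0), (230.0, 2.0), (1000.0, 20.0)]:   # assumed yields
    i_uA = required_current_uA(target_rate, yield_np)
    print(f"{energy_MeV:.0f} MeV at {yield_np:g} n/p: "
          f"{i_uA:.2f} uA, {beam_power_W(i_uA, energy_MeV):.0f} W on target")
```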
For the CONSORT application we consider that a proton energy of 230-250 MeV with an associated current of 1 µA is sufficient. Whilst significantly lower power than the 110 MeV proton beam to be delivered for the TRADE experiment, it may be supplied by an already-demonstrated commercial cyclotron such as those supplied by IBA (230 MeV normal-conducting, see Jongen et al. (2004)) or Varian (250 MeV superconducting, see Klein et al. (2005) and Kim (2007)). This also limits the target heating that must be dissipated in the confined (and therefore hard-to-cool) target plug region.
It is worth noting that several alternatives to protons exist. Deuterons have been widely used, particularly in D-D RFQ neutron sources and electrostatic D-T generators. Whilst these generators deliver nearly-monoenergetic neutrons (which are beneficial in some applications), production rates are not superior to Li or Be targets, and lost deuterons and tritium use give rise to more difficult activation and radiological issues. However, for very low power initial experiments we would consider the use of a D-T generator (as has been used in initial experiments at KURRI and GENEPI) as such sources are at least an order of magnitude cheaper than higher-flux alternatives. At lower powers one may also consider the use of Bremsstrahlung-based targets, as mentioned earlier. In this case electrons are used to create broad-spectrum gamma rays peaked at an energy suitable for stimulating photofission, i.e. E γ ∼ 15 MeV (Wilke (1990)), which requires electron energies of at least 30 MeV (Berger (1970)). The advantage of such an approach in ADS systems is that the gammas are more forward-directed than an equivalent neutron source. The production efficiency in a thick target (typically Tungsten or Tantalum) is perhaps as high as 1 gamma of suitable energy for every 3 electrons, but of course the photofission cross-sections for both 235 U and 238 U are rather low (peaking at approximately 400 mb and 160 mb respectively). Whilst very high electron currents up to 100 mA are possible at 50 MeV using superconducting cavities, the target power is also extremely high. Such sources are used to provide radioactive beams using Uranium target fission (Koscielniak et al. (2008)), and have been proposed for the production of isotopes and for ADS systems (Liu et al. (2005)), but in our application where cooling is difficult, we consider that a medium-energy proton source of c. 230-250 MeV is better. The beam current of c. 1 µA commercially available at this energy results in a target power of 230-250 W which is manageable with air cooling, although there is space to provide a water circuit if required. We compare the overall expected neutron production yields with alternative candidate proton sources in Table 2.

Table 2: Neutron yields from differing targets using incident protons, scaled from existing data/simulations to deliver 1.125 × 10 13 n/s as desired for CONSORT. The current required for CONSORT is compared to that achieved (or proposed in the case of TRADE). Lower-energy protons result in an impractical target power, whilst the use of 1 GeV protons is discounted because of the size and capital cost of the accelerator, irrespective of the achievable current. A 140 MeV high current proton cyclotron is under development by ENEA/IBA, but is not yet demonstrated. We note that there is significant (×4) discrepancy in the expected 7 Li(p,n) 7 Be yield between Culbertson et al. (2004) and Kononov et al. (2006).
Target Neutron Production and Core Coupling
Following the selection of source/target combination we must determine the characteristics of neutron production in the target. Tungsten was chosen as a target material as it is robust, widely used, and well-understood. An air-cooled target at the moderate power level of 230-250 W should not experience significant damage, but there is the option to coat it with Tantalum if required, as is regularly done for high-power solid spallation targets (Broome (1996), Nio et al. (2005), Findlay (2007)). Similarly, we have as yet only considered a preliminary bare unreflected target, as the water surrounding the irradiation tube acts to scatter some side-going neutrons toward the core. Later designs can in principle incorporate energy filtering and reflection, but we consider this a minor issue at present as the neutrons are in any event well-thermalised by the intervening water layer in this first scheme.
We validated our MCNPX simulations near to the chosen proton energy by comparing the predicted neutron multiplicity against measured data in a lead target at 197 MeV (Lott et al.); this is shown in Figure 7. The stopping distance for protons of 230/250 MeV is approximately 3.4/3.8 cm in Tungsten including straggling and was cross-checked in PSTAR (Berger (1993)) and SRIM2010/2011 (Ziegler and Biersack (2010)). Whereas self-absorption in a Lead target is not significant, in Tungsten it can be: although longer targets give greater total neutron production, the increase is mostly from side-going neutrons which will be multiply-scattered in the water moderator and are less likely to reach the fuel. A tally sphere of radius 30 cm subdivided according to Figure 8 was used to determine the proportion of forward-going neutrons (past a surface co-planar with the target end), i.e. those neutrons most likely to contribute to fission in the core. The maximum forward-going component is achieved for target lengths slightly less than the stopping distance for the protons. We therefore chose a target thickness of 3.5 cm to ensure complete proton stopping in the target: roughly half the neutrons in this case are forward-going (see Figure 9). Note that in these simulations the proton beam at the target was assumed to be Gaussian with a diameter of 3 cm. MCNPX was used to calculate the coupling of the spallation target to the core: significant thermalisation occurs over the 8.5 cm between the target and the core face (Figure 10), that distance being greater than the typical slowing-down length in water of 3 cm for the spallation neutrons (Nellis
(1977)); approximately 0.18 neutrons per proton make it into the fuel, a little more than 10% of those generated in the target. The resulting neutron multiplication of those neutrons is shown in Figure 11, and differs from that calculated using KCODE (LANL (2003)) since in the side-on target configuration the neutron distribution through the core is very asymmetric ( Figure 12).
The predicted power generated in the core under ADS operation is given in Figure 13. Thermal powers up to a few kW are possible even with a relatively low-efficiency coupling between the target and the core. Several improvements would in principle be possible, but have yet to be studied. Firstly (as mentioned above) reflectors could be placed around the target to improve the guiding of neutrons to the fuel; secondly, some voiding of the core region and intermediate water moderation could be made, analogous to the scheme adopted at KUCA for their ADS tests (Shahbunder et al. (2010)). Together these could increase the neutron flux within the core and make the distribution more symmetric, increasing the available thermal power. However, the latter would require much more careful analysis of the thermal hydraulic issues.
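For orientation, the order of magnitude of the quoted powers can be recovered with a simple point-multiplication estimate, using the ≈ 0.18 neutrons per proton reaching the fuel and standard thermal-fission constants. The Python sketch below gives ≈ 2.9 kW at k ef f = 0.995 for a 1 µA beam, the same order as the ≈ 2.5 kW obtained from the full MCNPX coupling calculation, with the difference reflecting the asymmetric flux distribution that the detailed model captures.

```python
E_CHARGE = 1.602e-19                  # C
E_FISSION_J = 200.0e6 * 1.602e-19     # ~200 MeV recoverable energy per fission
NU_BAR = 2.44                         # neutrons per thermal fission of U-235

def point_power_estimate(current_uA, n_per_p_into_fuel, keff):
    """Point-multiplication estimate of fission power driven by an external source."""
    protons_per_s = current_uA * 1.0e-6 / E_CHARGE
    source_neutrons = protons_per_s * n_per_p_into_fuel
    fission_neutrons = source_neutrons * keff / (1.0 - keff)   # k + k^2 + ... = k/(1-k)
    fissions_per_s = fission_neutrons / NU_BAR
    return fissions_per_s * E_FISSION_J

print(point_power_estimate(current_uA=1.0, n_per_p_into_fuel=0.18, keff=0.995))
# ~2.9e3 W, the same order as the ~2.5 kW from the detailed MCNPX coupling
```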
Discussion and Further Work
The present steady-state analysis indicates that significantly higher thermal powers can be obtained if CONSORT were re-configured for ADS operation than have yet been demonstrated elsewhere; this enables medium-power feedback experiments to be carried out that will help determine whether operation closer to k ef f = 1 is possible than previously considered. In common with other proposals we would also need a careful analysis of the thermal hydraulic behaviour of the overall system under input changes and transients, and an accurate mapping of the neutron flux through the core. We envisage the installation of small Silicon-based (Da Via et al. (2008), Caruso (2010)) or scintillator-based neutron monitors (Yamane et al. (1999), Yagi et al. (2011)) to carry out flux measurements to characterise the differences in behaviour between critical and subcritical operation, and to measure the degree of subcriticality. These areas are subjects of ongoing study.
The overall power attainable in the experiment proposed here is less than that proposed for other experiments, for example at Dubna (Shvetsov et al. (2006)), but is still several orders of magnitude greater than that achieved to date. The combination of re-use of existing reactor infrastructure and the use of a commercial cyclotron as proton driver reduces the complexity of delivering an experiment, and should therefore be significantly cheaper than other approaches.
Acknowledgements
We greatly appreciate the advice and expertise provided by David Bond (Imperial College Reactor Centre), and Matthew Eaton (Imperial College London, Department of Earth Sciences and Engineering). Matthew Gill was supported by a grant from the UK Nuclear FiRST Doctoral Training Centre, funded by the Engineering and Physical Sciences Research Council.

Calanna, A., Calabretta, L., Maggiore, M., Piazza, L.A.C., Rifuggiato, D., 2011. A multi megawatt ring cyclotron to search for CP violation in the neutrino sector. arXiv preprint 1104.4985.
Carminati, F., Klapisch, R., Revol, J.P., Roche, C., Rubio, J.A., Rubbia, C., 1993. An energy amplifier for cleaner and inexhaustible nuclear energy production driven by a particle beam accelerator. CERN/AT/93-47 (ET) .
Caruso, A.N., 2010. The physics of solid-state neutron detector materials and geometries. Journal of Physics: Condensed Matter 22, 443201.
Figure 1: Schematic cross-section of the CONSORT reactor at the vertical core centre-line,
Figure 2: Schematic cross-section through the side-on irradiation tubes showing how the central tube may be initially used with a solid Tungsten spallation target.
Figure 3: Fuel assembly cross-section as used in the MCNP/MCNPX modelling. The fuel meat dimensions vary from plate to plate, but we consider average values of 0.5 mm meat thickness and 1.5 mm overall plate thickness to be adequate.
Figure 4: Variation in k_eff with fuel meat Aluminium fraction for U3Si2 fuel with 19.9% enrichment, when all core control rods are inserted. The present core k_eff under the same conditions is shown as a blue dotted line.
Figure 5: Required Aluminium percentage in the fuel meat of a U-9Mo dispersed fuel element as a function of fuel enrichment. The horizontal blue dashed line shows the minimum possible Aluminium percentage (55%), whilst the vertical blue dashed line shows the maximum permitted enrichment of 19.9%.
Figure 6: Effect on reactivity of withdrawing each control rod, comparing the present core (U-Al) with the proposed core (U-9Mo). The vertical blue dashed lines indicate full withdrawal of the control rods.
Figure 7: Neutron multiplicity from 197 MeV protons in a Lead target of 12 cm diameter of varying thickness, comparing data taken from Lott et al. (dashed, red) with MCNPX simulations (solid, black).
Figure 8: Definition of regions of forward-going (1), side-going (2) and backward-going (3) neutrons as used for the Tungsten target analysis.
Figure 9: Predicted neutron multiplicity in a Tungsten target with 230 MeV incident protons subdivided into forward-going (solid, black), side-going (dashed, red) and backward-going (blue, dashed) components as defined in Figure 8, also showing the total neutron multiplicity. Error bars for the small backward-going component are suppressed for clarity. Self-absorption and scattering in the target results in a peak in the forward-going neutron production for thicknesses slightly less than that required to stop the protons.
Figure 10: Thermalisation of neutrons from the spallation target to the core face due to the intervening water.
Figure 11: Example neutron multiplication under subcritical operation for differing k_eff values as all four control rods are withdrawn. The range of k_eff shown is from all rods fully inserted to 16 cm withdrawn (k_eff = 0.995). The multiplication is shown per target neutron reaching the fuel region (individual data points), and is compared to the expected multiplication for an equilibrium critical neutron distribution using KCODE (solid line).
Figure 12: Lateral flux profile through the core under different operating scenarios. Comparing the original U-Al fuel with the proposed U-9Mo core we see no significant difference in flux profile, indicating that the core behaviour in critical operation will be similar. In subcritical operation there is a significant modification to the flux profile. The vertical dashed lines indicate proposed locations for neutron flux measurement using compact detectors.
Figure 13: Reactor power variation with k_eff for input 230 MeV proton beam current of 1 µA, assuming prompt energy release of 181 MeV per fission.
Chadwick, M., Oblozinsky, P., Herman, M., Greene, N., McKnight, R., Smith, D., Young, P., MacFarlane, R., Hale, G., Frankle, S., 2006. ENDF/B-VII.0: Next Generation Evaluated Nuclear Data Library for Nuclear Science and Technology. Nuclear Data Sheets 107, 2931-3060.
Table 1: Reference core design with U-9Mo fuel used at 19.5% enrichment.
Fuel                          U-9Mo/Al
Al-Fuel Mixing (by Volume)    ~75% Al
Fuel Cladding                 (Pure) Al
Number of Fuel Assemblies     24
Plates per Fuel Assembly      12
Uranium Enrichment            19.5%
Power                         Max 100 kW(th)
Core Height                   0.63 m
Coolant/Moderator             Light Water
Reflector                     Graphite
Berger, M.J., 1993. ESTAR, PSTAR, and ASTAR: Computer programs for calculating stopping-power and range tables for electrons, protons, and helium ions. NISTIR 4999, National Institute of Standards and Technology, Gaithersburg, MD.
Van den Berghe, S., Leenaers, A., Koonen, E., Sannen, L., 2010. From High to Low Enriched Uranium Fuel in Research Reactors. Advances in Science and Technology 73, 78-90.
Billebaud, A., Brissot, R., Le Brun, C., Liatard, E., Vollaire, J., 2007. Prompt multiplication factor measurements in subcritical systems: From MUSE experiment to a demonstration ADS. Progress in Nuclear Energy 49, 142-160.
Bowman, C., 1992. Nuclear energy generation and waste transmutation using an accelerator-driven intense thermal neutron source. Nuclear Instruments and Methods in Physics Research A 320, 336-367.
Broeders, C., Broeders, I., 2000. Neutron physics analyses of accelerator-driven subcritical assemblies. Nuclear Engineering and Design 202, 209-218.
Broome, T., 1996. High Power Targets for Spallation Sources. Proceedings of the 5th European Particle Accelerator Conference, Sitges.
Accelerator-driven systems (ADS) are subcritical reactors in which the external neutron source may allow operation with inherent safety(Degweker et al. (2007)), and the improved neutron economy enables applications in fuel
Development of an accelerator driven neutron activator for medical radioisotope production. K Abbas, S Buono, N Burgio, G Cotogno, N Gibson, L Maciocco, G Mercurio, A Santagata, F Simonelli, H Tagziria, Nuclear Instruments and Methods in Physics Research A. 601Abbas, K., Buono, S., Burgio, N., Cotogno, G., Gibson, N., Maciocco, L., Mercurio, G., Santagata, A., Simonelli, F., Tagziria, H., 2009. Develop- ment of an accelerator driven neutron activator for medical radioisotope production. Nuclear Instruments and Methods in Physics Research A 601, 223-228.
MYRRHA: A multipurpose accelerator driven system for research & development. H Abderrahim, P Kupschus, E Malambu, P Benoit, K Van Tichelen, B Arien, F Vermeersch, P D'hondt, Y Jongen, S Ternier, Nuclear Instruments and Methods in Physics Research A. 463Abderrahim, H., Kupschus, P., Malambu, E., Benoit, P., Van Tichelen, K., Arien, B., Vermeersch, F., D'hondt, P., Jongen, Y., Ternier, S., 2001. MYRRHA: A multipurpose accelerator driven system for research & de- velopment. Nuclear Instruments and Methods in Physics Research A 463, 487-494.
The DAEDALUS Project: Rationale and Beam Requirements. J R Alonso, 1010.0971arXiv preprintAlonso, J.R., 2010. The DAEDALUS Project: Rationale and Beam Require- ments. arXiv preprint 1010.0971 .
EMMA: The world's first non-scaling FFAG. R Barlow, J S Berg, C Beard, N Bliss, J Clarke, M K Craddock, J Crisp, R Edgecock, Y Giboudot, P Goudket, S Griffiths, C Hill, S Jamison, C Johnstone, A Kalinin, E Keil, D Kelliher, S Koscielniak, S Machida, K Marinov, N Marks, B Martlew, P Mcintosh, F Meot, A Moss, B Muratori, H Owen, Y N Rao, Y Saveliev, S Sheehy, B Shepherd, R Smith, S Smith, S Tzenov, A Wheelhouse, C White, T Yokoi, Nuclear Instruments and Methods in Physics Research A. 624Barlow, R., Berg, J.S., Beard, C., Bliss, N., Clarke, J., Craddock, M.K., Crisp, J., Edgecock, R., Giboudot, Y., Goudket, P., Griffiths, S., Hill, C., Jamison, S., Johnstone, C., Kalinin, A., Keil, E., Kelliher, D., Koscielniak, S., Machida, S., Marinov, K., Marks, N., Martlew, B., McIntosh, P., Meot, F., Moss, A., Muratori, B., Owen, H., Rao, Y.N., Saveliev, Y., Sheehy, S., Shepherd, B., Smith, R., Smith, S., Tzenov, S., Wheelhouse, A., White, C., Yokoi, T., 2010. EMMA: The world's first non-scaling FFAG. Nuclear Instruments and Methods in Physics Research A 624, 1-19.
Multiple Cyclotron Method to Search for CP Violation in the Neutrino Sector. J M Conrad, M H Shaevitz, Physical Review Letters. 104141802Conrad, J.M., Shaevitz, M.H., 2010. Multiple Cyclotron Method to Search for CP Violation in the Neutrino Sector. Physical Review Letters 104, 141802.
Cyclotrons and Fixed-Field Alternating-Gradient Accelerators. M Craddock, K Symon, Reviews of Accelerator Science and Technology. 1Craddock, M., Symon, K., 2010. Cyclotrons and Fixed-Field Alternating- Gradient Accelerators. Reviews of Accelerator Science and Technology 1, 65-97.
In-phantom characterisation studies at the Birmingham Accelerator-Generated epIthermal Neutron Source (BAGINS) BNCT facility. C Culbertson, S Green, A Mason, D Picton, G Baugh, R Hugtenburg, Z Yin, M Scott, J Nelson, Applied Radiation and Isotopes. 61Culbertson, C., Green, S., Mason, A., Picton, D., Baugh, G., Hugtenburg, R., Yin, Z., Scott, M., Nelson, J., 2004. In-phantom characterisation stud- ies at the Birmingham Accelerator-Generated epIthermal Neutron Source (BAGINS) BNCT facility. Applied Radiation and Isotopes 61, 733-738.
Radiation hardness properties of full-3D active edge silicon sensors. C Da Via, J Hasi, C Kenney, V Linhart, S Parker, Nucl.Instrum.Meth. 587Da Via, C., Hasi, J., Kenney, C., Linhart, V., Parker, S., et al., 2008. Radiation hardness properties of full-3D active edge silicon sensors. Nucl.Instrum.Meth. A587, 243-249.
Subcritical fission reactor driven by the low power accelerator. H Daniel, Y V Petrov, Nuclear Instruments and Methods in Physics Research A. 373Daniel, H., Petrov, Y.V., 1996. Subcritical fission reactor driven by the low power accelerator. Nuclear Instruments and Methods in Physics Research A 373, 131-134.
The physics of accelerator driven sub-critical reactors. S B Degweker, B Ghosh, A Bajpai, S D Paranjape, Pramana. 68Degweker, S.B., Ghosh, B., Bajpai, A., Paranjape, S.D., 2007. The physics of accelerator driven sub-critical reactors. Pramana 68, 161-171.
A radioactive ion beam facility using photofission. W Diamond, Nuclear Instruments and Methods in Physics Research A. 432Diamond, W., 1999. A radioactive ion beam facility using photofission. Nu- clear Instruments and Methods in Physics Research A 432, 471-482.
ISIS-pulsed neutron and muon source. D Findlay, Proceedings of the 22nd Particle Accelerator Conference. the 22nd Particle Accelerator ConferenceAlbuquerqueFindlay, D., 2007. ISIS-pulsed neutron and muon source. Proceedings of the 22nd Particle Accelerator Conference, Albuquerque .
Conceptual Design of a Commercial Accelerator Driven Thorium Reactor. C G Fuller, R W Ashworth, Proceedings of ICAPP'10, International Congress on Advances in Nuclear Power Plants. ICAPP'10, International Congress on Advances in Nuclear Power PlantsSan DiegoFuller, C.G., Ashworth, R.W., 2010. Conceptual Design of a Commercial Accelerator Driven Thorium Reactor. Proceedings of ICAPP'10, Interna- tional Congress on Advances in Nuclear Power Plants, San Diego , 1-10.
The London University Nuclear Reactor CONSORT. P Grant, Nature. 207Grant, P., 1965. The London University Nuclear Reactor CONSORT. Nature 207, 911-913.
The MEGAPIE 1 MW target in support to ADS development: status of R&D and design. F Groeschel, C Fazio, J Knebel, C Perret, A Janett, G Laffont, L Cachon, T Kirchner, A Cadiou, A Guertin, Journal of Nuclear Materials. 335Groeschel, F., Fazio, C., Knebel, J., Perret, C., Janett, A., Laffont, G., Cachon, L., Kirchner, T., Cadiou, A., Guertin, A., 2004. The MEGAPIE 1 MW target in support to ADS development: status of R&D and design. Journal of Nuclear Materials 335, 156-162.
Characterisation of neutron and gamma-ray emission from thick target Be (p, n) reaction for boron neutron capture therapy. J Guzek, W Mcmurray, T Mateva, C Franklyn, U Tapper, Nuclear Instruments and Methods in Physics Research Section B: Beam Interactions with Materials and Atoms. 139Guzek, J., McMurray, W., Mateva, T., Franklyn, C., Tapper, U., 1998. Char- acterisation of neutron and gamma-ray emission from thick target Be (p, n) reaction for boron neutron capture therapy. Nuclear Instruments and Methods in Physics Research Section B: Beam Interactions with Materials and Atoms 139, 471-475.
Transmutation of Nuclear Waste in Accelerator-Driven Systems. Thesis. A Herrera-Martínez, University of CambridgeHerrera-Martínez, A., 2004. Transmutation of Nuclear Waste in Accelerator- Driven Systems. Thesis, University of Cambridge .
Transmutation of nuclear waste in accelerator-driven systems: Thermal spectrum. A Herrera-Martinez, Y Kadi, G Parks, Annals of Nuclear Energy. 34Herrera-Martinez, A., Kadi, Y., Parks, G., 2007. Transmutation of nuclear waste in accelerator-driven systems: Thermal spectrum. Annals of Nuclear Energy 34, 550-563.
Neutron production by hadron-induced spallation reactions in thin and thick Pb and U targets from 1 to 5 GeV. D Hilscher, U Jahnke, F Goldenbaum, L Pienkowski, J Galin, B Lott, Nucl Instrum Meth A. 414Hilscher, D., Jahnke, U., Goldenbaum, F., Pienkowski, L., Galin, J., Lott, B., 1998. Neutron production by hadron-induced spallation reactions in thin and thick Pb and U targets from 1 to 5 GeV. Nucl Instrum Meth A 414, 100-116.
Status of Small Reactor Designs Without On-Site Refuelling. IAEA. IAEA-TECDOC-1536IAEA, 2007. Status of Small Reactor Designs Without On-Site Refuelling. IAEA-TECDOC-1536 , 1-870.
Estimation of distribution algorithms for nuclear reactor fuel management optimisation. S Jiang, A Ziver, J Carter, C Pain, A Goddard, S Franklin, H Phillips, Annals of Nuclear Energy. 33Jiang, S., Ziver, A., Carter, J., Pain, C., Goddard, A., Franklin, S., Phillips, H., 2006. Estimation of distribution algorithms for nuclear reactor fuel management optimisation. Annals of Nuclear Energy 33, 1039-1057.
New Cyclotron Developments at IBA. Y Jongen, W Kleeven, S Zaremba, Proceedings of the 9th European Particle Accelerator Conference. the 9th European Particle Accelerator ConferenceLucerneJongen, Y., Kleeven, W., Zaremba, S., 2004. New Cyclotron Developments at IBA. Proceedings of the 9th European Particle Accelerator Conference, Lucerne .
High-density, low-enriched uranium fuel for nuclear research reactors. D D Keiser, S L Hayes, M K Meyer, C R Clark, JOM Journal of the Minerals, Metals and Materials Society. 55Keiser, D.D., Hayes, S.L., Meyer, M.K., Clark, C.R., 2003. High-density, low-enriched uranium fuel for nuclear research reactors. JOM Journal of the Minerals, Metals and Materials Society 55, 55-58.
A superconducting isochronous cyclotron stack as a driver for a thorium-cycle power reactor. G Kim, D May, P Mcintyre, A Sattarov, Proceedings of the 19th Particle Accelerator Conference. the 19th Particle Accelerator ConferenceChicagoKim, G., May, D., McIntyre, P., Sattarov, A., 2001. A superconducting isochronous cyclotron stack as a driver for a thorium-cycle power reactor. Proceedings of the 19th Particle Accelerator Conference, Chicago , 2593- 2595.
Magnetic fields and beam optics studies of a 250 MeV superconducting proton radiotherapy cyclotron. J Kim, Nuclear Instruments and Methods in Physics Research A. 582Kim, J., 2007. Magnetic fields and beam optics studies of a 250 MeV su- perconducting proton radiotherapy cyclotron. Nuclear Instruments and Methods in Physics Research A 582, 366-373.
New superconducting cyclotron driven scanning proton therapy systems. H U Klein, C Baumgarten, A Geisler, J Heese, A Hobl, D Krischel, M Schillo, S Schmidt, J Timmer, Nuclear Instruments and Methods in Physics Research B. 241Klein, H.U., Baumgarten, C., Geisler, A., Heese, J., Hobl, A., Krischel, D., Schillo, M., Schmidt, S., Timmer, J., 2005. New superconducting cyclotron driven scanning proton therapy systems. Nuclear Instruments and Methods in Physics Research B 241, 721-726.
Accelerator-based fast neutron sources for neutron therapy. V Kononov, M Bokhovko, O Kononov, N Soloviev, W Chu, D Nigg, Nuclear Instruments and Methods in Physics Research A. 564Kononov, V., Bokhovko, M., Kononov, O., Soloviev, N., Chu, W., Nigg, D., 2006. Accelerator-based fast neutron sources for neutron therapy. Nuclear Instruments and Methods in Physics Research A 564, 525-531.
Accelerator Design for a 1/2 MW Electron Linac for Rare Isotope Beam Production. S Koscielniak, F Ames, I Bylinskii, R Laxdal, M Marchetto, A K Mitra, I Sekachev, V Verzilov, Proceedings of the 11th European Particle Accelerator Conference. the 11th European Particle Accelerator ConferenceGenoaKoscielniak, S., Ames, F., Bylinskii, I., Laxdal, R., Marchetto, M., Mitra, A.K., Sekachev, I., Verzilov, V., 2008. Accelerator Design for a 1/2 MW Electron Linac for Rare Isotope Beam Production. Proceedings of the 11th European Particle Accelerator Conference, Genoa .
The European spallation source study. H Lengeler, Nuclear Instruments and Methods in Physics Research Section B. 139Lengeler, H., 1998. The European spallation source study. Nuclear Instru- ments and Methods in Physics Research Section B 139, 82-90.
The European Spallation Source. M Lindroos, S Bousson, R Calaga, H Danared, G Devanz, R Duperrier, J Eguia, M Eshraqi, S Gammino, H Hahn, A Jansson, C Oyon, S Pape-Møller, S Peggs, A Ponton, K Rathsman, R Ruber, T Satogata, G Trahern, Accepted for Publication in Nuclear Instruments and Methods in Physics Research Section BLindroos, M., Bousson, S., Calaga, R., Danared, H., Devanz, G., Duperrier, R., Eguia, J., Eshraqi, M., Gammino, S., Hahn, H., Jansson, A., Oyon, C., Pape-Møller, S., Peggs, S., Ponton, A., Rathsman, K., Ruber, R., Sato- gata, T., Trahern, G., 2011. The European Spallation Source. Accepted for Publication in Nuclear Instruments and Methods in Physics Research Section B .
Comparison of neutron yield characteristics between proton accelerator and electron accelerator for waste transmutation. Y Liu, M Yim, D Mcnelis, Transactions of the American Nuclear Society. 92Liu, Y., Yim, M., McNelis, D., 2005. Comparison of neutron yield char- acteristics between proton accelerator and electron accelerator for waste transmutation. Transactions of the American Nuclear Society 92, 252.
Neutron yields from proton-induced spallation reactions in thick targets of lead. M Lone, P Wong, Nuclear Instruments and Methods in Physics Research A. 362Lone, M., Wong, P., 1995. Neutron yields from proton-induced spallation reactions in thick targets of lead. Nuclear Instruments and Methods in Physics Research A 362, 499-505.
Neutron multiplicity distributions for 200 MeV proton-, deuteron-and He-induced spallation reactions in thick Pb targets. B Lott, F Cnigniet, J Galin, F Goldenbaum, D Hilscher, A Liénard, A Péghaire, Y Périer, X Qian, Nuclear Instruments and Methods in Physics Research A. 414Lott, B., Cnigniet, F., Galin, J., Goldenbaum, F., Hilscher, D., Liénard, A., Péghaire, A., Périer, Y., Qian, X., 1998. Neutron multiplicity distributions for 200 MeV proton-, deuteron-and He-induced spallation reactions in thick Pb targets. Nuclear Instruments and Methods in Physics Research A 414, 117-124.
Design study of a fast spectrum zero-power reactor dedicated to source driven sub-critical experiments. L Mercatali, A Serikov, P Baeten, W Uyttenhove, A Lafuente, P Teles, Energy Conversion and Management. 51Mercatali, L., Serikov, A., Baeten, P., Uyttenhove, W., Lafuente, A., Teles, P., 2010. Design study of a fast spectrum zero-power reactor dedicated to source driven sub-critical experiments. Energy Conversion and Manage- ment 51, 1818-1825.
Physics study of the TRADE: TRIGA Accelerator Driven Experiment. D G Naberejnev, G Imel, G Palmiotti, M Salvatores, ANL- AFCI-091Naberejnev, D.G., Imel, G., Palmiotti, G., Salvatores, M., 2003. Physics study of the TRADE: TRIGA Accelerator Driven Experiment. ANL- AFCI-091 , 1-67.
Slowing-down distances and times of 0.1-to 14-MeV neutrons in hydrogenous materials. W Nellis, American Journal of Physics. 45443Nellis, W., 1977. Slowing-down distances and times of 0.1-to 14-MeV neu- trons in hydrogenous materials. American Journal of Physics 45, 443.
Hybrid nuclear reactors. H Nifenecker, S David, J Loiseaux, A Giorni, Progress in Particle and Nuclear Physics. 43Nifenecker, H., David, S., Loiseaux, J., Giorni, A., 1999. Hybrid nuclear reactors. Progress in Particle and Nuclear Physics 43, 683-827.
Neutronics performance and decay heat calculation of a solid target for a spallation neutron source. D Nio, M Ooi, N Takenaka, M Furusaka, M Kawai, K Mishima, Y Kiyanagi, Journal of Nuclear Materials. 343Nio, D., Ooi, M., Takenaka, N., Furusaka, M., Kawai, M., Mishima, K., Kiyanagi, Y., 2005. Neutronics performance and decay heat calculation of a solid target for a spallation neutron source. Journal of Nuclear Materials 343, 163-168.
MCNPX User's Manual Version 2. D Pelowitz, 6.0. LANL LA-CP-07- 1473Pelowitz, D., 2008. MCNPX User's Manual Version 2.6.0. LANL LA-CP-07- 1473 .
Analysis of reactivity determination methods in the subcritical experiment Yalina. C Persson, P Seltborg, A Åhlander, W Gudowski, T Stummer, H Kiyavitskaya, V Bournos, Y Fokov, I Serafimovich, S Chigrinov, Nuclear Instruments and Methods in Physics Research A. 554Persson, C., Seltborg, P.,Åhlander, A., Gudowski, W., Stummer, T., Kiyav- itskaya, H., Bournos, V., Fokov, Y., Serafimovich, I., Chigrinov, S., 2005. Analysis of reactivity determination methods in the subcritical experiment Yalina. Nuclear Instruments and Methods in Physics Research A 554, 374- 383.
Conceptual Design of a Fast Neutron Operated High Power Energy Amplifier. C Rubbia, J Rubio, S Buono, F Carminati, N Fiétier, J Galvez, C Gelès, Y Kadi, R Klapisch, P Mandrillon, J Revol, C Roche, CERN/AT/95-44ETRubbia, C., Rubio, J., Buono, S., Carminati, F., Fiétier, N., Galvez, J., Gelès, C., Kadi, Y., Klapisch, R., Mandrillon, P., Revol, J., Roche, C., 1995. Conceptual Design of a Fast Neutron Operated High Power Energy Amplifier. CERN/AT/95-44 (ET) .
Experimental analysis for neutron multiplication by using reaction rate distribution in acceleratordriven system. H Shahbunder, C Pyeon, T Misawa, Annals of Nuclear Energy. 37Shahbunder, H., Pyeon, C., Misawa, T., 2010. Experimental analysis for neutron multiplication by using reaction rate distribution in accelerator- driven system. Annals of Nuclear Energy 37, 592-597.
Accelerator driven subcritical system as a future neutron source in Kyoto University Research Reactor Institute (KURRI) -Basic study on neutron multiplication in the accelerator driven subcritical reactor. S Shiroya, H Unesaki, Y Kawase, H Moriyama, M Inoue, 37Progress in Nuclear EnergyShiroya, S., Unesaki, H., Kawase, Y., Moriyama, H., Inoue, M., 2000. Ac- celerator driven subcritical system as a future neutron source in Kyoto University Research Reactor Institute (KURRI) -Basic study on neutron multiplication in the accelerator driven subcritical reactor. Progress in Nuclear Energy 37, 357-362.
Present Status of Neutron Factory Project at Kurri. S Shiroya, A Yamamoto, H , M Unesaki, H Inoue, M , T , M Kawase, Y , Proceedings of the Second Asian Particle Accelerator Conference. the Second Asian Particle Accelerator ConferenceBeijingShiroya, S., Yamamoto, A., H, M., Unesaki, H., Inoue, M., T, M., Kawase, Y., 2001. Present Status of Neutron Factory Project at Kurri. Proceedings of the Second Asian Particle Accelerator Conference, Beijing .
Basic study on neutronics of future neutron source based on accelerator driven subcritical reactor concept in Kyoto University Research Reactor Institute (KURRI). S Shiroya, A Yamamoto, K Shin, T Ikeda, S Nakano, H Unesaki, 40Progress in Nuclear EnergyShiroya, S., Yamamoto, A., Shin, K., Ikeda, T., Nakano, S., Unesaki, H., 2002. Basic study on neutronics of future neutron source based on ac- celerator driven subcritical reactor concept in Kyoto University Research Reactor Institute (KURRI). Progress in Nuclear Energy 40, 489-496.
The Subcritical Assembly in Dubna (SAD)-Part I: Coupling all major components of an Accelerator Driven System (ADS). Nuclear Instruments and Methods in. V Shvetsov, C Broeders, I Golovnin, E González, W Gudowski, F Mellier, B Ryabov, A Stanculescu, I Tretyakov, M Vorontsov, Physics Research A. 562Shvetsov, V., Broeders, C., Golovnin, I., González, E., Gudowski, W., Mel- lier, F., Ryabov, B., Stanculescu, A., Tretyakov, I., Vorontsov, M., 2006. The Subcritical Assembly in Dubna (SAD)-Part I: Coupling all major components of an Accelerator Driven System (ADS). Nuclear Instruments and Methods in Physics Research A 562, 883-886.
Development of very-high-density low-enriched-uranium fuels. J Snelgrove, Nuclear Engineering and Design. 178Snelgrove, J., 1997. Development of very-high-density low-enriched-uranium fuels. Nuclear Engineering and Design 178, 119-126.
Neutronic studies in support of accelerator-driven systems: The MUSE experiments in the MASURCA facility. R Soule, W Assal, P Chaussonnet, C Destouches, C Domergue, C Jammes, J M Laurens, J F Lebrat, F Mellier, G Perret, G Rimpault, H Serviere, G Imel, G M Thomas, D Villamarin, E Gonzalez-Romero, M Plaschy, R Chawla, J L Kloosterman, Y Rugama, A Billebaud, R Brissot, D Heuer, M Kerveno, C Le Brun, E Liatard, J M Loiseaux, O Meplan, E Merle, F Perdu, J Vollaire, P Baeten, Nuclear Science and Engineering. 148Soule, R., Assal, W., Chaussonnet, P., Destouches, C., Domergue, C., Jammes, C., Laurens, J.M., Lebrat, J.F., Mellier, F., Perret, G., Rim- pault, G., Serviere, H., Imel, G., Thomas, G.M., Villamarin, D., Gonzalez- Romero, E., Plaschy, M., Chawla, R., Kloosterman, J.L., Rugama, Y., Billebaud, A., Brissot, R., Heuer, D., Kerveno, M., Le Brun, C., Liatard, E., Loiseaux, J.M., Meplan, O., Merle, E., Perdu, F., Vollaire, J., Baeten, P., 2004. Neutronic studies in support of accelerator-driven systems: The MUSE experiments in the MASURCA facility. Nuclear Science and Engi- neering 148, 124-152.
Predicting the contractual cost of unplanned shutdowns of power stations: An accelerator-driven subcritical reactor case study. S J Steer, W J Nuttall, G T Parks, L V N Gonçalves, Electric Power Systems Research. 81Steer, S.J., Nuttall, W.J., Parks, G.T., Gonçalves, L.V.N., 2011. Predict- ing the contractual cost of unplanned shutdowns of power stations: An accelerator-driven subcritical reactor case study. Electric Power Systems Research 81, 1662-1671.
Estimation of acceptable beam trip frequencies of accelerators for ADS and comparison with experimental data of accelerators. H Takei, K Nishihara, K Tsujimoto, International Topical Meeting on Nuclear Research Applications and Utilization of Accelerators. ViennaTakei, H., Nishihara, K., Tsujimoto, K., 2009. Estimation of acceptable beam trip frequencies of accelerators for ADS and comparison with ex- perimental data of accelerators. International Topical Meeting on Nuclear Research Applications and Utilization of Accelerators, 4-8 May 2009, Vi- enna (ADS/P4-06) .
Effects of Accelerator Beam Trips on ADS Components. T Takizuka, Utilisation and Reliability of High Power Proton Accelerators: Workshop Proceedings. Mito, JapanOECDNuclear Energy AgencyTakizuka, T., 1998. Effects of Accelerator Beam Trips on ADS Components. Utilisation and Reliability of High Power Proton Accelerators: Workshop Proceedings, Mito, Japan, 13-15 October 1998 OECD, Nuclear Energy Agency.
Construction of FFAG Accelerators in KURRI for ADS Study. M Tanigaki, K Mishima, S Shiroya, Y Ishi, S Fukumoto, S Machida, Y Mori, M Inoue, Proceedings of the 9th European Particle Accelerator Conference. the 9th European Particle Accelerator ConferenceLucerneTanigaki, M., Mishima, K., Shiroya, S., Ishi, Y., Fukumoto, S., Machida, S., Mori, Y., Inoue, M., 2004. Construction of FFAG Accelerators in KURRI for ADS Study. Proceedings of the 9th European Particle Accelerator Conference, Lucerne .
MCNP -A General Monte Carlo N-Particle Transport Code. Monte Carlo Team, 5. LANL LA-CP-03-0245IIMonte Carlo Team, 2003. MCNP -A General Monte Carlo N-Particle Transport Code, Version 5. LANL LA-CP-03-0245 II, 1-508.
The neutronic design of a critical lead reflected zeropower reference core for on-line subcriticality measurements in Accelerator Driven Systems. W Uyttenhove, P Baeten, G Van Den Eynde, A Kochetkov, D Lathouwers, M Carta, Annals of Nuclear Energy. 38Uyttenhove, W., Baeten, P., Van den Eynde, G., Kochetkov, A., Lathouwers, D., Carta, M., 2011. The neutronic design of a critical lead reflected zero- power reference core for on-line subcriticality measurements in Accelerator Driven Systems. Annals of Nuclear Energy 38, 1519-1526.
Testing and Acceptance of Fuel Plates for RERTR Fuel Development Experiments. RERTR 2008 -30th International Meeting On Reduced Enrichment For Research And Test Reactors. J M Wight, G A Moore, S C Taylor, N E Woolstenhulme, Wight, J.M., Moore, G.A., Taylor, S.C., Woolstenhulme, N.E., 2008. Testing and Acceptance of Fuel Plates for RERTR Fuel Development Experiments. RERTR 2008 -30th International Meeting On Reduced Enrichment For Research And Test Reactors .
Very Big Accelerators as Energy Producers. FERMILAB-FN-2098. R Wilson, Wilson, R., 1976. Very Big Accelerators as Energy Producers. FERMILAB- FN-2098 .
A small high sensitivity neutron detector using a wavelength shifting fiber. T Yagi, T Misawa, C Pyeon, Applied Radiation and Isotopes. 69Yagi, T., Misawa, T., Pyeon, C., 2011. A small high sensitivity neutron detector using a wavelength shifting fiber. Applied Radiation and Isotopes 69, 176-179.
Measurement of the thermal and fast neutron flux in a research reactor with a Li and Th loaded optical fibre detector. Y Yamane, A Uritani, T Misawa, J H Karlsson, I Pázsit, Nuclear Instruments and Methods in Physics Research A. 432Yamane, Y., Uritani, A., Misawa, T., Karlsson, J.H., Pázsit, I., 1999. Mea- surement of the thermal and fast neutron flux in a research reactor with a Li and Th loaded optical fibre detector. Nuclear Instruments and Methods in Physics Research A 432, 403-409.
SRIM -The stopping and range of ions in matter. M Ziegler, J Biersack, Physics Research B. 268Nuclear Instruments and Methods inZiegler, M., Biersack, J., 2010. SRIM -The stopping and range of ions in matter (2010). Nuclear Instruments and Methods in Physics Research B 268, 1818-1823.
|
[] |
[
"Extended Thomas-Fermi Density Functional for the Unitary Fermi Gas",
"Extended Thomas-Fermi Density Functional for the Unitary Fermi Gas"
] |
[
"Luca Salasnich \nDepartment of Physics \"Galileo Galilei\"\nCNISM and CNR-INFM\nUniversity of Padua\nVia Marzolo 835131PaduaItaly\n",
"Flavio Toigo \nDepartment of Physics \"Galileo Galilei\"\nCNISM and CNR-INFM\nUniversity of Padua\nVia Marzolo 835131PaduaItaly\n"
] |
[
"Department of Physics \"Galileo Galilei\"\nCNISM and CNR-INFM\nUniversity of Padua\nVia Marzolo 835131PaduaItaly",
"Department of Physics \"Galileo Galilei\"\nCNISM and CNR-INFM\nUniversity of Padua\nVia Marzolo 835131PaduaItaly"
] |
[] |
We determine the energy density ξ(3/5)nεF and the gradient correction λh 2 (∇n) 2 /(8m n) of the extended Thomas-Fermi (ETF) density functional, where n is number density and εF is Fermi energy, for a trapped two-components Fermi gas with infinite scattering length (unitary Fermi gas) on the basis of recent diffusion Monte Carlo (DMC) calculations [Phys. Rev. Lett. 99, 233201 (2007)]. In particular we find that ξ = 0.455 and λ = 0.13 give the best fit of the DMC data with an even number N of particles. We also study the odd-even splitting γN 1/9h ω of the ground-state energy for the unitary gas in a harmonic trap of frequency ω determining the constant γ. Finally we investigate the effect of the gradient term in the time-dependent ETF model by introducing generalized Galilei-invariant hydrodynamics equations.
|
10.1103/physreva.78.053626
|
[
"https://arxiv.org/pdf/0809.1820v3.pdf"
] | 119,289,666 |
0809.1820
|
8c7d0b636d784ecd33d633b577b55ea0110e547f
|
Extended Thomas-Fermi Density Functional for the Unitary Fermi Gas
28 Oct 2008
Luca Salasnich
Department of Physics "Galileo Galilei"
CNISM and CNR-INFM
University of Padua
Via Marzolo 835131PaduaItaly
Flavio Toigo
Department of Physics "Galileo Galilei"
CNISM and CNR-INFM
University of Padua
Via Marzolo 835131PaduaItaly
Extended Thomas-Fermi Density Functional for the Unitary Fermi Gas
28 Oct 2008
PACS numbers: 03.75.Ss, 05.30.Fk, 71.10.Ay, 67.85.Lm
We determine the energy density ξ(3/5)nεF and the gradient correction λh 2 (∇n) 2 /(8m n) of the extended Thomas-Fermi (ETF) density functional, where n is number density and εF is Fermi energy, for a trapped two-components Fermi gas with infinite scattering length (unitary Fermi gas) on the basis of recent diffusion Monte Carlo (DMC) calculations [Phys. Rev. Lett. 99, 233201 (2007)]. In particular we find that ξ = 0.455 and λ = 0.13 give the best fit of the DMC data with an even number N of particles. We also study the odd-even splitting γN 1/9h ω of the ground-state energy for the unitary gas in a harmonic trap of frequency ω determining the constant γ. Finally we investigate the effect of the gradient term in the time-dependent ETF model by introducing generalized Galilei-invariant hydrodynamics equations.
I. INTRODUCTION
The crossover from the weakly paired Bardeen-Cooper-Schrieffer (BCS) state to the Bose-Einstein condensate (BEC) of molecular dimers in ultra-cold two-hyperfine-component Fermi vapors has been investigated by several experimental groups with 40K atoms [1,2,3] and 6Li atoms [4,5]. In the unitary limit of infinite scattering length [6], obtained by tuning an external background magnetic field near a Feshbach resonance [7], the Fermi superfluid exhibits universal properties [7,8,9,10].
It has been suggested that at zero-temperature the unitary Fermi gas can be described by the density functional theory (DFT) [11,12,13,14,15,16,17,18]. Bulgac and Yu [11] introduced a superfluid DFT (SDFT) based on a Bogoliubov-de Gennes (BdG) approach to superfluid fermions, in the same spirit as the density functional formulation for superconductors [19]. Papenbrock and Bhattacharyya [14] have instead proposed a Kohn-Sham (KS) density functional with an effective mass to take into account nonlocality effects. To treat nonuniform systems, other authors [12,13,16,17,18] have added a gradient term to the leading Thomas-Fermi energy, since such a term is surely necessary when surfaces are present [20,21], at least in three spatial dimensions [22]. An energy functional for fermions not written in terms of single-particle orbitals, but only in terms of the density and its derivatives is usually called extended Thomas-Fermi (ETF) functional [23,24]. It may be seen as an effective field theory where the gradient correction λh 2 (∇n) 2 /(8m n) can be interpreted as the next-toleading term [16,18], with n(r) the local number density and m the atomic mass.
We wish to point out that both the energy functionals proposed by Bulgac and Yu [11] and Papenbrock and Bhattacharyya [14] are functionals of the density through single-particle orbitals (the BdG or KS orbitals). Therefore they can be used in actual numerical calculations only when the number of fermions is small, since they require a self-consistent calculation of single-particle states whose number increases linearly with the number of particles.
On the contrary, one encounters no limitation in the number of particles which may be treated through ETF functionals, since in this case the functional depends only on a single function of the coordinate, i.e. the particle density. Of course one trades simplicity with accuracy: while the BdG and KS schemes are built to account for the main contribution to the kinetic energy, and treat it exactly in noninteracting systems even with a nonuniform density varying in space, the TF approach gives the exact kinetic energy only for a uniform system and even when extended with the addition of gradient and higher order derivatives of the density, the ETF functional is not able to reproduce shell effects in the density profile [23,24].
In spite of this limitation, but in the light of the great simplification introduced in numerical calculations, we believe that it is useful to analyze the ETF approximation and comment on its dynamical generalization, which amounts to introducing a quantum pressure term −λℏ²∇²√n/(2m√n) into the hydrodynamic equations of superfluids.
The value of the coefficient λ is debated. In the papers of Kim and Zubarev [12] and Manini and Salasnich [13] the authors set λ = 1 over the full BCS-BEC crossover. More recently we have suggested λ = 1/4 [17,25]. This suggestion is in good agreement with a theoretical estimate based on an epsilon expansion around d = 4 − ε spatial dimensions in the unitary regime [18].
In this paper we comment on the ETF for a twocomponents Fermi gas at unitarity and determine its parameters by fitting recent Monte Carlo results [26,27] for the energy of fermions confined in a spherical harmonic trap of frequency ω in this regime.
Since the interaction potential does not introduce any new length, the universal contribution to the energy density in an ETF functional appropriate to a spin balanced Fermi liquid at unitarity may be considered, in its simplest form, as the sum of a term proportional (ξ being the constant of proportionality) to the energy density of a uniform noninteracting system with the same density and of a term containing the gradient of the density with a coefficient λ meant to take into account phenomenologically also higher order derivatives [28].
By minimizing such ETF for a fixed number of particles, we find that the values ξ = 0.455 and λ = 0.13 give the closest results to the Monte Carlo energies of a fully superfluid system with an even number N of fermions confined in a harmonic well at unitarity, as calculated by Ref. [27]. When treating systems with an odd number of particles, we must correct the calculated ETF ground-state energies corresponding to these parameters to account for the presence of the unpaired particle. According to Son [29], for fermions confined by a harmonic potential, the correction depends on the number of particles and takes the form ΔE = γN^{1/9} (in units of ℏω). We then find that γ = 0.856 provides the best fit to the DMC data [27]. In section IV, we investigate the effect of the gradient term on the dynamics of the Fermi superfluid by introducing generalized hydrodynamics equations and a Galilei-invariant nonlinear Schrödinger equation of the Guerra-Pusterla type [30,31], which is fully equivalent to them.
II. EXTENDED THOMAS-FERMI FUNCTIONAL
Let us consider an interacting Fermi gas trapped by a potential U (r). Its TF energy functional is:
E_{\rm TF} = \int d^3r \, n(r) \left[ \varepsilon(n(r)) + U(r) \right] , \qquad (1)
where ε(n) is the energy per particle of a uniform Fermi system with density n equal to the local density n(r) of fermions. The total number of fermions is
N = \int d^3r \, n(r) . \qquad (2)
By minimizing E T F with respect to the density n(r), with the constraint of a fixed number of particles, one finds
\mu(n(r)) + U(r) = \bar{\mu} , \qquad (3)
where µ(n) = ∂(nε(n))/∂n is the bulk chemical potential of a uniform system of density n and µ̄ is the chemical potential of the nonuniform system, i.e. the Lagrange multiplier fixed by the normalization (2).
In the unitary limit, no characteristic length is set by the interatomic-potential since its s-wave scattering length a F diverges (a F → ±∞). The energy per particle of a uniform two-spin components Fermi gas at unitarity must then depend only onh, on the mass of fermions m, and on the only length characterizing the system, i.e. the average distance between particles ∝ n −1/3 [7]. It is usually written as:
\varepsilon(n) = \xi \, \frac{3}{5} \frac{\hbar^2}{2m} (3\pi^2)^{2/3} n^{2/3} , \qquad (4)
where ξ is a universal parameter which can be determined from ab-initio calculations. Notice that (ℏ²/2m)(3π²)^{2/3} n^{2/3} = ε_F, where ε_F is the Fermi energy of the ideal fermionic gas. Thus ξ is simply the ratio between the energy per particle of the uniform interacting system at unitarity and the corresponding energy in a non-interacting system. Monte Carlo calculations for a uniform unpolarized two-spin components Fermi gas suggest ξ ≃ 0.45; in particular, ξ = 0.42 ± 0.01 according to [9] and ξ = 0.44 ± 0.02 according to [10]. The bulk chemical potential associated to Eq. (4) is
\mu(n) = \xi \frac{\hbar^2}{2m} (3\pi^2)^{2/3} n^{2/3} . \qquad (5)
If the system is confined by a spherically-symmetric harmonic potential :
U(r) = \frac{1}{2} m \omega^2 r^2 , \qquad (6)
its density profile n(r) obtained from Eq. (3) is :
n(r) = n(0) \left[ 1 - \frac{r^2}{r_F^2} \right]^{3/2} , \qquad (7)
where n(0) = (2mµ̄)^{3/2}/(3π²ℏ³ξ^{3/2}), r_F = √(2µ̄/(mω²)) and µ̄ = ℏω√ξ (3N)^{1/3}. Obviously, the expression (7) for the TF density profile of the unitary Fermi gas in a harmonic potential coincides with that of an ideal Fermi gas [32], but its parameters are modified by the presence of ξ in the equation of state (4).
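As an illustration of these Thomas-Fermi expressions, the short sketch below evaluates µ̄, r_F and n(0) in harmonic-oscillator units (ℏ = m = ω = 1); the particle number and the value of ξ used in the example call are arbitrary choices.

```python
import numpy as np

def tf_parameters(N, xi=0.455):
    """Thomas-Fermi chemical potential, radius and central density
    for the trapped unitary gas, in units hbar = m = omega = 1."""
    mu_bar = np.sqrt(xi) * (3.0 * N)**(1.0 / 3.0)          # mu_bar = sqrt(xi) (3N)^{1/3}
    r_F = np.sqrt(2.0 * mu_bar)                            # r_F = sqrt(2 mu_bar / (m omega^2))
    n0 = (2.0 * mu_bar)**1.5 / (3.0 * np.pi**2 * xi**1.5)  # central density n(0)
    return mu_bar, r_F, n0

print(tf_parameters(30))
```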
As previously stressed the TF functional must be extended to take into account other characteristic lengths related to the spatial variations of the density, besides the average particle separation. As a consequence, the energy per particle must contain additional terms, which scale as the square of the inverse of these various lengths. For this reason, as a simple approximation, we add to the energy per particle of Eq. (4), a term
\lambda \frac{\hbar^2}{8m} \left( \frac{\nabla n}{n} \right)^2 = \lambda \frac{\hbar^2}{2m} \left( \frac{\nabla \sqrt{n}}{\sqrt{n}} \right)^2 , \qquad (8)
which may be seen as the first term in a gradient expansion. We notice that, according to the Kirzhnits expansion of the quantum kinetic operator in powers of ℏ [21], λ must take the value λ = 1/9 [33,34] for an ideal, noninteracting, Fermi gas. Historically, a term of this form, but with λ = 1, was introduced in a pioneering paper [20] by von Weizsäcker to treat surface effects in nuclei. Instead than trying to determine it from first principles, we consider λ as a phenomenological parameter accounting for the increase of kinetic energy due the entire spatial variation of the density. We remark that this attitude is adopted in many applications of Density Functional Theory to atomic and molecular physics where the gradient term (8) takes into account phenomenologically, through λ, all the possible corrections of a gradient expansion [28].
The new energy functional reads [21,33,34,35]
E = \int d^3r \, n(r) \left[ \varepsilon_g(n(r), \nabla n(r)) + U(r) \right] , \qquad (9)
where
\varepsilon_g(n, \nabla n) = \varepsilon(n) + \lambda \frac{\hbar^2}{8m} \frac{(\nabla n)^2}{n^2} \qquad (10)
is a generalized energy per particle which includes the ℏ-dependent gradient correction. We remark that in our ETF the constants ξ and λ are independent, implying that the ratio between the energies per particle corresponding to the two terms in a unitary system may differ from that in a noninteracting Fermi system. Equal ratios imply λ = ξ/9 [21].
By minimizing the energy functional (9) with the constraint (2) one gets the partial differential equation obeyed by the ground state density:
\left[ - \lambda \frac{\hbar^2}{2m} \nabla^2 + \mu(n(r)) + U(r) \right] \sqrt{n(r)} = \bar{\mu} \, \sqrt{n(r)} . \qquad (11)
For the study of hydrodynamics in Fermi superfluids Kim and Zubarev [12], and also Manini and Salasnich [13], used λ = 1 over the full BCS-BEC crossover. More recently we have suggested λ = 1/4 [17] on the basis of the correct relationship between phase and superfluid velocity [25]. This suggestion is in agreement with the theoretical estimate of Rupak and Schäfer [18], obtained from an epsilon expansion around d = 4 − ε spatial dimensions for a Fermi gas in the unitary regime. In particular, by using the expansion of Rupak and Schäfer [18] we obtain the ETF functional (9) with ξ = 0.475 and λ = 0.25.
Very recently two theoretical groups [26,27] have studied the two-components Fermi gas in the harmonic trap of Eq. (6) at unitarity by using Monte Carlo algorithms. Chang and Bertsch [26] have used a Greenfunction Monte Carlo (GFMC) method, while Blume, von Stecher and Greene [27] have applied a fixed-node diffusion Monte Carlo (FN-DMC) approach. They have obtained the ground-state energy for increasing values of the total number N of fermions.
Notice that by inserting the Thomas-Fermi profile (7) into Eq. (9) one gets the ground-state energy of N fermions in a harmonic potential with frequency ω in the form:
\frac{\bar{E}}{\hbar \omega} = \frac{\sqrt{\xi}}{4} (3N)^{4/3} + \frac{9 \lambda}{8 \sqrt{\xi}} (3N)^{2/3} . \qquad (12)
We will refer to this expression as the "beyond-TF" energy. The first term is the TF contribution to the ground-state energy, while the second term is the leading correction. Chang and Bertsch [26] and Blume, von Stecher and Greene [27] have determined ξ by fitting their MC data with Eq. (12) under the hypothesis that the relation
\lambda = \frac{\xi}{9} \qquad (13)
appropriate to a noninteracting system holds also in the unitary regime. They find respectively the values ξ = 0.50 [26] and ξ = 0.465 [27], which are compatible with previous determinations of the parameter ξ based on Monte Carlo calculations for the uniform system [9,10]. The corresponding value λ = ξ/9 ≃ 0.05 for the coefficient of the gradient is instead much smaller than previous suggestions [12,13,17,18,25].
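The sketch below shows how such a one-parameter fit can be reproduced: it evaluates the beyond-TF formula (12) with the constraint λ = ξ/9 of Eq. (13) and fits ξ to a user-supplied list of (N, E) pairs with scipy. No Monte Carlo energies are hard-coded here, so the data arrays are placeholders to be filled in.

```python
import numpy as np
from scipy.optimize import curve_fit

def beyond_tf_energy(N, xi, lam):
    """Eq. (12): energy in units of hbar*omega for the trapped unitary gas."""
    return np.sqrt(xi) * (3.0 * N)**(4.0 / 3.0) / 4.0 \
        + 9.0 * lam * (3.0 * N)**(2.0 / 3.0) / (8.0 * np.sqrt(xi))

def fit_xi(N_values, E_values):
    """One-parameter fit of xi under the constraint lam = xi/9 of Eq. (13)."""
    model = lambda N, xi: beyond_tf_energy(N, xi, xi / 9.0)
    popt, _ = curve_fit(model, np.asarray(N_values, float),
                        np.asarray(E_values, float), p0=[0.45])
    return popt[0]

# N_values, E_values = ...   # (N, E/hbar*omega) pairs from Monte Carlo data
# print(fit_xi(N_values, E_values))
```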
It is important to stress that the formula in Eq. (12) does not give the minimum of the ETF functional (9), since it corresponds to the density profile of Eq. (7) and not to the true ground-state density, solution of Eq. (11).
In the upper panel of Fig. 1 we plot ETF density profiles (solid lines) and compare them with the TF ones (dashed lines). The ETF density profiles have been determined by solving Eq. (11) with ξ = 0.44 and λ = 1/4 with a finite-difference numerical code [36]. As expected, there are visible differences, in particular near the surface. In the lower panel of Fig. 1 we plot the ground-state energy E for increasing values of the number N of fermions. Here the differences between TF (dashed line), beyond-TF (dot-dashed line) and ETF (solid line) are quite large. The figure clearly shows that the beyond-TF formula (12) is not very accurate. We stress here once again that the values of the parameters ξ and λ in the ETF functional should be universal, i.e. independent of the confining potential U(r) [24,28]. Moreover we consider λ as taking into account phenomenologically all possible corrections of a gradient expansion in the unitary regime and treat ξ and λ as independent parameters. To determine them, instead of using the inaccurate Eq. (12), we look for the values of the two parameters which lead to the best fit of the FN-DMC ground-state energies [27] for even N. After a systematic analysis we find ξ = 0.455 and λ = 0.13 as the best parameters in the unitary regime. It is important to observe that the value ξ = 0.455 of our best fit coincides with that obtained by Perali, Pieri and Strinati [38] by using the extended BCS theory with beyond-mean-field pairing fluctuations.
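A minimal way to solve Eq. (11) numerically is to evolve u = r√n by an imaginary-time (gradient-flow) iteration on a radial grid, renormalizing to N at every step; the sketch below does this in trap units (ℏ = m = ω = 1). It is only a hypothetical illustration of such a finite-difference scheme: the grid, time step and iteration count are arbitrary, and it is not the code of Ref. [36].

```python
import numpy as np

def etf_ground_state(N, xi=0.44, lam=0.25, rmax=8.0, npts=400,
                     dt=2e-4, n_iter=50000):
    """Imaginary-time relaxation of u(r) = r*sqrt(n(r)) towards the solution
    of Eq. (11) for a harmonic trap, in units hbar = m = omega = 1."""
    r = np.linspace(1e-3, rmax, npts)
    dr = r[1] - r[0]
    U = 0.5 * r**2                              # harmonic confinement
    u = r * np.exp(-r**2 / 2.0)                 # Gaussian starting guess

    def normalize(u):
        return u * np.sqrt(N / (4.0 * np.pi * np.trapz(u**2, r)))

    u = normalize(u)
    for _ in range(n_iter):
        n = (u / r)**2                          # local density
        mu_n = 0.5 * xi * (3.0 * np.pi**2 * n)**(2.0 / 3.0)   # bulk mu(n), Eq. (5)
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dr**2  # u''(r)
        u = u - dt * (-0.5 * lam * lap + (U + mu_n) * u)      # gradient-flow step
        u[0] = u[1] * r[0] / r[1]               # keep sqrt(n) regular as r -> 0
        u = normalize(u)
    n = (u / r)**2
    dsqrtn = np.gradient(u / r, dr)
    energy = 4.0 * np.pi * np.trapz(
        (0.5 * lam * dsqrtn**2
         + 0.6 * 0.5 * xi * (3.0 * np.pi**2)**(2.0 / 3.0) * n**(5.0 / 3.0)
         + U * n) * r**2, r)                    # ETF functional (9) on the converged profile
    return r, n, energy
```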
In Fig. 2 we plot the ground-state energy E of the Fermi gas under harmonic confinement, comparing different results: the FN-DMC data for an even number N of atoms of [27] (diamonds with error bar), the best ETF results, with λ = 0.13 and ξ = 0.455 (solid line), and the ETF results obtained by using the values ξ = 0.475 and λ = 0.25 coming from the ε-expansion [18] (dot-dashed line).
We remark that fixing ξ to the value ξ = 0.44 and looking for the best fitting λ we have found λ = 0.18. In this case the curve of the energy will be practically superimposed to the solid one of Fig. 2.
For the sake of completeness, in Table 1 we report the fixed-node DMC energies of Blume, von Stecher and Greene [27], our optimized ETF results with ξ = 0.455 and λ = 0.13, and also the SDFT calculations of Bulgac [37]. Remarkably, the ETF energies are slightly closer to the DMC values than the SDFT ones reported in Ref. [37]. The optimized ETF energies for an even number N of particles are obtained from our density functional (9), while the energies with an odd number N of particles are calculated taking into account the odd-even splitting, as discussed in the next section.
N    E_DMC    E_ETF    E_SDFT
III. ODD-EVEN SPLITTING
Up to now we have analyzed the unitary gas with an even number N of particles. The Monte Carlo calculations [26,27] show a clear odd-even effect, reminiscent of the behavior of the nuclear binding energy. In particular, denoting by E_N the ground-state energy of N particles in an isotropic harmonic trap, for odd N one finds
E_N = \frac{1}{2} \left( E_{N-1} + E_{N+1} \right) + \Delta_N , \qquad (14)
where the splitting Δ_N is always positive. This effect is related to pairing: given the superfluid cloud of even particles, the extra particle is localized where the energy gap is smallest, which is near the edge of the cloud [29,37]. On this basis, recently Son [29] has suggested that, for fermions at unitarity, confined by a harmonic potential with frequency ω, the odd-even splitting grows as
\Delta E_N = \gamma \, N^{1/9} \, \hbar \omega , \qquad (15)
where γ is an unknown dimensionless constant. After a systematic investigation we find that γ = 0.856 gives the odd-even splitting which best fits the entire FN-DMC data set, with both even and odd particle numbers [27]. In Fig. 3 we report the FN-DMC data (diamonds) and the optimized ETF results (solid line). The figure, which displays the zig-zag behavior of the energy E as a function of N, shows that the optimized ETF functional plus the odd-even correction (15) (ξ = 0.455, λ = 0.13, γ = 0.856) is extremely good in reproducing all FN-DMC data.
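For completeness, the tiny helper below combines Eqs. (14) and (15) to predict the energy of an odd-N system from its even neighbours; the energies are in units of ℏω, and the even-N inputs would come, for instance, from the ETF calculation sketched above.

```python
def odd_N_energy(E_below, E_above, N, gamma=0.856):
    """Eq. (14) with the splitting of Eq. (15); energies in units of hbar*omega."""
    return 0.5 * (E_below + E_above) + gamma * N**(1.0 / 9.0)
```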
IV. GENERALIZED HYDRODYNAMICS
Let us now analyze the effect of the gradient term (8) on the dynamics of the unitary Fermi gas. At zero temperature the low-energy collective dynamics of this fermionic gas can be described by the equations of generalized hydrodynamics [35,39,40], where the Hamiltonian of the classical hydrodynamics [40] is modified by including gradient corrections. In our case the generalized Hamiltonian reads:
H = \int d^3r \, n \left[ \frac{1}{2} m v^2 + \varepsilon_g(n, \nabla n) + U(r) \right] , \qquad (16)
where the local density n(r, t) and the local velocity v(r, t) are the hydrodynamics variables [39,40]. By writing the Poisson brackets [40] of the hydrodynamics variables with the Hamiltonian (16), one gets the generalized hydrodynamics equations:
\frac{\partial n}{\partial t} + \nabla \cdot (n v) = 0 , \qquad (17)

m \left( \frac{\partial}{\partial t} + v \cdot \nabla \right) v + \nabla \left[ \mu_g(n, \nabla n) + U(r) \right] = 0 , \qquad (18)

where

\mu_g(n, \nabla n) = \frac{\partial \left[ n \, \varepsilon_g(n, \nabla n) \right]}{\partial n} = \mu(n) - \lambda \frac{\hbar^2}{2m} \frac{\nabla^2 \sqrt{n}}{\sqrt{n}} . \qquad (19)

Eq. (17) is the continuity equation, while Eq. (18) is the conservation of linear momentum. These equations are valid for the inviscid unitary Fermi gas at zero temperature. If the unitary Fermi gas is superfluid then it is not only inviscid but it is also irrotational, i.e.

\nabla \times v = 0 . \qquad (20)
By using this condition and the identity
(v \cdot \nabla) v = \nabla \left( \frac{1}{2} v^2 \right) - v \times (\nabla \times v) , \qquad (21)
Eq. (18) can be simplified into:
m \frac{\partial v}{\partial t} + \nabla \left[ \frac{1}{2} m v^2 + \mu_g(n, \nabla n) + U(r) \right] = 0 . \qquad (22)
The low-energy collective dynamics of the superfluid Fermi gas in the BCS-BEC crossover is usually described by the equations of classical superfluid hydrodynamics, which are the time-dependent version of the local density approximation with the Thomas-Fermi energy functional [7]. Equations (17) and (22) are a simple generalization of classical superfluid hydrodynamics which takes into account surface effects. The gradient term, i.e. the quantum correction, is necessary in a superfluid to avoid unphysical phenomena like the formation of wave front singularities in the dynamics of dispersive shock waves [41].
Combining Eqs. (17) and (22) one finds the dispersion relation of low-energy collective modes of the uniform unitary Fermi gas in the form
\frac{\Omega}{q} = \sqrt{\xi} \, c_0 \sqrt{ 1 + \frac{\lambda}{\xi} \left( \frac{\hbar q}{2 m c_0} \right)^2 } , \qquad (23)
where Ω is the collective frequency, q is the wave number and c_0 is the speed of sound in a uniform, noninteracting Fermi gas.
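A few lines of Python suffice to evaluate this dispersion relation and see where the gradient term starts to matter; the grid of wave numbers and the unit choice c_0 = 1 below are arbitrary.

```python
import numpy as np

def collective_frequency(q, c0=1.0, xi=0.455, lam=0.13, hbar=1.0, m=1.0):
    """Eq. (23): Omega(q) for the uniform unitary gas (phonon-like at small q)."""
    return q * np.sqrt(xi) * c0 * np.sqrt(1.0 + (lam / xi) * (hbar * q / (2.0 * m * c0))**2)

q = np.linspace(0.0, 5.0, 6)
print(collective_frequency(q))   # deviation from the linear phonon branch grows with q
```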
For an irrotational fluid it is possible to write down [40,42] a Lagrangian by using as dynamical variables the scalar potential θ(r, t) of the velocity v(r, t) and the local density n(r, t). For a fermionic superfluid one has:
v(r, t) = \frac{\hbar}{2m} \nabla \theta(r, t) , \qquad (24)
where θ(r, t) is the phase of the condensate wavefunction of Cooper pairs [7]. In our case, the familiar Lagrangian of the Fermi superfluid [16,42] must be modified by including the gradient correction. The generalized Lagrangian density then reads:
{\cal L} = -n \left[ \frac{\hbar}{2} \dot{\theta} + \frac{\hbar^2}{8m} (\nabla \theta)^2 + U(r) + \varepsilon_g(n, \nabla n) \right] . \qquad (25)
The Euler-Lagrange equations of this Lagrangian with respect to n and θ give the generalized hydrodynamics equations of superfluids (17) and (22). We observe that the generalized hydrodynamics equations (17) and (22) can be formally written in terms of a nonlinear Schrödinger equation of the Guerra-Pusterla type [30], which is Galilei-invariant [31]. In fact, by introducing the complex wave function

\Psi(r, t) = \sqrt{n(r, t)} \, e^{i \theta(r, t)} , \qquad (26)
which is the zero-temperature Ginzburg-Landau order parameter normalized to the total number N of superfluid atoms [17,25], and taking into account the correct phase-velocity relationship given by Eq. (24), the equation
i\hbar \frac{\partial}{\partial t} \Psi = \left[ -\frac{\hbar^2}{4m} \nabla^2 + 2 U(r) + 2 \mu(|\Psi|^2) + (1 - 4\lambda) \frac{\hbar^2}{4m} \frac{\nabla^2 |\Psi|}{|\Psi|} \right] \Psi , \qquad (27)
is strictly equivalent to Eqs. (17) and (22). Notice that in the stationary case where Ψ(r, t) = √(n(r)) e^{−i2µ̄t/ℏ}, Eq. (27) becomes exactly Eq. (11). Remarkably, only if λ = 1/4 does the equation acquire the familiar structure of a nonlinear Schrödinger equation such as the Gross-Pitaevskii equation, which describes the two-spin-component Fermi system in the extreme BEC regime [43]. From the linearization of Eq. (27) one finds for the uniform Fermi gas the Bogoliubov excitations given precisely by Eq. (23).
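As a purely illustrative aside, a first-order split-step Fourier scheme can be used to integrate equations of this form; the sketch below is a one-dimensional toy (ℏ = m = 1) that only shows how the (1 − 4λ) quantum-pressure term enters as a local potential built from |Ψ|, and it is not the three-dimensional calculation discussed here. The grid, time step and the use of the bulk equation of state (5) in one dimension are arbitrary choices made for illustration only.

```python
import numpy as np

def split_step(psi, x, dt, nsteps, U, xi=0.455, lam=0.13):
    """First-order split-step Fourier integration of a 1D toy version of Eq. (27)."""
    dx = x[1] - x[0]
    k = 2.0 * np.pi * np.fft.fftfreq(x.size, d=dx)
    kin = np.exp(-1j * dt * k**2 / 4.0)          # kinetic term hbar^2 k^2 / (4m)
    for _ in range(nsteps):
        amp = np.abs(psi)
        lap_amp = np.real(np.fft.ifft(-k**2 * np.fft.fft(amp)))   # spectral Laplacian of |Psi|
        mu = 0.5 * xi * (3.0 * np.pi**2 * amp**2)**(2.0 / 3.0)    # bulk mu(n), Eq. (5)
        V = (2.0 * U + 2.0 * mu
             + (1.0 - 4.0 * lam) * 0.25 * lap_amp / np.maximum(amp, 1e-12))
        psi = np.exp(-1j * dt * V) * psi          # local (potential) half of the splitting
        psi = np.fft.ifft(kin * np.fft.fft(psi))  # kinetic half of the splitting
    return psi
```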
Finally, we stress that Eq. (27) can be seen as the Euler-Lagrange equation of the following Galilei-invariant Lagrangian density
{\cal L} = \Psi^* \left[ i \frac{\hbar}{2} \frac{\partial}{\partial t} + \frac{\hbar^2}{8m} \nabla^2 - U(r) \right] \Psi - \varepsilon_g\!\left( |\Psi|^2, \nabla (|\Psi|^2) \right) |\Psi|^2 + \frac{\hbar^2}{8m} \left( \nabla |\Psi| \right)^2 , \qquad (28)
that is equivalent to the generalized Lagrangian of Eq. (25).
V. CONCLUSIONS
We have obtained the value of the coefficient λ of the gradient correction λℏ²(∇n)²/(8mn) for the extended Thomas-Fermi density functional in the unitary regime. By fitting diffusion Monte Carlo data with an even number N of particles we have found λ = 0.13. In addition, we have determined the coefficient ξ of the energy density ξ(3/5)nε_F, finding ξ = 0.455. Fixing ξ to the value ξ = 0.44 proposed in [10] and looking for the best fitting λ we have found instead λ = 0.18. We stress that in our energy functional the gradient term takes into account phenomenologically all corrections of a gradient expansion. With our functional one can easily get the ground-state properties (energy and density) for large as well as for small numbers of fermions; its main limitation is that it cannot account for the shell effects in the density profile. Moreover, we have shown that it is possible to take into account the odd-even splitting of the ground-state energy of the unitary gas in a harmonic trap of frequency ω by considering a correction proportional to N^{1/9}ℏω as suggested by Son [29], where the constant of proportionality is found to be γ = 0.856. Finally, we have analyzed the effect of the gradient term in the dynamics of the unitary Fermi gas by introducing generalized hydrodynamics equations, which can be written for superfluid motion in the form of a Galilei-invariant nonlinear Schrödinger equation. As a final remark, we remind the reader that the values of ξ and λ we have found are independent of the external potential and therefore our generalized energy functional can be used to investigate the unitary superfluid Fermi gas in various trapping configurations. Obviously, in the limit of large numbers of particles the gradient term and the exact value of λ are less important since the dominant term becomes the usual Thomas-Fermi one. More extensive Monte Carlo calculations with a larger number of particles will certainly be useful to obtain a more accurate determination of the value of ξ.
This work has been supported by Fondazione Cariparo. L.S. thanks Sadhan K. Adhikari, Boris A. Malomed and Thomas M. Schäfer for useful suggestions.
FIG. 1: (Color online). Unitary Fermi gas under harmonic confinement of frequency ω. Upper panel: density profiles n(r) for N = 10 and N = 30 fermions obtained with ETF (solid lines) and TF (dashed lines). Lower panel: ground-state energy E vs N with ETF (solid line), beyond-TF formula (dot-dashed line) and TF (dashed line). In all calculations: universal parameter ξ = 0.44 and gradient coefficient λ = 1/4. Energy in units of ℏω and lengths in units of a_H = √(ℏ/(mω)).
FIG. 2: (Color online). Ground-state energy E for the unitary Fermi gas of N atoms under harmonic confinement of frequency ω. Symbols: DMC data with even N [27]; solid line: ETF results with best fit (ξ = 0.455 and λ = 0.13); dot-dashed line: ETF results obtained from ε-expansion [18] (ξ = 0.475 and λ = 0.25). Energy in units of ℏω.
FIG. 3: (Color online). Ground-state energy E for the unitary Fermi gas of N atoms under harmonic confinement of frequency ω. Diamonds: DMC data with both even and odd N [27]; solid line: optimized ETF results (ξ = 0.455, λ = 0.13, γ = 0.856). Energy in units of ℏω.
Eq. (17) is the continuity equation, while Eq. (18) expresses the conservation of linear momentum. These equations are valid for the inviscid unitary Fermi gas at zero temperature. If the unitary Fermi gas is superfluid then it is not only inviscid but also irrotational, i.e. its velocity field has vanishing curl, as expressed by Eq. (19).
M. Greiner, C.A. Regal, and D.S. Jin, Nature (London) 426, 537 (2003).
C.A. Regal, M. Greiner, and D.S. Jin, Phys. Rev. Lett. 92, 040403 (2004).
J. Kinast, S.L. Hemmer, M.E. Gehm, A. Turlapov, and J.E. Thomas, Phys. Rev. Lett. 92, 150402 (2004).
M.W. Zwierlein et al., Phys. Rev. Lett. 92, 120403 (2004);
M.W. Zwierlein, C.H. Schunck, C.A. Stan, S.M.F. Raupach, and W. Ketterle, Phys. Rev. Lett. 94, 180401 (2005).
C. Chin et al., Science 305, 1128 (2004);
M. Bartenstein et al., Phys. Rev. Lett. 92, 203201 (2004).
M.E. Gehm, S.L. Hemmer, S.R. Granade, K.M. O'Hara, and J.E. Thomas, Phys. Rev. A 68, 011401(R) (2003).
S. Giorgini, L.P. Pitaevskii, and S. Stringari, arXiv:0706.3360.
G.A. Baker, Jr., Phys. Rev. C 60, 054311 (1999); G.A. Baker, Jr., Int. J. Mod. Phys. B 15, 1314 (2001);
H. Heiselberg, Phys. Rev. A 63, 043606 (2001).
G.E. Astrakharchik, J. Boronat, J. Casulleras, and S. Giorgini, Phys. Rev. Lett. 93, 200404 (2004);
J. Carlson, S.-Y. Chang, V.R. Pandharipande, and K.E. Schmidt, ibid. 91, 050401 (2003);
S.Y. Chang, V.R. Pandharipande, J. Carlson, and K.E. Schmidt, Phys. Rev. A 70, 043602 (2004).
A. Bulgac and Y. Yu, Phys. Rev. Lett. 91, 190404 (2003);
Int. J. Mod. Phys. E 13, 147 (2004).
Y.E. Kim and A.L. Zubarev, Phys. Rev. A 70, 033612 (2004); 72, 011603(R) (2005);
Y.E. Kim and A.L. Zubarev, Phys. Lett. A 397, 327 (2004);
Y.E. Kim and A.L. Zubarev, J. Phys. B 38, L243 (2005).
N. Manini and L. Salasnich, Phys. Rev. A 71, 033625 (2005);
G. Diana, N. Manini, and L. Salasnich, Phys. Rev. A 73, 065601 (2006).
T. Papenbrock, Phys. Rev. A 72, 041603(R) (2005);
A. Bhattacharyya and T. Papenbrock, Phys. Rev. A 74, 041602(R) (2006).
B.P. van Zyl, D.A.W. Hutchinson, and M. Need, Phys. Rev. A 76, 025601 (2007).
Y. Nishida and D.T. Son, Phys. Rev. Lett. 97, 050403 (2006);
Y. Nishida and D.T. Son, Phys. Rev. A 75, 063617 (2007);
D.T. Son and M. Wingate, Ann. Phys. 321, 197 (2006).
L. Salasnich, N. Manini, and F. Toigo, Phys. Rev. A 77, 043609 (2008).
G. Rupak and T. Schäfer, e-preprint arXiv:0804.2678v2.
L.N. Oliveira, E.K.U. Gross, and W. Kohn, Phys. Rev. Lett. 60, 2430 (1988).
C.F. von Weizsäcker, Z. Phys. 96, 431 (1935).
D.A. Kirzhnits, Sov. Phys. JETP 5, 64 (1957);
D.A. Kirzhnits, Field Theoretical Methods in Many-Body Systems (Pergamon Press, London, 1967).
It has been recently suggested in Ref. [15] that for a strictly 2D unitary Fermi gas the gradient corrections are not needed: the Thomas-Fermi energy functional should be exact in 2D.
E. Lipparini, Modern Many-Particle Physics: Atomic Gases, Quantum Dots and Quantum Fluids (World Scientific, New Jersey, 2003).
P. Ring and P. Schuck, The Nuclear Many-Body Problem (Springer, Berlin, 2005).
L. Salasnich, e-preprint arXiv:0804.1277.
S.-Y. Chang and G.F. Bertsch, Phys. Rev. A 76, 021603(R) (2007).
D. Blume, J. von Stecher, and C.H. Greene, Phys. Rev. Lett. 99, 233201 (2007);
J. von Stecher, C.H. Greene, and D. Blume, Phys. Rev. A 76, 053613 (2007).
R.G. Parr and W. Yang, Density-Functional Theory of Atoms and Molecules (Oxford Univ. Press, Oxford, 1989).
T.D. Son, e-preprint arXiv:0707.1851.
F. Guerra and M. Pusterla, Lett. Nuovo Cim. 34, 351 (1982).
H.-D. Doebner and G.A. Goldin, Phys. Rev. A 54, 3764 (1996).
L. Salasnich, J. Math. Phys. 41, 8016 (2000).
A. Holas, P.M. Kozlowski, and N.H. March, J. Phys. A: Math. Gen. 24, 4249 (1991).
L. Salasnich, J. Phys. A: Math. Theor. 40, 9987 (2007).
E. Zaremba and H.C. Tso, Phys. Rev. B 49, 8147 (1994).
E. Cerboneschi, R. Mannella, E. Arimondo, and L. Salasnich, Phys. Lett. A 249, 245 (1998).
A. Bulgac, Phys. Rev. A 76, 040502(R) (2007).
A. Perali, P. Pieri, and G.C. Strinati, Phys. Rev. Lett. 93, 100404 (2004).
N.H. March and M.P. Tosi, Proc. R. Soc. Lond. A 330, 373 (1972).
G.E. Volovik, in Proceedings of the XI Marcel Grossmann Meeting on General Relativity, pp. 1451-1470, edited by H. Kleinert, R.T. Jantzen and R. Ruffini (World Scientific, Singapore, 2008); e-preprint arXiv:gr-qc/0612134.
L. Salasnich, N. Manini, F. Bonelli, M. Korbman, and A. Parola, Phys. Rev. A 75, 043616 (2007).
G. Orso, L.P. Pitaevskii, and S. Stringari, Phys. Rev. A 77, 033611 (2008).
S. De Palo, C. Castellani, C. Di Castro, and B.K. Chakraverty, Phys. Rev. B 60, 564 (1999);
P. Pieri and G.C. Strinati, Phys. Rev. Lett. 91, 030401 (2003).
|
[] |
[
"Learning from the Pandemic: the Future of Meetings in HEP and Beyond",
"Learning from the Pandemic: the Future of Meetings in HEP and Beyond"
] |
[
"Mark S Neubauer \nUniversity of Illinois at Urbana-Champaign\nChampaignILUSA\n",
"Todd Adams \nFlorida State University\nTallahasseeFLUSA\n",
"Jennifer Adelman-Mccarthy \nFermi National Accelerator Laboratory\nBataviaILUSA\n",
"Gabriele Benelli \nBrown University\nProvidenceRIUSA\n",
"Tulika Bose \nUniversity of Wisconsin-Madison\nMadisonWIUSA\n",
"David Britton \nUniversity of Glasgow\nGlasgowUK\n",
"Pat Burchat \nStanford University\nStanfordCAUSA\n",
"Joel Butler \nFermi National Accelerator Laboratory\nBataviaILUSA\n",
"Timothy A Cartwright \nUniversity of Wisconsin-Madison\nMadisonWIUSA\n",
"Tomáš Davídek \nCharles University\nPragueCzech Republic\n",
"Jacques Dumarchez \nLPNHE -Sorbonne\nUniversité -Paris\nFrance\n",
"Peter Elmer \nPrinceton University\nPrincetonNJUSA\n",
"Matthew Feickert \nUniversity of Illinois at Urbana-Champaign\nChampaignILUSA\n",
"Ben Galewsky \nUniversity of Illinois at Urbana-Champaign\nChampaignILUSA\n",
"Mandeep Gill \nStanford University\nStanfordCAUSA\n",
"Maciej Gladki \nEuropean Organization for Nuclear Research (CERN)\nMeyrinSwitzerland\n",
"Aman Goel \nUniversity of Illinois at Urbana-Champaign\nChampaignILUSA\n",
"Jonathan E Guyer \nNational Institute of Standards and Technology\nGaithersburgMDUSA\n",
"Bo Jayatilaka \nFermi National Accelerator Laboratory\nBataviaILUSA\n",
"Brendan Kiburg \nFermi National Accelerator Laboratory\nBataviaILUSA\n",
"Benjamin Krikler \nUniversity of Bristol\nBristolUK\n",
"David Lange \nPrinceton University\nPrincetonNJUSA\n",
"Claire Lee \nFermi National Accelerator Laboratory\nBataviaILUSA\n",
"Nick Manganelli \nUniversity of California at Riverside\nRiversideCAUSA\n",
"Giovanni Marchiori \nAPC Paris\nFrance\n",
"Meenakshi Narain \nBrown University\nProvidenceRIUSA\n",
"Ianna Osborne \nPrinceton University\nPrincetonNJUSA\n",
"Jim Pivarski \nPrinceton University\nPrincetonNJUSA\n",
"Harrison Prosper \nFlorida State University\nTallahasseeFLUSA\n",
"Graeme A Stewart \nEuropean Organization for Nuclear Research (CERN)\nMeyrinSwitzerland\n",
"Eduardo Rodrigues \nOliver Lodge Laboratory\nUniversity of Liverpool\nLiverpoolUK\n",
"Roberto Salerno \nLLR -Ecole Polytechnique\nPalaiseauFrance\n",
"Marguerite Tonjes \nUniversity of Illinois at Chicago\nChicagoILUSA\n",
"Jaroslav Trnka \nUniversity of California at Davis\nDavisCAUSA\n",
"Vera Varanda \nARISF / LPNHE\nParisFrance\n",
"Vassil Vassilev \nPrinceton University\nPrincetonNJUSA\n",
"Gordon T Watts \nUniversity of Washington\nSeattleWAUSA\n",
"Sam Zeller \nFermi National Accelerator Laboratory\nBataviaILUSA\n",
"Yuanyuan Zhang \nFermi National Accelerator Laboratory\nBataviaILUSA\n",
"\nUniversity of Delhi\nNew DelhiIndia\n"
] |
[
"University of Illinois at Urbana-Champaign\nChampaignILUSA",
"Florida State University\nTallahasseeFLUSA",
"Fermi National Accelerator Laboratory\nBataviaILUSA",
"Brown University\nProvidenceRIUSA",
"University of Wisconsin-Madison\nMadisonWIUSA",
"University of Glasgow\nGlasgowUK",
"Stanford University\nStanfordCAUSA",
"Fermi National Accelerator Laboratory\nBataviaILUSA",
"University of Wisconsin-Madison\nMadisonWIUSA",
"Charles University\nPragueCzech Republic",
"LPNHE -Sorbonne\nUniversité -Paris\nFrance",
"Princeton University\nPrincetonNJUSA",
"University of Illinois at Urbana-Champaign\nChampaignILUSA",
"University of Illinois at Urbana-Champaign\nChampaignILUSA",
"Stanford University\nStanfordCAUSA",
"European Organization for Nuclear Research (CERN)\nMeyrinSwitzerland",
"University of Illinois at Urbana-Champaign\nChampaignILUSA",
"National Institute of Standards and Technology\nGaithersburgMDUSA",
"Fermi National Accelerator Laboratory\nBataviaILUSA",
"Fermi National Accelerator Laboratory\nBataviaILUSA",
"University of Bristol\nBristolUK",
"Princeton University\nPrincetonNJUSA",
"Fermi National Accelerator Laboratory\nBataviaILUSA",
"University of California at Riverside\nRiversideCAUSA",
"APC Paris\nFrance",
"Brown University\nProvidenceRIUSA",
"Princeton University\nPrincetonNJUSA",
"Princeton University\nPrincetonNJUSA",
"Florida State University\nTallahasseeFLUSA",
"European Organization for Nuclear Research (CERN)\nMeyrinSwitzerland",
"Oliver Lodge Laboratory\nUniversity of Liverpool\nLiverpoolUK",
"LLR -Ecole Polytechnique\nPalaiseauFrance",
"University of Illinois at Chicago\nChicagoILUSA",
"University of California at Davis\nDavisCAUSA",
"ARISF / LPNHE\nParisFrance",
"Princeton University\nPrincetonNJUSA",
"University of Washington\nSeattleWAUSA",
"Fermi National Accelerator Laboratory\nBataviaILUSA",
"Fermi National Accelerator Laboratory\nBataviaILUSA",
"University of Delhi\nNew DelhiIndia"
] |
[] |
The COVID-19 pandemic has by-and-large prevented in-person meetings since March 2020. While the increasing deployment of effective vaccines around the world is a very positive development, the timeline and pathway to "normality" is uncertain and the "new normal" we will settle into is anyone's guess. Particle physics, like many other scientific fields, has more than a year of experience in holding virtual meetings, workshops, and conferences. A great deal of experimentation and innovation to explore how to execute these meetings effectively has occurred. Therefore, it is an appropriate time to take stock of what we as a community learned from running virtual meetings and discuss possible strategies for the future. Continuing to develop effective strategies for meetings with a virtual component is likely to be important for reducing the carbon footprint of our research activities, while also enabling greater diversity and inclusion for participation.This report summarizes a virtual two-day workshop on Virtual Meetings held May 5-6, 2021 which brought together experts from both inside and outside of high-energy physics to share their experiences and practices with organizing and executing virtual workshops, and to develop possible strategies for future meetings as we begin to emerge from the COVID-19 pandemic. This report outlines some of the practices and tools that have worked well which we hope will serve as a valuable resource for future virtual meeting organizers in all scientific fields.3/55Virtual Meetings WorkshopWhile the increasing deployment of effective vaccines around the world is a very positive development, the timeline and pathway to "normality" is uncertain and the "new normal" we will settle into is anyone's guess. Particle physics, like many other scientific fields, has more than a year of experience in holding virtual meetings, workshops, and conferences. A great deal of experimentation and innovation to explore how to execute these meetings effectively has occurred. The IRIS-HEP team has substantial experience with (co-)organizing and sponsoring workshops, conferences, and meetings over the years through the process of establishing the institute and its role as an intellectual hub for software R&D.For these reasons, it was viewed as an opportune time to take stock of what we as a community have learned from running virtual meetings and discuss effective strategies for the future through this virtual meetings workshop. Continuing to develop effective strategies for meetings with a virtual component is likely to be important for reducing the carbon footprint of our research activities, while also enabling greater diversity and inclusion for participation.The workshop on Virtual Meetings was held May 5 to 6, 2021. The aim for the workshop was to bring together experts from both inside and outside of particle physics to share their experiences and practices with organizing and executing virtual workshops, and to develop possible strategies for future meetings as we begin to emerge from the challenging conditions of the COVID-19 pandemic.Attendees participated remotely using a variety of videoconferencing and collaborative tools. Eighty-nine people registered for the workshop, though as a virtual workshop, attendees came and went at various times. We estimate that there were about 70 to 80 participants in the workshop at any one time.The timeline of the workshop presentations and activities are summarized inTable 1. 4/55 5/55 6/55 7/55 8/55 10/55 11/55 12/55 13/55 14/55 16/55
| null |
[
"https://arxiv.org/pdf/2106.15783v1.pdf"
] | 235,683,084 |
2106.15783
|
f383a36db6111128b1cc216b1c357e207f2fa29e
|
Learning from the Pandemic: the Future of Meetings in HEP and Beyond
Mark S Neubauer
University of Illinois at Urbana-Champaign
ChampaignILUSA
Todd Adams
Florida State University
TallahasseeFLUSA
Jennifer Adelman-Mccarthy
Fermi National Accelerator Laboratory
BataviaILUSA
Gabriele Benelli
Brown University
ProvidenceRIUSA
Tulika Bose
University of Wisconsin-Madison
MadisonWIUSA
David Britton
University of Glasgow
GlasgowUK
Pat Burchat
Stanford University
StanfordCAUSA
Joel Butler
Fermi National Accelerator Laboratory
BataviaILUSA
Timothy A Cartwright
University of Wisconsin-Madison
MadisonWIUSA
Tomáš Davídek
Charles University
PragueCzech Republic
Jacques Dumarchez
LPNHE -Sorbonne
Université -Paris
France
Peter Elmer
Princeton University
PrincetonNJUSA
Matthew Feickert
University of Illinois at Urbana-Champaign
ChampaignILUSA
Ben Galewsky
University of Illinois at Urbana-Champaign
ChampaignILUSA
Mandeep Gill
Stanford University
StanfordCAUSA
Maciej Gladki
European Organization for Nuclear Research (CERN)
MeyrinSwitzerland
Aman Goel
University of Illinois at Urbana-Champaign
ChampaignILUSA
Jonathan E Guyer
National Institute of Standards and Technology
GaithersburgMDUSA
Bo Jayatilaka
Fermi National Accelerator Laboratory
BataviaILUSA
Brendan Kiburg
Fermi National Accelerator Laboratory
BataviaILUSA
Benjamin Krikler
University of Bristol
BristolUK
David Lange
Princeton University
PrincetonNJUSA
Claire Lee
Fermi National Accelerator Laboratory
BataviaILUSA
Nick Manganelli
University of California at Riverside
RiversideCAUSA
Giovanni Marchiori
APC Paris
France
Meenakshi Narain
Brown University
ProvidenceRIUSA
Ianna Osborne
Princeton University
PrincetonNJUSA
Jim Pivarski
Princeton University
PrincetonNJUSA
Harrison Prosper
Florida State University
TallahasseeFLUSA
Graeme A Stewart
European Organization for Nuclear Research (CERN)
MeyrinSwitzerland
Eduardo Rodrigues
Oliver Lodge Laboratory
University of Liverpool
LiverpoolUK
Roberto Salerno
LLR -Ecole Polytechnique
PalaiseauFrance
Marguerite Tonjes
University of Illinois at Chicago
ChicagoILUSA
Jaroslav Trnka
University of California at Davis
DavisCAUSA
Vera Varanda
ARISF / LPNHE
ParisFrance
Vassil Vassilev
Princeton University
PrincetonNJUSA
Gordon T Watts
University of Washington
SeattleWAUSA
Sam Zeller
Fermi National Accelerator Laboratory
BataviaILUSA
Yuanyuan Zhang
Fermi National Accelerator Laboratory
BataviaILUSA
University of Delhi
New DelhiIndia
Learning from the Pandemic: the Future of Meetings in HEP and Beyond
IRIS-HEP Blueprint Workshop Summary
The COVID-19 pandemic has by-and-large prevented in-person meetings since March 2020. While the increasing deployment of effective vaccines around the world is a very positive development, the timeline and pathway to "normality" is uncertain and the "new normal" we will settle into is anyone's guess. Particle physics, like many other scientific fields, has more than a year of experience in holding virtual meetings, workshops, and conferences. A great deal of experimentation and innovation to explore how to execute these meetings effectively has occurred. Therefore, it is an appropriate time to take stock of what we as a community learned from running virtual meetings and discuss possible strategies for the future. Continuing to develop effective strategies for meetings with a virtual component is likely to be important for reducing the carbon footprint of our research activities, while also enabling greater diversity and inclusion for participation.This report summarizes a virtual two-day workshop on Virtual Meetings held May 5-6, 2021 which brought together experts from both inside and outside of high-energy physics to share their experiences and practices with organizing and executing virtual workshops, and to develop possible strategies for future meetings as we begin to emerge from the COVID-19 pandemic. This report outlines some of the practices and tools that have worked well which we hope will serve as a valuable resource for future virtual meeting organizers in all scientific fields.3/55Virtual Meetings WorkshopWhile the increasing deployment of effective vaccines around the world is a very positive development, the timeline and pathway to "normality" is uncertain and the "new normal" we will settle into is anyone's guess. Particle physics, like many other scientific fields, has more than a year of experience in holding virtual meetings, workshops, and conferences. A great deal of experimentation and innovation to explore how to execute these meetings effectively has occurred. The IRIS-HEP team has substantial experience with (co-)organizing and sponsoring workshops, conferences, and meetings over the years through the process of establishing the institute and its role as an intellectual hub for software R&D.For these reasons, it was viewed as an opportune time to take stock of what we as a community have learned from running virtual meetings and discuss effective strategies for the future through this virtual meetings workshop. Continuing to develop effective strategies for meetings with a virtual component is likely to be important for reducing the carbon footprint of our research activities, while also enabling greater diversity and inclusion for participation.The workshop on Virtual Meetings was held May 5 to 6, 2021. The aim for the workshop was to bring together experts from both inside and outside of particle physics to share their experiences and practices with organizing and executing virtual workshops, and to develop possible strategies for future meetings as we begin to emerge from the challenging conditions of the COVID-19 pandemic.Attendees participated remotely using a variety of videoconferencing and collaborative tools. Eighty-nine people registered for the workshop, though as a virtual workshop, attendees came and went at various times. We estimate that there were about 70 to 80 participants in the workshop at any one time.The timeline of the workshop presentations and activities are summarized inTable 1. 4/55 5/55 6/55 7/55 8/55 10/55 11/55 12/55 13/55 14/55 16/55
Introduction
The COVID-19 pandemic caused by the SARS-CoV-2 virus has had a devastating effect on human health and well-being, and on the global economy. First identified in Wuhan, China in December 2019, SARS-CoV-2 spread rapidly among the human population, leading to the World Health Organization (WHO) declaring COVID-19 a global health emergency on January 31, 2020. The continued spread around the world led to global travel restrictions in February 2020 and the WHO officially declaring COVID-19 a pandemic on March 11, 2020.
As the alarming scale of the global health crisis became apparent in March 2020, workshop organizers began to cancel or postpone planned in-person events. Many organizers scrambled to re-purpose their workshops for a virtual format where attendees participated remotely through videoconferencing technologies such as Zoom. In some cases this re-purposing was done in short order rather than cancelling the event. The Connecting-the-Dots Workshop held in April 2020 is one such example and is described in Section 3.5. This report summarizes a two-day workshop on the topic of Virtual Meetings held over the period of May 5 to 6, 2021. The workshop is part of the "Blueprint" process of the NSF-funded Institute for Research and Innovation in Software for High-Energy Physics (IRIS-HEP) [1].
The report is organized to follow the flow of the workshop. In Section 2, the IRIS-HEP Blueprint process and this workshop are introduced, followed by an overview of pre-COVID-19 meeting organization and the ways that COVID-19 rapidly changed the status quo. Section 3 provides a summary of the talks given during the workshop by organizers of virtual workshops since April 2020. At the end of Section 3, common themes and key findings from these workshop experiences are presented. In Section 4 we outline some of the practices and tools that have worked well (or have not, but seem like they should have) for virtual workshops. In Section 5 we present important considerations for diversity, inclusion and accessibility for virtual workshops. Best practices for virtual workshops at each of the stages of preparation and execution are presented in Section 6. The final section (§7) synthesizes what we have learned about virtual workshops over the last year and presents some ideas for the organization of future meetings in HEP and beyond, such as the "hybrid with hubs" approach.
Overview
IRIS-HEP and the Blueprint Process
The goal of IRIS-HEP is to address key computational and data science challenges of the High-Luminosity Large Hadron Collider (HL-LHC) experiments and other HEP experiments in the 2020s. IRIS-HEP resulted from a 2-year community-wide effort involving 18 workshops and 8 position papers, most notably a Community White Paper [2] and a Strategic Plan [3]. The institute is an active center for software R&D, functions as an intellectual hub for the larger community-wide software R&D efforts, and aims to transform the operational services required to ensure the success of the HL-LHC scientific program.
The IRIS-HEP Blueprint activity is designed to inform the development and evolution of the IRIS-HEP strategic vision and to build (or strengthen) partnerships among communities driven by innovation in software and computing. The Blueprint process includes a series of workshops that bring together IRIS-HEP team members, key stakeholders, and domain experts from disciplines of importance to the Institute's mission. This Blueprint meeting on the topic of Virtual Meetings is one of a series of such workshops, several of which were held in 2020.

The Blueprint workshop discussions are captured and inform key outcomes which are summarized in a short, publicly available report, such as this one. The first day of the workshop was held using the Zoom videoconferencing and chat platform. It was primarily focused on experiences with virtual workshops, presented through talks by representatives of their organizing teams. The workshops were chosen to span a wide range of scales and purposes, from training events with tens of attendees, such as the US ATLAS / Canada ATLAS Computing Bootcamp (Section 3.11), to international conferences with thousands of attendees, such as the International Conference on High Energy Physics (Section 3.3) and Neutrino 2020 (Section 3.12). At the end of the first day, a discussion session was used to cull out common themes and key findings from the experience talks.
The second day was held using the gather.town platform. gather.town is videoconferencing software like Zoom, but with the added component of seeing the virtual "room" that you are occupying, with other participants represented as avatars and the ability to move around and interact with others. To demonstrate the gather.town functionality for the workshop, the Fermilab Wilson Hall floorplan was imported and various areas were defined, including a poster area and a private space where participants could arrange for one-on-one or group discussions. The presentations and discussion sessions were held virtually in the Wilson Hall 1 West (WH1W) conference room, as shown in Figure 1. The Day 2 focus was on community input from an American Physical Society Division of Particles and Fields Townhall event, demonstration of tools & techniques for virtual meetings, and forward-looking discussion of possible meeting strategies as we begin to emerge from the challenging conditions of the COVID-19 pandemic. This discussion was facilitated by the use of a collaborative whiteboard tool called a "Miro board" where participants answered questions in the form of five "exercises":
1. Conference Essence: What are elements that are core to the conference? What are important elements that also happen at a conference?
2. Online/In-Person Format Comparison: What are the pros and cons of an online meeting format? Of an in-person format? Of a hybrid format?
3. Attendance Composition: Who is most likely to attend in-person conferences?
4. Attractors/Repellers: What might attract someone to move from one mode of conference to another? What might dissuade them?
5. Tools and Techniques: Which are important to achieving the goals of a given conference? Which might attract people to attend our conference?

The input provided via this collaborative whiteboard, along with the notes and Zoom recording of the discussions, is represented in this summary in the appropriate sections. An example of the Miro board in action from the workshop is shown in Figure 2.
Conferences and Workshops Before COVID-19
Most readers of this document will have experience with attending in-person conferences, workshops, and/or meetings as part of their research or other professional pursuits. Therefore, we do not attempt to discuss all elements of in-person meetings in this report. Rather, we use this section to touch on some of the key aspects from the virtual meetings workshop as a baseline for the discussion that follows.
The status quo in HEP before COVID-19, especially for the larger workshops and conferences, was to travel to the venue either to deliver a talk on one's own research or research on behalf of a collaboration, or to attend in person to hear talks and participate in discussions. Of course, videoconferencing has been around for a long time and remote participation has been a component of events, but this was more likely than not due to specific constraints of individuals rather than a prominent element generally embraced by the organizers.
Travel to and from in-person events most often involves a substantial commitment of time and funds (most likely from research grants, for which principal investigators typically budget as part of their research program). Depending on the distance from the researcher's home location, it is common to have one or two days on each side of the event consumed with traveling through a network of airports, train stations, etc. This level of time commitment away from home can be challenging for those with children, those who care for others, or those with other personal constraints. The transportation itself also involves a carbon load on the environment, which can be substantial in the aggregate when one considers the scale of global research travel and the distances involved to participate in person at international conferences and collaboration events.
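As a rough illustration of how quickly this adds up, here is a back-of-the-envelope sketch; the attendee numbers and the emission factor of roughly 0.2 kg CO2e per passenger-km for long-haul flights are illustrative assumptions, not data from any real event, and serious estimates should use a dedicated calculator:

```python
# Back-of-the-envelope estimate of aggregate flight emissions for an
# in-person conference.  All numbers below are illustrative assumptions.
EMISSION_FACTOR_KG_PER_KM = 0.2   # assumed kg CO2e per passenger-km (long-haul, round figure)

# Hypothetical breakdown of attendees by round-trip flight distance (km).
attendees_by_roundtrip_km = {
    2_000: 150,    # regional participants
    12_000: 200,   # transatlantic participants
    20_000: 100,   # transpacific participants
}

total_kg = sum(
    distance_km * n_people * EMISSION_FACTOR_KG_PER_KM
    for distance_km, n_people in attendees_by_roundtrip_km.items()
)

print(f"Estimated aggregate travel footprint: {total_kg / 1000:.0f} tonnes CO2e")
# With these assumed numbers the total is close to a thousand tonnes for a
# single mid-sized conference, which is why the aggregate effect of global
# research travel is significant.
```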
Outside of the presentations themselves, in-person meetings provide opportunities for chance (or planned) encounters with other participants (e.g., during coffee breaks or conference dinners) to discuss and develop new research ideas or directions, foster new collaborations, and engage in deep technical discussions about talks that were presented. These encounters and informal interactions can be career-changing for young scientists, both in terms of networking and access to leading scientists in their respective fields. These types of interactions are difficult to replicate in a remote format given the current state of technology for virtual presence.
Participants in the virtual meetings workshop noted other benefits of the in-person format, including an opportunity to focus on a specific topic for a period of time that is often difficult given responsibilities and distractions in their home/work environment, meet collaborators in-person (sometimes for the first time!), and obtain an expanded viewpoint on science, society and cultures that comes with traveling to somewhere new.
COVID-19 Effects on Meetings
As mentioned in Section 1, the move from largely in-person meetings to almost exclusively virtual events due to the COVID-19 pandemic was abrupt and unprecedented. Of course, it was not only the organizers and participants of conferences and workshops that had to pivot -schools and universities had to move quickly to remote or "hybrid" learning environments and businesses and people alike had to adapt to the challenges of COVID-related lock-downs and restrictions. Such a rapid transition to a virtual presence in personal and professional lives at scale was only possible due to the general availability of personal computers (including laptops, tablets and smart phones) coupled to high-speed networks and software tools like Zoom.
One clear effect that COVID-19 has had on our community is to expose inefficiencies in the status quo model for conferences, workshops and meetings and bring to the surface concerns around equity, access and environmental aspects of this model. A constructive rethinking of this model informed by experiences since April 2020 and emerging technologies is the first step to addressing some of the pre-existing deficiencies toward a more inclusive, environmentally-conscious and effective approach to future meetings.
Experiences with Virtual Meetings
This section summarizes the experiences drawn from a dozen virtual scientific workshops with a variety of sizes and purposes since April 2020. Not surprisingly, given that this report is a summary of a Blueprint workshop for a HEP software institute, the virtual events presented here are either software-focused, HEP-focused or both.
HSF/WLCG Virtual Workshops
Since the HSF Community White Paper [2], the HEP Software Foundation (HSF) and the Worldwide LHC Computing Grid (WLCG) have held joint workshops to advance R&D in software and computing for HEP.
A workshop had been planned for May 2020, in Lund, Sweden, but with the deteriorating COVID-19 situation in March it became clear that an in-person workshop was impossible to hold and we started to plan a virtual event [4] that could replace it. This was one of the first reasonably large meetings in the community that moved to a virtual format, but the lessons learned here and at the follow-up workshop in November 2020 [5] helped refine and inform the community on how to hold successful online events [6,7]. This section is a summary of the most important lessons learned from these workshops.
Focused Topic and Scheduling
It was immediately obvious to the organisers that simply moving the planned five full-day face-to-face meeting to a video conference setup would not work. Instead, the May workshop [5] was shortened and refocused to tackle a specific topic: New Architectures, Portability, and Sustainability.
Narrowing the scope also meant that scheduling the event would be easier. Given the relative centres of gravity in high-energy physics of Europe and the Americas, the 'golden hour' in which to schedule meetings is late afternoon in Europe and morning in the Americas. E.g., 16:00 CERN time is a comfortable 9:00 at Fermilab and 7:00 in the Pacific timezone, which is early, but possible. In May we went for fairly short sessions of two hours only, 16:00 to 18:00 CERN. Finishing at 18:00 avoided being too disruptive for European participants. However, it does mean that only a limited amount of time is available, again emphasising the need to focus on key topics during each session.
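A small helper of the kind one might use when picking such a 'golden hour' is sketched below, using Python's standard zoneinfo module; the session time and the list of sites are of course just examples:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library, Python 3.9+

# Example: a session starting at 16:00 CERN time on an arbitrary date.
session_start = datetime(2021, 5, 5, 16, 0, tzinfo=ZoneInfo("Europe/Zurich"))

# Illustrative participant sites and their IANA timezones.
sites = {
    "CERN": "Europe/Zurich",
    "Fermilab": "America/Chicago",
    "SLAC": "America/Los_Angeles",
    "KEK": "Asia/Tokyo",
}

for site, tz in sites.items():
    local = session_start.astimezone(ZoneInfo(tz))
    # Prints the local wall-clock time and weekday at each site.
    print(f"{site:10s} {local:%H:%M %Z (%A)}")
```

Running this for a 16:00 CERN start reproduces the numbers quoted above (morning in the Americas) and makes the late-evening slot for Asia-Pacific colleagues explicit.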
Unfortunately these times make it really difficult for Asia-Pacific colleagues to participate. In larger events, a mixture of sessions that start in the European mornings (thus afternoon in Asia) works well, e.g., starting at 9:00 CERN time, as was chosen for the recent conference [8]. Unfortunately, it is very difficult to have an event during working hours for both North America and Asia: 17:00 in New York is still very early the next morning in Tokyo, as the Pacific Ocean is really large. Bear in mind that concentrating on online events is fatiguing and many participants will be experiencing multiple distractions (e.g. emails arriving, phone calls, children needing attention, etc.). It is imperative to schedule regular breaks to allow people to have a mental rest, stretch out, make a cup of tea, etc. Practical experience suggests that one break every two hours is a minimum. If one insists on running an activity during the break, light-hearted polls can keep people amused.
Material Upload and Videos
Given the difficulties of scheduling a convenient time for presentations that can suit all participants, one compensating factor is to make sure that material is uploaded in advance. For the first workshop we attempted to get material a week in advance. However, this was hard for speakers to manage and, in fact, feedback from participants was that one day in advance is sufficient (Figure 3).
The usefulness of this advance availability of the material was borne out by the fact that two-thirds of workshop attendees reported viewing the material before the sessions, as shown in Figure 4.
If the session is really at an inconvenient time for some people, the best solution is to record the video of the talk and make it available post-facto. If Zoom is being used for the presentations, then its built-in recording facilities are ideal for this purpose. Depending on the organisers' familiarity with video editing tools, session videos can be posted unedited, split by talk, or even given fancy introductions and epilogues. Splitting by talk is relatively easily managed using open source tools like ffmpeg.
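For example, a per-talk split can be done losslessly with stream copy; the sketch below is our own, with made-up talk times and file names, and drives ffmpeg from Python:

```python
import subprocess

# Hypothetical cut list for one recorded session: (start, end, output name).
talks = [
    ("00:00:00", "00:22:30", "talk1_intro.mp4"),
    ("00:22:30", "00:48:00", "talk2_portability.mp4"),
    ("00:48:00", "01:15:00", "talk3_sustainability.mp4"),
]

for start, end, outfile in talks:
    # -ss/-to select the interval; -c copy avoids re-encoding, so the cut is
    # fast and lossless (the cut points snap to the nearest keyframes).
    subprocess.run(
        ["ffmpeg", "-i", "session_recording.mp4",
         "-ss", start, "-to", end, "-c", "copy", outfile],
        check=True,
    )
```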
Be aware, especially at CERN and in Europe, that data privacy laws apply. It is strongly advised, in fact even mandatory, that you gather specific consent from all participants to not only record the meeting, but also to make it available publicly on platforms such as the CERN Document Server or YouTube.
Hosting the Event
At a minimum, a video conferencing service will be needed to host the event. Almost every university and laboratory now has a solution for this. More often than not this is the Zoom platform, which at the moment is a de facto standard in HEP. This is for good reason: the clients are stable, the infrastructure scales well and features such as recording work well. It also has the advantage of being extremely familiar to most participants, thus technical hurdles are minimised. Back in May 2020, quite a number of participants had problems screen sharing, "hand" raising, muting/un-muting, but by now most people have worked these out. Still, it is a wise idea, especially for larger events, to:
• Run some practice sessions in advance for speakers (and session chairs) to test their setup
• Schedule one or two minutes between speakers to change sharing
• Have a copy of the slides downloaded locally by the chairs, in case the speaker has a slide-sharing issue

During presentations, the usual recommendation is that the speaker shares slides and that they have their camera enabled to give a slightly more intimate feel. Everyone else should be muted and have their camera off, unless they are speaking. Having a participants' guide for the event makes it clear to people how they should contribute to the meeting discussion. E.g., in larger meetings using Zoom's raise hand feature is a very good discipline to adopt.
In addition to these preparatory steps for speakers the hosts of the event need to be well prepared. At the very least there should be a chair and a co-chair for each session. The co-chair can monitor the meeting chat ( §3.1.4) while the chair is managing the speaker and contributors.
The hosts of the meeting should have an out-of-band communication mechanism that they can use to discuss issues as the meeting is ongoing -private chat channels on instant messaging tools like Mattermost, Slack, or Skype work well for this. The co-chair can warn if a speaker looks badly set to go over time, for example, or perhaps the organisers feel there is an important question that should be prioritised.
Good timekeeping in online meetings is very important. Like it or not, participants will multiplex with other meetings and events and knowing that a presentation will happen at the scheduled time is vital for people to join for the sessions they desire. Family or other personal commitments are a reality for many people and it is not acceptable to exclude people by throwing the schedule out the window when the meeting starts. Make sure that when organising the meeting there is sufficient discussion time scheduled and that it is stressed to the speakers how much time they have for their presentation. Do not be afraid to remind them during their talk if it looks like they will overrun, or even insist they finish up if they exceed their time budget.
Finally, although "Zoom-bombing" (unwanted remote attendance by uninvited individuals) is quite rare in our community, it can happen. Make sure that Zoom rooms are password protected and only distribute the passwords to people registered for the event (CERN's Indico allows Zoom links to be nicely protected, or email them to the list of registered participants). Knowing what to do if a Zoom-bombing occurs is really important to minimise disruption if there is an incident -Zoom has good tools for managing this now and even a 'big red button' that will allow the hosts to suspend all participant activities in the case of a concerted attack. Know how you will proceed with the meeting in case something happens: e.g., keep the waiting room active, the (co-)chair shares slides, keep chat and rename disabled, don't allow participants to unmute except by invitation, etc.
All of this preparation will help the event to run smoothly.
Chat Tools
Having a channel by which discussion can happen on presentations is a very useful thing for online meetings. It allows participants to ask questions in a non-disruptive way during the talk (and some people even just feel more comfortable asking a question in text chat, so it also improves inclusivity). It also allows for discussions to continue after the time slot for the presentation has finished; it can also allow people to ask questions in advance (as we noted, slides should go up well before the meeting starts). The chat channel of the video system itself is rather disfavoured for this. Usually it is ephemeral and lacks many useful features, so save that only for technical issues during the meeting (it can be the co-chair's job to redirect questions if they do get asked there).
A number of options are used in the community, each with pros and cons:
• Publicly writable Google Docs work well at a small to medium scale -make sure that a skeleton layout is in place in advance so that comments and questions go to the correct place. A few of the drawbacks here are that contributors need some discipline to identify themselves in the comment they make (anonymous tapirs are cute, but not helpful) and that comments are easily misplaced as people can write anywhere in the document at any time.
• Slack and Mattermost are chat tools that impose some restrictions, in the sense that everything is a serial stream. That discipline can be useful to prevent chaos and more easily manage a discussion. However, it can also be confusing if multiple discussions are happening at the same time, interleaved. This is mitigated if people use 'Reply' functionality that threads a discussion; success depends on how familiar people are with these kinds of tools. If multiple sessions are running, use a different channel for each one to cut down on cross-talk (a minimal sketch of posting a session programme to such a channel is shown below, after the survey results).
• Discord is a popular platform (albeit less well known in HEP) that allows for multiple chats, breakouts and even video. It is very popular in the gaming community and seems to work particularly well for training and tutorial events. This platform was used successfully in the US ATLAS / Canada ATLAS Bootcamp described in Section 3.11.
For the HSF-WLCG workshops, the Google Doc solution was adopted and people thought that it did help the discussion (Figure 5). Likewise, Mattermost, as used at the vCHEP2021 conference, was viewed positively by 54 % of participants (Figure 6), despite a definite tension between security and convenience that caused some participants technical problems.
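To make the per-session channel suggestion above concrete, here is a minimal sketch (our own; the webhook URL and the programme are placeholders) of posting a session's talk list to a Mattermost or Slack channel through an incoming webhook, which both tools support:

```python
import requests

# Placeholder incoming-webhook URL for the session's channel.
WEBHOOK_URL = "https://chat.example.org/hooks/xxxx-session1"

# Hypothetical programme for one session.
talks = [
    ("16:00", "Welcome and goals"),
    ("16:10", "New architectures overview"),
    ("16:40", "Portability layers: status and plans"),
]

lines = ["**Session 1 programme (all times CERN time):**"]
lines += [f"- {time}: {title}" for time, title in talks]

# Incoming webhooks accept a simple JSON payload with a "text" field.
resp = requests.post(WEBHOOK_URL, json={"text": "\n".join(lines)})
resp.raise_for_status()
```

Posting the programme (and, later, links to slides and recordings) into the session channel gives participants one obvious place to ask questions before, during and after the talks.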
Virtual Social Platforms
One aspect of workshops and conferences that is much harder to replicate online is the social and informal human contact that we have during coffee breaks, lunches and social events. The opportunity to have less structured discussions, to branch out to other topics or just to enjoy the company of colleagues is not yet easily replicated through any virtual platform. There is no lack of platforms here (gather.town, Wonder, Mibo, to name a few), but it is clear that barriers to normal human interaction do remain. Perhaps in the future this will change, with technical advances (e.g., virtual reality) or with us becoming more accustomed to such platforms. For now, though not without their merits, do not expect them to be a substitute for face-to-face interactions.
Conclusions
The HSF and WLCG experience of running virtual events demonstrates that for the presentation of scientific content a virtual event functions rather well. The same is true of direct discussions, which can be supported in a variety of ways through live notes or text chat as well as live chat. Running a successful virtual event (just like an in-person one) requires careful planning and a lot of work from the organisers, from scheduling the event at the best times, running the event properly and then posting videos of the event after the fact. Virtual social platforms can supplement the presentations and discussion, but still fall far short of the in-person experience.
One meta-consideration when organising events is that, as the barrier to entry for organising a virtual event is rather low, there was a tendency to have more events during 2020 than we would have had in a normal year. Zoom-fatigue arose for more reasons than just what was happening in the HEP universe, but it affected many of our colleagues in a real way during difficult times. So we should endeavour to make any virtual events focused, meaningful and useful and, as ever, be sympathetic and understanding to the pressures that people have found themselves under during these extraordinary times.
LHCP 2020
Origin of the LHCP 2020 online conference
The LHCP conference series started in 2013 after a successful fusion of two international conferences, the "Physics at Large Hadron Collider Conference" and the "Hadron Collider Physics Symposium". It consists of a series of yearly conferences where the latest experimental and theoretical results on Large Hadron Collider (LHC) physics are presented. These include research areas such as Standard Model physics and beyond, the Higgs boson, Supersymmetry, Heavy Quark Physics, and Heavy Ion Physics, as well as recent progress in the high-luminosity upgrades of the LHC and future collider developments. The LHCP conference typically attracts between 300 and 400 participants every year from all over the world, discussing together in parallel, plenary and poster sessions spanning a full week.
The 8th edition of the series, LHCP 2020, was due to take place in Paris, France, in the International Conference Centre of Sorbonne Université, the week of May 25 to 30, 2020. At the beginning of March 2020, due to the outbreak of COVID-19, it became clear that an in-person conference would not be feasible, and it was decided by the conference Steering Committee (SC), in consultation with the conference International Advisory Committee (IAC), to postpone the Paris conference by one full year, to become LHCP 2021. However, the LHC experimental collaborations strongly suggested that an online "LHCP-like" event still be held, in a similar period of the year and with a similar format to the planned LHCP 2020 conference, so that physicists could still join together, though virtually, to discuss the new results and keep the community together. The LHCP 2020 organisers, with the assistance of the CERN IT department for technical details, embarked on the project of setting up such an online event with only two months to go before the start of the conference.
The LHCP 2020 online conference took place the week of May 25 to 30, 2020 as originally planned for the in-person conference. It was the first online HEP conference with a large (>1000) audience and as such the organisers had to take decisions based on limited previous experience. The success of the LHCP 2020 online conference and the lessons learned from it were useful for the setup of later HEP conferences.
LHCP 2020 online: the preparation
The organisation of LHCP 2020 online revolved around two main points: the programme and the technical organisation.
The programme had already been set up for the in-person conference, and the main issue here concerned the modification of the schedule in order to be accessible to a community that would be scattered over a large number of continents and timezones rather than all sitting together at the same site. Two alternative scenarios were considered:
• keep the structure proposed for the in-person conference, with at most four parallel sessions taking place at the same time and a similar duration of the pauses (30-minute breaks), but with fewer blocks every day (typically one parallel and one plenary session with a pause in between), for about four hours/day but spanning two full weeks (Monday to Friday);
• increase the maximum number of concurrent parallel sessions, shorten the breaks, and keep three sessions/day, for a total of about six hours/day from Monday to Friday, plus a shorter concluding 4-hour session on Tuesday.
Shortening the programme was not considered, due to the large amount of material that the experimental collaborations and the theory community had already prepared to release and discuss at the conference. The second scenario (see Figure 7, showing the LHCP 2020 agenda) was chosen, with sessions between 12:30 and 18:30 CEST (18:30 to 00:30 CST, 5:30 to 11:30 PDT). This would make a small fraction of the sessions less accessible to our colleagues in countries in timezones far from CEST, but would have the advantage of keeping the event within the same week as initially planned and advertised, so that participants would have less difficulty finding the time to interrupt their teaching and research activities and join the virtual event.
Concerning the technical and practical organisation, the main decisions taken were the following:
• to keep the poster sessions, which would be held in virtual meeting rooms, and to keep the awards for the best poster;
• to publish the conference proceedings as initially planned, giving the speakers and poster presenters the possibility of a short citeable write-up on their presentation after the conference;
• to completely waive the fees for all participants, making the conference accessible to the widest possible audience and avoiding the need to process small payments from a large number of participants;
• to extend the deadlines for registration as well as poster abstract submission until a few days before the start of the conference to maximize the potential audience.
Technical setup
When the online LHCP 2020 conference was announced, participants enthusiastically started to register, at a pace of around 50 new participants per day, increasing as the starting day of the conference approached. It soon became clear that the number of participants would reach several hundred and that it would require an online videoconferencing tool able to handle such a large number of simultaneous connections well and intuitive to use for the participants, the speakers, and the session conveners. At the same time, when the world locked down and the meetings of the large LHC collaborations moved online, the videoconferencing solution adopted until then by CERN, Vidyo, started to show saturation issues, with several persons unable to join meetings.
The recommendation from CERN IT was therefore to move to Zoom, which proved to be extremely scalable and intuitive to use. CERN established, in collaboration with Zoom, a pilot program that provided professional licences of the tool to CERN users during an evaluation period. Some Zoom features such as recording of the meetings on the cloud were disabled for privacy concerns.
In order to control access to the online meetings and restrict it to authorised people, to avoid "gatecrashing", and to prevent participants from accidentally unmuting their microphone or broadcasting their video during the presentations and disrupting the normal flow of the conference (not an unlikely event with more than 1000 participants), we decided to choose the "Webinar"-style format for the plenary and parallel sessions, rather than the regular Zoom meeting style. The webinar format would give the conveners ("Hosts" in Zoom's terminology) more control over who could (or could not) turn on their microphone, after having virtually raised their hand; it would provide a practice session for speakers ("Panelists") where they could join the meeting before its actual start, to test their connection and the sharing of their screen, and to avoid the problem of not being able to connect to the meeting in case they tried to connect only after 500 participants (the limit of the Zoom CERN pilot) had already connected. On the negative side of this choice, the participants could not see who the other persons attending the same event (apart from the speakers and conveners) were, reducing the sense of community that is an important, integral part of in-person conferences.
A few days before the beginning of the conference, when the deadline for registration was reached, all registrations were reviewed and those from participants neither from the HEP community nor with an institutional e-mail address (for instance, an address provided by a university to a student) were rejected (only a handful of registrations were not accepted). The confirmed registered participants then received by e-mail the connection details and the passwords for the Zoom webinars.
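This kind of registration review is easy to semi-automate; a sketch of the filtering described above (our own, with hypothetical file and column names, and only a toy domain check) is:

```python
import csv

# Toy heuristic: accept addresses from academic/lab-style domains.
ACCEPTED_DOMAINS = ("edu", "ac.uk", "cern.ch", "infn.it", "in2p3.fr")

def looks_institutional(email: str) -> bool:
    # Compare the address's domain against the accepted list (exact or subdomain).
    domain = email.strip().lower().rsplit("@", 1)[-1]
    return any(domain == d or domain.endswith("." + d) for d in ACCEPTED_DOMAINS)

accepted, to_review = [], []
with open("registrations.csv", newline="") as f:  # hypothetical export of the registration list
    for row in csv.DictReader(f):
        (accepted if looks_institutional(row["email"]) else to_review).append(row)

print(f"{len(accepted)} look institutional; {len(to_review)} flagged for manual review")
```

In practice a script like this can only pre-sort the list; the final accept/reject decision remains a manual step, as it was for LHCP 2020.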
The webinars for the plenary sessions were created by the conference organisers. On the other hand, due to the impossibility for a single Zoom user to create several meetings running in parallel in the same time slot, the webinars for the parallel sessions were created by the conveners of the session, who were instructed on how to setup and password-protect the meeting by the conference organisers. To allow for off-line access to the conference presentations and Q&A, the option to record the meetings (on the local PC of the host) was enabled. The consent of each speaker to have their talk recorded was collected through the Indico webpage of the conference beforehand (in case of a speaker not accepting to be recorded, recording would have been paused). The participants connecting to the Zoom webinar would have to accept by clicking on the "Accept" button of a Zoom pop-up window or otherwise disconnect.
For the "online" poster presentations, it was decided to have two sessions of one hour each, in different times of the day to allow poster presenters from each part of the world to find at least one suitable time slot for them to join it. Poster sessions took place in Zoom regular meeting rooms, one for each poster presenter. The speakers were asked to create their own Zoom meeting rooms and upload the link to their meeting room on the conference website, while passwords were sent separately to the participants by e-mail. For presenters without a Zoom license, the CERN IT department created a lightweight CERN account for the duration of the conference, giving them access to the CERN Zoom pilot program. Poster presenters were also asked to pre-record a 3 min short presentation of their poster and upload it, together with the poster itself and the connection details, to the conference agenda before the poster sessions.
Finally, since the number of registered participants in the end (1300) exceeded the maximum capacity of the CERN Zoom license (1000 participants connected at the same time), for the plenary sessions the CERN IT department put in place a streaming of the Zoom webinars to the CERN Webcast. In order to allow participants connected to the plenary sessions through the Webcast to ask their questions, a plenary-qa channel was created on a dedicated Discord online server set up for the LHCP 2020 conference.
Accessibility and inclusion
An effort to make the conference as widely accessible as possible was made by the organisers by compressing the schedule into an agenda that would allow participants from all over the world to connect to most of the sessions at a reasonable time of the day.
Arrangements had also been made by the local organisation, before the conference was moved online, to support the stay of young participants from developing countries, with the creation of a dedicated budget for waiving fees and booking low-cost accommodation in Paris, as well as, for instance, to provide dedicated quiet rooms for mothers needing to breastfeed.
When the conference was announced in its online version, four weeks before the starting date of the conference, a request was brought forward by one participant to provide full real-time captioning for the whole conference (plenary and parallel sessions). The organisation looked for potential solutions that would suit the user requesting them and would also fit within the very limited budget that the online conference had, from the CERN and EPJC sponsorships (which also had to cover other expenses such as the poster awards, the publication of the conference proceedings, and communication activities). As the option to caption only the Q&A sessions was deemed not acceptable by the participant, it was proposed to caption in real time only a few selected sessions chosen by the participant. For the other sessions, we decided to propose off-line captioning of all the talks, as obtained by running the recordings through an automatic AI-powered tool, Otter.ai. However, the agreement on this option came only a few working days before the start of the contract; meanwhile, the company that had already been chosen for the captioning was no longer available, and an alternative one had to be found quickly (including finding extra budget to cover the price increase compared to the original one). This introduced some unfortunate delays, which we regret, in the setting up of the contract, and we were only able to arrange live captioning for the sessions attended by the participant during the second half of the conference.
For future conferences we recommend, based on this experience, that users with special requests like this one get in contact with the organisers well in advance of the start of the conference, possibly at least two months before, and that they share with the conference organisers any knowledge they might have of captioning companies providing sufficiently accurate transcripts. For LHCP2020, the transcripts from the company that was hired can be found on the conference website; the quality of the transcripts is not close to 100 %, due to the jargon ubiquitous in our field. Another possibility would be for the speakers to record their presentations and prepare a transcript of the talk in advance, before the start of the conference, and upload it to the conference website when their talk is scheduled. In this case only the question time would need to be captioned.
On a longer term, the CERN IT department is launching a project on an automated speech recognition (ASR) system for high-energy physics in collaboration with Universita Politecnica de Valencia, which has a very advanced system for automated transcription with a very efficient training system. The goal is to train an ASR on HEP talks with captions validated by HEP physicists, so that such an ASR could be used in the future to caption conferences as well as other online meetings (at CERN, during most working days, there are hundreds of meetings running in parallel on Vidyo or Zoom, which would translate into tens of millions of euros per year if all meetings were captioned). Some of the LHCP2020 participants kindly volunteered to edit the transcripts (automatic or from the 3rd party company) of some LHCP2020 presentations, and thus provide a high-quality training dataset for this tool, consisting of 40 presentations with transcripts validated by high-energy physicists. Hopefully this will become an accurate and at the same time cost-effective solution with broad applications (transcribed talks can also be searched, and are more accessible not just to hearing-impaired physicists but also to those with difficulties in understanding the speaker).
Running of the conference
Each day the conference would alternate plenary and parallel sessions, with short breaks between each session. Each session of the conference was chaired by two or more conveners.
The Zoom webinars for each session were started by the "host" (one of the session conveners or the conference organisers) typically 30 minutes before the starting time of the session, in practice mode, to allow the speakers to join the webinar and test their audio and video setup as well as the slide sharing. The employee of the captioning company providing real-time subtitles would also join at this time and test the captioning functionality.
Typically 10 minutes before the start of the session, the webinar was broadcast to all the participants, who would then be able to connect. At this point, streaming of the plenary sessions to Webcast would also be started.
People joining the webinar would have their microphone muted and their video turned off by default. Video was reserved for speakers and session chairpersons. The session would then begin with the chairpersons starting the recording of the session, turning on their video, and saying a few introductory words: a brief welcome statement on the scope of the session, a reminder to the speakers to mute themselves unless they were speaking, and a reminder to the participants to use the "raise hands" feature of Zoom to ask questions at the end of each presentation (or, alternatively, to post their questions in the Discord chat if connected via Webcast).
The chairpersons would then introduce the first speaker, who would turn on their video and microphone, share their screen, and start their presentation, while the chairpersons would mute their own microphone. During the presentation, from time to time, the chairpersons would unmute temporarily to remind the speaker of the time left. At the end of the presentation, the chairpersons would unmute and handle the question time, unmuting the participants who had "raised their hand" in Zoom so that they could ask their question, or directly forwarding to the speaker the questions posted in the Discord chat. The chairpersons would then repeat these steps for each of the following presentations, until the end of the session, when the host would close the meeting.
Overall this setup worked very well, with no particular hiccups, with the exception of one afternoon in which, due to a broader outage of various IT services at CERN, it was not possible to start any Zoom webinar. The parallel session meetings of that afternoon were thus reverted to Vidyo meeting rooms that had been created on purpose as a backup solution, while the CERN IT department worked on a fix for the outage, which arrived in time for the following plenary session. The sessions went on as planned, the only problem being that some of the recordings of the parallel sessions of that day were not saved due to a glitch in the Vidyo server and are thus lost forever.
(Virtual) coffee breaks
In an in-person conference, coffee (and other meal) breaks serve two important purposes (in addition, of course, to giving the participants time for a much needed pause or to get their favourite beverage): they allow the participants to socialise, as well as to discuss with the speakers of the session that just finished and ask them more questions about their presentations. To recreate these possibilities in the online conference experience, two solutions were adopted:
• Various virtual rooms were created on the same Discord online server used to allow plenary session attendees on Webcast to ask their questions: one for each parallel topic, as well as a more general "coffee-and-tea" room, where people could, if they wished, interact with each other, on physics or more general topics. The same channels were also provided to allow the participants connected to the same parallel session, if they wished, to say "hello" to their fellow participants, and thus reinforce the feeling of community that might be partially lost with the Zoom webinars, in which the regular participants cannot see who the other participants are.
• It was suggested to the speakers, on a voluntary basis, to be available, after the end of their session, in a Zoom meeting room ("breakout room") where they could meet directly the participants that were interested in their presentations and wanted to ask more questions about them.
While the Discord channels were barely used, the breakout rooms after the sessions were a reasonable success. Not all the speakers provided them, nor did all the participants take advantage of them, but those who did were largely satisfied.
Communication
A small but dedicated communication team created content highlighting the events of the day, which they posted on the Twitter channel of the conference, @LHCPConference, re-posting on this social medium slides on the hot topics shown in the plenary sessions, each with a short associated comment.
Statistical information about the participants and the organising committees
The final number of registered participants to the conference was 1301, coming from institutions based in 56 countries and spanning 17 time zones. The geographical distribution of the participants is shown in figure 8.
Figure 8. The geographical distribution of the participants of the LHCP2020 conference, based on the institute they work for (or university they study at), with timezones overlaid.
The distributions of the gender of the conference participants and of the members of the various organising committees, as declared by the participants themselves at registration time when they were given the choice between 'Female', 'Male' and 'Rather not say', are shown in figure 9. Among the organising committees (PC, SC, IAC) the male:female ratio varied between 63:37 and 65:35, never exceeding 2:1. Among the session conveners, who were appointed by the SC for the plenary sessions and by the PC for the parallel ones, a similar ratio is found (64:36). Among the participants a higher male:female ratio of 70:30 was observed, and an even slightly larger one among the speakers (74:26), who were partially invited, partially selected by the experimental collaborations, or otherwise volunteered for a poster presentation by submitting an abstract to the conference website.
The logs of the videoconferencing sessions indicate that there were up to 520 unique participants in the plenary sessions connected on Zoom as well as up to 377 on Webcast. In the parallel sessions there were typically several tens of connections, with some sessions reaching up to 150 participants.
Participant satisfaction survey
After the conference, an online survey was circulated to the conference participants to collect their feedback about the logistics, organisation, and format of the conference, and potential ideas on how to improve it. The questionnaire was left open for about two weeks, and two reminders were sent to the conference participants before the deadline. We collected a large sample of N = 354 respondents. The survey participants are well distributed in current position, age, continent in which their institution is located, and between experimental and theoretical physicists. The sample does not differ significantly from the typical population attending a high-energy conference of the LHCP series. The information about the respondents is collected in an online PDF file [9].
The sample is composed of 44.4 % active conference participants (namely participants that had at least one of the following roles: speakers, poster presenters, conveners, members of the organising committee) and 55.6 % simple attendees. Most of the respondents attended the plenary sessions, the parallel sessions, or both, while the poster sessions were attended by only 7.1 % of the respondents.
Figure 9. The distribution of the participants and organisers of the LHCP2020 conference according to their self-declared gender.
for participants in North America. The choice of scheduling the parallel sessions in the "prime time" of the day, typically between 14:30 and 16:00 CEST, in order to make them as much accessible as possible to participants from all over the world, was also appreciated, with an average satisfaction of 8.6/10. The participants were also asked about possible alternative formats:
• a 2 week long conference, with fewer parallel sessions running at the same time, longer breaks, and about 4 hours of conference per day.
• a 1-week long conference, with "premiere" sessions whereby talks are live followed by 1 or 2 "replay" sessions to cover all time zones and during which recorded talks would be re-transmitted.
Only 23 % to 24 % of the respondents would have preferred these alternative options to the format that was chosen for LHCP2020.
As an optional request, the respondents were asked to provide additional feedback on the time format. The few suggestions received were about reducing the number of sessions running in parallel, having less compressed sessions, and having more time for discussions. These would unavoidably affect the conference schedule, forcing either longer conference days or a two-week-long conference, solutions that, as shown before, are disfavoured by the majority of the respondents.
Technical setup The choice of Zoom as the videoconferencing platform was highly appreciated, with an average satisfaction of 8.8/10. A few respondents also proposed alternative platforms such as Vidyo or Microsoft Teams. It was also suggested that CERN develop its own videoconferencing system based on Jitsi. Among the participants who filled in the survey, only 17 % connected to the plenary sessions through Webcast, either because they could not use Zoom (10.5 %) or because they preferred Webcast over Zoom (6.2 %). The average satisfaction of the Webcast users is 8.5/10. No alternative solution was proposed.
Concerning the format of the sessions, we asked the participants which one would be more appropriate between a Webinar and a Meeting for plenary and parallel sessions, poster sessions and breakout rooms. The webinar format is clearly preferred for the plenary sessions, while the meeting format is favoured for the poster and breakout rooms among the respondents who have an opinion on this topic. For the parallel sessions the respondents with an opinion are split almost evenly between the webinar and meeting formats.
The satisfaction of the participants concerning the way the question time was technically handled can be summarised as follows. The use of the raise-hand feature of Zoom in order to be able to turn on the microphone and ask the question, as would happen in in-person conferences, was appreciated by a large majority of participants, with an average satisfaction of 8.4/10. About a quarter of the respondents of the survey would prefer to use a chat-based system in which questions can be entered in a text window and can be up- or down-voted. The Discord chat that was provided to users following the plenary sessions on Webcast in order to ask questions was essentially not used. The respondents that used Webcast, however, found it more useful than not (average vote 5.6/10) to have the option available.
The satisfaction of the participants concerning how the feeling of community, lost in an online conference, was recreated can be summarised as follows. Most participants did not actually use the Discord chat rooms created for the parallel sessions and for the virtual coffee breaks; the minority who did found them of limited utility (average vote 5.3/10). Concerning the breakout rooms where participants could meet the speakers and discuss with them during the virtual coffee breaks, a bit more than one third of the respondents answered that they used them, and the idea was deemed quite satisfactory (average vote 6.7/10). On average, all the respondents said that providing such rooms is rather important for a virtual conference (average vote 7.0/10), and future conferences could thus consider requesting it of all the speakers.
The satisfaction of the participants concerning the recordings and live transcriptions can be summarised as follows. The respondents found the recordings of great utility (average vote 8.5/10); few of them had already watched the recordings (17.8 %), and about half of the participants will watch them in the future.
When asked (optionally) to provide further feedback about the technical organisation of the conference, the respondents provided a few comments such as:
• find a better way for the participants to see who else is attending the same event. A possible solution for this, at least for parallel sessions, could be moving from webinar-style to meeting-style Zoom meeting rooms
• having a clock running during a speaker's presentation, visible to the speakers themselves, rather than the chair interrupting to tell the speaker how much time is left. Of course the speakers themselves are encouraged to use a clock on their desk or on the computer that they use during the presentation to keep track of the time, but solutions integrated into the technical tool used for the videoconference would be a welcome addition.
• breakout rooms are an interesting idea for virtual conferences and should be further encouraged
• having a system to upvote questions (but no downvoting)
• releasing the recordings of the talks the same day as the presentation, otherwise people could lose interest afterwards. It should be noted that post-processing the recordings of the whole sessions (splitting them into single contributions, adding captions) takes time and person-power and even with a dedicated IT team it took 2 to 3 weeks after the end of the conference to have all the recordings online on the LHCP2020 conference website. A potential alternative would be that speakers themselves provide pre-recorded, self-transcribed talks. This would also solve the problem of the quality of the transcripts, which was deemed terrible by some respondents and acceptable (though not perfect) by others.
Funding
The third set of questions of the survey concerned the conference funding. Considering that the budget is very severely constrained for an online conference without a fee, the participants assigned the following importance to the items to be funded, ordered from higher to lower:
• communication, average vote 7.8/10;
• publication of conference material (recordings, post production, ...), average vote 7.5/10;
• outreach, average vote 6.6/10;
• poster awards, average vote 5.9/10;
• proceedings, average vote 5.9/10;
• others, average vote 1.3/10.
We explicitly asked for any other possible items on which to spend money and received a few suggestions. Three suggestions were related to ensuring the accessibility of the conference material to (visually, hearing, etc.) impaired participants, and one suggestion was to have a solid IT infrastructure. The majority of the respondents would prefer to maintain a "fee free" registration (average vote 7.4/10). However, when asked about a reasonable fee for (major) online conferences, 45 % of the participants would find it acceptable to pay a fee of between 20 and 50 euros, with 43 % favoring lower (26 %) or no fees (17 %) at all. The respondents found that their institutes/funding agencies should contribute financially to support the organisation of a fully online conference, with an average vote of 6.4/10.
Outreach While outreach activities had been foreseen for the in-person conference in Paris, no outreach programme was included in the online conference, since the organising committee focused, in the limited amount of time that was left for preparing the online conference, on the scientific programme and the technical setup. However, we asked the conference participants how important it would be to propose a public online outreach program during an online conference and, optionally, who could be the target public, and to propose some ideas for the future. The average importance of proposing outreach activities in an online event is 6.6/10.
Such outreach activities should mainly target students of various levels (high-school, undergraduate) as well as the general public (ranging from kids to retirees) with an interest in science. Attention should also be paid to proposing activities that can reach students and postdocs from nations without much budget for travelling around the world. Such activities could be useful to remind the general public why fundamental science is important and how the public money that supports it is used. Plenty of ideas are already being developed on potential online outreach activities, by the outreach teams of the LHC Collaborations as well as by the International Particle Physics Outreach Group (IPPOG). They include webinars, virtual tours of the detectors, virtual labs, hackathons, physics-inspired quizzes, and summary talks of the main highlights of the conference explained in a way that is accessible to everybody.
Overall satisfaction and feedback The overall satisfaction of the conference participants is shown in figure 10. The average satisfaction was found to be high at 8.6 out of 10.
Conclusion
Because of the extraordinary conditions related to the COVID-19 pandemic, the LHCP 2020 conference, the eighth in the series of annual conferences on the physics of the Large Hadron Collider, was exceptionally organised online, in the week of May 25 to 30, 2020. It was the first fully online HEP conference with more than 1000 participants. The conference was a success with a very high level of satisfaction based on participant surveys. We believe the experience gained in the preparation of the LHCP2020 online conference, as well as the feedback received by the participants, will be useful not only in the organisation of online events, but can also be beneficial for low-cost, inclusive, and delocalized conferences.
Acknowledgments
We would like to thank all the people involved in the organisation of the conference, the members of the international advisory committee for their suggestions and guidance, the programme committee for setting up such an exciting programme, the speakers for delivering high quality talks and adapting to the new schedule, and the local organising committees for the preparatory work for the Paris conference that was partially reused for LHCP2020.
We would also like to thank the CERN IT department, and in particular Thomas Baron and Jonathan Coloigner, for all the technical support in the setup of Zoom, Webcast, Discord, the recordings of the presentations and their captioning. Many thanks to Connie Potter and Dawn Hudson who helped with the preparation of the conference Indico webpage, the conference advertisement, and took care of reimbursing the fees already paid when the in-person conference was cancelled. An outstanding work of communication on the social media was done during the conference by our Twitter team: kudos to Yasmine Amhis, Zaida Conesa del Valle, and Laure Marie Massacrier.
Finally, we would like to thank all the participants of the conference, who all together made it a success, and in particular those who volunteered to edit the transcripts of the recordings -hoping that future online events might benefit from this work -and those who filled in the survey and gave us the precious feedback presented in this document.
ICHEP 2020
Introduction
The International Conference on High Energy Physics (ICHEP) is organized every two years. Since its first edition in Rochester 1950 it has grown to the leading conference in particle physics worldwide, where the latest experimental results and theory achievements are presented. The usual format is three full days of parallel sessions and another three days of plenaries. The typical attendance exceeds 1000 participants.
The Local Organizing Committee (LOC) in Prague started the preparations in 2016, foreseeing a standard conference format. In early spring of 2020 it became clear that the conference could not be held as originally planned due to the COVID-19 pandemic. Several discussions within LOC and with C11 IUPAP Commission took place, the final decision was to move the conference wholly to a virtual format.
Conference format
Since the conference participants are scattered over the whole world, the sessions were organized in shorter time slots, alternating between morning (8:00 to 13:00 CEST) and afternoon (15:30 to 20:30 CEST) slots that reasonably match people's working hours in Asia and America, respectively. Speakers were asked during the registration about their preferred time slots.
In order to accommodate the large number of accepted contributions (in total 800 talks in 17 parallel sessions, 44 plenary contributions, and 150 posters), the conference was extended to four days of parallel sessions and four days of plenary sessions. In addition to these so-called "premiere" sessions involving live presentations, specific "replay" sessions were organized to further ease attendance for participants from East/West time zones. During the replay sessions, the recordings were streamed on YouTube. The timeline and sessions are summarized in Figure 11. All sessions were recorded and the videos were available on YouTube and later also in the CERN Document Server (CDS).
Figure 11. The layout of ICHEP2020 parallel (left) and plenary (right) sessions. "P" stands for the premiere session, "R" indicates the replay session.
Technical solutions
The Zoom platform was chosen as the technical solution for the ICHEP2020 virtual conference. All talks were given live during the premiere sessions rather than being pre-recorded. A dedicated Zoom room with a technical assistant from the LOC was available a few days before the conference to allow participants to test their audio/video. This proactive approach minimized problems during the premiere sessions. As already mentioned above, the premiere sessions were recorded on the Zoom platform by technical assistants from the LOC as well as by the corresponding session convenor. This solution was adopted as a precaution against potential internet connection problems on the organizers' side. Nevertheless, these backup recordings were never needed, as everything wound up running smoothly.
A Zoom Webinar with a capacity of up to 3000 participants was used for all plenary sessions and for the Neutrino parallel session (where the largest audience was expected). For the remaining parallel sessions a Zoom large meeting (maximum 500 participants) was used. All sessions were simultaneously streamed to YouTube, which provided a backup solution for those who could not join Zoom directly.
Apart from talks, young researchers and students were also invited to present their work as posters. Poster sessions were organized during the parallel session week. Posters were turned into mini-talks (maximum 5 slides), and presenters could upload additional material (e.g. a standard poster) with further details to the agenda. Plenary sessions were closed-captioned by the Whitecoat Captioning company, which was hired specifically for this purpose.
Interactions
Opportunity for discussion -both within each session and informally afterwards -is a key ingredient of every conference. The discussion between participants was stimulated by several means:
• Usual questions and answers at the end of each talk or poster presentation. Additionally, Zoom rooms remained open typically for one hour after the respective session finished in order to enable further discussion
• A dedicated Mattermost channel was set up for continuous discussion in each parallel session
• Topical discussion sessions were organized during the plenary session week. Senior scientists were asked to lead and stimulate discussion on individual topics
• A virtual tour through Prague via drawings by one of our colleagues was offered
Other program elements
Other events originally planned to accompany the conference were also moved to a virtual format:
• Outreach and public relations activities were performed on Facebook (Czech) and Twitter (English)
• An art competition called "BeInspired" was organized
• A public lecture was given online by Barry Barish, the 2017 Nobel Prize winner, in the evening of the first day of the plenary session week
• A European Research Council (ERC) workshop was organized during the weekend between the parallel and plenary sessions
Statistics & feedback
Table 2 summarizes the attendance for the last three ICHEP conferences. The virtual format of ICHEP2020 allowed for a much larger number of registrants as compared to the previous in-person conferences. Since the ICHEP2020 conference fee was waived, the number of registrants does not necessarily reflect the number of active participants. Nevertheless, we encountered 2835 unique participants (connected more than 15 min) on Zoom and approximately 200 on YouTube. Parallel sessions were typically attended by 100 to 200 people and plenary sessions typically had 500 to 1200 participants. The discussion panels were attended by 20 to 100 people. The effort involved in the organization of the virtual conference was much appreciated and we received many congratulations. The positive feedback is illustrated in Fig. 12, which shows the responses to the post-conference survey.
Concluding remarks
The ICHEP2020 was a great success for a virtual conference, as evidenced by the high level of satisfaction reported in the surveys. Many more people participated in the conference, and the removal of the travel burden helped make the conference more inclusive. Nevertheless, more people were needed to run the conference (Zoom technical assistants, management of recordings, etc.) compared to a standard in-person event. The limitations on attendees' interactions were at least partially offset by topical discussion sessions and live chats on Mattermost. Strong institutional support for ICHEP2020 is acknowledged, which allowed, for example, for the waiving of the conference fee.
Moriond 2021
The spirit of Moriond
Since 1966, physicists have been meeting in the snowy mountains of the Alps for a week of discussions. It was thought to be a place where all persons were sharing a time and place, with no hierarchy. Everybody has the same speech duration, whether they be theorists or experimentalists, graduate students or Nobel Laureates. People from all over the world live for one week in a family-like environment, presenting and discovering the latest results in the field, but more importantly, interacting. Full attendance is required and specific grants are offered to those who have difficulties paying for the living expenses. This concept has been working for 54 years, never failing. Until the one we do not name arrived and got us all stuck at our homes. Again, no hierarchy applied. But the "meeting" was seriously compromised and with a knife in our hearts we had to cancel the 2020 edition only two weeks before its launch.
One year later, responding to a certain demand, but also to our own need to carry on, we had to face the fact that the 2021 edition of the Rencontres de Moriond would either be virtual or would again not happen at all, at the risk of falling into a definitive breach where results would be presented elsewhere, finances would fall into a serious situation, and inactivity would lead us down different paths. After all, there was a calendar to maintain, results kept being obtained and released, and students kept working on their theses and needed to present themselves to the community with a view to postdoc positions. The show must go on, as Freddy would say.
Three sessions, Three experiences
Over the years, the Rencontres de Moriond have split into half a dozen different sessions, dedicated to different subjects and communities. Some are annual, others biennial, others even quadrennial. In 2021, for this Moriond@home, we decided to organise three sessions:
• Electroweak Interactions (annual)
• QCD and High Energy Interactions (annual)
• Gravitation (biennial)
Each session had a very different structure with different tools -we were definitely trying different things!
Gravitation The Gravitation session, taking place every odd year, would have had to jump to 2023 if we skipped 2021. This would have made it impossible for a large number of graduate students to present results during their thesis time, which motivated us to go forward with a virtual event dedicated to them -a full poster session.
It was a short three half-day format (Tuesday to Thursday, from 13:00 to 17:00 local time), divided into two daily poster sessions of sixty minutes each preceded by a thirty minute plenary introduction to the theme of the session.
The existence of the gather.town platform was discovered by a scientific organizing committee member and we thought that, as it was addressed mainly to a young public, it was worth trying.
The result was better than the organizers could have expected. It required, on the organising side, a thorough learning on how to use and customise the gather.town platform for our purposes. We ended up having one conference room, six poster rooms, one meeting room, a bar and a garden where people could meet informally.
We had 124 participants, mostly graduate students and postdocs, but also quite a few "regulars" that were happy to meet the community, even if virtually. The possibility to present in small groups and navigate freely from group to group made it really close to an on-site conference. The selective activation of microphones and cameras inside a well-defined area without interfering with neighbouring areas is a real bonus. Also, the fact that the venue was always available and that participants could stay and meet at any time outside the official hours made the attendance quite lively. From a technical point of view, we created a short User's Guide with basic hints and proposed daily ten minute meetings during the prior week so that people could test the platform. Overall, there were no major difficulties except for really senior users, who needed to be individually guided for the first five minutes or so.
At the end it was almost frustrating not to continue using the venue, as it appeared so full of possibilities remaining unexplored.
Electroweak Interactions and Unified Theories
The EW session was a hybrid between what we usually do and the possibilities that virtual meetings offer. It consisted of short half-day live sessions (14:00 to 17:00 local time, from Sunday to Saturday) over Zoom. Regular talks were pre-recorded, also using Zoom, and stored on Vimeo. Links to the talks were accessible from the program on the website and released 24 hours before the dedicated live session. Each of these sessions started with a Young Scientists Forum (five minute talks on the session's theme) followed by the discussion of the recorded talks and questions. In parallel, there was a conference Slack account, with one channel per talk and one channel per live session. Participants could then watch the talks at their convenience and take the time to write questions on the corresponding Slack channel. These questions could be answered by the speaker but would also be considered by the session Chairperson, along with live questions that were included in the discussion. Speakers could also follow up with a discussion on Slack after the live session. This organisation was intended to limit the problem of people being in significantly different time zones, making for shorter live sessions and allowing each person to watch and comment on talks at their own pace and availability.
As questions were expected on Slack, we chose to disable the chat on Zoom in order to avoid multiple sources of input for the session Chairpersons. Indeed, exchanges worked rather well, reaching a peak of 758 messages in a single day for an attendance of 216 persons. Of course, things do not always go well by themselves -we had to make a real effort for these exchanges to work properly. Each session was moderated by a dedicated member of the Scientific Organizing Committee, who appointed two dedicated Chairpersons (one theorist and one experimentalist), who in turn chose secret agitators. A whole system!
QCD and High Energy Interactions The QCD session was the closest one to the classical format -seven days, from 14:00 to 20:00 local time, with live talks and discussion, split by a 30 min break. It was held fully over Zoom; each 15 min talk was followed by five minutes of questions and discussion. For some themes, a general discussion time was reserved at the end of the program.
Virtual coffee breaks and after-meeting discussions were expected to happen on gather.town, but participants did not seem to find an interest in using this extra space. The truth is that sessions were quite long and intense and breaks were used as ... well, real breaks! You can't stretch your real legs in gather.town.
There was an attendance of 120 persons, which is quite similar to usual in-person participation.
Technical Aspects
Costs We thought there would be an important investment in subscriptions to all the different tools and platforms we wanted to use. But in the end it was much cheaper than expected, as all the tools used have a special plan for education from which we benefited after a few e-mail exchanges and providing some documentation. The reduced price was about 20 % of the regular one, which was very affordable. Furthermore, there was no special need for hardware or supplies.
Human Resources Regarding the secretariat, we had more or less the same needs as for an in-person event -dealing with the website, registrations, participant requests, accounting, etc. was a little less demanding than for an on-site event. On the other hand, we did not need any specific computer support. The managing of the various platforms did require a large amount of preparation time, but could be dealt with by the Organising Committee -creating the necessary spaces and channels, checking and uploading each poster and video in due time, creating all necessary links, and reacting to various other details that one could not foresee. Also, a constant human presence at the ready in the background was required, making sure everything was working from the technical point of view and being prepared to jump to a back-up solution if anything failed. Practically speaking, this really meant an enormous amount of time during the week. Also, as already mentioned, an increased participation from the members of the scientific committee was required as compared to a traditional in-person event, where building the program is their main concern. Things needed to run like clockwork -checking the speakers' sound, screen sharing and image, preparing questions, avoiding silences, handling the discussion threads, etc.
Fees We chose to charge a limited fee of 50 euros, which was an amount that was thought to be accessible to all. This was not meant to fully cover organising expenses since we have a permanent secretariat, but it served as a kind of engagement, avoiding inactive registrations and limiting paperwork and mail exchanges for a null participation. We also hoped it would maximise the presence of participants, since they did pay for the thing after all. In the very few cases of participants explaining that they could not pay for this fee, for budgetary or administrative reasons, we did waive it, although we did not open a webpage for grant applications in these cases.
Participation
We had a similar number of participants in the virtual conference to what we usually have. This relates both to the number of talks (which was the same) and to the fee (which might have discouraged a few visitors). We were satisfied with this result, as we were not aiming for a very large or different event, but rather a substitute for what we usually do. It stayed within the limits of what is comfortably handled by our team.
We could see different trends in the daily participation amongst the three sessions. While Gravitation was quite stable, at around 80 persons for 124 registrants, the EW and QCD instances had a general tendency to decrease over the week. Also, on a daily analysis, we could see that in QCD there were fewer participants per day relative to registrants, with attendance clearly decreasing towards the end of the day, while in EW it was a bit more stable. Only 1/4 of the participants in EW and 1/6 in QCD attended the full week, as measured in terms of connection time, and even that does not mean people were actually listening all the time. This result is quite understandable -it is very difficult to stay connected during a full week while at work or at home. One may stay focused on a few subjects but hardly participate fully throughout.
Lessons Learned
A very detailed preparation is needed, otherwise interactions fade into background noise. To achieve this, more human presence is needed before and during the events. We also noticed that interactions are often based on prior acquaintance. This was quite obvious in gather.town and Slack, where people seemed happy to "see" each other. The discussion relies greatly on the fact that people know each other and hence dare to talk, but it is difficult to create new connections under these circumstances.
Sessions need to be short since participants cannot make themselves available for the whole day. Sessions of three hours seem to be a good compromise. As mentioned before, there is no real cut-off from the home/work environment as when one is isolated in a hotel in the mountains, sometimes with a poor internet connection (those were the days...). Also it is difficult to include all participant time zones comfortably, so for someone having to wake up very early or go to bed very late, making things short is generally appreciated.
Regarding the platforms used, Zoom is the easiest to use, but by far not the most interactive. gather.town can do it all, but might not be intuitive for some users and can be limiting in resources available -not all web browsers supported it equally well at the time of the conference and smartphone features are reduced. Slack seems to be a must have for communication in the background, allowing an organised and long-lasting exchange.
Combining the two functionalities into a single application would seem to be good practice, as too many tools or spaces can get people confused and/or information can get lost. But this requires taking the time to structure and adapt these tools to specific needs, so participants will find a time and place for everything.
Proposing a dedicated time slot for testing is a really useful activity. This could happen either the week before, or simply 1/2 hour before each daily session where organizers make themselves available for individual testing of the various features, navigation, document sharing, sound and image, resources, etc. This really helped to make people comfortable and confident that everything would work smoothly when the time comes to start the conference.
Conclusion
Virtual meetings can work, but cannot efficiently replace an in-person meeting where the purpose is to discuss and where as much happens outside the conference room as inside, if not more. Some things cannot happen online and in the Moriond case, the virtual is not at all the preferred option. But in many other situations, it is clearly a field where things are improving and many intelligent solutions are being found.
There remains the difficulty of setting the world to rights while having a beer virtually...
Connecting the Dots 2020
The Connecting The Dots (CTD) workshop series brings together experts on track reconstruction and other problems involving pattern recognition in sparsely sampled data. The CTD scientific program is nominally 2.5 days with a fully plenary format, including three types of presentations: plenary talks, young-scientist talks, and posters. An in-person workshop had been scheduled for April of 2020, with about 80 people expected to attend. CTD2020 was moved to a virtual conference in mid-March once it was clear that the local and international restrictions due to the COVID-19 pandemic would prevent an in-person event. By this time, the workshop had accepted abstracts for contributions and had published an initial timetable for its event. Essentially six weeks remained before the workshop, so only a limited time was available to determine the format and overall details of a virtual event. Both the local organizers and the international committee were strongly in favor of a virtual conference rather than no conference.
A virtual format was formulated in less than two weeks. The organizers wanted to preserve the full scientific program as planned for the in-person meeting -talks and posters, the planned length of oral presentations, and the planned opportunities for interacting with all presenters (oral and poster). We judged it not possible to schedule the entire scientific program during the overlap between the EU extended working day and the US extended working day. Asia-Pacific participation was not prioritized, as no abstracts from that region had been contributed to the workshop. Finally, the organizers felt that we had limited flexibility in rescheduling, as speakers needed time to get results approved (e.g., presentations could not be moved forward).
In the end, the organizers settled on a two-phase workshop: "recording sessions" followed by dedicated question & answer sessions, with about one week in between for viewing the recorded contributions. Posters were allocated as short talks, as we had insufficient time to converge on a more creative solution appropriate for a poster presentation.
The recording sessions happened outside of the EU/US overlap time period to make scheduling easier, had an audience, and included a short question & answer period for each talk. There were six sessions over three days. Recordings were done via Zoom, which was new to the CERN community at the time, using Zoom's cloud recording service. Speakers were asked to share their video during their presentation if they could (most but not all could). Sessions had twenty participants on average and went essentially without technical issues.
The cloud recordings were available to the organizers about one hour after each recording session. The organizers automated the process of splicing each session into individual talks and managed to upload the recordings to both the Indico workshop agenda and YouTube before the start of the next recording session. Some aspects were done by hand, including finding the time breaks between talks and uploading to both Indico and YouTube. YouTube does have an API to upload videos, but its free API quota is sufficient to upload only a few videos per day.
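The organizers' actual splicing scripts are not described here; the following minimal Python sketch merely illustrates how such splicing could be automated once the talk boundaries have been found by hand, assuming ffmpeg is installed. The file name, timestamps, and output names are hypothetical.

# Minimal sketch: split one Zoom cloud recording into per-talk clips with ffmpeg.
# Assumes ffmpeg is available on the PATH; names and timestamps are hypothetical.
import subprocess

SESSION_FILE = "ctd2020_session3.mp4"
TALKS = [
    # (start, end, output name), with boundaries located by hand as described above
    ("00:00:00", "00:22:30", "talk01"),
    ("00:22:30", "00:47:10", "talk02"),
]

for start, end, name in TALKS:
    # "-c copy" cuts without re-encoding, so each clip is produced quickly;
    # the cut points may be imprecise by a moment or two, which is fine for talk recordings.
    subprocess.run(
        ["ffmpeg", "-y", "-i", SESSION_FILE, "-ss", start, "-to", end,
         "-c", "copy", name + ".mp4"],
        check=True,
    )

The resulting clips can then be uploaded to Indico and YouTube, subject to the YouTube API quota limitation mentioned above.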
A Mattermost channel was set up to host discussion for each contribution as videos were viewed. In the end, this went essentially unused, suggesting that something more integrated with other infrastructure was needed instead of a standalone discussion platform.
The question & answer sessions happened about one week after the recording sessions. There were three sessions, each two hours long. Sessions were held from 10:00 to 12:00 Eastern time and were recorded. Each speaker gave a one-minute, one-slide introduction to their talk to get the discussion started. More than 200 people had registered for these sessions. In general, the question & answer sessions had good (50+) attendance; however, the discussion was largely driven by a few "experts". The short introduction to each talk worked well. Speakers followed the format and generally successfully conveyed the important conclusions of their work.
Conclusions that the organizers drew from this workshop included:
• There was a clear audience preference towards real-time interactivity rather than recordings.
• Free/open discussion was clearly more difficult in a virtual setting. There was a perceived higher barrier to speaking up, and the coffee/lunch/evening discussion time was lost
• Indico timetables did not handle well events without contributions on every day. Days without contributions were still shown in the agenda, making it more difficult to find the active days.
• It is important to engage, remember and thank the audience. Workshop photographs are one good approach (enabled by Zoom).
• Often conferences and workshops are organized by a small team having limited experience as event organizers. For CTD, the organizers did not consider the potential impact of up-front cost commitments when planning CTD2020. As HEP starts to organize in-person or hybrid events, it is also a good opportunity to think about best-practices for in-person event organization.
LLVM Developers Meeting
Introduction
The LLVM Project [10] is a collection of modular and reusable compiler and toolchain technologies. The LLVM Developers' Meeting is a bi-annual gathering of the entire LLVM Project community. The conference is organized by the LLVM Foundation and many volunteers within the LLVM community. Developers and users of LLVM, Clang, and related subprojects will enjoy attending interesting talks, impromptu discussions, and networking with the many members of our community. One of the event's main goals is to provide a "venue" where the geographically distributed developers community can interact and exchange ideas. The canonical event format includes:
• Technical Talks -These 20 to 30 min talks cover all topics from core infrastructure talks, to projects using LLVM's infrastructure. Attendees will take away technical information that could be pertinent to their project or of general interest.
• Tutorials -Tutorials are 50 min sessions that dive down deep into a technical topic. Expect in-depth examples and explanations.
• Lightning Talks -These are fast 5 min talks that provide a "taste" of a project or topic. Attendees will hear a wide range of topics.
• Panels -Panel sessions are guided discussions about a specific topic. The panel consists of ≈ 3 developers who discuss a topic through prepared questions from a moderator. The audience is also given the opportunity to ask questions of the panel.
• Birds-of-a-Feather -Large round table discussions with a more formal directed discussion.
• Student Research Competition -Students present their research using LLVM or related subprojects. These are usually 20 min technical presentations with Q&A. The audience will vote at the end for the winning presentation and paper.
• Poster Session -An hour long session where selected posters are on display for attendees to ask questions and discuss.
• Round Table Discussions -Informal and impromptu discussions on a specific topic. During the conference there are set time slots where groups can organize to discuss a problem or topic.
• Evening Reception -After a full day of technical talks and discussions, attendees gather for an evening reception to continue conversations and meet other attendees.
Attendees include active project developers, novice and advanced users, students and researchers, programming language enthusiasts, and anybody interested in compilers. The usual in-person attendance is around 400 people.
Event Format
In 2020, due to the COVID-19 pandemic, the event was fully virtual [11]. Over its three days there were a keynote, technical talks, lightning talks, tutorials, poster sessions, birds-of-a-feather, round tables, a student research competition (SRC), breaks, and a social event. The keynote, technical talks, lightning talks, tutorials and SRC were pre-recorded, with a moderated live Q&A session. The SRC also included voting for the three best talks. The talks relied on the Whova app for content delivery and Zoom for the live Q&A. The rest used the Remo platform, which allowed attendees to virtually walk around and network.
The event was aided by a professional event organization team [12].
The registration deadline was initially one week before the event and was then extended until the end of the event. There were two types of tickets: a free registration subsidized by sponsors, and a US$50 supporter ticket which supports the LLVM Foundation activities. The total number of registrants exceeded 800, but the real attendance was around 400.
The event had a code-of-conduct and special program encouraging diversity and inclusivity. The event organized branded merchandise such as T-shirts and other apparel.
Interactions
The conference materials were made available at the registration deadline, a week before the event. The discussions happened on the Whova platform as well as in the live Q&A sessions. There were channels for public discussions. The talks were made available on YouTube.
Conclusion
The LLVM Developers' Meeting spans at least three continents, which makes picking a convenient time for speakers and attendees very challenging. It also seemed difficult for many people to dedicate the necessary amount of time while being in their usual work/home environment. The virtual platforms served their purpose well on a technical level; however, they did not achieve the usual attendance at breaks and the social event.
The event happened relatively early in the pandemic, when the experience in organizing virtual events was not substantial. Virtual events pose a specific set of challenges in terms of time zones, networking and technical solutions. Overall, the event achieved its mission thanks to the extra efforts of the organizers and the community.
Snowmass Community Planning Meeting
Origin of the Snowmass Community Planning Meeting
The Snowmass Planning process [13] gathers the U.S. particle physics community to begin defining the most important questions in our field. Over the course of ≈1 year, various frontier working groups identify opportunities to address those questions and produce a report that is used as input for the Particle Physics Project Prioritization Panel (P5). The output of P5 defines the roadmap for the field over the subsequent ≈10 years. This exercise is repeated approximately every 7 to 8 years. A Snowmass Community Planning Meeting (CPM) [14] was scheduled to bring the community together and commence this cycle.
The nature of the meeting involves a significant amount of planning discussions, debate, and interactions with physicists from a wide set of working groups. The CPM represents the beginning of a process, rather than a summary or conclusion of work that has already been performed. There are nominally ten frontiers that reflect physics drivers as well as technology and infrastructure. Naturally, many proposed topics span multiple frontiers. Thus there are needs for both plenary lectures as well as parallel discussions.
The goals of the CPM were stated on the event page at [14], and highlighted communication:
• Inspire the community about the field, and encourage them to engage broadly in the Snowmass process
• Inform the community about plans from other regions and from related fields and planned Snowmass activities
• Listen to the community
• Provide space for members across the field to talk to each other and to discuss, promote, and develop new ideas
• Establish cross working-group connections and identify gaps
The 2013 Snowmass cycle began with a similar CPM meeting at Fermilab in the fall of 2012. This was a fully in-person meeting, and approximately 400 community members participated in the meeting. The 2020 CPM initially attempted to replicate that successful kickoff meeting, and aimed for a 2.5 day meeting in November of 2020. In March 2020, a local organizing committee (LOC) was formed to begin planning for the meeting. Throughout the spring of 2020, it became clear that the hopes of an in-person meeting were inconsistent with the reality imposed by COVID-19, and by June of 2020, the decision was made to move to a fully virtual meeting format. The LOC chosen for the in-person meeting continued and began the process of adapting for a fully virtual meeting.
Meeting Organization/Technical Setup
As the CPM focus was on U.S. planning, the meeting times were prioritized to fall during the traditional 9 am to 5 pm workday for all continental U.S. time zones. Thus, the meetings started at 11 am US Central time and concluded by 4 pm US Central. To accommodate the broad program the community needed to discuss, the meeting was extended to four days. New dates were selected to avoid major conferences, holidays, and the US general election. The virtual Snowmass CPM was scheduled for October of 2020, and the meeting organization benefited from the conferences, meetings, and workshops that were forced into a virtual format on short notice during the spring and summer of 2020. Registration fees were eliminated to encourage participation and improve accessibility.
The meeting structure consisted of two days of plenary meetings and two days of parallel sessions. The plenary agenda focused on interactions with the representatives of other global HEP programs, the views from funding agencies, and a community town hall to introduce new initiatives. On the final day, the plenary sessions focused on summaries from the frontier working groups and motivational visionary talks from experienced physicists.
The plenary sessions utilized a Zoom webinar with a capacity of 5000 participants. A decision was made to keep the security settings fairly strict to avoid unwanted incidents. Chat was disabled in the public-facing portion of the webinar, and the question & answer feature was enabled for the participants to communicate with the speakers and moderators. To improve accessibility during the meeting, captioning services were procured from Ai-Media to provide real-time description of the presentations via the Zoom application. To encourage participant engagement, multiple Slack channels were created within the Snowmass Slack space. These channels were communicated to the participants and provided means to contact the organizers, discuss the content of the plenary program and ask questions of the speakers.
Plenary sessions speakers were provided with unique "panelist" links to access the private side of the Zoom webinar. In the weeks leading up to the meeting, the LOC held multiple rehearsals to practice the mechanics for the plenary sessions, such as webinar content projection, recording and the question & answer interface. These rehearsals also provided speakers with opportunities to become familiar with the interface. Additional technical and code-of-conduct training were required for all session chairs.
During the parallel sessions, up to 18 Zoom "rooms" were provided in the format of a traditional Zoom meeting. Since these were smaller groups and focused on discussion and interaction, the security settings enabled chat and participants could unmute themselves and share their video. Presenters were promoted to co-host to share their slides. Typically the program was scheduled such that each room contained similarly themed topics. This structure was chosen to prevent excessive fragmentation of the meeting, and to ensure participants could locate the relevant topics. Between sessions, the rooms were held open to provide space for carryover conversations and to encourage the development of new ideas and relationships. We note these rooms were often underutilized. Each parallel session room had a designated "host" that focused on the Zoom meeting logistics and security, while the physics content of the meeting was handled by the session conveners. The overall structure of the meeting and instructions on how to navigate the meeting were collected in a guide for participants, which was modeled on a similar conference guide provided for ICHEP 2020 (§3.3). This included specific instructions for Zoom hosts, conveners, moderators, and speakers, and was posted to the Indico agenda.
Statistics and lessons learned
About 2000 people registered to attend the meeting prior to the start of the CPM. Registration was required to receive the link to the Zoom webinar. By the final day of the meeting, over 3000 people had registered. Statistics were collected on meeting participants from within the Zoom software. Unique connections (identified by Zoom display name) were tracked as a function of time, and the total integrated unique connections were recorded.
Since individuals have the freedom to enter different names into the Zoom software, it is estimated that these statistics over-count the actual attendance by up to 10 %.
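As a rough illustration of how such attendance numbers can be derived, the Python sketch below counts unique display names and samples concurrent attendance from a Zoom participant report. The file name, column names, and timestamp format are assumptions for illustration; actual Zoom exports vary between account types, and, as noted above, display names only approximate individual attendees.

# Sketch: estimate unique and concurrent attendance from a Zoom participant
# report. The file name, the column names ("Name", "Join Time", "Leave Time"),
# and the timestamp format are assumptions; adjust them to match a real export.
import csv
from datetime import datetime

FMT = "%m/%d/%Y %H:%M:%S"

sessions = []
with open("cpm_participants.csv", newline="") as f:
    for row in csv.DictReader(f):
        name = row["Name"].strip().lower()
        join = datetime.strptime(row["Join Time"], FMT)
        leave = datetime.strptime(row["Leave Time"], FMT)
        sessions.append((name, join, leave))

# Total integrated unique connections (keyed on display name, so an over-count).
print("unique connections:", len({name for name, _, _ in sessions}))

# Concurrent attendance sampled on the hour.
hours = sorted({j.replace(minute=0, second=0, microsecond=0) for _, j, _ in sessions})
for t in hours:
    live = sum(1 for _, j, l in sessions if j <= t <= l)
    print(t.isoformat(), live)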
An examination of Figure 13 reveals several features associated with this meeting. First, more than 2500 unique individuals connected to the meeting at some point during the week. This represents about 85 % of registrants. This attendance represents a factor of ≈6 times the participation of the in-person meeting in 2012. Removing the barriers of cost and ensuring the content was only a click away greatly enhanced the accessibility of the meeting.
On the flip side, engagement was a challenge each day and over the course of the week. Attendance tended to peak early in each day and dropped off anywhere from 20 % to 50 % from the peak. Attendance also dwindled from a peak of nearly 1200 participants early on the first day to 400 at the conclusion of the meeting. Because security was prioritized during the plenary webinar sessions to avoid Zoom "bombing" or other disruptive experiences, participants reported a lack of sense of community. They were unable to clearly see other participants and directly chat through the Zoom interface. The Slack channels were used at some level, but the traffic was relatively low. Many of the in-person brainstorming interactions were not replicated by this setup. An investigation into tools such as Slido, which allows an integrated, moderated wrapper around the Zoom webinar, is warranted for future efforts.
OSG Virtual Meetings
Introduction
The OSG is a distributed organization that advances the state of the art of distributed High-Throughput Computing in the United States and worldwide. With staff and users scattered around the globe, the OSG has relied on virtual meetings for a long time, especially for smaller groups and events. However, two longstanding, larger in-person events, the OSG All-Hands Meeting and the OSG User School, were significantly affected by the COVID-19 pandemic, and they are the focus of this section.
OSG All-Hands Meeting
The OSG All-Hands Meeting (AHM) is the premier annual meeting for the OSG, bringing together campuses, science users, site administrators, developers, and anyone else interested in the latest news and technologies. Historically, the format has been a mix of conference presentations, workshop segments for discussion, training, and time for the OSG Council to meet. In 2020, the usual in-person AHM had been scheduled for March. In early spring, the then-nascent pandemic caused OSG management to postpone the meeting until September 2020, to see how things played out. Of course, by autumn, travel and mass gatherings in the United States were largely not an option so the meeting was made virtual [15]. And the regularly scheduled AHM in March 2021 was never even considered as an in-person event [16]. Because the focus of the AHM had always been to bring the community together in person, using a virtual format was a significant change.
For the switch to an all-virtual format, the program changed in a few critical ways. First, the schedule was extended to a whole week (as opposed to 3 and a half days) but with daily content reduced to a single track totalling 3 hours. The net result of these changes was significantly fewer hours of presentations. Further, workshop-like and training elements were dropped in favor of other, more focused meetings. However, open discussion time was added to the end of each day in hopes of recapturing a bit of the feeling of being in a shared space. Thus, the resulting daily schedule was 1.5 hours of talks, 1 hour of break, 1.5 hours of talks, and then 1 hour of discussion time, for a total of 5 hours of allocated time each day. Especially for the AHM 2021, the talks for each day were organized around a topic or two with a specific audience in mind; that way, attendees with specific interests could focus on certain days. There were usually 3 to 5 talks per session, so about 10 min to 20 min apiece with a bit of time for questions during the transition times.
The main technologies used for the AHMs were the Fermi National Accelerator Laboratory instance of the Indico scheduling system and Zoom for the meetings themselves. Indico has been used for a long time in the OSG community, so that was nothing new. Zoom was chosen due to its already ubiquitous nature in March 2020: most people in the community already had and used it, and several of the institutions with OSG staff had institutional access, which made planning easy. Each day, one main Zoom meeting was home to the talk sessions, the break between, and one of the discussion areas afterward (dubbed "the hallway"); separate Zoom meetings existed for the one or two parallel, topical discussion areas. Registration in Indico was required and the Zoom links with embedded passwords were emailed to all registered participants daily. OSG staff had roles throughout the event, with a session moderator on duty at all times, helpers during sessions to deal with Zoom issues and chat questions, and so on. There was also a Slack workspace channel for the meeting, but it was used lightly.

Looking at participants for the AHM 2021, a total of 336 people registered. By analyzing detailed Zoom logs, it seems that just under 300 unique attendees joined at least once during the week, which is easily double the participation of any prior in-person OSG AHM. However, peak attendance during any day or session was about half of the total registration count. Thus, it can be misleading to look just at peak attendance statistics, which are easy to see while the meeting is in progress. From the peaks, one might conclude that a significant fraction of registrants did not attend, when in fact over 85 % of registrants attended at some point. Attendance varied considerably by topic, as might be expected, and discussion times were attended lightly. Figure 14 shows an example of OSG AHM 2021 live attendance numbers.
In sum, both virtual OSG AHMs went smoothly and participants who provided feedback (there was no post-event survey) seemed happy. Having fewer total hours of content meant that the program committee had to focus carefully on invited talks and the structure of sessions and days, and this extra focus seemed to result in a high-quality program that appealed to many. A real benefit of having virtual meetings was that people from around the world who could not attend in person, due to travel constraints, were able to participate. Attendees came and went as schedules allowed and topics appealed, and it seemed that the hour-long break between sessions helped some people fit the AHM into their busy schedules. Discussion times were lively and mostly on topic for those few who joined. Of course, there are always ways to improve! By far the biggest request was to record and post videos of the talks, which will be done for future virtual events. And the staff identified a number of smaller ways in which to try to engage the community more and keep participants interested in the content longer.

Figure 14. Sample of OSG All-Hands Meeting 2021 live attendance.
OSG User School
The OSG User School is by far the largest and most important training event that OSG offers to its community of researchers around the world. For 10 years prior to the pandemic, the School was run at the University of Wisconsin-Madison as a week-long in-person synchronous training program. In many respects, this event was like the Moriond conference described in Section 3.4: The focus was on camaraderie and the shared, intensive learning experience, with everyone from advanced undergraduates to graduate students, post-docs, staff, and faculty all treated the same and working together. In 2020, the pandemic began just as the application period for the School was ending, throwing all plans for the year into disarray.
After much consideration, the plan for 2020 evolved into an offering with a curriculum similar to but pared back from previous in-person events. Then to take advantage of some benefits of remote learning, the OSG Virtual School Pilot 2020, as it became known, added a strong focus on personalized learning and one-on-one and small-group learning and consulting opportunities. The primary goal for participants morphed into a very personal one: Learn enough about distributed High-Throughput Computing to apply it to at least one research project, and have that project running by the end of the event. Of the accepted applicants, twenty chose to participate in the re-imagined Pilot [17].
For technology, the venue for lectures was Blackboard Collaborate, while Zoom, Slack, and regular email were used heavily for individual and small-group interactions. On each lecture day (there were only five such days), live lectures occurred twice to provide flexibility in participant scheduling; lectures were recorded and posted after the event. Exercises were posted on the public website, and all participants were given logins to Access Points at the University of Wisconsin-Madison and OSG Connect.
Overall, the Pilot School went very well and most participants reached their individual goals for the event.
The staff learned a great deal, too, and so the OSG Virtual School 2021 will follow a similar pattern but with improvements throughout [18]. Nonetheless, the OSG User School will return to its in-person format as soon as possible.
General Observations
Based on OSG's experience with virtual events in 2020 and 2021, there are a few observations to make about such events and about what the future might hold. As others in this report have noted, virtual conferences and open meetings have a number of advantages and disadvantages. The primary advantage is the opportunity to reach more members of the community who would otherwise be unable to attend due to time constraints, travel costs (financial and other), and so forth. Both OSG All-Hands Meetings had far more total unique participants (double or more) than previous in-person events. Accordingly, without travel and other in-person costs, OSG was able to offer its events without charge, expanding access and simplifying logistics.
There are many disadvantages, all fairly obvious at this point: lack of in-person connections and interactions, time zone disparities that make attendance difficult for some, a greater sense of fatigue that necessarily shortens events, and seemingly reduced engagement due in part to still being "in the office" and the ease of leaving and rejoining at will. Nonetheless, perhaps because of the pandemic, everyone seemed to understand the limitations and did their best to engage and enjoy.
One might be tempted to say that an all-virtual event is easier to organize than an in-person one. However, the OSG experience was that the effort savings from having no travel, accommodation, food, and venue logistics were largely offset by effort expenditures on preparing virtual venues, documenting and practicing key procedures and back-up plans, preparing staff, and so forth. Simply put, it takes a great deal of effort to run a successful virtual event. That being said, preparations for a virtual event need not start as far in advance (by calendar days) as for an in-person one.
Another lesson learned was that it can be useful to completely rethink the approach to and goals of an event when it must become virtual. For OSG, this was especially true of the User School, where the in-person approach simply did not translate into a virtual one. But even for the All-Hands Meetings, it was expected that interactions among participants would be severely reduced, and hence the common AHM elements that depended on them, such as training events, were best omitted and handled via other, focused events.
Looking forward, the next great challenge will be the hybrid event. While no specific plans are in place yet, it is generally accepted within OSG that future AHMs will have to include a significant online component. What does such an event look like and how will it work? The key will be to identify the elements of virtual events that worked the best and preserve those. For example, it may be good to keep the condensed, single-track schedule of the virtual AHMs, even for the in-person participants, thereby acknowledging the fatiguing, come-and-go nature of virtual events. The extra time made available by a condensed schedule could be used for in-person activities that emphasize the benefits of being together for those who choose to do so: that is, more workshops, training events, discussions, and loosely structured collaborative work time. Much work remains to be done in this space. Given the uncertainty of the future, it might even be helpful to have another workshop in a couple years to discuss hybrid meetings!
PyHEP 2020 Experience and 2021 Plans
Origin and format of the PyHEP series of workshops
The PyHEP, "Python in HEP", workshop series started in 2018, recognizing the increasing importance of Python in Particle Physics. The workshops are organised as part of the activities of the PyHEP Working Group of the HSF, the HEP Software Foundation [19]. Under the support of the HSF, the aim from the onset has always been to provide an informal environment to discuss and promote the usage of Python in the Particle Physics community at large. Furthermore, diversity and inclusion aspects have also been taken seriously into consideration, in terms of the set of participating communities, and cultural backgrounds, gender, ethnicity, disability, sexual orientation of participants.
The first workshop, PyHEP 2018, was run as a pre-CHEP 2018 conference event, profiting from the presence of a large community in Sofia, Bulgaria. Slightly over 10 % of the CHEP 2018 attendees participated in PyHEP 2018.
The workshop format remained unchanged until the time of the COVID-19 pandemic, that is with an in-person format, given the spirit and goals of PyHEP:
• Only plenary, topical, sessions • Bring together users and developers • Very informal, with significant time for (lively) discussions • Educative, not just informative • A mix of keynote presentations, tutorials and "standard" 20 + 10 minute presentations
The COVID-19 pandemic constrained the workshop organization into a decision to either cancel the event or run it fully virtually. With a few months ahead of us we opted for a virtual event and we learned a great deal from this experience.
PyHEP 2020 organisation and running
To adapt to the requirements and constraints of a virtual event, the duration of the workshop was extended from 2.5 days as in the 2019 in-person format to 5 days, with shorter sessions. Because of the global nature of the event, we organized the sessions in two different time zones: a "Europe-friendly" session and an "Americas-friendly" session. The former was by default 3 hours long, whereas the latter only lasted 1 to 1.5 hours. We had approximately a third of the time devoted to tutorials and two thirds devoted to standard presentations.
PyHEP 2020 became a truly global event with participants from all over the world. Without travel and/or budget constraints, and no registration fees, the level of interest increased significantly, from about 50 to 70 participants in in-person events to an incredible 1000 registrations, a cap we had to impose for technical reasons related to the video conferencing system. Understandably, we observed a significant increase in the fraction of students registered, including undergraduates. In fact, almost 40 % of the participants were not members of an experiment or a collaboration (e.g. students and theory, simulation or instrumentation colleagues).
The workshops aim to address, in a timely manner, topics in the spotlight as well as important topics specific to Particle Physics. PyHEP 2020 had sessions on the following topics: analysis fundamentals, analysis platforms and systems, automatic differentiation, performance, fitting and statistics, and the HEP analysis ecosystem.
We strongly encouraged all presentations to be prepared as (Jupyter) notebook presentations, with all material made publicly available on GitHub. To enhance the interactive experience, we also encouraged the preparation of a "Binder launch button" so that any participant could follow along and experiment with the notebooks being presented in real time, simply by launching the material in the browser. To ensure a smooth run, we used both the Binder Federation and the CERN BinderHub resources (for those with CERN accounts), and made sure that resources on Binder were (kindly) allocated by the Binder Team for the relevant repositories at the time of the presentations. All material was posted onto the workshop agenda, including slides, GitHub repository links, and the links to the recordings. The latter were captioned (thanks to our sponsors) and uploaded onto a dedicated playlist of the HSF channel on YouTube.
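As a minimal sketch of what such a launch button involves, the Python snippet below assembles a mybinder.org launch URL and the corresponding Markdown badge for a talk repository. The organisation, repository, branch, and notebook names are hypothetical; the URL scheme follows the standard mybinder.org "v2/gh" pattern with the classic "filepath" query parameter.

# Sketch: build a mybinder.org launch link and a Markdown "launch binder" badge.
# The organisation, repository, branch, and notebook names below are hypothetical.
from urllib.parse import quote

def binder_link(org, repo, ref="main", notebook=None):
    url = "https://mybinder.org/v2/gh/{}/{}/{}".format(org, repo, ref)
    if notebook:
        url += "?filepath=" + quote(notebook)
    return url

def binder_badge(org, repo, ref="main", notebook=None):
    # Renders as the familiar "launch binder" button when placed in a README.
    return "[![Binder](https://mybinder.org/badge_logo.svg)]({})".format(
        binder_link(org, repo, ref, notebook))

print(binder_badge("pyhep-talks", "awkward-arrays-talk", "master", "talk.ipynb"))

A presenter could paste the resulting badge into the repository README so that participants can launch the notebook with a single click.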
As mentioned above, PyHEP workshops allow for a fair fraction of time for discussion, which is paramount. The online platform Slido was used to crowdsource questions from the audience: via a web page each participant has the opportunity to post a question, even anonymously, and also "upvote" or "downvote" any question so that, effectively, by the beginning of the Q&A session, the session chair sees a prioritised set of questions, with the most popular and interesting ones selected. At the end of the Q&A sessions, all questions were copied to Slack in the appropriate topical channel, where the speaker and participants could continue to discuss and exchange. A few polls were also run via Slido as a fun means to socialize. The use of Slido turned out to work extremely well; about 40 % of the participants joined and used the platform at least once.
PyHEP 2021 planning
The PyHEP 2021 workshop [20] will again be held as a virtual event, in early July. At the time of writing we have approximately 950 registrations, which shows the continued interest of a large fraction of the community in this series of workshops.
Unlike last year's edition, this year's will not be run in two different time zones but nominally between 14:00 and 17:00 CEST, a time slot that has proven appropriate, or at least acceptable, to a large fraction of the attendees. The format is otherwise largely kept the same, though live streaming to YouTube will be tried for the first time, on top of the usual video conferencing room set-up (with Zoom).
Lessons learned
As expected, virtual events are far more inclusive and truly global, with participation from all over the world, even if time zone constraints imply that some participants are either attending (very) early in the day or rather late in the day. Workshops such as PyHEP, which are meant to foster exchanges among experts and learners, with plenty of time devoted to discussions, are nevertheless made more challenging when running virtually, though the organisation burden is less. In the future we may consider alternating the workshops as virtual/hybrid and in-person.
SciPy 2020 Experience and 2021 Plans
SciPy, the Scientific Computing with Python Conference, is a community conference dedicated to the advancement of scientific computing through open source Python software for mathematics, science, and engineering. The annual SciPy Conference allows participants from all types of organizations to showcase their latest projects, learn from skilled users and developers, and collaborate on code development.
The first SciPy meeting was held at CalTech in 2001 for a few dozen attendees. By 2009, attendance had reached 150 and, exceeding the capacity of the CalTech facilities, the meeting moved to the campus of UT-Austin in 2010. The most recent in-person meeting had roughly 800 registrants.
Typical in-person SciPy meeting
A SciPy meeting is organized around three main elements:
Tutorials half- and full-day, interactive classroom setting. Instructors propose topics, and the Tutorials Committee chooses those that make the most compelling case for their pedagogical approach and appeal to the widest audience of SciPy attendees.
Conference single-and multi-track sessions for audiences ranging from small groups to all attendees.
Keynotes 45 min to 1 h plenary talks, typically one focused on a significant scientific advance enabled by Python, one focusing on the ecosystem of scientific python packages, and one addressing one of the broad themes of that year's meeting, not necessarily from a Python perspective. In recent years, we have also held a keynote on diversity, equity, and inclusion.
Themes and Minisymposia 30 min technical talks in parallel sessions. Themes are two or three cross-cutting sessions identified for each year. Minisymposia are topical sessions organized by different scientific communities.
Birds of a Feather (BoF) Sessions self-organized sessions where communities of interest can meet, talk, and plan.
Lightning Talks 5 min talks (no going over!) for an hour every afternoon on late-breaking news and whimsical projects.
Posters traditional static and semi-interactive presentation en masse.
Sprints self-organized, intense development of Python packages in the SciPy ecosystem.
In a typical year, tutorials run in two or three parallel sessions on Monday and Tuesday; the conference is held Wednesday through Friday, and sprints run through the night from Saturday to Sunday.

Figure 15. Schedule for SciPy 2020, showing virtual platforms used for different portions.
Virtual SciPy 2020
After briefly debating delaying the meeting, SciPy 2020 was held, as scheduled, from July 6 to 12, 2020. In the transition to a virtual format, we kept the key elements of a traditional meeting, but rearranged the schedule, shown in Fig. 15, to hold live tutorials in the mornings, followed by live keynotes, and then live question & answer sessions, lightning talks, and BoFs in the afternoons, all over the course of five days. Sprints were held after the conference, on Saturday and Sunday, as usual. For this first, purely virtual meeting, we reduced registration fees, inclusive of tutorials, to US$75 standard/US$25 student (fees for in-person SciPy 2019 were US$500 standard/US$225 student, with the same charges again for tutorial attendees). As SciPy traditionally draws a North American audience, meeting times were kept in Central Daylight Time (there is a long history of other regional meetings, e.g., EuroScipy, SciPy India, SciPy Latin America, and SciPy Japan). SciPy talks have been recorded for many years and distributed via YouTube since 2013 [21]; the difference for 2020 being that Theme and Minisymposia talks were pre-recorded and made available via the SciPy YouTube channel several days in advance; speakers were given a few minutes to recapitulate their talk at the beginning of their respective question & answer session. Tutorials and Birds of a Feather were held in two parallel sessions; all other elements were held sequentially. Poster authors were responsible for hosting their own content asynchronously. After an initial kickoff using the Crowdcast conference platform, individual sprint leaders set up their preferred mode of communicating and streaming with their team.
Tutorials and live sessions were hosted on the Crowdcast platform. This platform afforded four "hosts" to accommodate speakers and session chairs. For conventional sessions, this worked well, but for non-traditional elements like BoFs and lightning talks, with the potential for many speakers in quick succession, more hosts would have been beneficial.
SciPy meetings have had dedicated Slack workspaces for many years, with both official and self-organized channels. The Crowdcast platform included a chat feature that was used heavily, but it did not offer any sort of direct messaging capability and it was challenging to correlate participants between Crowdcast and Slack.
Due to the limited number of channels and hosts per channel allowed by Crowdcast, informal themed sessions were planned for Zoom breakout rooms; due to technical difficulties, these were switched to Google Meet at the last minute.
More than 1400 attended at least one live session and more than 1200 attended at least one tutorial. This substantial increase can likely be attributed to attendees not needing to travel and to the significantly reduced registration fees. Many attendees from overseas emphasized that the virtual format was what enabled them to attend and they expressed the hope that SciPy would retain a virtual component in the future.
Plans for Virtual SciPy 2021
With more time to plan and a better idea of what to expect than in 2020, we strove to recapture as much of the feeling of the in-person meetings as possible. To achieve this,
• we returned to a conventional schedule, illustrated in Fig. 16, of 2 days of tutorials, 3 days of conference, and 2 days of sprints.
• all talks will be live. Talks will still be recorded and made available afterwards on YouTube, as they have been for more than a decade, but we will not be posting pre-recorded talks.
• we are changing the primary meeting platform to Airmeet. This change was motivated by scaling considerations, as the new platform allows up to 10 concurrent sessions with 10 presenters each. A "green room" for presenters to prepare before going live and live technical support were additional attractions of this platform.
• we will host the more interactive/social elements, e.g., BoFs, posters, and general networking on the gather.town avatar-based virtual meeting space. This will enable us to have both large and small meeting spaces, as well as a nostalgic recreation of SciPy's original home at CalTech.
• sprints will be hosted on Discord. This platform was favored for collaborative coding due to its combination of video streaming, screen sharing, and chatting. Sprint organizers are free to choose a different platform if it works better for their team.
• we will again have a Slack workspace for SciPy 2021, in addition to chat features in Airmeet, gather.town, and Discord.
• we increased registration to US$125 standard/US$50 student in order to support the improved attendee experience via a wider range of platforms this year. These rates are still well below the in-person rates, as well as the rates charged by many domain-specific society virtual conferences. Some sponsor funds are available to support attendees for whom these rates are a hardship.
SciPy 2021 is scheduled for July 12 to 18, 2021.
Lessons learned
• Many more people could attend a virtual meeting who would otherwise be limited by time, cost, travel policies, or family constraints. Several appreciated the reduced environmental impact of virtual attendance. Overseas attendance was up significantly.
• Conversely, work and family tend to be less forgiving for those who are not really "away" at a meeting.
• Many people miss the informal and social interactions of the traditional in-person meeting.
• Pre-recorded talks need to be "dropped" far enough in advance for people to view them prior to any live discussion/Q&A session. For attendees already experiencing "Zoom-fatigue", the prospect of spending all night watching videos as soon as the live meeting is over can be exhausting.
• Interactive, dynamic, or social elements of a meeting either require more people with "hosting" privileges or a platform that allows all participants to engage equally.
• Quasi-anonymity makes managing conduct issues a challenge. Lack of direct-messaging capability on some platforms and correlating identities between different platforms can make it difficult to address concerns in private.
US ATLAS / Canada ATLAS Computing Bootcamp
Introduction
The US-ATLAS Computing Bootcamp is an ongoing annual bootcamp designed to educate newcomers to the ATLAS Collaboration, in particular graduate students, on the core technical computing concepts and tools that are used throughout ATLAS and the broader HEP community. The goal of the bootcamp is for all students to gain basic proficiency with the computing tool-chain for a typical ATLAS analysis and an understanding of how to use it. The scope and format of the bootcamp are modeled on the widespread software workshops by The Carpentries and the training formats of the High Energy Physics Software Foundation (HSF). A similar workshop was planned by
Canada ATLAS in 2020, but it was canceled because of the pandemic. The Canada ATLAS instructor team contained guest instructors from US-ATLAS, and so the Canada ATLAS instructors joined with the US-ATLAS instructor team to form a joint bootcamp that allowed a total of 44 students from both countries to participate. The 2020 Bootcamp covered:
• Version control with Git
Bootcamp Format
In addition to having all of the bootcamp materials publicly available online, the 2020 bootcamp had all instruction and interaction occur through a Discord server that was set up for the bootcamp, seen in Figure 17.
Discord offered a unified platform for instruction, discussion, and problem solving across text, audio, and video communication. This makes for a very effective platform for a teaching environment like a bootcamp, where the video feed of the instructor team and technical discussion are able to coexist on a single platform. Additionally, unlike Zoom, Discord's text chat supports a flavor of Markdown that provides syntax highlighting, which is a useful tool when trying to communicate subtle differences in code that could otherwise be difficult to see. An example of this is also seen in Figure 17, where a student has posted code with no syntax highlighting at the top of the screen and an instructor has replied with syntax highlighting enabled. Syntax highlighting exists across many communication platforms, like Mattermost and Slack, but the advantage of it in Discord is the ability to have it integrated into live discussions with video. Another strong advantage of Discord is the ability to move seamlessly between existing video and audio channels or "rooms". This is critically useful when teaching, as it allows a student in the main instructional video stream to signal an instructor that they need assistance, move with the instructor to a breakout room with audio/video and text support to iterate on the problem, and then move back to the main stream without needing to request permissions or support from an administrator. Lowering the barrier for students to interact with an instructor and get help without disrupting the main discussion improves the learning experience, and is similar to the kind of interaction an instructor and student would have at an in-person bootcamp.
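To illustrate the Markdown feature mentioned above, the fragment below shows how a message would be typed into the Discord text channel; the Python line inside the fence is purely hypothetical. Tagging the opening fence with "python" is what enables syntax highlighting in Discord, while omitting the tag renders the same code as plain monospaced text.

```python
# posted as a Discord message; the "python" tag enables highlighting
selected = events[events.n_jets >= 2]
```

Without the language tag, the message would still be readable but unhighlighted, making small differences (for example, a missing bracket or a changed comparison operator) much harder to spot.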
Discord is designed to be extensible and has a rich plugin and "bot" ecosystem for adding support and services to the platform. For the 2020 bootcamp, the instructional team enabled the "Dyno bot", which provides user-customizable dashboarding, moderation, and interaction tools. For example, as seen in Figure 18, an instructor is able to quickly generate an interactive poll in the text channel for the main video room by using Dyno's markup syntax. The poll updates dynamically and allows users to change their response over time, allowing instructors to gauge the real-time progress of the entire class and to direct instructor time to students who have questions or are behind, without having to single individuals out. While they were not needed or used for the bootcamp, Discord has a robust selection of extensible moderation tools that have been well vetted and improved upon given Discord's use as a community interaction platform.
Areas of potential concern or consideration regarding Discord do exist. Discord is publicly available, but has additional features as paid services. The free tier of the service has limited video streaming quality, though for a trivial price the bootcamp was able to purchase a bandwidth extension that delivered video quality that was as good as, if not superior to, Zoom's. Similarly, the free tier has a limited number of simultaneous video streams per room, which meant that only one speaker could share their video at a time, whether in the general "auditorium" room or in a student and instructor breakout room. Additionally, in-platform video recording is not supported unless users are using Open Broadcaster Software (OBS), which can be problematic if users are not experienced with setting up video recording software. Self-hosting of Discord servers is also currently not supported, which could be problematic for use in some countries, given that the Mozilla Foundation rates Discord's data privacy practices as not being particularly robust [24].
Summary
The use of a Discord server for the 2020 US-ATLAS and Canada ATLAS Computing Bootcamp had a large positive impact on the success of the bootcamp. In addition to creating a centralized virtual location for all discussion and instruction across text, audio, and video, the lower barrier for rapid interaction between students and instructors when students had questions was critical to student learning. Having a centralized location for all interactions also kept the bootcamp focused and avoided overwhelming participants with application fatigue. The server is persistent as well, so the discussions and resources added to the Discord server are still useful references at the time of writing in 2021.
It is worth noting, though, that the use of Discord for bootcamps or workshops is not contingent on the events being fully remote. It is still possible to have student peer interactions and discussions that leverage the Discord server's strengths, and it could possibly serve as a bridge between remote and local participants in hybrid in-person and remote workshops. While the experience of debugging code with a student while physically next to them is preferable and not replicated in a fully remote environment, the shared rooms and tools on the Discord server made the experience significantly better than attempting to have the interactions over text chat only.
Neutrino 2020
Neutrino 2020 Virtual Conference
The Neutrino conference series is the largest conference in neutrino physics. It is an international conference which has been hosted around the world every other year since 1972. The Neutrino 2020 [25] conference was slated to be held in the U.S. in June 2020 and planning had begun for a 5-day in-person conference in downtown Chicago, Illinois. In March of that year, given the realities of the COVID-19 pandemic, the conference was completely retooled to an online conference. The Neutrino 2020 team, including 3 conference co-chairs, 11 local organizing committee members, 5 session chairs, and additional support from Fermilab staff, had less than 3 months to prepare for a virtual conference. The conference was hosted online by Fermilab and the University of Minnesota on eight half-days over the course of two weeks, from June 22 to July 2, 2020. Conference registration was handled via Fermilab Indico. Conference content was archived both on the Neutrino 2020 website [25] and using Zenodo [26].
Neutrino 2020 was an unexpectedly large success. It brought a very large number of neutrino physicists together during a challenging time. While recent in-person Neutrino conferences have typically had roughly 600 to 900 participants in attendance, the Neutrino 2020 conference had record attendance, with 4 350 people from every continent, including Antarctica, participating in the online conference. The conference reached participants from 67 countries, roughly 60 % of whom were students or post-docs (Figure 19), thus allowing access to a much broader and more diverse set of conference participants.
Technical Aspects
The Neutrino 2020 conference included four main components:
• Plenary talks were broadcast live via a Zoom webinar Monday through Thursday from 7:00 to 11:30 CDT across the two weeks of the online conference. Conference participants were given Fridays and the weekends off during the event. During the conference, 79 plenary talks were delivered by neutrino physicists around the globe. These talks were recorded live and posted daily [27] so that conference participants who were not able to attend the live presentation sessions could view the talks after they were presented and still be a part of online discussions of their content over Slack. There were over 62 000 visits to the Neutrino 2020 web page to view recorded talks during the conference. Increased support provided by Fermilab was needed to handle the web load.
• A web portal [28] was created to host the conference poster session. This web portal was custom built for the conference to host the 532 conference posters (a Neutrino conference series record) as well as an optional 2 minute pre-recorded summary video that was uploaded by poster presenters. This too was popular. There were over 5 800 views of the posters on YouTube throughout the duration of the conference. The web portal also offered an alternative to the virtual reality platform in case participants were not able to attend the live poster sessions there.

Figure 19. Demographics of Neutrino 2020 conference participants by career stage. More than half of the participants were students and postdocs. The "other" category includes the general public, members of the press, etc.
• Conference interactions took place via Slack. During the conference, over 23 000 communications were posted on Slack. Separate Slack channels for each of the conference plenary sessions were created in advance as well as a channel for interacting directly with the conference organizers. Conference participants were also allowed to create their own channels. An especially popular channel, for example, was a Jobs Board on Slack advertising job openings in neutrino physics.
• Perhaps the most popular aspect of Neutrino 2020 was an innovative Virtual Reality platform [29] that was used to host the three live poster sessions during the conference. Over 3 400 participants visited the virtual reality platform that included > 500 conference posters in themed rooms. See section 3.12.3 for more on the Neutrino 2020 Virtual Reality portal.
The Neutrino 2020 conference was promoted daily on social media via both Facebook and Twitter and conference organizers also hosted a public event on the last day of the conference in the form of an online live Physics Slam [30].
Virtual Reality at Neutrino 2020
To give conference participants something new to try and to increase personal interaction during the entirely online event, Neutrino 2020 hosted an innovative virtual reality platform [29] using open source software from Mozilla, Mozilla Hubs. Conference registrants had access to this virtual reality platform that gave poster presenters an opportunity to share their work and conference participants a chance to interact. The virtual reality platform was available for the duration of the conference, although due to popular demand it was kept live for a few days even after the conference had ended. Using the virtual reality platform, participants had a chance to get immersed in a virtual poster session where they were able to see the posters and interact live with poster presenters during dedicated poster sessions. Visitors were able to create custom avatars and to move within and between the virtual reality rooms. When a person approached another person's avatar, people could speak and hear each other's voices. Participants also had the chance to explore virtual sight-seeing of Fermilab and downtown Chicago as well as visit dedicated virtual social rooms to meet up with colleagues. During the conference, 3,409 conference attendees visited the virtual reality platform, including the 532 early career scientists who presented posters of their research. The virtual reality platform received overwhelmingly positive feedback from those who attended and on social media [31]. An example of the space can be seen in Figure 20. For more views of the Neutrino 2020 Virtual Reality platform, please also see [32].
Figure 20. Example view inside the Neutrino 2020 Virtual Reality platform that hosted the 532 neutrino physics posters created by conference participants.
What Worked Well
In the end, the response to the first-ever virtual Neutrino conference was very positive [31, 32, 33]. There are several aspects that helped make this large online conference a success:
• Dress rehearsals for the local organizing committee were held weekly in the two months leading up to the conference to give conference organizers experience running a Zoom webinar and in working together as a team.
• Written job descriptions were created for each task needed to run the conference. These assignments included hosting the webinar, speaker assistance, talk recording and posting, session chairing, Q&A handling in the webinar, poster assistance, Slack monitoring, virtual reality platform assistance, email list monitoring, social media content posting, and Zoom webinar technical support.
• An attempt was made to anticipate possible problems in advance. A written trouble-shooting guide was created by the local organizing committee that included a list of possible things that could go wrong during the conference, who would respond, and how one should respond.
• At the end of each day during the conference, a regrouping session was held for the local organizing committee so that the group could discuss and solve any issues that arose during the conference that day.
• A Code of Conduct [34] set expectations for behavior during the conference. Conference participants could not register for the conference or receive connection details without agreeing to the Code of Conduct. An online reporting system was also made available during the conference. Reports of violations of the Code of Conduct could be made anonymously.
• Conference organizers asked plenary speakers to provide recordings of each of them pronouncing their own names. These audio recordings were made available to session chairs to ensure correct pronunciation of presenters' names.
• Pulling together advice from a variety of sources, detailed instructions were provided to speakers including a checklist and tips for giving a talk over Zoom. Other conferences may also find these speaker tips helpful [35].
• Rehearsal sessions for plenary talk presenters were arranged during the weeks leading up to the conference so that speakers could practice sharing their slides and test their audio and video. Practice sessions were also set up during the first half hour before the conference on each day of the event in case speakers wanted to do a last-minute check of their setup.
• Conference organizers set up multiple ways for conference participants to communicate with the organizers during the conference including over email, Slack, and in the virtual reality platform so as to be able to ask direct questions of the conference team. A Frequently Asked Questions (FAQ) page [36] was also posted online for conference participants and updated daily.
Lessons Learned
Lessons learned from Neutrino 2020 were twofold. First, it would, of course, have been better had the conference organizers had more than 2.5 months to pull together this large-scale online event. Unfortunately, due to the timing and uncertainty related to COVID-19, more planning time was not possible. Second, the conference would have benefited from a larger support staff to handle real-time conference registrations, as registration was allowed during the event and many participants expected to be able to attend the conference on the same day they registered. The latter was an unanticipated large task, with upwards of 100 people per day registering for the conference as word got out about the event on social media and in the community.
Summary
Neutrino 2020 was hosted as an entirely online event from June 22 to July 2, 2020. This virtual conference was put together by a small team of ≈ 15 people in 2.5 months and included plenary talks, a web portal for posters, a popular virtual reality platform, and an online chat forum via Slack. The conference had record attendance: 4 350 attendees from 67 countries on all 7 continents, 60 % of whom were students and postdocs. This reach included many scientists who might not have otherwise been able to attend an in-person conference because of funding, visas, family responsibilities, or other issues. Given this extensive and diverse participation, some aspects of Neutrino 2020 may well affect the planning and organization of future in-person and online conferences.
Common Themes and Key Findings
Several themes from the experience talks and subsequent discussion emerged:
• Virtual meetings come with several benefits, such as increased participation by early-career scientists and international collaborators, smaller overall budgets required to run an event, decreased cost of attendance, and a reduced carbon footprint associated with the event (due to a lack of required travel).
• Roughly 25%-50% of registered participants are connected at a given time and this decreases over the event, while the vast majority of registrants attend the event at some point over its duration (see Figures 13 and 14, for example).
• Challenges include a lack of personal interaction opportunities and diminished quality of those interactions, "Zoom fatigue", attention capture and retention, and synchronous participation over wide range of time zones.
• Zoom is an adequate current tool for "traditional" presentation-style sessions, but other tools are needed to approach in-person-style interactivity (many great ideas were shown at the workshop!).
4 Tools and Techniques
For virtual conferences, at a minimum, we need a way for all participants to hear each other. This could be as simple as a conference call. More subtle communication can occur if the voice call is supplemented with video feeds of the speaker and other participants; this is provided by many of the different platforms. Slides and online demonstrations of software can add additional context and information, and most video conferencing systems allow someone to share their screen. In this way, the simple video conferencing systems can provide the basics of what people experience at an in-person conference. There is additional tooling that can be brought to bear to extend this conference experience and, in some ways, take participants further than has been possible in an in-person environment.
Virtual Whiteboards
Visual communication is a very powerful medium. It is common for workshop presenters to include diagrams on their slides and for organizers to use Google Docs for collaborative note taking during the meeting which often also includes diagrams. These diagrams are static and cannot reflect new knowledge gained during the session or the collective wisdom of the gathered participants.
A useful tool is Miro, an online whiteboard built for collaboration. It has a number of features that make it easy to share a link to the whiteboard and to manage even a large number of contributors, who may view the whiteboard, follow a user around as they discuss, or modify the board. Anyone can add items to the board or clean up diagrams as a discussion proceeds. The Miro whiteboard offers infinite space, so groups can break up and even work on their own ideas in a separate area of the board. Current alternatives to Miro include Mural and Google's Jamboard.
Q and A Management
In traditional in-person meetings, managing questions from the audience can be difficult and often favors the boldest. For virtual or hybrid meetings this issue can be even harder unless additional tools are brought in, given the need for participants to unmute and the larger audience sizes. Several tools offer the chance to add more sophistication to the process of submitting and selecting questions. One example is Slido, a tool that allows participants to pose questions; the whole audience can review and up-vote questions that interest them. Using this tool makes it easy for people to submit questions even if they are uncomfortable speaking to the assembled group, and ensures that precious conference time is used for questions that are of general interest.
Live polling is a way to increase participant engagement and expose additional background on the audience as a whole. Results are often presented in interactive ways that evolve in real time as results come in, often taking the form of word clouds or bar and pie charts. While Zoom offers this built in, tools like Slido and Mentimeter offer a wider variety of question types, outcome displays, and interactivity.
Although particularly pertinent for online and hybrid events, there is nothing to stop these tools being used at fully in-person conferences as well.
Increased Social Interactions
As has been described in this report, a common challenge for virtual events over the last year has been giving participants the same opportunity to meet new people and reconnect with old acquaintances. Several tools exist which can help in this regard with many being trialed in the different events described in Section 3. One such approach is to give participants a common map which can, for example, represent the conference "venue". By moving around this map participants can interact with each other based on their proximity. In particular, gather.town uses a 2D map reminiscent of old 8-bit computer games while Mozilla Hubs offers a 3D world that can be used with virtual reality headsets. Both of these have been used successfully for poster sessions and other more interactive pieces of typical conferences. An alternative approach is to offer participants small rooms which allow for a handful of people to join at a time. Platforms like Airmeet and Remo allow audience members to choose a table to sit down at and begin discussions, similar to finding a table in a lunch canteen or classroom. An alternative to this is guided networking, where participants are less directly involved in choosing who they meet, such as offered by RemotelyGreen.
5 Diversity, Inclusion and Accessibility Considerations
The shift to fully virtual meetings, conferences, and workshops during the pandemic, and subsequent discussions on the likelihood of being able to maintain such a strong virtual component in the future has highlighted many areas of inequality in our current model of in-person meetings. In this section, we highlight the benefits as well as potential pitfalls in terms of diversity, inclusion, and accessibility that virtual meetings have over in-person ones. Given the lessons learned over the past year it is imperative that we continue to take these considerations into account post-pandemic, and ensure we do not simply return to what we were doing before.
One of the largest impacts of online meetings is in the reduced travel costs for attendees. This directly allows for participation by a larger and more diverse group of people around the world, and especially benefits people in less well-funded groups, newly-established research groups, and those located far away from the meeting location. This also has a strong benefit for early career researchers, who often have challenging economic circumstances, given that many institutions require one to cover all the bills in advance and only reimburse after the meeting, sometimes many weeks later. Finally, many groups have a funding limit of one in-person event per year, but researchers would participate in many more events remotely. This is again especially useful for early career researchers (or those who are new to the field).
On the other hand, network reliability in many parts of the world (perhaps most especially for those who would benefit most from the reduced travel costs) may be a hindrance to fully participating in and getting the most out of the meeting. In such a situation, a balance between live and pre-recorded talks may be helpful, as well as having the meeting recorded and available to watch at a later time.
Travel itself also has challenges for many people who may benefit from online meetings. Travel time is a luxury that many, such as those with family or other personal commitments, cannot afford. Some individuals may be unable to travel due to disabilities or health limitations, or, especially in near-term post-COVID times, because they have been unable to be vaccinated. Many U.S. National Laboratories have travel caps or bans in place, limiting researchers' travel to in-person meetings and certain countries. And while visa restrictions are not generally a concern for most North American or European passport holders, the costs, administration load, and time involved in visa applications are significant for many researchers around the world. Additionally, while all effort should be made to ensure the safety of attendees at all times, meetings may be held in locations that are potentially life-threatening to participants, given their race, sexual or gender orientation, or religion.
A particular feature of online meetings that has accessibility benefits is the relative ease of availability of captioning for presentations. While this does have additional cost implications for the organisers, it should be considered up-front as a default rather than an after-thought. As automatic captioning in discussions with many technical terms can be complicated and produce inaccuracies, it is highly preferable to accurately caption the content in as close to real time as possible through professional captioning services or other similarly effective approaches. Captioning benefits not only those with hearing impairments, but also non-native speakers, and even new students and researchers who may be unfamiliar with all the technical jargon in the field.
Organising social activities for fully online meetings over the past year has required some innovation, but in the cases where it was done successfully has contributed positively to the inclusivity of the event. The social component is a very important part of in-person meetings, but can be alienating if one does not already know a number of people attending. One of the things to think about going into the future will be how to maintain this inclusivity and avoid a fabricated "class divide" in meetings with both online and in-person components.
A Code of Conduct must exist and be agreed upon by all parties in advance to explicitly outline the rules for participation and the values associated with the event. A carefully crafted Code of Conduct can go a long way towards fostering diversity and inclusion by establishing acceptable behavior and promoting inclusive thinking. However, having an agreed upon Code of Conduct is not enough: clearly stated protocols for reporting and handling of violations, including expectations for anonymity, data privacy, and repercussions for violators, must accompany any advertised Code of Conduct policy; otherwise it risks being ineffectual. Virtual events bring additional challenges regarding conduct issues as compared with in-person events, given the impersonal and sometimes quasi-anonymous nature of remote interactions.
Finally, it is important for event organizers to be proactive about establishing a diverse set of speakers, panelists and participants (and organizers!). There is much more to say on this important topic, especially regarding effective and ineffective approaches to improving diversity in meetings, workshops and conferences (and our research itself!). However, most points are not specific to the virtual event format which is the topic of this report. Still, we would be remiss in not mentioning this critically important aspect of event organization, so we conclude this section with a reminder on this point.
Best Practices
With participants spread out around the globe, the most important goal for any virtual meeting is to most efficiently and effectively use the time that is available. Synchronous time -where all attendees are available and able to be attentive -is to be considered as precious, and careful and thorough planning is necessary to make optimal use of this commodity. Drawing from the experience gained from the conferences and workshops described in Section 3, we list some non-exhaustive suggestions for virtual meeting organizers and attendees to help with smooth running, and to make the meeting as productive as possible.
Timetable Planning
• The most important initial step is to consider the aim and goals of the meeting or conference, as this will dictate the way the timetable is planned.
• For large conferences that aim to show latest results, organizers should carefully consider the fraction of synchronous time dedicated to plenary talks (whether live or pre-recorded), compared to the time spent on questions and discussions, and ensure these are in line with the conference goals.
• For workshops where the goal is to have discussions or brainstorming of solutions, the majority of synchronous time should be dedicated to these activities. Any necessary material should be distributed and reviewed beforehand, though highlights or focal points could be summarised at the start of each session.
• A well planned agenda that includes links to any tools that may be used during the meeting will go a long way in ensuring smooth running and optimal use of time.
• Realistic planning of the timetable is important; meeting chairs should contact speakers in advance to agree on the amount of time allocated for presentations, questions, or any other activities.
Before the Meeting

• For large conferences, dress rehearsals for the local organizing committee in the weeks leading up to the conference can give the organizers experience running the various tools used in the conference, and in working together as a team.
• Written job descriptions should be created for each task needed to run the conference and assigned to individuals. These may include hosting the webinar, speaker assistance, talk recording and posting, session chairing, Q&A handling in the webinar, poster assistance, chat program monitoring, virtual reality platform assistance, email list monitoring, social media content posting, and webinar technical support.
• It is helpful to try to anticipate possible problems in advance. A written trouble-shooting guide should be created that includes a list of possible things that could go wrong during the conference, how one should respond, and the person responsible for handling the problem.
• A Code of Conduct should be a non-negotiable aspect of every conference, workshop, or meeting, to set expectations for behavior during the event. Where registration is required, participants should not be able to register for the conference or receive connection details without agreeing to the Code of Conduct.
• Clearly stated protocols for the reporting and handling of conduct violations should be associated with the Code of Conduct itself. An online reporting system should be made available during the conference. Reports of violations of the Code of Conduct must be able to be made anonymously. Organizers should consider assigning someone outside of the organizing committee to serve as an at-the-ready ombudsperson to investigate complaints and strive for a satisfactory resolution.
• Conference organizers should ask all speakers for their preferred pronouns (provided voluntarily, of course), and to provide audio recordings of each of them pronouncing their own names. The organizers should provide the means (a specific service or, more likely, instructions for common computing platforms) for speakers to easily produce these audio recordings. These audio recordings should be made available to session chairs to ensure correct pronunciation of presenters' names. Presenter pronouns should always be respected.
• Conference organizers should consider asking participants to provide, on a voluntary basis, information beyond the usual registration content to facilitate interactions, such as research interests. For example, a participant might want to know whether others are interested in machine learning approaches to neutrino event reconstruction. A searchable database of keywords from participants' research interests could prove useful to increase interactions. Something similar could be achieved by allowing participants to create topical channels on the event's Slack workspace.
• Detailed instructions should be provided to speakers including a checklist and tips for giving a talk over the conference platform of choice. As an example, see the speaker tips provided by Neutrino2020 [35].
• Rehearsal sessions for plenary talk presenters should be arranged during the weeks leading up to the conference, so that speakers can practice sharing their slides and test their audio and video. Practice sessions could also be set up during the first half hour before the conference on each day of the event, in case speakers want to do a last-minute check of their setup.
• Conference organizers can set up multiple ways for conference participants to communicate with the organizers during the conference including over email, collaboration workspaces such as Slack, and in a virtual reality platform (if available). Be sure to allocate someone responsible for overseeing each.
• A Frequently Asked Questions (FAQ) page (see for example [36]) should be posted online for conference participants and updated daily.
During the Meeting
• Staying on time is possibly the biggest challenge for any meeting or conference -virtual or not. Chairs should have the authority to stop a talk, for example by muting the speaker, if they run overtime. They should also be able to show the speaker how much time they have remaining at any time.
• Have a back channel, instant communication mechanism between key organizers during the meeting to troubleshoot on the fly.
• At the end of each day during the conference, a regrouping session should be held for the local organizing committee to discuss and solve any issues that arose during the conference that day.
Discussion Sessions
• Chairs should communicate their preferred method for taking questions, whether by "raising one's hand", or by comments in the chat panel or a dedicated workspace. Each of these should, of course, be monitored.
• Dedicated workspaces are a useful tool for extending discussions after the allocated time is up, or for additional questions there may not have been time for.
For Presenters
• Speakers, whether live or pre-recorded, must do their best to stick to their allocated time. Running overtime shows, at best, a lack of preparation, and can be construed as disrespect for everyone else involved.
• Speakers should make sure they attend any rehearsal sessions held for them to familiarise themselves with the conference platform.
Socialising
• Section 4 lists a number of tools available to improve the social aspect of fully virtual meetings. While this is no substitute for in-person interactions, use of one or more of these tools is encouraged.
• In some of the smaller meetings, a nice touch by the organizing committee has been sending out small "care packages" which may include things like local snacks, branded merchandise (such as stickers), etc. Whilst this is an additional financial and administrative load, it is well received by participants, helps to foster a feeling of community, and can also give a little bit of local flair.
Looking Forward
It has become exceedingly clear over the past year that online meetings have a variety of strong advantages, and as we move to planning post-pandemic meetings we should strive to maintain as many of these advantages as possible. However, the in-person interactions, informal chats and networking have been greatly missed over the past year, and this is perhaps the strongest motivator for researchers to want to return to in-person meetings as soon as it is allowed. A possible approach to this is to avoid the binary choice between in-person and online, and work toward creating a new, hybrid event format that pulls from the benefits of each. Such an event would host a sufficient number of in-person attendees to be able to cover most of the costs of the conference, whilst also providing the ability to attend online at a reduced cost if the attendee so wishes. The additional logistical requirements for successfully including this virtual component are certainly not negligible, but have already been demonstrated a number of times in events such as the large LHC experiments' collaboration weeks where remote participants are able to attend talks and participate in post-talk discussions via a teleconferencing tool such as Zoom.
In such meetings one has previously been required to be physically present in order to give a talk, a requirement that will need to be relaxed to maintain a more inclusive approach to the meeting, and to avoid a "class divide" that favours those with privileged access to in-person events. Technological challenges such as unreliable internet connections are of course a concern, but one possibility to overcome this has already been seen in virtual conferences over the past year, where talks are pre-recorded and uploaded ahead of time.
On its own, however, this version of a hybrid event still lacks the social and networking opportunities for the virtual participants. A way to overcome this could be to host local conference hubs at strategic locations around the world, where the virtual attendees could get together for discussions amongst themselves. The hubs could be located at a local university with potentially low overhead costs, and could also help to solve some of the technological challenges for more remote participants.
To maintain some level of equity between the meeting host location and the hubs, it should be encouraged that a fraction of the speakers broadcast their talks from the hubs themselves. For those with the means to travel, this could be considered an outreach opportunity, or a chance for a researcher to return to and meet with local researchers from their home country.
A large amount of work goes into organising any conference, and a hybrid conference will certainly prove an even bigger challenge to conference organisers than those that are either fully in-person or fully virtual. Additional costs will be involved, to cover the technological and room requirements necessary for a good online event, while the risk of lower in-person attendance also has financial considerations, as many conference budgets are developed based on a minimum number of (in-person) attendees. An initial approach here could be to charge the usual fee for in-person attendees, and a smaller fee (up to US$50 per person seems to be considered reasonable based on discussions during this workshop) for the online attendees, to cover the additional costs involved with taking the conference online. Local hubs should be discouraged from charging additional fees for attendance at the hub.
Ultimately, conference organisers want their attendees -especially those attending in-person -to have the best experience possible. A particular challenge for the "hybrid with hubs" approach is with time zones, trying to find time slots for the talks and discussions that work for most attendees worldwide. Whilst this is impossible to solve completely, many conferences over the past year have already done a good job at managing their schedule for maximum participation across the globe, providing session recordings for others to be able to catch up on at their convenience, or with pre-recorded talks available shortly before the conference combined with a live discussion session between the speakers during the event. In most of these cases, the actual time spent in live talks has been reduced to half a day instead of a full day, to account for the variation in time zones. This should be considered a positive development, with the other half of the day available for discussions, poster sessions, local outreach activities, or local tours. These supplementary events could be organised in the local hubs as well as the main conference venue, providing part of the social aspect of the conference that virtual attendees would otherwise be missing. Finally, while the conference organisers should be encouraged to make the timetable as accessible as possible for all, in the "hybrid with hubs" approach perhaps a good incentive to attend in-person would be to be able to attend the live talks in the most convenient timezone.
For workshops and conferences that are organized through funded projects or projects yet to be proposed, creative thinking about how to make better use of funds nominally slated for participant costs and project personnel travel to create and curate workshop content would be prudent in the post-pandemic "new normal". For example, these funds could be used to improve accessibility to event content through professional captioning or to pay for services that can produce high-quality training materials to provide more in-depth understanding and amplify the impact after completion of the event.
It would be really encouraging if within the next two years one or more major conferences takes a lead on trialling the "hybrid with hubs" approach and implements many of the suggestions outlined in this report.
Given what our community has experienced with virtual meetings over the last year due to this terrible pandemic, it is hard to imagine the HEP community returning to a pre-pandemic approach of "fully" in-person meetings. Through this ongoing journey, we have learned the value of a substantial virtual component to meetings organization as well as many associated challenges. This is a genie that will not easily go back into the bottle. New approaches such as the "hybrid with hubs" approach explored in our workshop and described in this report as well as technical innovations that continue to blur the lines of experience between virtual presence and physical presence will shape the future of meetings in HEP and beyond.
• Analysis Systems R&D on Scalable Platforms (2019)
• Fast Machine Learning and Inference (2019 & 2020)
• A Coordinated Ecosystem for HL-LHC Computing R&D (2019)
• Software Training (2020)
• Sustainable Software in HEP (2020)
• Future Analysis Systems & Facilities (2020)
• Portable Inference
Figure 1. Day 2 of the workshop was held in a virtual Wilson Hall 1 West space within the gather.town application.
Figure 2. Example use of the Miro collaborative whiteboard to work through a set of questions during Day 2 of the workshop. A set of virtual sticky-notes were annotated and placed in the appropriate square.
Figure 3. Participant survey responses from the May HSF-WLCG workshop on their preference for when presentations should be uploaded.
Figure 4. Survey response on whether participants thought that material should be available in advance and whether they viewed it.
Figure 5. Participant survey responses towards asking questions in the Google Doc live notes.
Figure 6. Participant survey responses at vCHEP about the Mattermost chat tool.
Figure 7. The final LHCP 2020 timeline (CEST time zone) and block structure.
Time format. The first set of questions asked in the survey concerned the choice of the time format of the conference. Most of the participants were satisfied with the choice of concentrating the programme in the hours between 12:30 and 18:30 Central European Summer Time (CEST), with two short 15-minute breaks and several parallel sessions taking place at the same time. On a 0-10 scale, the average satisfaction was 8.7, with a drop to 7.6.
Figure 10. LHCP2020 online survey: overall satisfaction of the conference participants.
Figure 12. The results of the satisfaction survey; 300 participants responded.
Figure 13. Snowmass CPM participation statistics as determined by self-identified Zoom user display name: (a) unique connections vs time; (b) integrated unique participations vs time.
Figure 16. Schedule for SciPy 2021, showing virtual platforms used for different portions.
Figure 17. View of the US-ATLAS Computing Bootcamp Discord server where students and instructors are interacting on questions related to ATLAS CMake and RECAST.
Figure 18. A member of the instructor team uses Dyno's markup syntax to generate an interactive poll to gauge student progress.
Table 1. Workshop Agenda (times are CDT). Presentations are available from the workshop website: https://indico.cern.ch/event/1026363/timetable

Day 1 (via Zoom)
08:30  Welcome, Blueprint Activity and Workshop Overview (M. Neubauer)
08:50  A Virtual Hitchhiker's Guide to Virtual Conferencing (B. Krikler)
09:20  HSF/WLCG Virtual Workshop Experience (G. Stewart)
09:40  LHCP 2020 (G. Marchiori, R. Salerno)
10:00  ICHEP 2020 (T. Davidek)
10:20  Moriond 2021 (V. Varanda)
10:40  Connecting the Dots 2020 (D. Lange)
11:00  LLVM Developers Meeting (V. Vassilev)
11:35  Snowmass Community Planning Meeting (B. Jayatilaka)
11:55  OSG Virtual Meetings (T. Cartwright)
12:15  PyHEP 2020 Experience and 2021 Plans (E. Rodrigues)
12:35  SciPy 2020 Experience and 2021 Plans (J. Guyer)
12:55  US ATLAS / Canada ATLAS Computing Bootcamp (M. Feickert)
13:15  Neutrino 2020 (S. Zeller)
13:35  Discussion Session

Day 2 (via gather.town)
08:30  Day 2 Topics and Goals (M. Neubauer)
08:50  Summary of Community Input from DPF Townhall (M. Narain)
09:20  Tools and Techniques for Virtual Workshops Session (B. Galewsky)
11:00  Taking Stock of Experiences and Discussion on Future Events
12:00  Summary Report Discussion and Writing
Table 2. Number of registrants over the last three ICHEP conferences.

Gender           ICHEP2020          ICHEP2018          ICHEP2016
                 Regs.   Fraction   Regs.   Fraction   Regs.   Fraction
Male             2178    72%        893     77%        1144    80%
Female           772     26%        267     23%        286     20%
Rather not say   54      2%
Other            6       0%
Total            3010    100%       1160    100%       1430    100%
Acknowledgments

We thank the attendees for their active participation in the workshop to discuss these issues and suggest paths forward. This workshop was partially supported through the U.S. National Science Foundation (NSF) under Cooperative Agreement OAC-1836650 (IRIS-HEP).

Disclaimer

Certain commercial software platforms or services are identified in this paper in order to specify the conference formats adequately. This does not imply a recommendation or endorsement by IRIS-HEP, the individual conference organizers or sponsors, the National Institute of Standards and Technology or the National Science Foundation that the software platforms or services identified are necessarily the best available for the purpose.
Ai-Media: An AI/ML-based real-time captioning service. https://www.ai-media.tv/
Airmeet: An online meeting platform. https://www.airmeet.com/
BinderHub: A cloud service to share reproducible interactive computing environments from code repositories. https://binderhub.readthedocs.io/
Crowdcast: Event hosting platform. https://www.crowdcast.io/
Discord: A VoIP, instant messaging, and digital distribution platform. https://discord.com/
Facebook: A social media and social networking service. https://www.facebook.com/
gather.town: An avatar-based meeting platform. https://gather.town/
GitHub: Version control and software development host. https://github.com/
Google Doc: Collaborative document editing service. https://docs.google.com/
Google Meet: A video communication service. https://meet.google.com/
Jitsi: A collection of free and Open Source multiplatform VoIP, video conferencing, and instant messaging applications. https://jitsi.org/
Jupyter: Interactive data science and scientific computing application. https://jupyter.org/
Mattermost: An Open Source collaboration platform. https://mattermost.com/
Mentimeter: Interactive presentations with live polls, quizzes, word clouds, and Q&As. https://www.mentimeter.com
Mibo: An avatar-based meeting platform. https://getmibo.com/
Microsoft Teams: A business communications platform featuring workspace chat and videoconferencing, file storage, and application integration. https://teams.microsoft.com/
Miro: An online collaborative whiteboard platform. https://miro.com/about
Mozilla Hubs: An avatar-based meeting platform. https://hubs.mozilla.com
Mural: A digital workspace for visual collaboration. https://mural.co
Otter.ai: An AI/ML-based real-time speech to text transcription service. https://otter.ai/
RemotelyGreen: AI driven online networking events. https://remotely.green
Remo: An interactive virtual event platform. https://remo.co
Skype: A video and voice calling platform. https://www.skype.com/
Slack: A virtual workspace, featuring chat, teams, document sharing, and app integration. https://slack.com/
Slido: An interactive app for hybrid meetings. https://www.sli.do/
Twitter: A micro-blogging and social networking service. https://www.twitter.com/
Vidyo: A real-time video communication platform. https://www.vidyo.com/
Vimeo: A video sharing platform. https://vimeo.com/
Webcast: A CERN-hosted webcast server. https://webcast.web.cern.ch/
Whova: Event management software. https://whova.com/
Wonder: A virtual meeting space. https://www.wonder.me/
YouTube: A video sharing platform. https://www.youtube.com/
Zenodo: A general-purpose open-access repository for deposition of research papers, data sets, research software, reports, and any other research related digital artifacts. https://zenodo.org/
Zoom: A video conferencing platform. https://zoom.us/
References
1. Institute for Research and Innovation in Software for High-Energy Physics. https://iris-hep.org
2. The HEP Software Foundation et al., "A roadmap for HEP software and computing R&D for the 2020s", Computing and Software for Big Science 3 (Mar, 2019) 7, doi:10.1007/s41781-018-0018-8.
3. P. Elmer, M. Neubauer, and M. D. Sokoloff, "Strategic Plan for a Scientific Software Innovation Institute (S2I2) for High Energy Physics", arXiv:1712.06592.
4. HSF WLCG Virtual Workshop on New Architectures, Portability and Sustainability. https://indico.cern.ch/event/908146
5. HSF WLCG Virtual Workshop. https://indico.cern.ch/event/941278
6. HSF-WLCG Workshop Organisers, "Feedback from May 2020 HSF-WLCG Workshop". https://indico.cern.ch/event/925974/
7. HSF Workshop Organisers, "Feedback from November 2020 HSF-WLCG Workshop". https://indico.cern.ch/event/1000087/
8. 25th International Conference on Computing in High-Energy and Nuclear Physics. https://indico.cern.ch/event/948465
9. LHCP 2020. https://indico.cern.ch/event/856696/attachments/1964474/3531885/LHCP2020_stats.pdf
10. The LLVM Project. http://llvm.org
11. Virtual LLVM Developers' Meeting. https://llvm.org/devmtg/2020-09
12. B Line Events. https://blineevents.com/
13. Snowmass 2021 (virtual) conference. https://snowmass21.org
14. Snowmass Community Planning Meeting. https://indico.fnal.gov/event/44870
15. OSG All-Hands Meeting 2020. https://opensciencegrid.org/all-hands/2020
16. OSG All-Hands Meeting 2021. https://opensciencegrid.org/all-hands/2021
17. OSG Virtual School Pilot 2020. https://opensciencegrid.org/virtual-school-pilot-2020
18. OSG Virtual School 2021. https://opensciencegrid.org/virtual-school-2021
19. HEP Software Foundation PyHEP Working Group. https://hepsoftwarefoundation.org/workinggroups/pyhep.html
20. PyHEP 2021 (virtual) workshop. https://indico.cern.ch/e/PyHEP2021
21. SciPy: Annual Scientific Python Conference Talks & Tutorials. https://www.youtube.com/c/enthought/playlists
22. US-ATLAS Computing Bootcamp 2020 Indico page. https://indico.cern.ch/event/933434/. Accessed on 2021-06-11.
23. US-ATLAS Computing Bootcamp 2020 Website. https://matthewfeickert.github.io/usatlas-computing-bootcamp-2020/. Accessed on 2021-06-11.
24. Mozilla Foundation's *Privacy Not Included Guide: Discord. https://foundation.mozilla.org/en/privacynotincluded/discord/. Accessed on 2021-06-11.
25. Neutrino 2020 (virtual) conference. http://nu2020.fnal.gov
26. Neutrino 2020 conference archive on Zenodo. https://zenodo.org/communities/neutrino2020-talks/?page=1&size=20
27. Neutrino 2020 posters. https://nusoft.fnal.gov/nova/nu2020postersession/
29. Neutrino 2020 Virtual Reality. https://conferences.fnal.gov/nu2020/virtual-reality/
30. Neutrino 2020 Physics Slam. https://www.c2st.org/event/neutrino-2020-physics-slam/
31. Neutrino 2020 Virtual Reality on Twitter. https://twitter.com/search?q=%40nu2020_chicago%20%22VR%22&src=typed_query
32. "More than 3000 Scientists Gather Online for Neutrino 2020", Symmetry magazine article, July 13, 2020. https://www.symmetrymagazine.org/article/more-than-3000-scientists-gather-online-for-neutrino-2020
33. "Neutrino 2020 Zooms Into Virtual Reality", CERN Courier article, July 23, 2020. https://cerncourier.com/a/neutrino-2020-zooms-into-virtual-reality/
34. Neutrino 2020 Code of Conduct. https://conferences.fnal.gov/nu2020/conduct/
36. Neutrino 2020 Frequently Asked Questions (FAQ) page. https://conferences.fnal.gov/nu2020/faqs/
Exact minimax risk for linear least squares, and the lower tail of sample covariance matrices

Jaouad Mourtada (CREST, ENSAE, Institut Polytechnique de Paris, France; [email protected])

arXiv:1912.10754, 24 Feb 2022
We consider random-design linear prediction and related questions on the lower tail of random matrices. It is known that, under boundedness constraints, the minimax risk is of order d/n in dimension d with n samples. Here, we study the minimax expected excess risk over the full linear class, depending on the distribution of covariates. First, the least squares estimator is exactly minimax optimal in the well-specified case, for every distribution of covariates. We express the minimax risk in terms of the distribution of statistical leverage scores of individual samples, and deduce a minimax lower bound of d/(n − d + 1) for any covariate distribution, nearly matching the risk for Gaussian design. We then obtain sharp nonasymptotic upper bounds for covariates that satisfy a "small ball"type regularity condition in both well-specified and misspecified cases.Our main technical contribution is the study of the lower tail of the smallest singular value of empirical covariance matrices at small values. We establish a lower bound on this lower tail, valid for any distribution in dimension d 2, together with a matching upper bound under a necessary regularity condition. Our proof relies on the PAC-Bayes technique for controlling empirical processes, and extends an analysis of Oliveira devoted to a different part of the lower tail.
Introduction
Linear least-squares regression, also called random-design linear regression or linear aggregation, is one of the basic statistical prediction problems. Specifically, given a random pair (X, Y ) where X is a covariate vector in R d and Y is a scalar response, the aim is to predict Y using a linear function β, X = β ⊤ X (with β ∈ R d ) of X as well as possible, in a sense measured by the prediction risk with squared error R(β) = E[(Y − β, X ) 2 ]. The best prediction is achieved by the population risk minimizer β * , which equals:
\[
\beta^* = \Sigma^{-1}\,\mathbb{E}[YX]
\]
where $\Sigma := \mathbb{E}[XX^\top]$, assuming that both $\Sigma$ and $\mathbb{E}[YX]$ are well-defined and that $\Sigma$ is invertible. In the statistical setting considered here, the joint distribution $P$ of the pair $(X, Y)$ is unknown. The goal is then, given a sample $(X_1, Y_1), \dots, (X_n, Y_n)$ of $n$ i.i.d. realizations of $P$, to find a predictor (also called estimator) $\hat\beta_n$ with small excess risk $\mathcal{E}_P(\hat\beta_n) := R(\hat\beta_n) - R(\beta^*) = \|\hat\beta_n - \beta^*\|_\Sigma^2$, where we define $\|\beta\|_\Sigma^2 := \langle \Sigma\beta, \beta\rangle = \|\Sigma^{1/2}\beta\|^2$. Arguably the most common procedure is the Ordinary Least Squares (OLS) estimator (that is, the empirical risk minimizer), defined by
\[
\hat\beta_n^{\mathrm{LS}} := \operatorname*{arg\,min}_{\beta \in \mathbb{R}^d} \frac{1}{n}\sum_{i=1}^n \big(Y_i - \langle \beta, X_i\rangle\big)^2 = \hat\Sigma_n^{-1} \cdot \frac{1}{n}\sum_{i=1}^n Y_i X_i,
\]
with $\hat\Sigma_n := n^{-1}\sum_{i=1}^n X_i X_i^\top$ the sample covariance matrix. Linear classes are of particular importance to regression problems, both in themselves and since they naturally appear in the context of nonparametric estimation [GKKW02, Tsy09]. In this note, we analyze this problem from a decision-theoretic perspective, focusing on the minimax excess risk with respect to the full linear class $\mathcal{F} = \{x \mapsto \langle \beta, x\rangle : \beta \in \mathbb{R}^d\}$, and in particular on its dependence on the distribution of $X$. The minimax perspective is relevant when little is known or assumed on the optimal parameter $\beta^*$. Specifically, define the minimax excess risk (see, e.g., [LC98]) with respect to $\mathcal{F}$ under a set $\mathcal{P}$ of joint distributions $P$ on $(X, Y)$ as:
\[
\inf_{\hat\beta_n} \sup_{P \in \mathcal{P}} \mathbb{E}\big[\mathcal{E}_P(\hat\beta_n)\big] = \inf_{\hat\beta_n} \sup_{P \in \mathcal{P}} \Big( \mathbb{E}\big[R(\hat\beta_n)\big] - \inf_{\beta \in \mathbb{R}^d} R(\beta) \Big), \tag{1}
\]
where the infimum in (1) spans over all estimators β n based on n samples, while the expectation and the risk R depend the underlying distribution P . Our aim is to characterize the influence of the distribution P X of covariates on the hardness of the problem. Hence, our considered classes P of distributions are obtained by fixing the marginal distribution of X, and letting the optimal regression parameter β * vary freely in R d (see Section 2). Some minimal regularity condition on the distribution P X is required to ensure even finiteness of the minimax risk (1) in the random-design setting. Indeed, assume that the distribution P X charges some positive mass on a hyperplane H ⊂ R d (we call such a distribution degenerate, see Definition 1). Then, with positive probability, all points X 1 , . . . , X n in the sample lie within H, so that the component of the optimal parameter β * which is orthogonal to H cannot be estimated. However, this component matters for out-of-sample prediction, in case the point X for which one wishes to compute prediction does not belong to H. Such a degeneracy (or quantitative variants, where P X puts too much mass at the neighborhood of a hyperplane) turns out to be the main obstruction to achieving controlled uniform excess risk over R d .
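As an illustration of this setup (not part of the original analysis; the design distribution, dimensions, noise level, and all parameter values below are arbitrary choices), the following minimal Python sketch simulates a well-specified linear model, computes the OLS estimator in its closed form, and compares a Monte Carlo estimate of the expected excess risk to the fixed-design benchmark value sigma^2 d / n.

```python
# Minimal sketch (assumed toy setup, not from the paper): Monte Carlo estimate of the
# expected excess risk E[||beta_hat - beta*||_Sigma^2] of OLS under a simulated design.
import numpy as np

rng = np.random.default_rng(0)
n, d, sigma = 200, 10, 1.0
Sigma = np.diag(np.linspace(0.5, 2.0, d))          # population covariance E[X X^T] (diagonal here)
beta_star = rng.normal(size=d)                     # arbitrary "true" parameter

def mean_excess_risk(n_trials=2000):
    risks = []
    for _ in range(n_trials):
        # Sigma is diagonal, so its elementwise sqrt is a valid matrix square root.
        X = rng.normal(size=(n, d)) @ np.sqrt(Sigma)    # X_i ~ N(0, Sigma)
        y = X @ beta_star + sigma * rng.normal(size=n)  # well-specified model with independent noise
        beta_hat = np.linalg.solve(X.T @ X, X.T @ y)    # OLS, assuming X^T X is invertible
        delta = beta_hat - beta_star
        risks.append(delta @ Sigma @ delta)             # excess risk ||beta_hat - beta*||_Sigma^2
    return float(np.mean(risks))

print(mean_excess_risk(), sigma**2 * d / n)  # empirical excess risk vs the sigma^2 d / n benchmark
```

In this regime (d much smaller than n) the two printed values should be close, while the text below explains why the random-design minimax risk can exceed sigma^2 d / n.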
The second part of this note (Section 3) is devoted to the study of the sample covariance matrix
\[
\hat\Sigma_n := \frac{1}{n}\sum_{i=1}^n X_i X_i^\top, \tag{2}
\]
where X 1 , . . . , X n are i.i.d. samples from P X . Indeed, upper bounds on the minimax risk require a control of relative deviations of the empirical covariance matrix Σ n with respect to its population counterpart Σ, in the form of negative moments of the rescaled covariance matrix Σ n := Σ −1/2 Σ n Σ −1/2 , namely
\[
\mathbb{E}\big[\lambda_{\min}(\widetilde\Sigma_n)^{-q}\big] \tag{3}
\]
where $q \geq 1$ and $\lambda_{\min}(A)$ is the smallest eigenvalue of a symmetric matrix $A$.
Control of lower relative deviations of Σ n with respect to Σ can be expressed in terms of lower-tail bounds, of the form
\[
\mathbb{P}\big(\lambda_{\min}(\widetilde\Sigma_n) \leq t\big) \leq \delta, \tag{4}
\]
where t, δ ∈ (0, 1). Sub-Gaussian tail bounds for λ min ( Σ n ), of the form (4) with
\[
\delta = \exp\Big(-c\, n\, \big(1 - C\sqrt{d/n} - t\big)_+^2\Big)
\]
for some constants c, C depending on P X , as well as similar bounds for the largest eigenvalue λ max ( Σ n ), can be obtained under the (strong) assumption that X is sub-Gaussian (see, e.g., [Ver12]). Remarkably, it is shown in [Oli16,KM15] that such bounds can be obtained for the smallest eigenvalue under much weaker assumptions on X, namely bounded fourth moments of linear marginals of X.
While sub-Gaussian bounds provide a precise control of deviations (4) for t ∈ (c, 1−C d/n) (for some constants c, C), they do not suffice to control moments of λ min ( Σ n ) −1 . Indeed, such bounds "saturate" in the sense that δ = δ(t) does not tend to 0 as t → 0; in other words, they provide no nonvacuous guarantee (4) with t > 0 as the confidence level 1 − δ tends to 1. This prevents one from integrating such tail bounds and deduce a control of moments of the form (3). In fact, the covariance matrix of a sub-Gaussian matrix can be singular with positive probability (exponentially small in n), for instance for matrices with independent Bernoulli entries; in order to ensure invertibility at all confidence levels, different regularity assumptions are required. In Section 3, we complement the sub-Gaussian tail bounds by a study of non-asymptotic large deviation bounds (4) with δ = exp(−nψ(t)) for small values of t, namely t ∈ (0, c).
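To make the two tail regimes concrete, here is a small Monte Carlo sketch (an illustration under assumed isotropic Gaussian covariates, not taken from the paper; the values of n, d, and t are arbitrary). It estimates the lower-tail probability in (4) empirically; for moderate t the frequency is visible, while for small t the probability is of order (ct)^{n/2} (see Proposition 4 below) and falls far below Monte Carlo resolution, which is precisely why the small-t regime requires a dedicated analysis.

```python
# Sketch (assumed toy setup): empirical lower tail P(lambda_min(Sigma_hat_n) <= t)
# for standard Gaussian covariates, where Sigma = I_d so Sigma_tilde_n = Sigma_hat_n.
import numpy as np

rng = np.random.default_rng(1)
n, d, trials = 20, 5, 20000
lam_min = np.empty(trials)
for k in range(trials):
    X = rng.normal(size=(n, d))              # n i.i.d. samples in dimension d
    Sigma_hat = X.T @ X / n                  # sample covariance matrix, eq. (2)
    lam_min[k] = np.linalg.eigvalsh(Sigma_hat).min()

for t in (0.3, 0.1, 0.02):
    # For the smallest t this frequency is typically 0.0: the event is exponentially rare.
    print(t, float(np.mean(lam_min <= t)))
```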
Summary of contributions
Below is an overview of our results on least squares regression, which appear in Section 2:
1. We determine the minimax excess risk in the well-specified case (where the true regression function x → E[Y |X = x] is linear) for every distribution P X of features and noise level σ 2 . For some "degenerate" distributions (Definition 1), the minimax risk is infinite (Proposition 1); while for non-degenerate ones, the OLS estimator is exactly minimax (Theorem 1) irrespective of P X , σ 2 .
2. We express the minimax risk in terms of the distribution of statistical leverage scores of samples drawn from P X (Theorem 2). Quite intuitively, distributions of X for which leverage scores are uneven are seen to be harder from a minimax point of view. We deduce a precise minimax lower bound of σ 2 d/(n − d + 1), valid for every distribution P X of covariates. This lower bound nearly matches the σ 2 d/(n − d− 1) risk for centered Gaussian covariates, in both low (d/n → 0) and moderate (d/n → γ ∈ (0, 1)) dimensions; hence, Gaussian covariates are almost the "easiest" ones in terms of minimax risk. This provides a counterpart to results obtained in the moderate-dimensional regime for independent covariates from the Marchenko-Pastur law.
3. We then turn to upper bounds on the minimax risk. Under some quantitative variant of the non-degeneracy assumption (Assumption 1) together with a fourth-moment condition on P X (Assumption 2 or 3), we show that the minimax risk is finite and scales as (1 + o(1))σ 2 d/n when d = o(n), both in the well-specified (Theorem 3) and misspecified (Proposition 3) cases. In particular, OLS is asymptotically minimax in the misspecified case as well, as d/n → 0. To our knowledge, this gives the first bounds on the expected risk of the OLS estimator for general random design distribution.
The previous upper bounds rely on the study of the lower tail of the sample covariance matrix Σ n , carried out in Section 3. Our contributions here are the following (assuming, to simplify notation, that E[XX ⊤ ] = I d ):
4. First, we establish a lower bound on the lower tail of $\lambda_{\min}(\hat\Sigma_n)$, for $d \geq 2$ and any distribution $P_X$ such that $\mathbb{E}[XX^\top] = I_d$, of the form: $\mathbb{P}(\lambda_{\min}(\hat\Sigma_n) \leq t) \geq (ct)^{n/2}$ for some numerical constant $c$ and every $t \in (0, 1)$ (Proposition 4). We also exhibit a "small-ball" condition (Assumption 1) which is necessary to achieve similar upper bounds.
5. Under Assumption 1, we show a matching upper bound on the lower tail $\mathbb{P}(\lambda_{\min}(\hat\Sigma_n) \leq t)$, valid for all $t \in (0, 1)$, and in particular for small $t$. This result (Theorem 4) is the core technical contribution of this paper. Its proof relies on the PAC-Bayesian technique for controlling empirical processes, which was used by [Oli16] to control a different part of the lower tail; however, some non-trivial refinements (such as non-Gaussian smoothing) are needed to handle small values of $t$. This result can be equivalently stated as an upper bound on moments of $\lambda_{\min}(\hat\Sigma_n)^{-1}$, namely $\|\lambda_{\min}(\hat\Sigma_n)^{-1}\|_{L^q} = O(1)$ for $q \asymp n$ (Corollary 4).
6. Finally, we discuss in Section 3.3 the case of independent covariates. In this case, the "small-ball" condition (Assumption 1) holds naturally under mild regularity assumptions on the distribution of individual coordinates. A result of [RV14] establishes this for coordinates with bounded density; we complement it by a general anti-concentration result for linear combination of independent variables (Proposition 6), implying Assumption 1 for sufficiently "non-atomic" coordinates.
Related work
Linear least squares regression is a classical problem, and the literature on this topic is too vast to be surveyed here; we refer to [GKKW02,AC10,HKZ14] (and references therein) for a more thorough overview. In addition, while we focus on mean-squared prediction error, different criteria can be considered, as in the predictive inference literature [RWG19]. Analysis of least squares regression is most standard and straightforward in the fixed design setting, where the covariates X 1 , . . . , X n are deterministic and the risk is evaluated within-sample; in this case, the expected excess risk of the OLS estimator is bounded by σ 2 d/n (see, e.g., [HKZ14]
). In the random-design setting, if the regression function $g^*(x) = \mathbb{E}[Y\,|\,X = x]$ satisfies $|g^*(X)| \leq L$ almost surely, then the risk $R(g) = \mathbb{E}[(g(X) - Y)^2]$ of the (nonlinear) truncated ERM estimator, defined by $\hat g_n^L(x) = \max\big(-L, \min(L, \langle \hat\beta_n^{\mathrm{LS}}, x\rangle)\big)$, is at most
\[
\mathbb{E}\big[R(\hat g_n^L)\big] - R(g^*) \leq 8\big(R(\beta^*) - R(g^*)\big) + C \max(\sigma^2, L^2)\, \frac{d(\log n + 1)}{n} \tag{5}
\]
for some universal constant C > 0. This result is an inexact oracle inequality, where the risk is bounded by a constant times that of the best linear predictor β * . Such guarantees are adequate in a nonparametric setting, where the approximation error R(β * ) − R(g * ) of the linear model is itself of order O(d/n) [GKKW02]. On the other hand, when no assumption is made on the magnitude of the approximation error, this bound does not ensure that the risk of the estimator approaches that of β * . By contrast, in the linear aggregation problem as defined by [Nem00] (and studied by [Tsy03, Cat04, BTW07, AC11, HKZ14, LM16, Men15, Oli16]), one seeks to obtain excess risk bounds, also called exact oracle inequalities (where the constant 8 in the bound (5) is replaced by 1), with respect to the linear class. In this setting, Tsybakov [Tsy03] showed that the minimax rate of aggregation is of order O(d/n), under boundedness assumptions on the regression function and on covariates. It is also worth noting that bounds on the regression function also implicitly constrain the optimal regression parameter to lie in some ball. This contrasts with the approach considered here, where minimax risk with respect to the full linear class is considered. Perhaps most different from the point of view adopted here is the approach from [Fos91, Vov01, AW01, Sha15, BKM + 15], whose authors consider worstcase covariates (either in the individual sequences or in the agnostic learning setting) under boundedness assumptions on both covariates and outputs, and investigate achievable excess risk (or regret) bounds with respect to bounded balls in this case. By contrast, we take the distribution of covariates as given and allow the optimal regression parameter to be arbitrary, and study under which conditions on the covariates uniform bounds are achievable. Another type of non-uniform guarantees over linear classes is achieved by Ridge regression [Hoe62,Tik63] in the context of reproducing kernel Hilbert spaces [CS02a , CS02b, DVCR05, CDV07, SZ07, SHS09, AC11, HKZ14], where the bounds do not depend explicitly on the dimension d, but rather on spectral properties of Σ and some norm of β * . This work is concerned with the expected risk. Risk bounds in probability are obtained, among others, by [AC11,HKZ14,HS16,Oli16,Men15,LM16]. While such bounds hold with high probability, the probability is upper bounded and cannot be arbitrarily close to 1, so that they cannot be integrated to control the expected risk. Indeed, some additional regularity conditions are required in order to have finite minimax risk, as will be seen below. To the best of our knowledge, the only available uniform expected risk bounds for random-design regression are obtained in the case of Gaussian covariates, where they rely on the knowledge of the closed-form distribution of inverse covariance matrices [Ste60,BF83,And03]. One reason for considering the expected risk is that it is a single scalar, which can be more tightly controlled (in terms of matching upper and lower bounds) and compared across distributions than quantiles. In addition, random-design linear regression is a classical statistical problem, which justifies its precise decision-theoretic analysis. On the other hand, expected risk only provides limited information on the tails of the risk in the high-confidence regime: in the case of heavy-tailed noise, the OLS estimator may perform poorly, and dedicated robust estimators may be required (see, e.g., [AC11] and the references in [LM19]).
Another line of work [EK13,Dic16,DM16,EK18,DW18] considers the limiting behavior of regression procedures in the high-dimensional asymptotic regime where d, n tend to infinity at a proportional rate, with their ratio kept constant [Hub73]. The results in this setting take the form of a convergence in probability of the risk to a limit depending on the ratio d/n as well as the properties of β * . With the notable exception of [EK18], the previous results hold under the assumption that the covariates are either Gaussian, or have a joint independence structure that leads to the same limiting behavior in high dimension. In contrast, here we consider non-asymptotic bounds valid for fixed n, d, general design distribution and uniformly over β * ∈ R d .
The study of spectral properties of sample covariance matrices has a rich history (see for instance [BS10,AGZ10,Tao12] and references therein); we refer to [RV10] for an overview of results (up to 2010) on the non-asymptotic control of the smallest eigenvalue of sample covariance matrices, which is the topic of Section 3. It is well-known [Ver12] that sub-Gaussian tail bounds on both the smallest and largest eigenvalues can be obtained under sub-Gaussian assumptions on covariates (see also [KL17] for operator norm concentration under general population covariance). A series of work obtained control on these quantities under weaker assumptions [ALPTJ10,MP14,SV13,Tik18]. A key observation, which has been exploited in a series of work [SV13,KM15,Oli16,Yas14,Yas15,vdGM14], is that the smallest eigenvalue can be controlled under much weaker tail assumptions than the largest one. Our study follows this line of work, but considers a different part of the lower tail, which poses additional technical difficulties; we also provide a general lower bound on the lower tail.
Notation. Throughout this text, the transpose of an m × n real matrix A is denoted A ⊤ , its trace (when m = n) Tr(A), and vectors in R d are identified with d × 1 column vectors. In addition, the coordinates of a vector x ∈ R d are indicated as superscripts: x = (x j ) 1 j d . We also denote x, z = x ⊤ z = d j=1 (x j ) · (z j ) the canonical scalar product of x, z ∈ R d , and x = x, x 1/2 the associated Euclidean norm. In addition, for any symmetric and positive d×d matrix A, we define the scalar product x, z A = Ax, z and norm x A = Ax, x 1/2 = A 1/2 x . The d × d identity matrix is denoted I d , while S d−1 = {x ∈ R d : x = 1} refers to the unit sphere. The smallest and largest eigenvalues of a symmetric matrix A are denoted λ min (A) and λ max (A) respectively; if A is positive definite, then λ max (A) = A op is the operator norm of A (with respect to · ), while λ min (A) = A −1 −1 op . We denote by dist(x, A) = inf y∈A x − y the distance of x ∈ R d to a subset A ⊂ R d .
Exact minimax analysis of least-squares regression
This section is devoted to the minimax analysis of the linear least-squares problem, and in particular on the dependence of its hardness on the distribution P X of covariates. In Section 2.1, we indicate the exact minimax risk and estimator in the well-specified case, namely on the class P well (P X , σ 2 ). In Section 2.2, we express the minimax risk in terms of the distribution of statistical leverage scores, and deduce a general lower bound. Finally, Section 2.3 provides upper bounds on the minimax risk under some regularity condition on the distribution P X , both in the well-specified and misspecified cases. Throughout this note, we assume that the covariate vector X satisfies E[ X 2 ] < +∞, and denote Σ = E[XX ⊤ ] its covariance matrix (by a slight but common abuse of terminology, we refer to Σ as the covariance matrix of X even when X is not centered). In addition, we assume that Σ is invertible, or equivalently that the support of X is not contained in any hyperplane; this assumption is not restrictive (up to restricting to the span of the support of X, a linear subspace of R d ) and only serves to simplify notations. Then, for every distribution
of $Y$ given $X$ such that $\mathbb{E}[Y^2] < +\infty$, the risk $R(\beta) = \mathbb{E}[(\langle \beta, X\rangle - Y)^2]$ of any $\beta \in \mathbb{R}^d$ is finite; this risk is uniquely minimized by $\beta^* = \Sigma^{-1}\mathbb{E}[YX]$, where $\mathbb{E}[YX]$ is well-defined since $\mathbb{E}[\|YX\|] \leq \mathbb{E}[Y^2]^{1/2}\, \mathbb{E}[\|X\|^2]^{1/2} < +\infty$. The response $Y$ then writes
\[
Y = \langle \beta^*, X\rangle + \varepsilon, \tag{6}
\]
where ε is the error, with
E[εX] = E[Y X] − Σβ * = 0.
The distribution P of (X, Y ) is then characterized by the distribution P X of X, the coefficient β * ∈ R d as well as the conditional distribution of ε given X, which satisfies E[ε 2 ] E[Y 2 ] < +∞ and E[εX] = 0. Now, given a distribution P X of covariates and a bound σ 2 on the conditional second moment of the error, define the following three classes, where Y is given by (6):
\[
\begin{aligned}
\mathcal{P}_{\mathrm{Gauss}}(P_X, \sigma^2) &= \big\{ P_{(X,Y)} : X \sim P_X,\ \beta^* \in \mathbb{R}^d,\ \varepsilon \,|\, X \sim \mathcal{N}(0, \sigma^2) \big\} \\
\mathcal{P}_{\mathrm{well}}(P_X, \sigma^2) &= \big\{ P_{(X,Y)} : X \sim P_X,\ \beta^* \in \mathbb{R}^d,\ \mathbb{E}[\varepsilon \,|\, X] = 0,\ \mathbb{E}[\varepsilon^2 \,|\, X] \leq \sigma^2 \big\} \\
\mathcal{P}_{\mathrm{mis}}(P_X, \sigma^2) &= \big\{ P_{(X,Y)} : X \sim P_X,\ \beta^* \in \mathbb{R}^d,\ \mathbb{E}[\varepsilon^2 \,|\, X] \leq \sigma^2 \big\}
\end{aligned} \tag{7}
\]
The class P Gauss corresponds to the standard case of independent Gaussian noise, while P well includes all well-specified distributions, such that the true regression function x → E[Y |X = x] is linear. Finally, P mis corresponds to the general misspecified case, where the regression function
x → E[Y |X = x]
is not assumed to be linear.
Minimax analysis of linear least squares
We start with the following definition.
Definition 1. The distribution P X on R d is degenerate if there exists a linear hyperplane H ⊂ R d such that P(X ∈ H) > 0 (that is, if there exists some θ ∈ S d−1 such that P( θ, X = 0) > 0).
Fact 1. Let $n \geq d$. The following properties are equivalent:
1. The distribution P X is non-degenerate;
2. The sample covariance matrix Σ n is invertible almost surely;
3. The ordinary least-squares (OLS) estimator
\[
\hat\beta_n^{\mathrm{LS}} := \operatorname*{arg\,min}_{\beta \in \mathbb{R}^d} \sum_{i=1}^n \big(\langle \beta, X_i\rangle - Y_i\big)^2 \tag{8}
\]
is uniquely defined almost surely, and equals $\hat\beta_n^{\mathrm{LS}} = \hat\Sigma_n^{-1} \cdot n^{-1} \sum_{i=1}^n Y_i X_i$.
Proof. The equivalence between the second and third points is standard: the empirical risk being convex, its global minimizers are the critical points β characterized by Σ n β = n −1 n i=1 Y i X i . We now prove that the second point implies the first, by contraposition. If P( θ, X = 0) = p > 0 for some θ ∈ S d−1 , then with probability p n , θ, X i = 0 for i = 1, . . . , n, so that Σ n θ = n −1 n i=1 θ, X i X i = 0 and thus Σ n is not invertible. Conversely, let us show that the first point implies the second one. Note that the latter amounts to saying that X 1 , . . . , X n span R d almost surely. It suffices to show this for n = d, which we do by showing that, almost surely, V k = span(X 1 , . . . , X k ) is of dimension k for 0 k d, by induction on k. The case k = 0 is clear. Now, assume that k d and that V k−1 is of dimension k − 1 d − 1 almost surely. Then, V k−1 is contained in a hyperplane of R d , and since X k is independent of V k−1 , the first point implies that P(X k ∈ V k−1 ) = 0, so that V k is of dimension k almost surely.
Remark 1 (Intercept). Assume that X = (X j ) 1 j d , where X d ≡ 1 is an intercept variable. Then, the distribution P X is degenerate if and only if there exists θ = (θ j ) 1 j<d ∈ R d−1 \ {0} and c ∈ R such that d−1 j=1 θ j X j = c with positive probability. This amounts to say that (X 1 , . . . , X d−1 ) belongs to some fixed affine hyperplane of R d−1 with positive probability.
The following result shows that non-degeneracy of the design distribution is necessary to obtain finite minimax risk.
Proposition 1 (Degenerate case). Assume that either n < d, or that the distribution P X of X is degenerate, in the sense of Definition 1. Then, the minimax excess risk with respect to the class P Gauss (P X , σ 2 ) is infinite.
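To make the obstruction behind Proposition 1 concrete, the following minimal simulation (not from the paper; the hyperplane, dimensions, and mass p are arbitrary choices) draws a design that is degenerate in the sense of Definition 1, putting probability p on the hyperplane where the last coordinate vanishes. With probability at least p^n the whole sample lies in that hyperplane, so the sample covariance matrix is singular and the corresponding direction of beta* cannot be estimated.

```python
# Sketch (assumed toy design): a degenerate distribution in the sense of Definition 1.
import numpy as np

rng = np.random.default_rng(4)
n, d, p, trials = 5, 3, 0.5, 50000

singular = 0
for _ in range(trials):
    X = rng.normal(size=(n, d))
    mask = rng.random(n) < p
    X[mask, -1] = 0.0                      # with probability p, X lies in the hyperplane {x^d = 0}
    if np.linalg.matrix_rank(X) < d:       # equivalent to Sigma_hat_n = X^T X / n being singular
        singular += 1

print(singular / trials, p**n)             # empirical frequency of a singular Sigma_hat_n vs p^n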
An infinite minimax excess risk means that some dependence on the true parameter $\beta^*$ (for instance, through its norm) is unavoidable in the expected risk of any estimator $\hat\beta_n$. From now on and until the rest of this section, we assume that the distribution $P_X$ is non-degenerate and that $n \geq d$. In particular, the OLS estimator is well-defined, and the empirical covariance matrix $\hat\Sigma_n$ is invertible almost surely. Theorem 1 below provides the exact minimax excess risk and estimator in the well-specified case.
Theorem 1. Assume that $P_X$ is non-degenerate and $n \geq d$. The minimax risks over classes $\mathcal{P}_{\mathrm{well}}(P_X, \sigma^2)$ and $\mathcal{P}_{\mathrm{Gauss}}(P_X, \sigma^2)$ coincide, and equal
\[
\inf_{\hat\beta_n} \sup_{P \in \mathcal{P}_{\mathrm{well}}(P_X, \sigma^2)} \mathbb{E}\big[\mathcal{E}_P(\hat\beta_n)\big] = \frac{\sigma^2}{n} \cdot \mathbb{E}\big[\mathrm{Tr}(\widetilde\Sigma_n^{-1})\big] \tag{9}
\]
where Σ n = Σ −1/2 Σ n Σ −1/2 is the rescaled empirical covariance matrix. In addition, the minimax risk is achieved by the OLS estimator (8) over the classes P Gauss (P X , σ 2 ) and P well (P X , σ 2 ) for every P X and σ 2 .
The proof of Theorem 1 and Proposition 1 is provided in Section 5.2, and relies on standard decision-theoretic arguments (see [Tsy09, Chapter 2] and [Joh19, Section 4.10]). First, an upper bound (in the non-degenerate case) over P well (P X , σ 2 ) is obtained for the OLS estimator. Then, a matching lower bound on the minimax risk over the subclass P Gauss (P X , σ 2 ) is established by considering the Bayes risk under Gaussian prior on β * and using a monotone convergence argument.
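The exact expression (9) can also be checked numerically in a special case. The sketch below (an illustration with assumed standard Gaussian covariates and arbitrary values of n, d, sigma; not part of the paper's proofs) estimates sigma^2 / n * E[Tr(Sigma_tilde_n^{-1})] by Monte Carlo and compares it to the sigma^2 d / (n - d - 1) value for centered Gaussian design quoted in the summary of contributions, as well as to the fixed-design benchmark sigma^2 d / n.

```python
# Sketch (assumed Gaussian design): Monte Carlo estimate of the exact minimax risk (9).
import numpy as np

rng = np.random.default_rng(2)
n, d, sigma, trials = 40, 8, 1.0, 20000
traces = np.empty(trials)
for k in range(trials):
    X = rng.normal(size=(n, d))                 # Sigma = I_d, so Sigma_tilde_n = Sigma_hat_n
    Sigma_hat = X.T @ X / n
    traces[k] = np.trace(np.linalg.inv(Sigma_hat))

print(sigma**2 / n * traces.mean())   # Monte Carlo estimate of the minimax risk (9)
print(sigma**2 * d / (n - d - 1))     # closed-form value for centered Gaussian covariates
print(sigma**2 * d / n)               # fixed-design benchmark, always a lower bound by Jensen
```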
Remark 2 (Linear changes of covariates). The minimax risk is invariant under invertible linear transformations of the covariates x. This can be seen a priori, by noting that the class of linear functions of x is invariant under linear changes of variables. To recover it from Theorem 1,
let X ′ = AX, where A is an invertible d × d matrix. Since Σ ′ = E[X ′ X ′⊤ ] equals AΣA ⊤ and Σ ′ n = n −1 n i=1 X ′ i X ′⊤ i equals A Σ n A ⊤ , we have Σ ′−1 n Σ ′ = ((A ⊤ ) −1 Σ −1 n A −1 )(AΣA ⊤ ) = (A ⊤ ) −1 ( Σ −1 n Σ)A ⊤ ,
which is conjugate to Σ −1 n Σ and hence has the same trace; this concludes by Theorem 1 (as Tr( Σ −1 n ) = Tr( Σ −1 n Σ)). In particular, the minimax risk for the design X is the same as the one for X = Σ −1/2 X.
Note that the OLS estimator β LS n is minimax optimal for every distribution of covariates P X and noise level σ 2 . This shows in particular that the knowledge of neither of those properties of the distribution is helpful to achieve improved risk uniformly over the linear class. On the other hand, when additional knowledge on the optimal parameter β * is available, OLS may no longer be optimal, and knowledge of σ 2 may be helpful.
Another consequence of Theorem 1 is that independent Gaussian noise is the least favorable noise structure (in terms of minimax risk) in the well-specified case for a given noise level σ 2 .
Finally, the convexity of the map A → Tr(A^{−1}) on positive matrices [Bha09] implies (by Jensen's inequality combined with the identity E[Σ̃_n] = I_d) that the minimax risk (9) is always at least as large as σ²d/n, which is the minimax risk in the fixed-design case. We will however show in what follows that a strictly better lower bound can be obtained for d ≥ 2.
Connection with statistical leverage and distribution-independent lower bound
In this section, we provide another expression for the minimax risk over the classes P well (P X , σ 2 ) and P Gauss (P X , σ 2 ), by relating it to the notion of statistical leverage score [HW78, CH88,Hub81].
Theorem 2 (Minimax risk and leverage score). Under the assumptions of Theorem 1, the minimax risk (9) over the classes P well (P X , σ 2 ) and P Gauss (P X , σ 2 ) is equal to
inf_{β̂_n} sup_{P∈P_Gauss(P_X,σ²)} E[E_P(β̂_n)] = σ² · E[ ℓ_{n+1}/(1 − ℓ_{n+1}) ]   (10)
where the expectation holds over an i.i.d. sample X 1 , . . . , X n+1 drawn from P X , and where ℓ n+1 denotes the statistical leverage score of X n+1 among X 1 , . . . , X n+1 , defined by:
ℓ_{n+1} = ⟨(∑_{i=1}^{n+1} X_i X_i^⊤)^{−1} X_{n+1}, X_{n+1}⟩ .   (11)
The leverage score ℓ n+1 of X n+1 among X 1 , . . . , X n+1 measures the influence of the response Y n+1 on the associated fitted value Y n+1 = β LS n+1 , X n+1 : Y n+1 is an affine function of Y n+1 , with slope ℓ n+1 = ∂ Y n+1 /∂Y n+1 [HW78,CH88]. Theorem 2 shows that the minimax predictive risk under the distribution P X is characterized by the distribution of leverage scores of samples drawn from this distribution. Intuitively, uneven leverage scores (with some points having higher leverage) imply that the estimator β LS n is determined by a smaller number of points, and therefore has higher variance. This is consistent with the message from robust statistics that points with high leverage (typically seen as outliers) can be detrimental to the performance of the least squares estimator [HW78,CH88,Hub81], see also [RM16].
Proof of Theorem 2. By Theorem 1, the minimax risk over P_Gauss(P_X, σ²) and P_well(P_X, σ²) equals, letting X_{n+1} ∼ P_X be independent from X_1, …, X_n:

(σ²/n) · E[Tr(Σ̃_n^{−1})] = (σ²/n) · E[Tr(Σ̂_n^{−1}Σ)] = σ² · E[Tr((nΣ̂_n)^{−1} X_{n+1}X_{n+1}^⊤)]
 = σ² · E[⟨(nΣ̂_n)^{−1}X_{n+1}, X_{n+1}⟩]
 = σ² · E[ ⟨(nΣ̂_n + X_{n+1}X_{n+1}^⊤)^{−1}X_{n+1}, X_{n+1}⟩ / (1 − ⟨(nΣ̂_n + X_{n+1}X_{n+1}^⊤)^{−1}X_{n+1}, X_{n+1}⟩) ]   (12)
 = σ² · E[ ℓ_{n+1}/(1 − ℓ_{n+1}) ] ,

where (12) follows from Lemma 1 below, with S = nΣ̂_n and v = X_{n+1}.
Lemma 1. For any symmetric positive d × d matrix S and v ∈ R^d,

⟨S^{−1}v, v⟩ = ⟨(S + vv^⊤)^{−1}v, v⟩ / (1 − ⟨(S + vv^⊤)^{−1}v, v⟩) .   (13)

Proof. Since S + vv^⊤ ⪰ S is positive, it is invertible, and the Sherman-Morrison formula [HJ90] shows that

(S + vv^⊤)^{−1} = S^{−1} − (S^{−1} v v^⊤ S^{−1})/(1 + v^⊤ S^{−1} v) ,

so that

⟨(S + vv^⊤)^{−1}v, v⟩ = ⟨S^{−1}v, v⟩ − ⟨S^{−1}v, v⟩²/(1 + ⟨S^{−1}v, v⟩) = ⟨S^{−1}v, v⟩/(1 + ⟨S^{−1}v, v⟩) ,

hence ⟨(S + vv^⊤)^{−1}v, v⟩ ∈ [0, 1).
Inverting this equality yields (13).
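As a quick sanity check, identity (13) can be verified numerically on a random positive definite matrix; the short sketch below (illustrative dimension) is a direct transcription of Lemma 1.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 6                                    # illustrative dimension
A = rng.standard_normal((d, d))
S = A @ A.T + np.eye(d)                  # symmetric positive definite matrix
v = rng.standard_normal(d)

lhs = v @ np.linalg.solve(S, v)                      # <S^{-1} v, v>
q = v @ np.linalg.solve(S + np.outer(v, v), v)       # <(S + v v^T)^{-1} v, v>
rhs = q / (1.0 - q)                                  # right-hand side of (13)
assert np.isclose(lhs, rhs)
print(lhs, rhs)
```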
We now deduce from Theorem 2 a precise lower bound on the minimax risk (9), valid for every distribution of covariates P X . By Proposition 1, it suffices to consider the case when n d and P X is nondegenerate (since otherwise the minimax risk is infinite).
Corollary 1 (Minimax lower bound). Under the assumptions of Theorem 1, the minimax risk (9) over P Gauss (P X , σ 2 ) satisfies
inf_{β̂_n} sup_{P∈P_Gauss(P_X,σ²)} E[E_P(β̂_n)] ≥ σ²d/(n − d + 1) .   (14)
Proof of Corollary 1. By Theorem 2, the minimax excess risk over P Gauss (P X , σ 2 ) writes:
σ² · E[ ℓ_{n+1}/(1 − ℓ_{n+1}) ] ≥ σ² · E[ℓ_{n+1}] / (1 − E[ℓ_{n+1}]) ,   (15)

where the inequality follows from the convexity of the map x → x/(1 − x) = 1/(1 − x) − 1 on [0, 1)
. Now, by exchangeability of (X 1 , . . . , X n+1 ),
E[ℓ_{n+1}] = (1/(n + 1)) ∑_{i=1}^{n+1} E[ ⟨(∑_{j=1}^{n+1} X_j X_j^⊤)^{−1} X_i, X_i⟩ ] = (1/(n + 1)) E[ Tr( (∑_{i=1}^{n+1} X_i X_i^⊤)^{−1} ∑_{i=1}^{n+1} X_i X_i^⊤ ) ] = d/(n + 1) .   (16)
Plugging equation (16) into (15) yields the lower bound (14).
Since n − d + 1 ≤ n, Corollary 1 implies a lower bound of σ²d/n. The minimax risk for linear regression has been determined under additional boundedness assumptions on Y (and thus on β*) by [Tsy03], showing that it scales as Θ(d/n) up to numerical constants. The proof of the lower bound relies on information-theoretic arguments, and in particular on Fano's inequality [Tsy09]. Although widely applicable, such techniques often lead to loose constant factors. By contrast, the approach relying on Bayes risk leading to Corollary 1 recovers the optimal leading constant, owing to the analytical tractability of the problem.
In fact, the lower bound of Corollary 1 is more precise than the σ 2 d/n lower bound, in particular when the dimension d is commensurate to n. Indeed, in the case of centered Gaussian design, namely when X ∼ N (0, Σ) for some positive matrix Σ, the risk of the OLS estimator (and thus, by Theorem 1, the minimax risk) can be computed exactly [And03,BF83], and equals
E[E_P(β̂^LS_n)] = σ²d/(n − d − 1) .   (17)
The distribution-independent lower bound of Corollary 1 is very close to the above whenever n − d ≫ 1. Hence, it is almost the best possible distribution-independent lower bound on the minimax risk. This also shows that Gaussian design is almost the easiest design distribution in terms of minimax risk. This can be understood as follows: degeneracy (a large value of Tr( Σ −1 n )) occurs whenever the rescaled sample covariance matrix Σ n is small in some direction; this occurs if either the direction of X = Σ −1/2 X is far from uniform (so that the projection of X in some direction can be small), or if its norm can be small. If X ∼ N (0, I d ), then X/ X is uniformly distributed on the unit sphere, while X = d j=1 ( X j ) 2 is sharply concentrated around √ d:
with high probability, X = √ d + O(1) (see e.g. [Ver18, Eq. 3.7])
. In particular, in the high-dimensional regime where d and n are large and commensurate, namely d, n → ∞ and d/n → γ, the lower bound of Corollary 1 matches the minimax risk (17) in the Gaussian case, which converges to σ 2 γ/(1 − γ). The limit σ 2 γ/(1 − γ) has a form of universality in the high-dimensional regime: indeed, it is connected to the Marchenko-Pastur law for the spectrum of random matrices [MP67], which extends to more general distributions with jointly independent coordinates. However, the "universality" of this limiting behavior is quite restrictive [EKK11,EK18], since it relies on the assumption of independent covariates, which induces in high dimension a very specific geometry due to the concentration of measure phenomenon [Led01,BLM13]. For instance, [EK18] obtains different limiting risks for robust regression in high dimension when considering non-independent coordinates. Corollary 1 shows that, if not universal, the limiting risk obtained in the independent case provides a lower bound for general design distributions.
Finally, the property of the design distribution that leads to the minimal excess risk in high dimension can be formulated succinctly in terms of leverage scores, using Theorem 2.
Corollary 2. Let (d n ) n 1 be a sequence of positive integers such that d n /n → γ ∈ (0, 1), and (P (n) X ) n 1 a sequence of non-degenerate distributions on R dn . Assume that the minimax excess risk (9) over P well (P (n) X , σ 2 ) converges to σ 2 γ/(1 − γ). Then, the distribution of the leverage score ℓ (n) n+1 of one sample among n + 1 under P (n) X converges in probability to γ.
Proof. Let φ(x) = x/(1 − x) for x ∈ [0, 1), and ψ(x) := φ(x) − φ(γ) − φ′(γ)(x − γ) (with ψ(γ) = 0).
Since φ is strictly convex, ψ(x) > 0 for x ≠ γ, and ψ is also strictly convex. Hence, ψ is decreasing on [0, γ] and increasing on [γ, 1). In particular, for every ε > 0, η_ε := inf_{|x−γ|≥ε} ψ(x) > 0.
By Theorem 2, the assumption of Corollary 2 means that E[φ(ℓ^{(n)}_{n+1})] → φ(γ). Since in addition E[ℓ^{(n)}_{n+1}] = d_n/(n + 1) → γ (the first equality, used in the proof of Corollary 1, holds for d_n ≤ n + 1, hence for n large enough since γ < 1), we have E[ψ(ℓ^{(n)}_{n+1})] → 0. Now, for every ε > 0, ψ(x) ≥ η_ε · 1(|x − γ| ≥ ε), so that P(|ℓ^{(n)}_{n+1} − γ| ≥ ε) ≤ η_ε^{−1} E[ψ(ℓ^{(n)}_{n+1})] → 0.
Upper bounds on the minimax risk
In this section, we complement the lower bound of Corollary 1 by providing matching upper bounds on the minimax risk. Since by Proposition 1 the minimax risk is infinite when the design distribution is degenerate, we introduce the following quantitative version of the non-degeneracy condition:
Assumption 1 (Small-ball condition). The whitened design X̃ = Σ^{−1/2}X satisfies the following: there exist constants C ≥ 1 and α ∈ (0, 1] such that, for every linear hyperplane H of R^d and t > 0,

P( dist(X̃, H) ≤ t ) ≤ (Ct)^α .   (18)

Equivalently, for every θ ∈ R^d \ {0} and t > 0,

P( |⟨θ, X⟩| ≤ t ‖θ‖_Σ ) ≤ (Ct)^α .   (19)

The equivalence between (18) and (19) comes from the fact that the distance dist(X̃, H) of X̃ to the hyperplane H equals |⟨θ′, X̃⟩|, where θ′ ∈ S^{d−1} is a normal vector to H. Condition (19) is then recovered by letting θ = Σ^{−1/2}θ′ (such that ‖θ‖_Σ = ‖θ′‖ = 1) and by homogeneity.
Assumption 1 states that X does not lie too close to any fixed hyperplane. This assumption is a strengthened variant of the "small ball" condition introduced by [KM15, Men15,LM16] in the analysis of sample covariance matrices and least squares regression, which amounts to assuming (19) for a single value of t < C −1 . This latter condition amounts to a uniform equivalence between the L 1 and L 2 norms of one-dimensional marginals θ, X (θ ∈ R d ) of X [KM15]. Here, we require that the condition holds for arbitrarily small t; the reason for this is that in order to control the minimax excess risk (9) (and thus E[Tr( Σ −1 n )]), we are led to control the lower tail of the rescaled covariance matrix Σ n at all confidence levels. The study of the lower tail of Σ n (on which the results of this section rely) is deferred to Section 3. We also illustrate Assumption 1 in Section 3.3, by discussing conditions under which it holds in the case of independent coordinates.
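The sketch below gives a crude empirical probe of Assumption 1 (illustrative designs and sizes): it estimates P(|⟨θ, X⟩| ≤ t) over a finite set of random directions θ for two isotropic designs, one with bounded-density coordinates (for which the probability decays linearly in t, in line with Proposition 5 below) and one whose coordinates have an atom at 0 (for which the probability does not vanish as t → 0, so Assumption 1 fails). This is only a heuristic check, since the supremum over θ ∈ S^{d−1} is replaced by a maximum over sampled directions.

```python
import numpy as np

rng = np.random.default_rng(4)
d, n_samples, n_dirs = 5, 200000, 50       # illustrative sizes

# design 1: uniform coordinates on [-sqrt(3), sqrt(3)] (unit variance, bounded density)
X_cont = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(n_samples, d))
# design 2: each coordinate is 0 with probability 1/2, else N(0, 2) (unit variance, atom at 0)
mask = rng.random((n_samples, d)) < 0.5
X_atom = np.where(mask, 0.0, np.sqrt(2) * rng.standard_normal((n_samples, d)))

thetas = rng.standard_normal((n_dirs, d))
thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)

for name, X in [("uniform coords", X_cont), ("atomic coords", X_atom)]:
    for t in [0.1, 0.01, 0.001]:
        # heuristic sup over theta of the small-ball probability P(|<theta, X>| <= t)
        p = max(np.mean(np.abs(X @ th) <= t) for th in thetas)
        print(f"{name}: t={t:g}  sup_theta P(|<theta,X>| <= t) ~ {p:.4f}")
```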
First, Assumption 1 itself suffices to obtain an upper bound on the minimax risk of O(σ 2 d/n), without additional assumptions on the upper tail of XX ⊤ (apart from integrability).
Proposition 2. If Assumption 1 holds, then for every P ∈ P_well(P_X, σ²), letting C′ = 3C⁴e^{1+9/α} we have:

E[E(β̂^LS_n)] ≤ 2C′ · σ²d/n .   (20)
Proposition 2 (a consequence of Corollary 4 from Section 3.2) is optimal in terms of the rate of convergence; however, it exhibits the suboptimal 2C ′ factor in the leading term. As we show next, it is possible to obtain an optimal constant in the first-order term (as well as a second-order term of the correct order) under a modest additional assumption.
Assumption 2 (Norm kurtosis). E[‖Σ^{−1/2}X‖⁴] ≤ κd² for some κ > 0.
Remark 3. Since E[‖Σ^{−1/2}X‖²] = d, Assumption 2 is a bound on the kurtosis of the variable ‖Σ^{−1/2}X‖. This condition is implied by the following L²-L⁴ equivalence for one-dimensional marginals of X: for every θ ∈ R^d, E[⟨θ, X⟩⁴]^{1/4} ≤ κ^{1/4} · E[⟨θ, X⟩²]^{1/2} (Assumption 3 below). Indeed, assuming that the latter holds, then taking θ = Σ^{−1/2}e_j (where (e_j)_{1≤j≤d} denotes the canonical basis of R^d), so that ⟨θ, X⟩ is the j-th coordinate X̃_j of X̃ = Σ^{−1/2}X, we get E[X̃_j⁴] ≤ κ E[X̃_j²]² = κ (since E[X̃X̃^⊤] = I_d). This implies that

E[‖X̃‖⁴] = E[ (∑_{j=1}^d X̃_j²)² ] = ∑_{1≤j,k≤d} E[X̃_j² X̃_k²] ≤ ∑_{1≤j,k≤d} E[X̃_j⁴]^{1/2} E[X̃_k⁴]^{1/2} ≤ ∑_{1≤j,k≤d} κ^{1/2} · κ^{1/2} = κ · d² ,

where the first inequality above comes from the Cauchy-Schwarz inequality. The converse is false: if X̃ is uniform on {√d e_j : 1 ≤ j ≤ d}, then the first condition holds with κ = 1, while the second only holds for κ ≥ d (taking θ = e_1). Hence, Assumption 2 on the upper tail of X̃ is weaker than an L²-L⁴ equivalence of the one-dimensional marginals of X; on the other hand, we do require a small-ball condition (Assumption 1) on the lower tail of X̃.
Theorem 3 (Upper bound in the well-specified case). Grant Assumptions 1 and 2. Let C′ = 3C⁴e^{1+9/α} (which only depends on α, C). If n ≥ max(6α^{−1}d, 12α^{−1} log(12α^{−1})), then

(1/n) E[Tr(Σ̃_n^{−1})] ≤ d/n + 8C′κ (d/n)² .   (21)

In particular, the minimax excess risk over the class P_well(P_X, σ²) satisfies:

σ²d/n ≤ inf_{β̂_n} sup_{P∈P_well(P_X,σ²)} E[E_P(β̂_n)] ≤ (σ²d/n)(1 + 8C′κ d/n) .   (22)
The proof of Theorem 3 is given in Section 5.3; it relies in particular on Lemma 7 herein and on Theorem 4 from Section 3. From a technical point of view, some care is required since the assumptions of Theorem 3 provide control on lower, rather than upper, relative deviations of Σ n with respect to Σ. As shown by the lower bound (established in Corollary 1), the constant in the first-order term in (22) is tight; in addition, one could see from a higher-order expansion (under additional moment assumptions) that the second-order term is also tight, up to the constant 8C ′ factor.
Consider now the general misspecified case, namely the class P mis (P X , σ 2 ). Here, we will need the slightly stronger Assumption 3.
Assumption 3 (L²-L⁴ norm equivalence). There exists a constant κ > 0 such that, for every θ ∈ R^d, E[⟨θ, X⟩⁴] ≤ κ · E[⟨θ, X⟩²]².
Proposition 3 (Upper bound in the misspecified case). Assume that P_X satisfies Assumptions 1 and 3, and that
χ := E[ E[ε²|X]² ‖Σ^{−1/2}X‖⁴ ]/d² < +∞ (note that χ ≤ E[(Y − ⟨β*, X⟩)⁴ ‖Σ^{−1/2}X‖⁴]/d²). Then, for n ≥ max(96, 6d)/α, the risk of the OLS estimator satisfies

E[E(β̂^LS_n)] ≤ (1/n) E[(Y − ⟨β*, X⟩)² ‖Σ^{−1/2}X‖²] + 276C′² √(κχ) (d/n)^{3/2} .   (23)

In particular, we have

σ²d/n ≤ inf_{β̂_n} sup_{P∈P_mis(P_X,σ²)} E[E(β̂_n)] ≤ (σ²d/n)(1 + 276C′² κ √(d/n)) .   (24)
The proof of Proposition 3 is provided in Section 5.4; it combines results from Section 3 with a tail bound from [Oli16]. Proposition 3 shows that, under Assumptions 1 and 3, the minimax excess risk over the class P mis (P X , σ 2 ) scales as (1 + o(1))σ 2 d/n as d/n → 0. This implies that the OLS estimator is asymptotically minimax on the misspecified class P mis (P X , σ 2 ) when d = o(n), and that independent Gaussian noise is asymptotically the least favorable structure for the error ε.
Parameter estimation
Let us briefly discuss how the results of this section obtained for prediction can be adapted to the problem of parameter estimation, where the loss of an estimate β̂_n given β* is ‖β̂_n − β*‖².
By the same proof as that of Theorem 1 (replacing the norm ‖·‖_Σ by ‖·‖), the minimax excess risk over the classes P_Gauss(P_X, σ²) and P_well(P_X, σ²) is (σ²/n)E[Tr(Σ̂_n^{−1})], achieved by the OLS estimator. By convexity of A → Tr(A^{−1}) over positive matrices [Löw34], this quantity is larger than σ²Tr(Σ^{−1})/n.
In the case of centered Gaussian covariates, E[Tr(Σ̂_n^{−1})] = Tr(Σ^{−1}E[Σ̃_n^{−1}]) = Tr(Σ^{−1}) n/(n − d − 1) [And03], so the minimax risk is σ²Tr(Σ^{−1})/(n − d − 1). On the other hand, the improved lower bound for general design of Corollary 1 for prediction does not appear to extend to estimation. The reason for this is that the map A → A/(1 − Tr(A)) is not convex over positive matrices for d ≥ 2 (where convexity is defined with respect to the positive definite order, see e.g. [BV04, Section 3.6.2]), although its trace is.
3 Bounding the lower tail of a sample covariance matrix at all probability levels
Throughout this section, up to replacing X by Σ −1/2 X, we assume unless otherwise stated that
E[XX ⊤ ] = I d .
Our aim is to obtain non-asymptotic large deviation inequalities of the form:

P(λ_min(Σ̂_n) ≤ t) ≤ e^{−nψ(t)}

where ψ(t) → ∞ as t → 0+. Existing bounds [Ver12, SV13, KM15, Oli16] are typically sub-Gaussian bounds with ψ(t) = c(1 − C√(d/n) − t)²_+ for some constants c, C > 0, which "saturate" for small t. In this section, we study the behavior of the large deviations for small values of t, namely t ∈ (0, c), where c < 1 is a fixed constant. In Section 3.1, we provide a lower bound on these tail probabilities, namely an upper bound on ψ, valid for every distribution of X when d ≥ 2. In Section 3.2, we show that Assumption 1 is necessary and sufficient to obtain tail bounds of the optimal order. Finally, in Section 3.3 we show that Assumption 1 is naturally satisfied in the case of independent coordinates, under a mild regularity condition on their distributions.
A general lower bound on the lower tail
First, Proposition 4 below shows that in dimension d ≥ 2, the probability of deviations of λ_min(Σ̂_n) cannot be arbitrarily small.
Proposition 4. Assume that d ≥ 2. Let X be a random vector in R^d such that E[XX^⊤] = I_d. Then, for every t ≤ 1,

sup_{θ∈S^{d−1}} P(|⟨θ, X⟩| ≤ t) ≥ 0.16 · t ,   (25)

and therefore P(λ_min(Σ̂_n) ≤ t) ≥ (0.025 · t)^{n/2}.
The assumption that d ≥ 2 is necessary since for d = 1, if X = 1 almost surely, then λ_min(Σ̂_n) = 1 almost surely. Proposition 4 is proved in Section 6.1 through a probabilistic argument, namely by considering a random vector θ drawn uniformly on the sphere S^{d−1}.
Proposition 4 shows that P(λ_min(Σ̂_n) ≤ t) is at least (Ct)^{cn}, where C = 0.025 and c = 1/2 are absolute constants; this bound writes e^{−nψ(t)}, where ψ(t) ≍ log(1/t) as t → 0+. In the following section, we study matching upper bounds on this lower tail.
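A short simulation (deliberately tiny illustrative values of n and d, so that the lower tail is visible) illustrates the behavior described by Proposition 4: the empirical probability P(λ_min(Σ̂_n) ≤ t) decays polynomially rather than exponentially in 1/t, and stays above the (very conservative) bound (0.025·t)^{n/2}.

```python
import numpy as np

rng = np.random.default_rng(5)
n, d, reps = 6, 3, 200000                  # illustrative sizes

lam = np.empty(reps)
for r in range(reps):
    X = rng.standard_normal((n, d))        # isotropic design, E[X X^T] = I_d
    lam[r] = np.linalg.eigvalsh(X.T @ X / n)[0]   # smallest eigenvalue of Sigma_n

for t in [0.5, 0.2, 0.1, 0.05]:
    emp = np.mean(lam <= t)                # empirical P(lambda_min <= t)
    lower = (0.025 * t) ** (n / 2)         # lower bound of Proposition 4
    print(f"t={t}: empirical={emp:.2e}, lower bound={lower:.2e}")
```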
Optimal control of the lower tail
In this section, we study conditions under which an upper bound matching the lower bound from Proposition 4 can be obtained. We start by noting that Assumption 1 is necessary to obtain such bounds:
Remark 4 (Necessity of small ball condition). Assume that there exists c_1, c_2 > 0 such that P(λ_min(Σ̂_n) ≤ t) ≤ (c_1 t)^{c_2 n} for all t ∈ (0, 1). Then, Lemma 2 below implies that sup_{θ∈S^{d−1}} P(|⟨θ, X⟩| ≤ t) ≤ (c_1 t²)^{c_2} for all t ∈ (0, 1). Hence, P_X satisfies Assumption 1 with C = √c_1 and α = 2c_2.
Lemma 2. For t ∈ (0, 1), let p_t = sup_{θ∈S^{d−1}} P(|⟨θ, X⟩| ≤ t). Then, P(λ_min(Σ̂_n) ≤ t) ≥ p^n_{√t}.
Proof of Lemma 2. Let p < p_{√t}. By definition of p_{√t}, there exists θ ∈ S^{d−1} such that P(⟨θ, X⟩² ≤ t) ≥ p. Hence, by independence, with probability at least p^n, ⟨θ, X_i⟩² ≤ t for i = 1, …, n, so that λ_min(Σ̂_n) ≤ ⟨Σ̂_n θ, θ⟩ ≤ t. Taking p → p_{√t} concludes the proof. As Theorem 4 shows, Assumption 1 is also sufficient to obtain an optimal control on the lower tail.
Theorem 4. Let X be a random vector in R^d. Assume that E[XX^⊤] = I_d and that X satisfies Assumption 1. If n ≥ 6d/α, then for every t ∈ (0, 1):

P( λ_min(Σ̂_n) ≤ t ) ≤ (C′t)^{αn/6}   (27)

where C′ = 3C⁴ e^{1+9/α}.
Theorem 4 can be stated in the non-isotropic case, where Σ = E[XX ⊤ ] is arbitrary:
Corollary 3. Let X be a random vector in R^d such that E[‖X‖²] < +∞, and let Σ = E[XX^⊤].
Assume that X satisfies Assumption 1. Then, if d/n ≤ α/6, for every t ∈ (0, 1), the empirical covariance matrix Σ̂_n formed with an i.i.d. sample of size n satisfies Σ̂_n ⪰ tΣ
with probability at least 1 − (C′t)^{αn/6}, where C′ is as in Theorem 4.
Proof of Corollary 3. We may assume that Σ is invertible: otherwise, we can just consider the span of the support of X, a subspace of R^d of dimension d′ ≤ d ≤ αn/6. Now, let X̃ = Σ^{−1/2}X; by definition, E[X̃X̃^⊤] = I_d, and X̃ satisfies Assumption 1 since X does. By Theorem 4, with probability at least 1 − (C′t)^{αn/6}, λ_min(Σ^{−1/2}Σ̂_nΣ^{−1/2}) ≥ t, which amounts to Σ^{−1/2}Σ̂_nΣ^{−1/2} ⪰ tI_d, and thus Σ̂_n ⪰ tΣ.
It is worth noting that Theorem 4 does not require any condition on the upper tail of XX ⊤ , aside from the assumption E[XX ⊤ ] = I d . Indeed, as noted in Remark 4, it only requires the necessary Assumption 1. In particular, it does not require any sub-Gaussian assumption on X, similarly to the results from [KM15,Oli16,vdGM14,Yas14,Yas15]; this owes to the fact that lower bounds for sums of non-negative random variables hold under weak assumptions.
Remark 5 (Extension to random quadratic forms). Theorem 4 extends (up to straightforward changes in notations) to random quadratic forms v → A i v, v where A 1 , . . . , A n are positive semi-definite and i.i.d., with E[A i ] = I d (Theorem 4 corresponds to the rank 1 case where
A i = X i X ⊤ i ).
On the other hand, the lower bound of Proposition 4 is specific to rank 1 matrices, as can be seen by considering the counterexample where A i = I d almost surely.
Remark 6 (Gaussian case). It may be worth comparing the bound (27) to known estimates in the special case of the Gaussian distribution, namely X ∼ N (0, I d ). In this case, the joint density of eigenvalues of Σ n admits a closed-form expression, which provides by marginalization the density of λ min ( Σ n ) [Ede88, p. 533]. From this expression, the following bound is deduced in [WV12, eq. (99)]:
P λ min ( Σ n ) t 2(n/2) (n−d+1)/2 n − d + 1 √ πΓ( n+1 2 ) Γ( d 2 )Γ( n−d+1 2 )Γ( n−d+2 2 ) t n−d+1 .
Letting d = d n such that d n /n → α ∈ (0, 1) and applying Stirling's approximation, this implies the following large deviation estimate [WV12, Lemma 1]: for any fixed t ∈ (0, 1),
P λ min ( Σ n ) t n d d/2 √ et 1 − d/n n−d+o(n)
.
The bound (27) is of this form; it holds for general distributions of X, at the cost of worst constants in the Gaussian case.
Idea of the proof. The proof of Theorem 4 is provided in Section 4. It builds on the analysis of [Oli16], who obtains sub-Gaussian deviation bounds under fourth moment assumptions (Assumption 3), although some refinements are needed to handle our considered regime (with t small). We now discuss some general ideas about the proof technique. The proof starts with the representation of λ min ( Σ n ) as the infimum of an empirical process:
λ_min(Σ̂_n) = inf_{θ∈S^{d−1}} ⟨Σ̂_n θ, θ⟩ = inf_{θ∈S^{d−1}} { Z(θ) := (1/n) ∑_{i=1}^n ⟨θ, X_i⟩² } .   (29)
In order to control this infimum, a natural approach is to first control Z(θ) on a suitable finite ε-covering of S d−1 using Assumption 1, independence, and a union bound, and then to extend this control to S d−1 by approximation. However, this approach (see e.g. [Ver12, Theorem 5.39] for a use of this argument) fails here, since the control of the approximation term would require an exponential upper bound on Σ n op , which does not hold for heavy-tailed distributions. Instead, as in [Oli16], we use the so-called PAC-Bayesian inequality for empirical processes [McA99,McA03,LST03,Cat07,AC11], which is based on a variational representation of the relative entropy. This technique enables one to control a smoothed version of the process Z(θ), namely
Z(ρ) := R d Z(θ)ρ(dθ) ,
indexed by probability distributions ρ on Θ. Specifically, let π be a probability distribution on some subset Θ ⊂ R d containing S d−1 . In addition, let ψ : R * + → R be a bound on the moment generating function of − θ, X 2 , such that for all λ > 0 and θ ∈ Θ,
E exp − λ θ, X 2 e −ψ(λ) , so that E exp − λnZ(θ) − nψ(λ) 1.
The PAC-Bayes variational inequality (see Lemma 4 for a general statement) allows to turn this (pointwise, for every θ) bound on the moment generating function into a uniform bound for the smoothed process: for every t > 0,
P ∀ρ, −λn Z(ρ) + ψ(λ) KL(ρ, π) + t 1 − e −t ,
where ρ spans all distributions over Θ and KL(ρ, π) = log dρ dπ dρ is the relative entropy between ρ and π. One then deduce from these inequalities the following decomposition. To each θ ∈ S d−1 , we associate a smoothing distribution ρ θ around θ; then, with probability at least 1 − e −t , for every θ ∈ S d−1 ,
Z(θ) = Z(θ) − Θ Z(θ ′ )ρ θ (dθ ′ ) + Θ Z(θ ′ )ρ θ (dθ ′ ) Z(θ) − Θ Z(θ ′ )ρ θ (dθ ′ ) approximation term − KL(ρ θ , π) λn entropy term − ψ(λ) + t λn .
The proof then involves controlling (i) the Laplace transform of the process; (ii) the approximation term; and (iii) the entropy term. In order to control the last two, a careful choice of smoothing distribution (and prior) is needed.
Remark 7 (PAC-Bayes vs. ε-net argument). As indicated above, the use of an ε-net argument would fail here, since it would lead to an approximation term depending on Σ n op . On the other hand, the use of a smoothing distribution which is "isotropic" and centered at a point θ enables one to obtain an approximation term in terms of Tr( Σ n )/d, which can be bounded after proper truncation of X (in a way that does not overly degrade Assumption 1).
Remark 8 (Choice of prior and posteriors: entropy term). The PAC-Bayesian technique is classically employed in conjunction with Gaussian prior and smoothing distribution [LST03, AC11,Oli16]. This choice is convenient, since both the approximation and entropy term have closedform expressions (in addition, a Gaussian distribution centered at θ yields the desired "isotropic" approximation term). However, in order to obtain non-vacuous bounds for small t, we need the approximation term (and thus the "radius" γ of the smoothing distribution) to be small. But as γ → 0, the entropy term for Gaussian distributions grows rapidly (as d/γ 2 , instead of the d log(1/γ) rate suggested by covering numbers), which ultimately leads to vacuous bounds. In order to bypass this difficulty, we employ a more refined choice of prior and smoothing distributions, leading to an optimal entropy term of d log(1/γ). In addition, symmetry arguments show that this choice of smoothing also leads to an "isotropic" approximation term controlled by Tr( Σ n )/d instead of Σ n op .
Formulation in terms of moments. The statements of this section on the lower tail of λ min ( Σ n ) can equivalently be rephrased in terms of its negative moments. For q 1, we denote Z L q := E[|Z| q ] 1/q ∈ [0, +∞] the L q norm of a real random variable Z.
Corollary 4. Under the assumptions of Theorem 4 and for n ≥ 12/α, for any 1 ≤ q ≤ αn/12,

‖max(1, λ_min(Σ̂_n)^{−1})‖_{L_q} ≤ 2^{1/q} · C′ .   (30)

Conversely, the previous inequality implies that P(λ_min(Σ̂_n) ≤ t) ≤ (2C′t)^{αn/12} for all t ∈ (0, 1). Finally, for any random vector X in R^d, d ≥ 2, such that E[XX^⊤] = I_d, we have for any q ≥ n/2:

‖λ_min(Σ̂_n)^{−1}‖_{L_q} = +∞ .
The proof of Corollary 4 is provided in Section 6.2.
The small-ball condition for independent covariates
We now discuss conditions under which Assumption 1 holds in the case of independent coordinates. In this section, we assume that the coordinates X_j, 1 ≤ j ≤ d, of X = X̃ are independent and centered. Note that the condition E[XX^⊤] = I_d means that the X_j have unit variance. Let us introduce the Lévy concentration function Q_Z : R_+ → [0, 1] of a real random variable Z defined by, for t ≥ 0, Q_Z(t) := sup_{a∈R} P(|Z − a| ≤ t).
Anti-concentration (or small ball) estimates [NV13] refer to nonvacuous upper bounds on this function. Here, in order to establish Assumption 1, it suffices to show that Q_{⟨θ,X⟩}(t) ≤ (Ct)^α for all t > 0 and θ ∈ S^{d−1}. This amounts to establishing anti-concentration of linear combinations of independent variables ⟨θ, X⟩ = ∑_{j=1}^d θ_j X_j, uniformly over θ ∈ S^{d−1}, namely to provide upper bounds on:

Q_X(t) := sup_{θ∈S^{d−1}} Q_{⟨θ,X⟩}(t) .
Small-ball probabilities naturally appear in the study of the smallest singular value of a random matrix (see [RV10]). [TV09a, TV09b, RV08, RV09] studied anti-concentration for variables of the form θ, X , and deduced estimates of the smallest singular value of random matrices. These bounds are however slightly different from the one we need: indeed, they hold for "unstructured" vectors θ (which do not have additive structure, see [RV10]), rather than uniformly over θ ∈ S d−1 .
Here, in order to show that Assumption 1 holds, we need bounds over Q_X, which requires some assumption on the distribution of the coordinates X_j. Clearly, Q_X ≥ max_{1≤j≤d} Q_{X_j}, and in particular the coordinates X_j themselves must be anticoncentrated. Remarkably, a result of [RV14] (building on a reduction by [Rog87] to uniform variables) shows that, if the X_j have bounded densities, a reverse inequality holds:

Proposition 5 ([RV14], Theorem 1.2). Assume that X_1, …, X_d are independent and have density bounded by C_0 > 0. Then, for every θ ∈ S^{d−1}, ∑_{j=1}^d θ_j X_j has density bounded by √2 C_0. In other words, Q_X(t) ≤ 2√2 C_0 t for every t > 0, i.e., Assumption 1 holds with α = 1 and C = 2√2 C_0.

Equivalently, if max_{1≤j≤d} Q_{X_j}(t) ≤ Ct for all t > 0, then Q_X(t) ≤ √2 Ct for all t > 0, and the constant √2 is optimal [RV14]. Whether a general bound of Q_X in terms of max_{1≤j≤d} Q_{X_j} holds is unclear (for instance, the inequality Q_X ≤ √2 max_{1≤j≤d} Q_{X_j} does not hold, as shown by considering X_1, X_2 independent Bernoulli 1/2 variables, and θ = (1/√2, 1/√2): then Q_{X_j}(3/8) = 1/2 but Q_{⟨θ,X⟩}(3/8) = 3/4). While independence gives

Q_{⟨θ,X⟩}(t) ≤ min_{1≤j≤d} Q_{θ_j X_j}(t) = min_{1≤j≤d} Q_{X_j}(t/|θ_j|) ≤ max_{1≤j≤d} Q_{X_j}(√d · t) ,
this bound features an undesirable dependence on the dimension d. Another way to express the "non-atomicity" of the distributions of coordinates X j , which is stable through linear combinations of independent variables, is the rate of decay of their Fourier transform. Indeed, if X j is atomic, then its characteristic function does not vanish at infinity. Proposition 6 below (proved in Section 6.3), which follows from an inequality by Esséen, provides uniform anti-concentration for one-dimensional marginals θ, X in terms of the Fourier transform of the X j , establishing Assumption 1 beyond bounded densities. We let Φ Z be the characteristic function of a real random variable Z, defined by Φ Z (ξ) = E[e iξZ ] for ξ ∈ R.
Proposition 6. Assume that X 1 , . . . , X d are independent and that there are constants C 0 > 0 and α ∈ (0, 1) such that, for 1 j d and ξ ∈ R,
|Φ_{X_j}(ξ)| ≤ (1 + |ξ|/C_0)^{−α} .   (31)
Then, X = (X 1 , . . . , X d ) satisfies Assumption 1 with C = 2 1/α (2π) 1/α−1 (1 − α) −1/α C 0 .
4 Proof of Theorem 4
Truncation and small-ball condition
The first step of the proof is to replace X by the truncated vector
X′ := (1 ∧ (√d/‖X‖)) X; likewise, let X′_i = (1 ∧ (√d/‖X_i‖)) X_i for 1 ≤ i ≤ n, and Σ̂′_n := n^{−1} ∑_{i=1}^n X′_i (X′_i)^⊤. Note that X′(X′)^⊤ ⪯ XX^⊤ and ‖X′‖ = √d ∧ ‖X‖, so that Σ̂′_n ⪯ Σ̂_n and E[‖X′‖²] ≤ E[‖X‖²] = d.
It follows that λ_min(Σ̂′_n) ≤ λ_min(Σ̂_n), hence it suffices to establish a lower bound for λ_min(Σ̂′_n). In addition, for every θ ∈ S^{d−1}, t ∈ (0, C^{−1}) and a ≥ 1,

P(|⟨X′, θ⟩| ≤ t) ≤ P(|⟨X, θ⟩| ≤ at) + P(√d/‖X‖ ≤ 1/a) ≤ (Cat)^α + P(‖X‖ ≥ a√d) ≤ (Cat)^α + E[‖X‖²]/(a²d)   (32)
 = (Ct)^α a^α + 1/a²   (33)

where we applied Markov's inequality in (32). In particular, letting a = (Ct)^{−α/(2+α)}, inequality (33) becomes

P(|⟨X′, θ⟩| ≤ t) ≤ 2(Ct)^{2α/(2+α)} .   (34)
Concentration and PAC-Bayesian inequalities
The smallest eigenvalue λ min ( Σ ′ n ) of Σ ′ n may be written as the infimum of an empirical process indexed by the unit sphere S d−1 = {v ∈ R d : v = 1}:
λ min ( Σ ′ n ) = inf v∈S d−1 Σ ′ n v, v = inf v∈S d−1 1 n n i=1 X ′ i , v 2 .
Now, recall that the variables X ′ i , θ 2 are i.i.d. and distributed as X ′ , θ 2 for every θ ∈ S d−1 . The inequality (34) on the left tail of this variable can be expressed in terms of its Laplace transform, through the following lemma: Lemma 3. Let Z be a nonnegative random variable. Assume that there exists α ∈ (0, 1] and C > 0 such that, for every t 0, P(Z t) (Ct) α . Then, for every λ > 0,
E[exp(−λZ)] (C/λ) α .(35)
Proof of Lemma 3. Since 0 exp(−λZ) 1, we have
E[e −λZ ] = 1 0 P(e −λZ t)dt = 1 0 P Z log(1/t) λ dt 1 0 C log(1/t) λ α dt.
Now, for u > 0, the map α → u α = e α log u is convex on R, so that u α αu + (1 − α) for 0 α 1. It follows that
1 0 log α (1/t)dt α 1 0 (− log t)dt + (1 − α) = α − t log t + t 1 0 + (1 − α) = 1,
which establishes inequality (35).
Here, inequality (34) implies that, for every θ ∈ S d−1 ,
P( X ′ , θ 2 t) = P(| X ′ , θ | √ t) 2(C √ t) 2α/(2+α) = 2(C 2 t) α/(2+α) .
Hence, Lemma 3 with Z = X ′ , θ 2 implies that, for every λ > 0,
E[exp(−λ X ′ , θ 2 )] 2(C 2 /λ) α/(2+α) .
In other words, for i = 1, . . . , n, E[exp(Z i (θ))] 1, where, letting α ′ = α/(2 + α), we define
Z i (θ) = −λ X ′ i , θ 2 + α ′ log λ C 2 − log 2
with λ > 0 a fixed parameter that will be optimized later. In particular, letting Z(θ) = Z 1 (θ) + · · · + Z n (θ) = n −λ Σ ′ n θ, θ + α ′ log λ C 2 − log 2 , the independence of Z 1 (θ), . . . , Z n (θ) implies that, for every θ ∈ S d−1 ,
E[exp(Z(θ))] = E[exp(Z 1 (θ))] · · · E[exp(Z n (θ))] 1 .(36)
The bound (36) controls the upper tail of Z(θ) for fixed θ ∈ Θ. In order to obtain a uniform control over θ, similarly to [AC11,Oli16] we will use the PAC-Bayesian technique for bounding empirical processes [McA99,McA03,Cat07]. For completeness, we include a proof of Lemma 4 (which is standard) below.
Lemma 4 (PAC-Bayesian deviation bound). Let Θ be a measurable space, and Z(θ), θ ∈ Θ, be a real-valued measurable process. Assume that E[exp Z(θ)] ≤ 1 for every θ ∈ Θ. Let π be a probability distribution on Θ. Then,

P( ∀ρ, ∫_Θ Z(θ) ρ(dθ) ≤ KL(ρ, π) + t ) ≥ 1 − e^{−t} ,   (37)

where ρ spans all probability measures on Θ, and KL(ρ, π) := ∫_Θ log(dρ/dπ) dρ ∈ [0, +∞] is the Kullback-Leibler divergence between ρ and π, and where we define the integral in (37) to be −∞ when the negative part is not integrable.
Proof of Lemma 4. By integrating the inequality E[exp Z(θ)] 1 with respect to π and using the Fubini-Tonelli theorem, we obtain
E Θ exp Z(θ)π(dθ) 1 .(38)
In addition, using the duality between the log-Laplace transform and the Kullback-Leibler divergence (see, e.g., [Cat04,p. 159]):
log Θ exp(Z(θ))π(dθ) = sup ρ Θ Z(θ)ρ(dθ) − KL(ρ, π)
where the supremum spans over all probability distributions ρ over Θ, the inequality (38) writes
E exp sup ρ Θ Z(θ)ρ(dθ) − KL(ρ, π) 1 .(39)
Applying Markov's inequality to (39) yields the desired bound (37).
Here, we let Θ = S d−1 and Z(θ) as defined above. In addition, we take π to be the uniform distribution on S d−1 , and for v ∈ S d−1 and γ > 0 we define Θ(v, γ) := {θ ∈ S d−1 : θ − v γ} and let π v,γ = π(Θ(v, γ)) −1 1(Θ(v, γ)) · π be the uniform distribution over Θ(v, γ). In this case, the PAC-Bayesian bound of Lemma 4 writes: for every t > 0, with probability at least 1 − e −t , for every v ∈ S d−1 and γ > 0,
n −λF v,γ ( Σ ′ n ) + α ′ log λ C 2 − log 2 KL(π v,γ , π) + t ,(40)
where we define for every symmetric matrix Σ:
F v,γ (Σ) := Θ Σθ, θ π v,γ (dθ) .(41)
Control of the approximation term
Now, using the symmetries of the smoothing distributions π v,γ , we will show that, for every γ > 0, v ∈ S d−1 and symmetric matrix Σ,
F v,γ (Σ) = 1 − φ(γ) Σv, v + φ(γ) · 1 d Tr(Σ) ,(42)
where for γ > 0,
φ(γ) := d d − 1 Θ 1 − θ, v 2 π v,γ (dθ) ∈ [0, d/(d − 1)γ 2 ] .(43)
First, note that
F v,γ (Σ) = Tr(ΣA v,γ ) , where A v,γ := Θ θθ ⊤ π v,γ (dθ) .
In addition, for every isometry U ∈ O(d) of R d and v ∈ S d−1 , γ > 0, the image measure U * π v,γ of π v,γ under U is π U v,γ (since U sends Θ(v, γ) to Θ(U v, γ) and preserves the uniform distribution π on S d−1 ). It follows that
U A v,γ U −1 = Θ (U θ)(U θ) ⊤ π v,γ (dθ) = Θ θθ ⊤ π U v,γ (dθ) = A U v,γ .(44)
In particular, A v,γ commutes with every isometry U ∈ O(d) such that U v = v. Taking U to be the orthogonal reflection with respect to H v := (Rv) ⊥ , A v,γ preserves ker(U − I d ) = Rv and is therefore of the form φ 1 (v, γ)vv ⊤ + C v,γ where φ 1 (v, γ) ∈ R and C v,γ is a symmetric operator with C v,γ H v ⊂ H v and C v,γ v = v. Next, taking U = vv ⊤ + U v where U v is an arbitrary isometry of H v , it follows that C v,γ commutes on H v with all isometries U v , and is therefore of
the form φ 2 (v, γ)P v , where P v = I d − vv ⊤ is the orthogonal projection on H v and φ 2 (v, γ) ∈ R.
To summarize, we have:
A v,γ = φ 1 (v, γ)vv ⊤ + φ 2 (v, γ)(I d − vv ⊤ ) .
Now, the identity (44) shows that, for every U ∈ O(d) and v, γ, φ 1 (U v, γ) = φ 1 (v, γ) and φ 2 (U v, γ) = φ 2 (v, γ); hence, these constants do not depend on v and are simply denoted φ 1 (γ), φ 2 (γ). Defining φ(γ) := d · φ 2 (γ) and φ(γ) := φ 1 (γ) − φ 2 (γ), we therefore have:
A v,γ = φ(γ)vv ⊤ + φ(γ) · 1 d I d .(45)
Next, observe that
S d−1 π v,γ π(dv) = π ;(46)
this follows from the fact that the measure π ′ on the left-hand side of (46) is a probability distribution on S d−1 invariant under any U ∈ O(d), since
U * π ′ = S d−1 U * π v,γ π(dv) = S d−1 π U v,γ π(dv) = S d−1 π v,γ π(dv) = π ′ .
Equation (46), together with Fubini's theorem, implies that
S d−1 A v,γ π(dv) = S d−1 S d−1 θθ ⊤ π v,γ (dθ)π(dv) = S d−1
θθ ⊤ π(dθ) =: A .
Since A commutes with isometries (by invariance of π), it is of the form cI d with c = Tr(A)/d = (1/d) S d−1 θ 2 π(dθ) = 1/d. Plugging (45) into the previous equality, we obtain
1 d I d = S d−1 φ(γ)vv ⊤ + φ(γ) · 1 d I d π(dv) = 1 d φ(γ)I d + 1 d φ(γ)I d ,
so that φ(γ) = 1 − φ(γ). The decomposition (45) then writes:
A v,γ = 1 − φ(γ) vv ⊤ + φ(γ) · 1 d I d .
Recalling that F v,γ (Σ) = Tr(ΣA v,γ ), we obtain the desired expression (42) for F v,γ . Finally, note that on the one hand,
A v,γ v, v = (1 − φ(γ)) v 2 + φ(γ) · 1 d v 2 = 1 − d − 1 d φ(γ) ,
while on the other hand:
A v,γ v, v = S d−1 θ, v 2 π v,γ (dθ) ,
so that
φ(γ) = d d − 1 S d−1 1 − θ, v 2 π v,γ (dθ) 0 ,
where we used that θ, v 2 1 by the Cauchy-Schwarz inequality. Now, let α denote the angle between θ and v. We have θ, v = cos α and θ − v 2 = (1 − cos α) 2 + sin 2 α = 2(1 − cos α), so that θ, v = 1 − 1 2 θ − v 2 . Since π v,γ (dθ)-almost surely, θ − v γ, this implies
1 − θ, v 2 = 1 − 1 − 1 2 θ − v 2 2 = θ − v 2 − 1 4 θ − v 4 γ 2 .
Integrating this inequality over π_{v,γ} yields φ(γ) ≤ d/(d − 1)γ²; this establishes (43).
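The quantities φ(γ) and F_{v,γ} lend themselves to a direct simulation check; the sketch below (illustrative d and γ, crude rejection sampling from π_{v,γ}) estimates φ(γ) from its definition (43), compares it with the bound d/(d − 1)·γ², and verifies the decomposition (42) on a random symmetric matrix.

```python
import numpy as np

rng = np.random.default_rng(6)
d, gamma, n_samples = 4, 0.5, 20000        # illustrative values

v = np.zeros(d); v[0] = 1.0                # by rotational symmetry the choice of v is irrelevant
cap = []                                   # rejection-sample theta uniform on {|theta - v| <= gamma}
while len(cap) < n_samples:
    u = rng.standard_normal((200000, d))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    cap.extend(u[np.linalg.norm(u - v, axis=1) <= gamma])
cap = np.asarray(cap[:n_samples])

phi = d / (d - 1) * np.mean(1.0 - (cap @ v) ** 2)      # definition (43)
print("phi(gamma) ~", phi, " bound d/(d-1)*gamma^2 =", d / (d - 1) * gamma ** 2)

A = rng.standard_normal((d, d)); A = (A + A.T) / 2     # random symmetric matrix
lhs = np.mean(np.einsum('ij,jk,ik->i', cap, A, cap))   # int <A theta, theta> d pi_{v,gamma}
rhs = (1 - phi) * (v @ A @ v) + phi * np.trace(A) / d  # right-hand side of (42)
print("F_{v,gamma}(A) ~", lhs, " vs (42):", rhs)
```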
Control of the entropy term
We now turn to the control of the entropy term in (40). Specifically, we will show that, for every v ∈ S d−1 and γ > 0,
KL(π v,γ , π) d log 1 + 2 γ .(47)
First, since dπ v,γ /dπ = π[Θ(v, γ)] −1 π v,γ -almost surely, KL(π v,γ , π) = log π[Θ(v, γ)] −1 . Now, let N = N c (γ, S d−1 ) denote the γ-covering number of S d−1 , namely the smallest N 1 such that there exists θ 1 , . . . , θ N ∈ S d−1 with
S d−1 = N i=1 Θ(θ i , γ) .(48)
Applying a union bound to (48) and using the fact that
π[Θ(θ i , γ)] = π[Θ(v, γ)] yields 1 N π[Θ(v, γ)], namely KL(π v,γ , π) log N .(49)
Now, let N p (γ, S d−1 ) denote the γ-packing number of S d−1 , which is the largest number of points in S d−1 with pairwise distances at least γ. We have, denoting
B d = {x ∈ R d : x 1}, N N p (γ, S d−1 ) N p (γ, B d ) 1 + 2 γ d ,(50)
Conclusion of the proof
First note that, since X ′ i 2 = X i 2 ∧ d d for 1 i n,
Tr( Σ ′ n ) = 1 n n i=1 X ′ i 2 d .(51)
Putting together the previous bounds (40), (42), (47) and (51), we get with probability 1 − e −nu , for every v ∈ S d−1 , γ ∈ (0, 1/2],
α ′ log λ C 2 − log 2 − d n log 1 + 2 γ − u λF v,γ ( Σ ′ n ) = λ (1 − φ(γ)) Σ ′ n v, v + φ(γ) · 1 d Tr( Σ ′ n ) λ (1 − φ(γ)) Σ ′ n v, v + φ(γ)
In particular, rearranging, and using the fact that φ(γ) 1/2 for γ 1/2, as well as φ(γ) γ 2 and λ min ( Σ ′ n ) = inf v Σ ′ n v, v , we get with probability 1 − e −nu ,
λ min ( Σ ′ n ) 2 λ α ′ log λ C 2 − log 2 − d n log 1 + 2 γ − u − 2γ 2(52)
We first approximately maximize the above lower bound in γ, given λ. Since γ 1/2, 1 + 2/γ 1 + 1/γ 2 5/(4γ 2 ). We are therefore led to minimize 2d λn log 5 4γ 2 + 2γ 2 over γ 2 1/4. Now, let γ 2 = d/(2λn), which belongs to the prescribed range if
λ 2d n .(53)
For this choice of γ, the lower bound (52) becomes
λ min ( Σ ′ n ) 2 λ α ′ log λ C 2 − log 2 − d n log 5λn 2d − u − d λn = 2 λ α ′ − d n log λ − α ′ log C 2 − log 2 + d n log 5n 2d + d 2n − u
Now, recall that by assumption, d/n α/6 1/6, so that (by monotonicity of x → −x log x on (0, e −1 ], replacing d/n by 1/6) the term inside braces is smaller than c 0 = 1.3. In addition, assume that λ C 4 , so that log(λ/C 4 ) 0; in this case, condition (53) is automatically satisfied, since 2d/n 1/3 C 4 . Finally, since α ′ = α/(2 + α) α/3 and d/n α/6, α ′ 2(α ′ − d/n) and α ′ − d/n α/6, so that
α ′ − d n log λ − α ′ log C 2 α ′ − d n log λ C 4 α 6 log λ C 4 ,
the previous inequalities implies that, for every λ C 4 and u > 0, with probability at least 1 − e −nu ,
λ min ( Σ ′ n ) 2 λ α 6 log λ C 4 − c 0 − u = α 3C 4 log λ ′ − 6α −1 (c 0 + u) λ ′
where λ ′ = λ/C 4 1. A simple analysis shows that for c ∈ R, the function λ ′ → (log λ ′ − c)/λ ′ admits a maximum on (0, +∞) of e −c−1 , reached at λ ′ = e c+1 . Here c = 6α −1 (c 0 + u) > 0, so that λ ′ > e > 1. Hence, for every u > 0, with probability at least 1 − e −nu ,
λ min ( Σ ′ n ) α 3C 4 exp −1 − 6(c 0 + u) α C ′−1 e −6u/α =: t ,(54)
where we let C ′ := 3C 4 e 1+9/α (using the fact that 6c 0 8 and 1/α e 1/α ). Inverting the bound (54), we obtain that for every t < C ′−1 ,
P λ min ( Σ ′ n ) t (C ′ t) αn/6 .
Since λ min ( Σ n ) λ min ( Σ ′ n ), and since the bound trivially holds for t C ′−1 , this concludes the proof.
Proofs from Section 2
In this section, we gather the remaining proofs of results from Section 2 on least squares regression, namely those of Proposition 1, Theorem 1, Proposition 2, Theorem 3 and Proposition 3.
Preliminary: risk of Ridge and OLS estimators
We start with general expressions for the risk, which will be used several times in the proofs. Here, we assume that (X, Y) is as in Section 2, namely E[Y²] < +∞, E[‖X‖²] < +∞ and Σ := E[XX^⊤] is invertible. Letting ε := Y − ⟨β*, X⟩ denote the error, where β* := Σ^{−1}E[Y X] is the risk minimizer, we let m(X) := E[ε|X] = E[Y|X] − ⟨β*, X⟩ denote the misspecification (or approximation) error of the linear model, and σ²(X) := Var(ε|X) = Var(Y|X) denote the conditional variance of the noise.

Lemma 5 (Risk of the Ridge estimator). Assume that (X, Y) is of the previous form. Let λ ≥ 0, and assume that either λ > 0 or that P_X is non-degenerate and n ≥ d. The risk of the Ridge estimator β̂_{λ,n}, defined by

β̂_{λ,n} := arg min_{β∈R^d} { (1/n) ∑_{i=1}^n (Y_i − ⟨β, X_i⟩)² + λ‖β‖² } = (Σ̂_n + λI_d)^{−1} · (1/n) ∑_{i=1}^n Y_i X_i ,   (55)

equals

E[E(β̂_{λ,n})] = E[ ‖(1/n) ∑_{i=1}^n m(X_i)X_i − λβ*‖²_{(Σ̂_n+λI_d)^{−1} Σ (Σ̂_n+λI_d)^{−1}} ] + (1/n²) E[ ∑_{i=1}^n σ²(X_i) ‖X_i‖²_{(Σ̂_n+λI_d)^{−1} Σ (Σ̂_n+λI_d)^{−1}} ] .   (56)
Proof. Since Y i = β * , X i + ε i for i = 1, . . . , n, and since β * ,
X i X i = X i X ⊤ i β * , we have 1 n n i=1 Y i X i = Σ n β * + 1 n n i=1 ε i X i .(57)
Hence, the excess risk of β λ,n (which is well-defined by the assumptions) is
E E( β λ,n ) = E ( Σ n + λI d ) −1 Σ n β * + 1 n n i=1 ε i X i − β * 2 Σ = E ( Σ n + λI d ) −1 · 1 n n i=1 ε i X i − λ( Σ n + λI d ) −1 β * 2 Σ = E E 1 n n i=1 ε i X i − λβ * 2 ( Σn+λI d ) −1 Σ( Σn+λI d ) −1 X 1 , . . . , X n = E 1 n n i=1 m(X i )X i − λβ * 2 ( Σn+λI d ) −1 Σ( Σn+λI d ) −1 + + 1 n 2 E n i=1 σ 2 (X i ) X i 2 ( Σn+λI d ) −1 Σ( Σn+λI d ) −1(58)
where (58) is obtained by expanding and using the fact that, for i = j, E ε i ε j |X 1 , . . . , X n = m(X i )m(X j ) , E ε 2 i |X 1 , . . . , X n = m(X i ) 2 + σ 2 (X i ) .
In the special case where λ = 0, the previous risk decomposition becomes:
Lemma 6 (Risk of the OLS estimator). Assume that P_X is non-degenerate and n ≥ d. Then,

E[E(β̂^LS_n)] = E[ ‖(1/n) ∑_{i=1}^n m(X_i) X̃_i‖²_{Σ̃_n^{−2}} ] + (1/n²) E[ ∑_{i=1}^n σ²(X_i) ‖X̃_i‖²_{Σ̃_n^{−2}} ] ,   (59)

where we let X̃_i = Σ^{−1/2} X_i and Σ̃_n = Σ^{−1/2} Σ̂_n Σ^{−1/2}.
Proof. This follows from Lemma 5 and the fact that, when λ = 0, for every x ∈ R d ,
x ( Σn+λI d ) −1 Σ( Σn+λI d ) −1 = Σ −1/2 x Σ 1/2 Σ −1 n Σ Σ −1 n Σ 1/2 = Σ −1/2 x Σ −2 n .
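As a sanity check on the closed form (55), the sketch below (illustrative parameters) computes the Ridge estimator from that formula and verifies that the gradient of the penalized empirical risk vanishes at it.

```python
import numpy as np

rng = np.random.default_rng(7)
n, d, lam, sigma = 30, 5, 0.1, 0.5          # illustrative values, not from the text
beta_star = rng.standard_normal(d)
X = rng.standard_normal((n, d))
Y = X @ beta_star + sigma * rng.standard_normal(n)

Sigma_n = X.T @ X / n
beta_ridge = np.linalg.solve(Sigma_n + lam * np.eye(d), X.T @ Y / n)   # closed form (55)

# gradient of (1/n) sum (Y_i - <beta, X_i>)^2 + lam ||beta||^2 at beta_ridge
grad = 2 * (Sigma_n @ beta_ridge - X.T @ Y / n) + 2 * lam * beta_ridge
print("||gradient|| at the Ridge estimator:", np.linalg.norm(grad))    # numerically zero
```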
Proof of Theorem 1 and Proposition 1
Upper bound on the minimax risk. We start with an upper bound on the risk the leastsquares estimator over the class P well (P X , σ 2 ). As in Theorem 1, we assume that n d and that P X is non-degenerate. Let (X, Y ) ∼ P ∈ P well (P X , σ 2 ), so that m(X) = 0 and σ 2 (X) σ 2 . It follows from Lemma 6 that
E E( β LS n ) σ 2 n 2 E n i=1 σ 2 (X i ) X i 2 Σ −2 n = σ 2 n 2 E Tr Σ −2 n n i=1 X i X ⊤ i = σ 2 n ETr( Σ −1 n ) .
Hence, the maximum risk of the OLS estimator β LS n over the class P well (P X , σ 2 ) (and thus the minimax risk over this class) is at most σ 2 E[Tr( Σ −1 n )]/n.
Lower bound on the minimax risk. We now provide a lower bound on the minimax risk over P Gauss (P X , σ 2 ). We will in fact establish the lower bound both in the setting of Theorem 1 (namely, P X is non-degenerate and n d) and that of Proposition 1 (the remaining cases). In particular, we do not assume for now that P X is non-degenerate or that n d.
For β * ∈ R d , let P β * denote the joint distribution of (X, Y ) where X ∼ P X and Y = β * , X +ε with ε ∼ N (0, σ 2 ) independent of X. Now, consider the decision problem with model P Gauss (P X , σ 2 ) = {P β * : β * ∈ R d }, decision space R d and loss function L(β * , β) = E P β * (β) = β − β * 2 Σ . Let R(β * , β n ) = E β * [L(β * , β n )] denote the risk under P β * of a decision rule β n (that is, an estimator of β * using an i.i.d. sample of size n from P β * ), namely its expected excess risk. Consider the prior Π λ = N (0, σ 2 /(λn)I d ) on P Gauss (P X , σ 2 ). A standard computation (see, e.g., [GCS + 13]) shows that the posterior Π λ (·|(X 1 , Y 1 ), . . . , (X n , Y n )) is N ( β λ,n , (σ 2 /n)·( Σ n +λI d ) −1 ). Since the loss function L is quadratic, the Bayes estimator under Π λ is the expectation of the posterior, which is β λ,n . Hence, using the comparison between minimax and Bayes risks:
inf βn sup P β * ∈P Gauss (P X ,σ 2 ) R(β * , β n ) inf βn E β * ∼Π λ R(β * , β n ) = E β * ∼Π λ R(β * , β λ,n ) ,(60)
where the infimum is over all estimators β n . Note that the left-hand side of (60) is simply the minimax excess risk over P Gauss (P X , σ 2 ). On the other hand, applying Lemma 5 with m(X) = 0 and σ 2 (X) = σ 2 and noting that
E n i=1 X i 2 ( Σn+λI d ) −1 Σ( Σn+λI d ) −1 = E Tr ( Σ n + λI d ) −1 Σ( Σ n + λI d ) −1 n i=1 X i X ⊤ i = n E Tr ( Σ n + λI d ) −1 Σ( Σ n + λI d ) −1 Σ n , we obtain R(β * , β λ,n ) = λ 2 E β * 2 ( Σn+λI d ) −1 Σ( Σn+λI d ) −1 + σ 2 n E Tr ( Σ n + λI d ) −1 Σ( Σ n + λI d ) −1 Σ n .
This implies that
E β * ∼Π λ R(β * , β λ,n ) = E β * ∼Π λ λ 2 E β * 2 ( Σn+λI d ) −1 Σ( Σn+λI d ) −1 + + σ 2 n E Tr ( Σ n + λI d ) −1 Σ( Σ n + λI d ) −1 Σ n(61)
where E simply denotes the expectation with respect to (X 1 , . . . , X n ) ∼ P n X . Now, by Fubini's theorem, and since E β * ∼Π λ [β * (β * ) ⊤ ] = σ 2 /(λn)I d , we have
E β * ∼Π λ λ 2 E β * 2 ( Σn+λI d ) −1 Σ( Σn+λI d ) −1 = λ 2 · E E β * ∼Π λ Tr ( Σ n + λI d ) −1 Σ( Σ n + λI d ) −1 β * (β * ) ⊤ = σ 2 n E Tr ( Σ n + λI d ) −1 Σ( Σ n + λI d ) −1 λI d .(62)
Plugging (62) into (61) shows that the Bayes risk under Π λ equals
σ 2 n E Tr ( Σ n + λI d ) −1 Σ( Σ n + λI d ) −1 ( Σ n + λI d ) = σ 2 n E Tr ( Σ n + λI d ) −1 Σ .(63)
Hence, by (60) the minimax risk is larger than (σ 2 /n) · E[Tr{( Σ n + λI d ) −1 Σ}] for every λ > 0. We now distinguish the settings of Theorem 1 and Proposition 1. Degenerate case. First, assume that P X is degenerate or that n < d. By Fact 1, with probability p > 0, the matrix Σ n is non-invertible. When this occurs, let θ ∈ R d be such that θ = 1 and Σ n (Σ −1/2 θ) = 0. We then have, for every λ > 0, Σ −1/2 ( Σ n + λI d )Σ −1/2 θ, θ = 0 + λ Σ −1/2 θ 2 λ · λ −1 min , where λ min = λ min (Σ) denotes the smallest eigenvalue of Σ. This implies that
Tr{Σ 1/2 ( Σ n + λI d ) −1 Σ 1/2 } λ max (Σ 1/2 ( Σ n + λI d ) −1 Σ 1/2 ) = λ −1 min (Σ −1/2 ( Σ n + λI d )Σ −1/2 ) λ min λ so that σ 2 n E Tr ( Σ n + λI d ) −1 Σ σ 2 n · p · λ min λ .(64)
Recalling that the left-hand side of equation (64) is a lower bound on the minimax risk for every λ > 0, and noting that the right-hand side tends to +∞ as λ → 0, the minimax risk is infinite as claimed in Proposition 1. Non-degenerate case. Now, assume that P X is non-degenerate and that n d. By Fact 1, Σ n is invertible almost surely. In addition, Tr{( Σ n + λI d ) −1 Σ} = Tr{(Σ −1/2 Σ n Σ −1/2 + λΣ −1 ) −1 } is decreasing in λ (since λ → Σ −1/2 Σ n Σ −1/2 + λΣ −1 is increasing in λ), positive, and converges as λ → 0 + to Tr( Σ −1 n ). By the monotone convergence theorem, it follows that
lim λ→0 + σ 2 n E Tr ( Σ n + λI d ) −1 Σ = σ 2 n E Tr( Σ −1 n ) ,(65)
where the limit in the right-hand side belongs to (0, +∞]. Since the left-hand side is a lower bound on the minimax risk, the minimax risk over P Gauss (P X , σ 2 ) is larger than (σ 2 /n)E[Tr( Σ −1 n )].
Conclusion. Since P Gauss (P X , σ 2 ) ⊂ P well (P X , σ 2 ), the minimax risk over P well (P X , σ 2 ) is at least as large as that over P Gauss (P X , σ 2 ). When P X is degenerate or n < d, we showed that the minimax risk over P Gauss (P X , σ 2 ) is infinite, establishing Proposition 1. When P X is non-degenerate and n d, the minimax risk over P well (P X , σ 2 ) is smaller, and the minimax risk over P Gauss (P X , σ 2 ) larger, than (σ 2 /n)E[Tr( Σ −1 n )], so that these quantities agree and equal (σ 2 /n)E[Tr( Σ −1 n )], as claimed in Theorem 1.
Proof of Theorem 3
The proof starts with the following lemma.
Lemma 7. For any positive symmetric d × d matrix A and p ∈ [1, 2],

Tr(A^{−1}) + Tr(A) − 2d ≤ max(1, λ_min(A)^{−1}) · Tr( |A − I_d|^{2/p} ) .   (66)
Proof of Lemma 7. Let us start by showing that, for every a > 0,
a −1 + a − 2 max(1, a −1 ) · |a − 1| 2/p .(67)
Multiplying both sides of (67) by a > 0, it amounts to (a − 1) 2 = 1 + a 2 − 2a max(a, 1) · |a − 1| 2/p , namely to |a − 1| 2−2/p max(a, 1). For a ∈ (0, 2], this inequality holds since |a − 1| 1 and 2 − 2/p 0, so that |a − 1| 2−2/p 1 max(a, 1). For a 2, the inequalities |a − 1| 2 and 2 − 2/p 1 imply that |a − 1| 2−2/p |a − 1| a max(a, 1). This establishes (67). Now, let a 1 , . . . , a d > 0 be the eigenvalues of A. Without loss of generality, assume that a d = min j (a j ) = λ min (A). Then, by inequality (67) and the bound max(1, a −1 j ) max(1, a −1 d ), we have
Tr(A −1 ) + Tr(A) − 2d = d j=1 (a −1 j + a j − 2) max(1, a −1 d ) d j=1 |a j − 1| 2/p ,
which is precisely the desired inequality (66).
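A quick randomized test of inequality (66) on positive definite matrices (illustrative dimension and sampling scheme):

```python
import numpy as np

rng = np.random.default_rng(8)
d = 5                                               # illustrative dimension
for p in [1.0, 1.5, 2.0]:
    for _ in range(1000):
        B = rng.standard_normal((d, d))
        A = B @ B.T / d + 0.05 * np.eye(d)          # random positive definite matrix
        eig = np.linalg.eigvalsh(A)                 # eigenvalues of A (ascending)
        lhs = np.sum(1.0 / eig) + np.sum(eig) - 2 * d                   # Tr(A^{-1}) + Tr(A) - 2d
        rhs = max(1.0, 1.0 / eig[0]) * np.sum(np.abs(eig - 1.0) ** (2.0 / p))
        assert lhs <= rhs + 1e-9
print("inequality (66) held on all sampled matrices")
```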
Proof of Theorem 3. Let p ∈ (1, 2] which will be determined later, and denote q := p/(p − 1) its complement. Applying Lemma 7 to A = Σ n yields:
Tr( Σ −1 n ) + Tr( Σ n ) − 2d max(1, λ min ( Σ n ) −1 ) · Tr | Σ n − I d | 2/p .
Since E[Tr( Σ n )] = d, taking the expectation in the above bound and dividing by d yields:
1 d E Tr( Σ −1 n ) − 1 E max(1, λ min ( Σ n ) −1 ) · 1 d Tr | Σ n − I d | 2/p E max(1, λ min ( Σ n ) −1 ) q 1/q · E 1 d Tr | Σ n − I d | 2/p p 1/p (68) E max(1, λ min ( Σ n ) −q ) 1/q · E 1 d Tr ( Σ n − I d ) 2 1/p(69)
where (68) comes from Hölder's inequality, while (69) is obtained by noting that x → x p is convex and that (1/d)Tr(A) is the average of the eigenvalues of the symmetric matrix A. Next,
E 1 d Tr ( Σ n − I d ) 2 = 1 d Tr E 1 n n i=1 ( X i X ⊤ i − I d ) 2 = 1 n 2 d Tr 1 i,j n E ( X i X ⊤ i − I d )( X j X ⊤ j − I d ) = 1 nd Tr E ( X X ⊤ − I d ) 2 ,(70)
where we used in (70) the fact that,
for i = j, E ( X i X ⊤ i − I d )( X j X ⊤ j − I d ) = E[ X i X ⊤ i − I d ]E[ X j X ⊤ j − I d ] = 0. Now, for x ∈ R d ,Tr{(xx ⊤ − I d ) 2 } = Tr{ x 2 xx ⊤ − 2xx ⊤ + I d } = x 4 − 2 x 2 + d ,
so that (70) becomes, as E[ X 2 ] = d and E[ X 4 ] κd 2 (Assumption 2),
E 1 d Tr ( Σ n − I d ) 2 = 1 nd E X 4 − 2E X 2 + d = 1 n 1 d E X 4 − 1 κd n .(71)
In addition, recall that X satisfies Assumption 1 and that n max(6d/α, 12/α). Hence, letting C ′ 1 be the constant in Theorem 4, we have by Corollary 4:
E max(1, λ min ( Σ n ) −q ) 2C ′q .(72)
Finally, plugging the bounds (71) and (72) into (69) and letting q = α ′ n/2, so that 1/p = 1 − 1/q = 1 − 2/(α ′ n), we obtain
1 d · E Tr( Σ −1 n ) − 1 (2C ′q ) 1/q · κd n 1/p 2C ′ · κd n · n κd 2/(α ′ n) .(73)
Now, since κ = E[ X 4 ]/E[ X 2 ] 2 1 and d 1, n κd 2/(α ′ n) n 2/(α ′ n) = exp 2 log n α ′ n .
An elementary analysis shows that the function g : x → log x/x is increasing on (0, e] and decreasing on [e, +∞). Hence, if x, y > 1 satisfy x y log y e, then log x x log y + log log y y log y
1 + e −1 y
where we used log log y/ log y g(e) = e −1 . Here by assumption n 12α −1 log(12α −1 ) = 2α ′−1 log(2α ′−1 ), and thus log n/n (1 + e −1 )/(2/α ′ ), so that
n κd 2/(α ′ n) exp 2 α ′ · 1 + e −1 2/α ′ = exp 1 + e −1 4 .
Plugging this inequality into (73) yields the desired bound (21). Equation (22) then follows by Theorem 1.
Proof of Proposition 3
Recall that, by Lemma 6, we have
E E( β LS n ) = E 1 n n i=1 m(X i )Σ −1/2 X i 2 Σ −2 n + 1 n 2 E n i=1 σ 2 (X i ) Σ −1/2 X i 2 Σ −2 n .(74)
Now, since Σ −2 n λ min ( Σ n ) −2 I d , we have for every random variable V n :
E V n 2 Σ −2 n E V n 2 + E λ min ( Σ n ) −2 − 1 + · V n 2 E V n 2 + E {λ min ( Σ n ) −2 − 1} 2 + 1/2 · E V n 4 1/2 ,(75)
where (75) follows from the Cauchy-Schwarz inequality. Letting V n = σ(X i )Σ −1/2 X i , we obtain from (75)
1 n 2 E n i=1 σ 2 (X i ) Σ −1/2 X i 2 Σ −2 n 1 n E σ 2 (X) Σ −1/2 X 2 + 1 n E {λ min ( Σ n ) −2 − 1} 2 + 1/2 E σ 4 (X) Σ −1/2 X 4 1/2 .(76)
On the other hand, let
V n = n −1 n i=1 m(X i )Σ −1/2 X i ; we have, since E[m(X i )X i ] = E[ε i X i ] = 0, E V n 2 = E 1 n n i=1 m(X i )X i 2 Σ −1 = 1 n 2 1 i,j n E m(X i )X i , m(X j )X j Σ −1 = 1 n 2 n i=1 E m(X i ) 2 Σ −1/2 X i 2 + 1 n 2 i =j E[m(X i )X i ], E[m(X j )X j ] Σ −1 = 1 n E m(X) 2 Σ −1/2 X 2 .(77)
In addition,
E V n 4 = 1 n 4 1 i,j,k,l n E m(X i )X i , m(X j )X j Σ −1 m(X k )X k , m(X l )X l Σ −1 .
Now, by independence and since E[m(X)X] = 0, each term in the sum above where one index among i, j, k, l is distinct from the others cancels. We therefore have
E V n 4 = 1 n 4 n i=1 E m(X i )X i 4 Σ −1 + 2 n 4 i<j E m(X i )X i 2 Σ −1 m(X j )X j 2 Σ −1 + + 4 n 4 1 i<j n E m(X i )X i , m(X j )X j 2 Σ −1 1 n 4 n i=1 E m(X i )X i 4 Σ −1 + 6 n 4 i<j E m(X i )X i 2 Σ −1 m(X j )X j 2 Σ −1(78)
= 1 n 3 · E m(X) 4 Σ −1/2 X 4 + 6 n 4 · n(n − 1) 2 · E m(X) 2 X 2 Σ −1 2 1 n 3 · E m(X) 4 Σ −1/2 X 4 + 3 n 2 · E m(X) 2 Σ −1/2 X 2 2 4 n 2 · E m(X) 4 Σ −1/2 X 4
where (78) and (79) rely on the Cauchy-Schwarz inequality. Hence, it follows from (75), (77) and (79) that
E 1 n n i=1
m(X i )Σ −1/2 X i 2 Σ −2 n 1 n E m(X) 2 Σ −1/2 X 2 + E {λ min ( Σ n ) −2 − 1} 2 + 1/2 · 4 n 2 · E m(X) 4 Σ −1/2 X 4 1/2 1 n E m(X) 2 Σ −1/2 X 2 + 2 n E {λ min ( Σ n ) −2 − 1} 2 + 1/2 E m(X) 4 Σ −1/2 X 4 1/2 .
Plugging (76) and (80) into the decomposition (74) yields:
E E( β LS n )
1 n E m(X) 2 + σ 2 (X) Σ −1/2 X 2 + 1 n E {λ min ( Σ n ) −2 − 1} 2 + 1/2 × × E σ 4 (X) Σ −1/2 X 4 1/2 + 2E m(X) 4 Σ −1/2 X 4 1/2 (81)
Oliveira's lower tail bound. [Oli16] showed that, under Assumption 3, we have P(λ_min(Σ̃_n) ≥ 1 − ε) ≥ 1 − δ provided that n ≥ 81κ(d + 2 log(2/δ))/ε².
This can be rewritten as:

P( λ_min(Σ̃_n) < 1 − 9κ^{1/2} √((d + 2 log(2/δ))/n) ) ≤ δ .   (82)

Bound on the remaining term. Since the function x → x² is 2-Lipschitz on [0, 1], we have (x^{−2} − 1)_+ = (1 − x²)_+/x² ≤ 2(1 − x)_+/x² for x > 0, so that by Cauchy-Schwarz,

E[ {λ_min(Σ̃_n)^{−2} − 1}²_+ ]^{1/2} ≤ 2 E[λ_min(Σ̃_n)^{−8}]^{1/4} · E[ {1 − λ_min(Σ̃_n)}⁴_+ ]^{1/4} .   (83)

Now, let v^{1/2} = 9κ^{1/2} √([d + 2 log(2/δ)]/n), so that the bound (82) yields P(λ_min(Σ̃_n) ≤ 1 − v^{1/2}) ≤ δ. We have, equivalently, δ = 2 exp(−(n/(162κ))(v − 81κd/n)), which is at most 2 exp(−nv/(324κ)) as long as v ≥ 162κd/n. In addition,

E[ {1 − λ_min(Σ̃_n)}⁴_+ ] = ∫_0^1 P( {1 − λ_min(Σ̃_n)}²_+ ≥ v ) 2v dv = ∫_0^1 P( λ_min(Σ̃_n) ≤ 1 − v^{1/2} ) 2v dv .   (84)

Plugging the previous inequality into (84) yields

E[ {1 − λ_min(Σ̃_n)}⁴_+ ] ≤ ∫_0^{min(162κd/n, 1)} 2v dv + ∫_{min(162κd/n, 1)}^1 2 exp(−nv/(324κ)) 2v dv ≤ (162κd/n)² + (324κ/n)² ∫_0^∞ 4 exp(−w) w dw = (162κd/n)² + 4 (324κ/n)² ,   (85)

so that, using the inequality (x + y)^{1/4} ≤ x^{1/4} + y^{1/4},

E[ {1 − λ_min(Σ̃_n)}⁴_+ ]^{1/4} ≤ 9 √(2κd/n) + 18 √(2κ/n) .

Also, by Corollary 4 and the fact that αn/12 ≥ 8, E[λ_min(Σ̃_n)^{−8}] ≤ 2C′^8, so that inequality (83) becomes

E[ {λ_min(Σ̃_n)^{−2} − 1}²_+ ]^{1/2} ≤ 92 C′² √(κd/n) .   (86)

Final bound. Now, let χ > 0 as in Proposition 3. Since E[ε²|X] = m(X)² + σ²(X) ≥ max(m(X)², σ²(X)),
we have
max( E[m(X)⁴ ‖Σ^{−1/2}X‖⁴], E[σ⁴(X) ‖Σ^{−1/2}X‖⁴] ) ≤ E[ E[ε²|X]² ‖Σ^{−1/2}X‖⁴ ] = χ d² .   (87)
Putting the bounds (86) and (87) inside (81) yields
E[E(β̂^LS_n)] ≤ (1/n) E[(m(X)² + σ²(X)) ‖Σ^{−1/2}X‖²] + (1/n) · 92C′² √(κd/n) · 3√χ d
 = (1/n) E[(Y − ⟨β*, X⟩)² ‖Σ^{−1/2}X‖²] + 276 C′² √(κχ) (d/n)^{3/2} ,   (88)
where we used the fact that E[(Y − ⟨β*, X⟩)²|X] = m(X)² + σ²(X). This establishes (23). Finally, if P ∈ P_mis(P_X, σ²), then E[ε²|X] ≤ σ², so that

χ = E[E[ε²|X]² ‖Σ^{−1/2}X‖⁴]/d² ≤ σ⁴ E[‖Σ^{−1/2}X‖⁴]/d² ≤ σ⁴ κ ,

where we used the fact that E[‖Σ^{−1/2}X‖⁴] ≤ κd² by Assumption 3 (see Remark 3). Plugging this inequality, together with E[(Y − ⟨β*, X⟩)² ‖Σ^{−1/2}X‖²] ≤ σ²d, inside (88), yields the upper bound (24). This concludes the proof.
6 Remaining proofs from Section 3
In this section, we gather the proofs of remaining results from Section 3, namely Proposition 4 and Corollary 4. For u C q , we bound P(min(1, Z) u −1/q ) 1, while for u C q (so that u −1/q C −1 1), we bound P(min(1, Z) u −1/q ) = P(Z u −1/q ) (Cu −1/q ) a . We then conclude that
max(1, Z −1 ) q L q C q + ∞ C q (C −q u) −a/q du = C q 1 + ∞ 1 v −a/q dv 2C q ,
where we let v = C −q u and used the fact that ∞ 1 v −a/q dv ∞ 1 v −2 dv = 1 since q a/2. The second point follows from Markov's inequality: for every t > 0, P(Z t) = P(Z −q t −q ) t q · E[Z −q ] (Ct) q .
Finally, for the third point, since $\mathbb{P}(Z \le u^{-1/q}) \ge (cu^{-1/q})^a$ for $u > 1$, we have for $q \ge a$:
\[
\mathbb{E}\big[\max(1, Z^{-q})\big] \ge \int_1^\infty \mathbb{P}\big(Z \le u^{-1/q}\big)\,du \ge \int_1^\infty c^a u^{-a/q}\,du \ge c^a \int_1^\infty u^{-1}\,du = +\infty.
\]
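For intuition, here is a small numerical check (illustrative only) with Z uniform on (0, 1), where P(Z ≤ t) = t, so one may take c = C = a = 1: negative moments of order q < 1 remain finite while those of order q ≥ 1 diverge, as the first and third points predict.

```python
# Sanity check of the tail / negative-moment dichotomy with Z ~ Uniform(0, 1):
# E[Z^{-q}] = 1/(1-q) for q < 1, and E[max(1, Z^{-q})] = +infinity for q >= 1.
import numpy as np

rng = np.random.default_rng(2)
Z = rng.uniform(size=10**6)
for q in (0.4, 0.9, 1.0, 1.5):
    print(f"q = {q}: empirical mean of Z^(-q) =", np.mean(Z ** (-q)))
# The q < 1 averages stabilize near 1/(1-q); the q >= 1 averages keep growing
# with the sample size, reflecting the divergence of the corresponding moment.
```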
Proof of Proposition 6
The proof relies on the following lemma.
Lemma 9. Let $X_1,\ldots,X_d$ be independent real random variables. Assume that there exists a sub-additive function $g : \mathbb{R}^+ \to \mathbb{R}$ such that, for every $j = 1,\ldots,d$ and $\xi \in \mathbb{R}$, $|\Phi_{X_j}(\xi)| \le \exp(-g(\xi^2))$. Then, for every $t \in \mathbb{R}$,
\[
Q_X(t) \le t\cdot\int_{-2\pi/t}^{2\pi/t} \exp(-g(\xi^2))\,d\xi. \tag{96}
\]

Proof of Lemma 9. For every $\theta \in S^{d-1}$ and $\xi \in \mathbb{R}$, we have, by independence of the $X_j$,
\[
|\Phi_{\langle\theta,X\rangle}(\xi)|
= \big|\mathbb{E}\,e^{i\xi(\theta_1 X_1+\cdots+\theta_d X_d)}\big|
= \big|\mathbb{E}\,e^{i\xi\theta_1 X_1}\big|\cdots\big|\mathbb{E}\,e^{i\xi\theta_d X_d}\big|
\le \exp\big(-\big[g(\theta_1^2\xi^2)+\cdots+g(\theta_d^2\xi^2)\big]\big)
\le \exp(-g(\xi^2)),
\]
where the last inequality uses the sub-additivity of $g$ and the fact that $\theta_1^2+\cdots+\theta_d^2 = \|\theta\|^2 = 1$. Lemma 9 then follows from Esséen's inequality [Ess66], which states that for any real random variable $Z$,
\[
Q_Z(t) \le t\cdot\int_{-2\pi/t}^{2\pi/t} |\Phi_Z(\xi)|\,d\xi.
\]

Proof of Proposition 6. The functions $g_1 : u \mapsto \alpha\log(1+u)$ and $g_2 : u \mapsto C_0^{-1}\sqrt{u}$ are concave functions on $\mathbb{R}^+$ taking the value 0 at 0, and therefore sub-additive. Since $g_1$ is also increasing, the function $g : u \mapsto g_1\circ g_2(u) = \alpha\log(1+C_0^{-1}\sqrt{u})$ is also sub-additive. Condition (31) simply writes $|\Phi_{X_j}(\xi)| \le \exp(-g(\xi^2))$, so that by Lemma 9,
\[
Q_X(t) \le t\int_{-2\pi/t}^{2\pi/t}\exp(-g(\xi^2))\,d\xi = t\int_{-2\pi/t}^{2\pi/t}\big(1+C_0^{-1}|\xi|\big)^{-\alpha}\,d\xi,
\]
which implies that $Q_X(t) \le (Ct)^\alpha$, concluding the proof.
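As a quick numerical illustration of such small-ball bounds (not from the paper), the sketch below estimates the concentration function Q_{⟨θ,X⟩}(t) by simulation for a design with independent standard Gaussian coordinates, approximating the supremum over directions by a maximum over randomly sampled θ; all constants are illustrative.

```python
# Monte Carlo estimate of sup_theta P(|<theta, X>| <= t) for a design with
# independent standard Gaussian coordinates; for Gaussians the exact value is
# approximately sqrt(2/pi) * t for small t, i.e. a linear-in-t profile.
import numpy as np

rng = np.random.default_rng(3)
d, n_samples, n_dirs = 5, 200_000, 50
X = rng.normal(size=(n_samples, d))
thetas = rng.normal(size=(n_dirs, d))
thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)

proj = np.abs(X @ thetas.T)                      # |<theta_j, X_i>|
for t in (0.05, 0.1, 0.2, 0.4):
    q = np.max(np.mean(proj <= t, axis=0))       # worst sampled direction
    print(f"t = {t}: estimated Q_X(t) ~ {q:.4f}  (Gaussian value ~ {np.sqrt(2/np.pi)*t:.4f})")
```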
Conclusion
We analyzed random-design linear prediction from a minimax perspective, by obtaining matching upper and lower bounds on the risk under weak conditions. This revealed that the hardness of the problem is characterized by the distribution of leverage scores, and that Gaussian design is almost the most favorable one in high dimension. The upper bounds relied on a study of the lower tail and negative moments of empirical covariance matrices. We showed a general lower bound on this lower tail in dimension d 2, as well as a matching upper bound under a necessary regularity condition on the design. The proof of this result relied on the use of PAC-Bayesian smoothing of empirical processes, with refined non-Gaussian smoothing distributions.
It is worth noting that the upper bound of Theorem 4 on the lower tail of $\lambda_{\min}(\widehat\Sigma_n)$ requires $n \ge 6d$; the approach used here is not sufficient to obtain meaningful bounds for (nearly) square matrices, whose aspect ratio d/n is close to 1. It could be interesting to see if the bound of Theorem 4 can be extended to this case (for instance in the case of centered, variance 1 independent coordinates with bounded density, as in Section 3.3), by leveraging the techniques from [RV08,RV09,TV09b,TV09a].
Acknowledgements. The author would like to thank two anonymous referees and an associate editor for very helpful comments that improved the quality of this paper.Funding. Part of this work was carried at Centre de Mathématiques Appliquées, École polytechnique, France, and supported by a public grant as part of the Investissement d'avenir project, reference ANR-11-LABX-0056-LMH, LabEx LMH. Part of this work was carried out at the Machine Learning Genoa center, Università di Genova, Italy.Proof of Proposition 4Let Θ be a random variable distributed uniformly on the unit sphere S d−1 and independent of X. We have sup θ∈S d−1 P(| θ, X | t) E P(| Θ, X | t|Θ) = E P(| Θ, X | t|X) .Next, note that for every x ∈ R d , Θ, x is distributed as x · Θ 1 , where Θ 1 denotes the first coordinate of Θ. Since X is independent of Θ, the above inequality becomesLet us now derive the distribution of |Θ 1 |. Let φ : S d−1 → R be the projection on the first coordinate:(1, 0, . . . , 0)), ∇φ(θ) ∈ (Rθ) ⊥ is the orthogonal projection of e 1 on (Rθ) ⊥ , namely e 1 − θ 1 θ, with norm ∇φ(θ) = 1 − θ 2 1 . Fix t ∈ (0, 1] and define g(θ) = 1(|θ 1 | t)/ 1 − θ 2 1 , which equals1)), and such that g(θ) · ∇φ(θ) = 1(|θ 1 | t). Hence, the coarea formula [Fed96, Theorem 3.2.2] implies that, for every t ∈ (0, 1],If d = 2, (91) implies that |Θ 1 | has density (2/π)/ √ 1 − t 2 2/π on [0, 1], and hence for t ∈ [0, 1]:If d = 3, (91) implies that |Θ 1 | is uniformly distributed on [0, 1], so that for t ∈ [0, 1]Now, assume that d 4. Letting t = 1 in (91) yields the value of the constant C d , which normalizes the right-hand side: since 1 − u 2 e −u 2 ,
Linear regression through PAC-Bayesian truncation. Jean- , Yves Audibert, Olivier Catoni, 1010.0072arXiv preprintJean-Yves Audibert and Olivier Catoni. Linear regression through PAC-Bayesian truncation. arXiv preprint 1010.0072, 2010.
Robust linear least squares regression. Jean- , Yves Audibert, Olivier Catoni, Ann. Statist. 395Jean-Yves Audibert and Olivier Catoni. Robust linear least squares regression. Ann. Statist., 39(5):2766-2794, 2011.
An introduction to random matrices. Greg W Anderson, Alice Guionnet, Ofer Zeitouni, Cambridge University PressGreg W. Anderson, Alice Guionnet, and Ofer Zeitouni. An introduction to random matrices. Cambridge University Press, 2010.
Quantitative estimates of the convergence of the empirical covariance matrix in log-concave ensembles. Radosław Adamczak, Alexander Litvak, Alain Pajor, Nicole Tomczak-Jaegermann, J. Amer. Math. Soc. 232Radosław Adamczak, Alexander Litvak, Alain Pajor, and Nicole Tomczak- Jaegermann. Quantitative estimates of the convergence of the empirical covariance matrix in log-concave ensembles. J. Amer. Math. Soc., 23(2):535-561, 2010.
An Introduction to Multivariate Statistical Analysis. Theodore W Anderson, WileyNew YorkTheodore W. Anderson. An Introduction to Multivariate Statistical Analysis. Wiley New York, 2003.
Relative loss bounds for on-line density estimation with the exponential family of distributions. Katy S Azoury, Manfred K Warmuth, Mach. Learn. 433Katy S. Azoury and Manfred K. Warmuth. Relative loss bounds for on-line density estimation with the exponential family of distributions. Mach. Learn., 43(3):211- 246, 2001.
How many variables should be entered in a regression equation?. Leo Breiman, David Freedman, J. Amer. Statist. Assoc. 78381Leo Breiman and David Freedman. How many variables should be entered in a regression equation? J. Amer. Statist. Assoc., 78(381):131-136, 1983.
Rajendra Bhatia. Positive Definite Matrices. Princeton University Press, 2009.
Peter L. Bartlett, Wouter M. Koolen, Alan Malek, Eiji Takimoto, and Manfred K. Warmuth. Minimax fixed-design linear regression. In Proc. 28th Conference on Learning Theory, pages 226-239, 2015.
Concentration Inequalities: A Nonasymptotic Theory of Independence. Stéphane Boucheron, Gábor Lugosi, Pascal Massart, Oxford University PressOxfordStéphane Boucheron, Gábor Lugosi, and Pascal Massart. Concentration Inequali- ties: A Nonasymptotic Theory of Independence. Oxford University Press, Oxford, 2013.
Spectral Analysis of Large Dimensional Random Matrices. Zhidong Bai, Jack W Silverstein, Springer Series in Statistics. Springer-VerlagZhidong Bai and Jack W. Silverstein. Spectral Analysis of Large Dimensional Ran- dom Matrices. Springer Series in Statistics. Springer-Verlag, 2010.
Aggregation for gaussian regression. Florentina Bunea, Alexandre B Tsybakov, Marten H Wegkamp, Ann. Statist. 354Florentina Bunea, Alexandre B. Tsybakov, and Marten H. Wegkamp. Aggregation for gaussian regression. Ann. Statist., 35(4):1674-1697, 2007.
Convex Optimization. Stephen Boyd, Lieven Vandenberghe, Cambridge University PressStephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge Uni- versity Press, 2004.
Statistical Learning Theory and Stochastic Optimization: Ecole d'Eté de Probabilités de Saint-Flour XXXI -2001. Olivier Catoni, Lecture Notes in Mathematics. Springer-VerlagOlivier Catoni. Statistical Learning Theory and Stochastic Optimization: Ecole d'Eté de Probabilités de Saint-Flour XXXI -2001. Lecture Notes in Mathematics. Springer-Verlag, 2004.
PAC-Bayesian Supervised Classification: The Thermodynamics of Statistical Learning. Olivier Catoni, IMS Lecture Notes Monograph Series. Institute of Mathematical Statistics. 56Olivier Catoni. PAC-Bayesian Supervised Classification: The Thermodynamics of Statistical Learning, volume 56 of IMS Lecture Notes Monograph Series. Institute of Mathematical Statistics, 2007.
Optimal rates for the regularized leastsquares algorithm. Andrea Caponnetto, Ernesto De Vito, Found. Comput. Math. 73Andrea Caponnetto and Ernesto De Vito. Optimal rates for the regularized least- squares algorithm. Found. Comput. Math., 7(3):331-368, 2007.
Sensitivity analysis in linear regression. Samprit Chatterjee, Ali S Hadi, Wiley Series in Probability and Statistics. 327John Wiley & SonsSamprit Chatterjee and Ali S. Hadi. Sensitivity analysis in linear regression, volume 327 of Wiley Series in Probability and Statistics. John Wiley & Sons, New York, 1988.
Best choices for regularization parameters in learning theory: on the bias-variance problem. Felipe Cucker, Steve Smale, Found. Comput. Math. 24Felipe Cucker and Steve Smale. Best choices for regularization parameters in learn- ing theory: on the bias-variance problem. Found. Comput. Math., 2(4):413-428, 2002.
On the mathematical foundations of learning. Felipe Cucker, Steve Smale, Bull. Amer. Math. Soc. 391Felipe Cucker and Steve Smale. On the mathematical foundations of learning. Bull. Amer. Math. Soc., 39(1):1-49, 2002.
Ridge regression and asymptotic minimax estimation over spheres of growing dimension. H Lee, Dicker, Bernoulli. 221Lee H. Dicker. Ridge regression and asymptotic minimax estimation over spheres of growing dimension. Bernoulli, 22(1):1-37, 2016.
High dimensional robust m-estimation: asymptotic variance via approximate message passing. David Donoho, Andrea Montanari, Probab. Theory Related Fields. 1663David Donoho and Andrea Montanari. High dimensional robust m-estimation: asymptotic variance via approximate message passing. Probab. Theory Related Fields, 166(3):935-969, 2016.
Model selection for regularized least-squares algorithm in learning theory. Ernesto De Vito, Andrea Caponnetto, Lorenzo Rosasco, Found. Comput. Math. 51Ernesto De Vito, Andrea Caponnetto, and Lorenzo Rosasco. Model selection for reg- ularized least-squares algorithm in learning theory. Found. Comput. Math., 5(1):59- 85, 2005.
High-dimensional asymptotics of prediction: Ridge regression and classification. Edgar Dobriban, Stefan Wager, Ann. Statist. 461Edgar Dobriban and Stefan Wager. High-dimensional asymptotics of prediction: Ridge regression and classification. Ann. Statist., 46(1):247-279, 2018.
Eigenvalues and condition numbers of random matrices. Alan Edelman, SIAM J. Matrix Anal. Appl. 94Alan Edelman. Eigenvalues and condition numbers of random matrices. SIAM J. Matrix Anal. Appl., 9(4):543-560, 1988.
Asymptotic behavior of unregularized and ridge-regularized high-dimensional robust regression estimators: rigorous results. Noureddine El Karoui, arXiv:1311.2445Noureddine El Karoui. Asymptotic behavior of unregularized and ridge-regularized high-dimensional robust regression estimators: rigorous results. arXiv:1311.2445, 2013.
On the impact of predictor geometry on the performance on high-dimensional ridge-regularized generalized robust regression estimators. Noureddine El Karoui, Probab. Theory Related Fields. 1701Noureddine El Karoui. On the impact of predictor geometry on the performance on high-dimensional ridge-regularized generalized robust regression estimators. Probab. Theory Related Fields, 170(1):95-175, 2018.
Geometric sensitivity of random matrix results: consequences for shrinkage estimators of covariance and related statistical methods. Noureddine El Karoui, Holger Kösters, 1105.1404arXiv preprintNoureddine El Karoui and Holger Kösters. Geometric sensitivity of random matrix results: consequences for shrinkage estimators of covariance and related statistical methods. arXiv preprint 1105.1404, 2011.
On the Kolmogorov-Rogozin inequality for the concentration function. Carl G Esseen, 5Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte GebieteCarl G. Esseen. On the Kolmogorov-Rogozin inequality for the concentration func- tion. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, 5(3):210- 216, 1966.
Geometric measure theory. Herbert Federer, SpringerHerbert Federer. Geometric measure theory. Springer, 1996.
Dean P. Foster. Prediction in the worst case. Ann. Statist., 19:1084-1090, 1991.
Andrew Gelman, John B. Carlin, Hal S. Stern, David B. Dunson, Aki Vehtari, and Donald B. Rubin. Bayesian Data Analysis. Chapman and Hall/CRC, 2013.
A distribution-free theory of nonparametric regression. László Györfi, Michael Kohler, Adam Krzyzak, Harro Walk, Springer Science & Business MediaLászló Györfi, Michael Kohler, Adam Krzyzak, and Harro Walk. A distribution-free theory of nonparametric regression. Springer Science & Business Media, 2002.
A Roger, Charles R Horn, Johnson, Matrix Analysis. Cambridge University PressRoger A. Horn and Charles R. Johnson. Matrix Analysis. Cambridge University Press, 1990.
Random design analysis of ridge regression. Daniel Hsu, M Sham, Tong Kakade, Zhang, Found. Comput. Math. 143Daniel Hsu, Sham M. Kakade, and Tong Zhang. Random design analysis of ridge regression. Found. Comput. Math., 14(3):569-600, 2014.
Application of ridge analysis to regression problems. Arthur E Hoerl, Chemical Engineering Progress. 58Arthur E. Hoerl. Application of ridge analysis to regression problems. Chemical Engineering Progress, 58:54-59, 1962.
Loss minimization and parameter estimation with heavy tails. Daniel Hsu, Sivan Sabato, J. Mach. Learn. Res. 1718Daniel Hsu and Sivan Sabato. Loss minimization and parameter estimation with heavy tails. J. Mach. Learn. Res., 17(18):1-40, 2016.
Robust regression: asymptotics, conjectures and Monte Carlo. J Peter, Huber, Ann. Statist. 15Peter J. Huber. Robust regression: asymptotics, conjectures and Monte Carlo. Ann. Statist., 1(5):799-821, 1973.
Robust statistics. J Peter, Huber, John Wiley and SonsPeter J. Huber. Robust statistics. John Wiley and Sons, 1981.
The hat matrix in regression and ANOVA. C David, Roy E Hoaglin, Welsch, Amer. Statist. 321David C. Hoaglin and Roy E. Welsch. The hat matrix in regression and ANOVA. Amer. Statist., 32(1):17-22, 1978.
Gaussian estimation: Sequence and wavelet models. Draft version. Iain M Johnstone, Iain M. Johnstone. Gaussian estimation: Sequence and wavelet models. Draft version, September 16, 2019, 2019.
Concentration inequalities and moment bounds for sample covariance operators. Vladimir Koltchinskii, Karim Lounici, Bernoulli. 231Vladimir Koltchinskii and Karim Lounici. Concentration inequalities and moment bounds for sample covariance operators. Bernoulli, 23(1):110-133, 2017.
Bounding the smallest singular value of a random matrix without concentration. Vladimir Koltchinskii, Shahar Mendelson, Int. Math. Res. Not. IMRN. 23Vladimir Koltchinskii and Shahar Mendelson. Bounding the smallest singular value of a random matrix without concentration. Int. Math. Res. Not. IMRN, 2015(23):12991-13008, 2015.
Theory of Point Estimation. Erich L Lehmann, George Casella, SpringerErich L. Lehmann and George Casella. Theory of Point Estimation. Springer, 1998.
The Concentration of Measure Phenomenon. Michel Ledoux, American Mathematical SocietyMichel Ledoux. The Concentration of Measure Phenomenon. American Mathemat- ical Society, 2001.
Performance of empirical risk minimization in linear aggregation. Guillaume Lecué, Shahar Mendelson, Bernoulli. 223Guillaume Lecué and Shahar Mendelson. Performance of empirical risk minimiza- tion in linear aggregation. Bernoulli, 22(3):1520-1534, 2016.
Mean estimation and regression under heavytailed distributions: a survey. Gábor Lugosi, Shahar Mendelson, Found. Comput. Math. 19Gábor Lugosi and Shahar Mendelson. Mean estimation and regression under heavy- tailed distributions: a survey. Found. Comput. Math., 19:1145-1190, 2019.
Über monotone matrixfunktionen. Karl Löwner, Math. Z. 381Karl Löwner. Über monotone matrixfunktionen. Math. Z., 38(1):177-216, 1934.
PAC-Bayes & margins. John Langford, John Shawe-Taylor, Advances in Neural Information Processing Systems 15. John Langford and John Shawe-Taylor. PAC-Bayes & margins. In Advances in Neural Information Processing Systems 15, pages 439-446, 2003.
Some PAC-Bayesian theorems. David A Mcallester, Mach. Learn. 373David A. McAllester. Some PAC-Bayesian theorems. Mach. Learn., 37(3):355-363, 1999.
PAC-Bayesian stochastic model selection. David A Mcallester, Mach. Learn. 511David A. McAllester. PAC-Bayesian stochastic model selection. Mach. Learn., 51(1):5-21, 2003.
. Shahar Mendelson. Learning without concentration. J. ACM. 62321Shahar Mendelson. Learning without concentration. J. ACM, 62(3):21, 2015.
Distribution of eigenvalues for some sets of random matrices. Alexandrovich Vladimir, Leonid Andreevich Marchenko, Pastur, Matematicheskii Sbornik. 1144Vladimir Alexandrovich Marchenko and Leonid Andreevich Pastur. Distribution of eigenvalues for some sets of random matrices. Matematicheskii Sbornik, 114(4):507- 536, 1967.
On the singular values of random matrices. Shahar Mendelson, Grigoris Paouris, J. Eur. Math. Soc. 164Shahar Mendelson and Grigoris Paouris. On the singular values of random matrices. J. Eur. Math. Soc., 16(4):823-834, 2014.
Topics in non-parametric statistics. Ecole d'Ete de Probabilites de Saint-Flour XXVIII-1998. Arkadi Nemirovski, 28Arkadi Nemirovski. Topics in non-parametric statistics. Ecole d'Ete de Probabilites de Saint-Flour XXVIII-1998, 28:85-277, 2000.
Small ball probability, inverse theorems, and applications. H Hoi, Van H Nguyen, Vu, Erdös Centennial. SpringerHoi H. Nguyen and Van H. Vu. Small ball probability, inverse theorems, and appli- cations. In Erdös Centennial, pages 409-463. Springer, 2013.
The lower tail of random quadratic forms with applications to ordinary least squares. Roberto I Oliveira, Probab. Theory Related Fields. 1663Roberto I. Oliveira. The lower tail of random quadratic forms with applications to ordinary least squares. Probab. Theory Related Fields, 166(3):1175-1194, 2016.
A statistical perspective on randomized sketching for ordinary least-squares. Garvesh Raskutti, Michael W Mahoney, J. Mach. Learn. Res. 171Garvesh Raskutti and Michael W. Mahoney. A statistical perspective on randomized sketching for ordinary least-squares. J. Mach. Learn. Res., 17(1):7508-7538, 2016.
The estimate of the maximum of the convolution of bounded densities. Boris A Rogozin, Teor. Veroyatn. Primen. 321Boris A. Rogozin. The estimate of the maximum of the convolution of bounded densities. Teor. Veroyatn. Primen., 32(1):53-61, 1987.
The Littlewood-Offord problem and invertibility of random matrices. Mark Rudelson, Roman Vershynin, Adv. Math. 2182Mark Rudelson and Roman Vershynin. The Littlewood-Offord problem and invert- ibility of random matrices. Adv. Math., 218(2):600-633, 2008.
Smallest singular value of a random rectangular matrix. Mark Rudelson, Roman Vershynin, Comm. Pure Appl. Math. 6212Mark Rudelson and Roman Vershynin. Smallest singular value of a random rectan- gular matrix. Comm. Pure Appl. Math., 62(12):1707-1739, 2009.
Non-asymptotic theory of random matrices: extreme singular values. Mark Rudelson, Roman Vershynin, Proc. International Congress of Mathematicians. International Congress of Mathematicians3Mark Rudelson and Roman Vershynin. Non-asymptotic theory of random matri- ces: extreme singular values. In Proc. International Congress of Mathematicians, volume 3, pages 1576-1602, 2010.
Small ball probabilities for linear images of high-dimensional distributions. Mark Rudelson, Roman Vershynin, Int. Math. Res. Not. IMRN. 19Mark Rudelson and Roman Vershynin. Small ball probabilities for linear images of high-dimensional distributions. Int. Math. Res. Not. IMRN, 2015(19):9594-9617, 2014.
Bootstrapping and sample splitting for high-dimensional, assumption-lean inference. Alessandro Rinaldo, Larry Wasserman, Max G' Sell, Ann. Statist. 476Alessandro Rinaldo, Larry Wasserman, and Max G'Sell. Bootstrapping and sample splitting for high-dimensional, assumption-lean inference. Ann. Statist., 47(6):3438- 3469, 2019.
The sample complexity of learning linear predictors with the squared loss. Ohad Shamir, J. Mach. Learn. Res. 16108Ohad Shamir. The sample complexity of learning linear predictors with the squared loss. J. Mach. Learn. Res., 16(108):3475-3486, 2015.
Optimal rates for regularized least squares regression. Ingo Steinwart, Don Hush, Clint Scovel, Proc. 22nd Conference on Learning Theory. 22nd Conference on Learning TheoryIngo Steinwart, Don Hush, and Clint Scovel. Optimal rates for regularized least squares regression. In Proc. 22nd Conference on Learning Theory, pages 79-93, 2009.
Multiple regression. Charles Stein, Contributions to Probability and Statistics: Essays in Honor of Harold Hotelling. Stanford University PressCharles Stein. Multiple regression. In Contributions to Probability and Statistics: Essays in Honor of Harold Hotelling. Stanford University Press, 1960.
Covariance estimation for distributions with 2 + ε moments. Nikhil Srivastava, Roman Vershynin, Ann. Probab. 415Nikhil Srivastava and Roman Vershynin. Covariance estimation for distributions with 2 + ε moments. Ann. Probab., 41(5):3081-3111, 2013.
Learning theory estimates via integral operators and their approximations. Steve Smale, Ding-Xuan Zhou, Constr. Approx. 262Steve Smale and Ding-Xuan Zhou. Learning theory estimates via integral operators and their approximations. Constr. Approx., 26(2):153-172, 2007.
Topics in random matrix theory. Terence Tao, American Mathematical SocietyTerence Tao. Topics in random matrix theory. American Mathematical Society, 2012.
Solution of incorrectly formulated problems and the regularization method. Andrey N Tikhonov, Soviet Mathematics Doklady. 4Andrey N. Tikhonov. Solution of incorrectly formulated problems and the regular- ization method. Soviet Mathematics Doklady, 4:1035-1038, 1963.
Sample covariance matrices of heavy-tailed distributions. Konstantin Tikhomirov, Int. Math. Res. Not. IMRN. 20Konstantin Tikhomirov. Sample covariance matrices of heavy-tailed distributions. Int. Math. Res. Not. IMRN, 2018(20):6254-6289, 2018.
Optimal rates of aggregation. Alexandre B Tsybakov, Learning Theory and Kernel Machines. SpringerAlexandre B. Tsybakov. Optimal rates of aggregation. In Learning Theory and Kernel Machines, Lecture Notes in Artificial Intelligence, pages 303-313. Springer, 2003.
Introduction to nonparametric estimation. Alexandre B Tsybakov, SpringerAlexandre B. Tsybakov. Introduction to nonparametric estimation. Springer, 2009.
From the Littlewood-Offord problem to the circular law: universality of the spectral distribution of random matrices. Terence Tao, Van Vu, Bull. Amer. Math. Soc. 463Terence Tao and Van Vu. From the Littlewood-Offord problem to the circular law: universality of the spectral distribution of random matrices. Bull. Amer. Math. Soc., 46(3):377-396, 2009.
Inverse Littlewood-Offord theorems and the condition number of random discrete matrices. Terence Tao, H Van, Vu, Ann. of Math. 1692Terence Tao and Van H. Vu. Inverse Littlewood-Offord theorems and the condition number of random discrete matrices. Ann. of Math., 169(2):595-632, 2009.
On higher order isotropy conditions and lower bounds for sparse quadratic forms. Alan Sara Van De Geer, Muro, Electron. J. Stat. 82Sara van de Geer and Alan Muro. On higher order isotropy conditions and lower bounds for sparse quadratic forms. Electron. J. Stat., 8(2):3031-3061, 2014.
Introduction to the non-asymptotic analysis of random matrices. Roman Vershynin, Compressed Sensing: Theory and Applications. CambridgeRoman Vershynin. Introduction to the non-asymptotic analysis of random matrices. In Compressed Sensing: Theory and Applications, pages 210-268. Cambridge, 2012.
High-dimensional probability: An introduction with applications in data science. Roman Vershynin, Cambridge University PressRoman Vershynin. High-dimensional probability: An introduction with applications in data science. Cambridge University Press, 2018.
Competitive on-line statistics. Volodya Vovk, Int. Stat. Rev. 692Volodya Vovk. Competitive on-line statistics. Int. Stat. Rev., 69(2):213-248, 2001.
Optimal phase transitions in compressed sensing. Yihong Wu, Sergio Verdú, IEEE Trans. Inform. Theory. 5810Yihong Wu and Sergio Verdú. Optimal phase transitions in compressed sensing. IEEE Trans. Inform. Theory, 58(10):6241-6263, 2012.
Lower bounds on the smallest eigenvalue of a sample covariance matrix. Pavel Yaskov, Electron. Commun. Probab. 19Pavel Yaskov. Lower bounds on the smallest eigenvalue of a sample covariance matrix. Electron. Commun. Probab., 19, 2014.
Sharp lower bounds on the least singular value of a random matrix without the fourth moment condition. Pavel Yaskov, Electron. Commun. Probab. 20Pavel Yaskov. Sharp lower bounds on the least singular value of a random matrix without the fourth moment condition. Electron. Commun. Probab., 20, 2015.
A new version of the second main theorem for meromorphic mappings intersecting hyperplanes in several complex variables

Tingbin Cao (Department of Mathematics, Nanchang University, Nanchang, Jiangxi 330031, P. R. China)
Risto Korhonen (Department of Physics and Mathematics, University of Eastern Finland, P.O. Box 111, FI-80101 Joensuu, Finland)
Email addresses: [email protected] (Tingbin Cao), [email protected] (Risto Korhonen)

arXiv:1601.05716 (https://arxiv.org/pdf/1601.05716v1.pdf), 21 Jan 2016. Preprint submitted to the arXiv January 22, 2016. DOI: 10.1016/j.jmaa.2016.06.050.
Keywords: Meromorphic mapping, Nevanlinna theory, Difference operator, Casorati determinant. 2010 MSC: Primary 32H30, Secondary 30D35.

Abstract. Let c ∈ C^m, f : C^m → P^n(C) be a linearly nondegenerate meromorphic mapping over the field P_c of c-periodic meromorphic functions in C^m, and let H_j (1 ≤ j ≤ q) be q (> 2N − n + 1) hyperplanes in N-subgeneral position of P^n(C). We prove a new version of the second main theorem for meromorphic mappings of hyperorder strictly less than one without truncated multiplicity by considering the Casorati determinant of f instead of its Wronskian determinant. As its applications, we obtain a defect relation, a uniqueness theorem and a difference analogue of generalized Picard theorem.
Introduction
The Picard's theorem says that all holomorphic mappings f : C 1 → P 1 (C) \ {a, b, c} are constants. Since Nevanlinna [1] established the second main theorem for meromorphic functions in the complex plane in 1925 and Ahlfors did it for meromorphic curves in 1941, many forms of the second main theorem for holomorphic maps, as well as meromorphic maps, on various contexts were found. They are powerful generalizations of the Picard's theorem, and are also applied to defect relations and uniqueness problems. By Weyl-Ahlfors' method Chen [2] proved a second main theorem as follows. The case of m = 1 is proved by H. Cartan [3] when hyperplanes H j (1 ≤ j ≤ q) are in general position.
Theorem 1.1 ( [2,3]). Let f : C m → P n (C) be a linearly nondegenerate meromorphic mapping over C 1 , and let H j (1 ≤ j ≤ q) be q(> 2N − n + 1) hyperplanes in N-subgeneral position in P n (C). Then we have
\[
(q - 2N + n - 1)\,T_f(r) \le \sum_{j=1}^{q} N\big(r, \nu^0_{(f,H_j)}\big) - \frac{N+1}{n+1}\,N\big(r, \nu^0_{W(f)}\big) + o\big(T_f(r)\big)
\]
for all r > 0 outside of a possible exceptional set E ⊂ [1, +∞) of finite Lebesgue measure, where W (f ) is the Wronskian determinant of f.
Let $c \in \mathbb{C}^m$. Throughout this paper, we denote by $\mathcal{M}_m$ the set of all meromorphic functions on $\mathbb{C}^m$, by $\mathcal{P}_c$ the set of all meromorphic functions of $\mathcal{M}_m$ periodic with period $c$, and by $\mathcal{P}^\lambda_c$ the set of all meromorphic functions of $\mathcal{M}_m$ periodic with period $c$ and having their hyperorders strictly less than $\lambda$. Obviously, $\mathcal{M}_m \supset \mathcal{P}_c \supset \mathcal{P}^\lambda_c$. In 2006, R. G. Halburd and R. J. Korhonen [4] considered a second main theorem for the complex difference operator for finite-order meromorphic functions in the complex plane. Later, in [5] and [6, Theorem 2.1] difference analogues of the second main theorem for holomorphic curves in $\mathbb{P}^n(\mathbb{C})$ were obtained independently, and in [7, Theorem 3.3] and [8, Theorems 1.6, 1.7] difference analogues of the second main theorem for meromorphic functions on $\mathbb{C}^m$ were obtained. In this paper, we will obtain a new natural difference analogue of Theorem 1.1, in which the counting function $N(r, \nu^0_{W(f)})$ of the Wronskian determinant of $f$ is replaced by the counting function $N(r, \nu^0_{C(f)})$ of the Casorati determinant of $f$ (it was called the finite difference Wronskian determinant in [5]), and in which the hyperorder $\zeta_2(f)$ of the meromorphic mapping $f : \mathbb{C}^m \to \mathbb{P}^n(\mathbb{C})$ is only assumed to be strictly less than one. Theorem 1.2. Let $c \in \mathbb{C}^m$, let $f : \mathbb{C}^m \to \mathbb{P}^n(\mathbb{C})$ be a linearly nondegenerate meromorphic mapping over $\mathcal{P}_c$ with hyperorder $\zeta = \zeta_2(f) < 1$, and let $H_j$ $(1 \le j \le q)$ be $q\,(> 2N-n+1)$ hyperplanes in $N$-subgeneral position in $\mathbb{P}^n(\mathbb{C})$. Then we have
\[
(q - 2N + n - 1)\,T_f(r) \le \sum_{j=1}^{q} N\big(r, \nu^0_{(f,H_j)}\big) - \frac{N}{n}\,N\big(r, \nu^0_{C(f)}\big) + o\Big(\frac{T_f(r)}{r^{1-\zeta-\varepsilon}}\Big)
\]
for all $r > 0$ outside of a possible exceptional set $E \subset [1, +\infty)$ of finite logarithmic measure, where $C(f)$ is the Casorati determinant of $f$.
The remainder of this paper is organized in the following way. In Section 2, some notations and basic results of Nevanlinna theory are introduced briefly. In Section 3, we adopt the Cartan-Nochka's method [9] and use the Casorati determinant to prove Theorem 1.2, from which a defect relation is obtained in Section 4. In Section 5, we show a uniqueness theorem for meromorphic mappings intersecting hyperplanes in N-subgeneral position with counting multiplicities, which can be seen as a Picard-type theorem, and will be proved as a special case from a difference analogue of generalized Picard theorem [10,11] in Section 6.
Preliminaries
2.1. Set $\|z\| = (|z_1|^2 + \cdots + |z_m|^2)^{1/2}$ for $z = (z_1, \cdots, z_m) \in \mathbb{C}^m$; for $r > 0$, define
\[
B_m(r) := \{z \in \mathbb{C}^m : \|z\| \le r\}, \qquad S_m(r) := \{z \in \mathbb{C}^m : \|z\| = r\}.
\]
Let $d = \partial + \bar\partial$, $d^c = (4\pi\sqrt{-1})^{-1}(\partial - \bar\partial)$. Write
\[
\sigma_m(z) := (dd^c\|z\|^2)^{m-1}, \qquad \eta_m(z) := d^c\log\|z\|^2 \wedge (dd^c\|z\|^2)^{m-1}
\]
for $z \in \mathbb{C}^m \setminus \{0\}$.
For a divisor ν on C m we define the following counting functions of ν by
\[
n(t) = \begin{cases} \displaystyle\int_{|\nu|\cap B_m(t)} \nu(z)\,\sigma_m(z), & \text{if } m \ge 2;\\[2mm] \displaystyle\sum_{|z|\le t}\nu(z), & \text{if } m = 1, \end{cases}
\qquad\text{and}\qquad
N(r, \nu) = \int_1^r \frac{n(t)}{t^{2m-1}}\,dt \quad (1 < r < \infty).
\]
Let ϕ( ≡ 0) be an entire holomorphic function on C m . For a ∈ C m , we write ϕ(z) = ∞ i=0 P i (z − a), where the term P i is a homogeneous polynomial of degree i. We denote the zero-multiplicity of ϕ at a by ν ϕ (a) = min {i : P i ≡ 0}. Thus we can define a divisor ν ϕ such that ν ϕ (z) equals the zero multiplicity of ϕ at z in the sense of [12, Definition 2.1] whenever z is a regular point of an analytic set |ν ϕ | := {z ∈ C m : ν ϕ (z) = 0}.
Letting $h$ be a nonzero meromorphic function on $\mathbb{C}^m$ with $h = h_0/h_1$ on $\mathbb{C}^m$ and $\dim(h_0^{-1}(0) \cap h_1^{-1}(0)) \le m-2$, we define $\nu^0_h := \nu_{h_0}$, $\nu^\infty_h := \nu_{h_1}$.
For a meromorphic function h on C m , we have the Jensen's theorem:
\[
N(r, \nu^0_h) - N(r, \nu^\infty_h) = \int_{S_m(r)} \log|h|\,\eta_m(z) - \int_{S_m(1)} \log|h|\,\eta_m(z).
\]
2.2.
A meromorphic mapping f : C m → P n (C) is a holomorphic mapping from U into P n (C), where U can be chosen so that V ≡ C m \ U is an analytic subvariety of C m of codimension at least 2. Furthermore f can be represented by a holomorphic mapping of C m to C n+1 such that
V = I(f ) = {z ∈ C m : f 0 (z) = · · · = f n (z) = 0},
where $f_0, \ldots, f_n$ are holomorphic functions on $\mathbb{C}^m$. We say that $f = [f_0, \ldots, f_n]$ is a reduced representation of $f$ (the only factors common to $f_0, \ldots, f_n$ are units). If $g = hf$ for $h$ any quotient of holomorphic functions on $\mathbb{C}^m$, then $g$ will be called a representation of $f$ (e.g. reduced iff $h$ is holomorphic and a unit). Set $\|f\| = (\sum_{j=0}^n |f_j|^2)^{1/2}$. The growth of the meromorphic mapping $f$ is measured by its characteristic function
\[
T_f(r) = \int_{r_0}^r \frac{dt}{t^{2m-1}} \int_{B_m(t)} dd^c\log\|f\|^2 \wedge \sigma_m(z)
= \int_{S_m(r)} \log\|f\|\,\eta_m(z) - \int_{S_m(1)} \log\|f\|\,\eta_m(z)
= \int_{S_m(r)} \log\max\{|f_0|, \ldots, |f_n|\}\,\eta_m(z) + O(1) \quad (r > r_0 > 1).
\]
Note that $T_f(r)$ is independent of the choice of the reduced representation of $f$. The order and hyper-order $\zeta_2(f)$ of $f$ are respectively defined by
\[
\limsup_{r\to\infty}\frac{\log^+ T_f(r)}{\log r} \quad\text{and}\quad \zeta_2(f) := \limsup_{r\to\infty}\frac{\log^+\log^+ T_f(r)}{\log r},
\]
where $\log^+ x := \max\{\log x, 0\}$ for any $x > 0$. We say that a meromorphic mapping $f$ from $\mathbb{C}^m$ into $\mathbb{P}^n(\mathbb{C})$ with a reduced representation $[f_0, \ldots, f_n]$ is linearly nondegenerate over $\mathcal{P}^\lambda_c$ if the entire functions $f_0, \ldots, f_n$ are linearly independent over $\mathcal{P}^\lambda_c$, and say that $f$ is linearly nondegenerate over $\mathbb{C}^1$ if the entire functions $f_0, \ldots, f_n$ are linearly independent over $\mathbb{C}^1$.
2.3.
Let hyperplanes H j of P n (C) be defined by
H j : h j0 w 0 + . . . + h jn w n = 0 (1 ≤ j ≤ q),
where [w 0 , . . . , w n ] is a homogeneous coordinate system of P n (C). Suppose that [f 0 , . . . , f n ] is a reduced representation of a meromorphic mapping f : C m → P n (C), then we denote
(f, H j ) = h j0 f 0 + . . . + h jn f n which are entire functions on C m for all j ∈ {1, . . . , q}.
We say that q hyperplanes H j (1 ≤ j ≤ q) are in N-subgeneral position of P n (C) if j∈R H j = ∅ for any subset R ⊂ Q = {1, 2, . . . , q} with its cardinality |R| = N +1 ≥ n+1. This is equivalent to that for an arbitrary (N +1, n+1)-matrix (h jk ) j∈R,0≤k≤n , rank(h jk ) j∈R,0≤k≤n = n + 1.
If H j (1 ≤ j ≤ q) are in n-subgeneral position, we simply say that they are in general position.
We denote by V (R) the vector subspace spanned by (h jk w k ) 0≤k≤n , j ∈ R ⊂ Q in C n+1 , and rk(R) := dim V (R), rk(∅) = 0.
2.4.
Let a meromorphic mapping f = [f 0 , . . . , f n ] from C m into P n (C) and a hyperplane H of P n (C) satisfy (f, H) ≡ 0. The closeness of the image of a meromorphic mapping f to intersecting H is measured by the proximity function
\[
m_{f,H}(r) = \int_{S_m(r)} \log^+\frac{\|f\|\cdot\|H\|}{|(f,H)|}\,\eta_m(z) - \int_{S_m(1)} \log^+\frac{\|f\|\cdot\|H\|}{|(f,H)|}\,\eta_m(z).
\]
We have the first main theorem of Nevanlinna theory
T f (r) = N(r, ν 0 (f,H) ) + m f,H (r) + O(1) (r > 1).
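As a sanity check of this identity (purely illustrative, not in the paper), the following sketch evaluates T_f(r), N(r, ν⁰_{(f,H)}) and m_{f,H}(r) numerically for m = 1, the curve f = [1, e^z] in P¹(C), and the hyperplane H : w₀ + w₁ = 0, for which (f, H) = 1 + e^z has zeros exactly at (2k+1)πi; the two sides agree up to a bounded term.

```python
# Numerical check of T_f(r) = N(r) + m(r) + O(1) for f = [1, e^z] and H: w0 + w1 = 0.
import numpy as np

def circle_avg(fun, r, num=4000):
    th = np.linspace(0.0, 2 * np.pi, num, endpoint=False)
    return np.mean(fun(r * np.exp(1j * th)))

norm_f = lambda z: np.sqrt(1.0 + np.abs(np.exp(z)) ** 2)          # ||f|| for [1, e^z]
T = lambda r: circle_avg(lambda z: np.log(norm_f(z)), r) - circle_avg(lambda z: np.log(norm_f(z)), 1.0)

def m_prox(r):  # proximity function, with ||H|| = sqrt(2)
    g = lambda z: np.maximum(np.log(np.sqrt(2) * norm_f(z) / np.abs(1 + np.exp(z))), 0.0)
    return circle_avg(g, r) - circle_avg(g, 1.0)

def N_count(r, grid=4000):
    # zeros of 1 + e^z lie at (2k+1)*pi*i, so n(t) = 2*floor((t/pi + 1)/2);
    # N(r) = int_1^r n(t)/t dt, evaluated by a simple Riemann sum
    ts = np.linspace(1.0, r, grid)
    n = 2 * np.floor((ts / np.pi + 1.0) / 2.0)
    return np.sum(n / ts) * (r - 1.0) / grid

for r in (5.0, 10.0, 20.0):
    print(f"r = {r:5.1f}:  T_f(r) = {T(r):7.3f}   N(r) + m(r) = {N_count(r) + m_prox(r):7.3f}")
```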
2.5.
Let f be a meromorphic mapping from C m into P n (C). For c = (c 1 , . . . , c m ) and z = (z 1 , . . . , z m ), we write c + z = (c 1 + z 1 , . . . , c m + z m ), cz = (c 1 z 1 , . . . , c m z m ). Denote the c-difference operator by
∆ c f (z) := f (c + z) − f (z).
We use the short notations
\[
f(z) \equiv f := f^{[0]},\quad f(z+c) \equiv \overline{f} := f^{[1]},\quad f(z+2c) \equiv \overline{\overline{f}} \equiv f^{[2]},\ \ldots,\ f(z+kc) \equiv f^{[k]}.
\]
Assume that f has a reduced representation [f 0 , . . . , f n ]. Let
\[
D^{(j)} = \Big(\frac{\partial}{\partial z_1}\Big)^{\alpha_1(j)}\cdots\Big(\frac{\partial}{\partial z_m}\Big)^{\alpha_m(j)}
\]
be a partial differentiation operator of order at most $j = \sum_{k=1}^m \alpha_k(j)$. Similarly as the Wronskian determinant
\[
W(f) = W(f_0, \ldots, f_n) =
\begin{vmatrix}
f_0 & f_1 & \cdots & f_n \\
D^{(1)}f_0 & D^{(1)}f_1 & \cdots & D^{(1)}f_n \\
\vdots & \vdots & \ddots & \vdots \\
D^{(n)}f_0 & D^{(n)}f_1 & \cdots & D^{(n)}f_n
\end{vmatrix},
\]
the Casorati determinant is defined by
\[
C(f) = C(f_0, \ldots, f_n) =
\begin{vmatrix}
f_0 & f_1 & \cdots & f_n \\
\overline{f_0} & \overline{f_1} & \cdots & \overline{f_n} \\
\vdots & \vdots & \ddots & \vdots \\
f_0^{[n]} & f_1^{[n]} & \cdots & f_n^{[n]}
\end{vmatrix}.
\]
For a subset R ⊂ Q = {1, . . . , q} such that |R| = n + 1, we denote by
C(((f, H j ), j ∈ R))
the Casorati determinant of (f, H j ), j ∈ R with increasing order of indices.
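The following symbolic sketch (an illustration with m = 1, n = 2 and c = 1, not part of the paper) computes this determinant with sympy and checks the criterion, made precise in Lemma 3.3 below, that C(f₀, ..., fₙ) vanishes identically exactly when the functions are linearly dependent over the field of c-periodic functions; here the 1-periodic coefficient e^{2πiz} is used.

```python
# Casorati determinant in one variable: row k holds the functions shifted by k*c.
import sympy as sp

z = sp.symbols('z')
c = 1  # shift

def casorati(funcs, z, c):
    n = len(funcs)
    rows = [[f.subs(z, z + k * c) for f in funcs] for k in range(n)]
    return sp.Matrix(rows).det()

# linearly independent over P_c: the Casorati determinant is not identically zero
f_indep = [sp.Integer(1), sp.exp(z), sp.exp(2 * z)]
print(sp.simplify(sp.expand(casorati(f_indep, z, c))) == 0)        # False

# dependent over P_c: f_2 = exp(2*pi*I*z) * f_0 + f_1 with a 1-periodic coefficient
f0, f1 = sp.exp(z), sp.exp(2 * z)
f2 = sp.exp(2 * sp.pi * sp.I * z) * f0 + f1
print(sp.simplify(sp.expand(casorati([f0, f1, f2], z, c))) == 0)   # True
```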
Proof of Theorem 1.2
We recall two lemmas due to Nochka (see [2,13,14,9]) as follows.

Lemma 3.1 ([2,13,14,9]). Let $H_j$, $j \in Q = \{1, 2, \ldots, q\}$, be hyperplanes of $\mathbb{P}^n(\mathbb{C})$ in $N$-subgeneral position, and assume that $q > 2N - n + 1$. Then there are positive rational constants $\omega(j)$, $j \in Q$, satisfying the following:
(i) $0 < \omega(j) \le 1$ for all $j \in Q$.
(ii) Setting $\tilde\omega = \max_{j\in Q}\omega(j)$, one gets $\sum_{j=1}^{q}\omega(j) = \tilde\omega(q - 2N + n - 1) + n + 1$.
(iii) $\frac{n+1}{2N-n+1} \le \tilde\omega \le \frac{n}{N}$.
(iv) For $R \subset Q$ with $0 < |R| \le N+1$, $\sum_{j\in R}\omega(j) \le \mathrm{rk}(R)$.

The above $\omega(j)$ and $\tilde\omega$ are called the Nochka weights and the Nochka constant, respectively.

Lemma 3.2 ([2,13,14,9]). Let $H_j$, $j \in Q = \{1, 2, \ldots, q\}$, be hyperplanes of $\mathbb{P}^n(\mathbb{C})$ in $N$-subgeneral position, and assume that $q > 2N - n + 1$. Let $\{\omega(j)\}$ be their Nochka weights.
Let E j ≥ 1, j ∈ Q be arbitrarily given numbers. Then for every subset
$R \subset Q$ with $0 < |R| \le N+1$, there are distinct indices $j_1, \ldots, j_{\mathrm{rk}(R)} \in Q$ such that $\mathrm{rk}(\{j_l\}_{l=1}^{\mathrm{rk}(R)}) = \mathrm{rk}(R)$ and
\[
\prod_{j\in R} E_j^{\omega(j)} \le \prod_{l=1}^{\mathrm{rk}(R)} E_{j_l}.
\]
It is known that holomorphic functions g 0 , . . . , g n on C m are linearly dependent over C m if and only if their Wronskian determinant W (g 0 , . . . , g n ) vanishes identically [15,Prop. 4.5]. It was mentioned in [5, Remark 2.6] without proof that holomorphic functions g 0 , . . . , g n on C are linearly dependent over P c if and only if their Casorati determinant C(g 0 , . . . , g n ) vanishes identically. The proof of this fact can be seen in the proof of [6, Lemma 3.2] which, in fact, is a more accurate result because it takes into account the growth order of functions. Here we introduce extensions of these results for the case of several complex variables.
Lemma 3.3. (i) Let c ∈ C m . A meromorphic mapping f : C m → P n (C) with a reduced representation [f 0 , . . . , f n ] satisfies C(f 0 , . . . , f n ) ≡ 0 if and only if f is linearly nondegenerate over the field P c . (ii) Let c ∈ C m . If a meromorphic mapping f : C m → P n (C) with a re- duced representation [f 0 , . . . , f n ] satisfies ζ 2 (f ) < λ < +∞, then C(f 0 , . . . , f n ) ≡ 0 if and only if f is linearly nondegenerate over the field P λ c (⊂ P c ).
Proof. By the definition of the characteristic function of f and using similar discussion as in [16,Page 47], it is not difficult to get that for any meromorphic function g on C m and c ∈ C m
T g(z+c) (r) = O T g(z) (r + ||c||) .
Then considering the above fact and making use of almost the same discussion as in [6, Lemma 3.2], one can complete the proof of (ii). To prove (i) it is just not necessary to consider the growth of f in the proof of (ii). We omit the details.
Lemma 3.4. Let q > 2N − n + 1, Q = {1, . . . , q}. Suppose that f : C m → P n (C)
is a linearly nondegenerate meromorphic mapping over P c , and H j (j ∈ Q) are hyperplanes of P n (C) in N-subgeneral position. Let ω(j),ω be the Nochka weights and Nochka constant of {H j } j∈Q respectively. Then we get that
f ω(q−2N +n−1) ≤ K · t j ∈R |(f [j] , H t j )| ω(t j ) · j∈S |(f, H j )| ω(j) |C(f 0 , . . . , f n )| · |C(((f, H j ), j ∈ R o ))| |(f, H t 0 )(f , H t 1 ) · · · (f [n] , H tn )|
for an arbitrary
z ∈ C m \ z ∈ C m : t j ∈R |g [j] t j | ω(t j ) · j∈S |(f, H j )| ω(j) = 0 ∪ I(f ) , where K depends on {H j } j∈Q , and R o , R, S are some subsets of Q such that R o = {t 0 , t 1 , . . . , t n } ⊂ R = {t 0 , t 1 , . . . , t n , t n+1 , . . . , t N } ⊂ Q \ S. Proof. Since the hyperplanes {H j } q j=1 are in N-subgeneral position of P n (C), we have j∈R H j = ∅ for any R ⊂ Q with |R| = N + 1. This implies that there exists a subset S ⊂ Q with |S| = q − N − 1 such that j∈S H j (w) = 0. Let I(f ) = {z : f 0 (z) = f 2 (z) = · · · = f n (z) = 0} with its codimension ≥ 2. For arbitrary fixed point z ∈ C m \ ∪ j∈Q {z ∈ C m : (f [k j ] , H j ) = 0} ∪ I(f ) (so, f (z) ∈ P n (C) and (f [k j ] , H j ) ∈ C 1 ), there is a positive constant K jk which depends on H j and k j ∈ N ∪ {0} such that 1 |K jk | ≤ |(f [k j ] , H j )| f (z) ≤ |K jk | (3.1) for j ∈ S.
Below we set R = Q \ S. Then we have |R| = N + 1 and rk(R) = n + 1. Then
j∈S (f [k j ] , H j ) f (z) |K jk | ω(j) = j∈R f (z) |K jk | (f [k j ] , H j ) ω(j) · j∈Q |(f [k j ] , H j )| ω(j) ( f (z) |K jk |) j∈Q ω(j) . (3.2) By Lemma 3.1 (ii), for R = Q \ S we have q j=1 ω(j) =ω(q − 2N + n − 1) + n + 1. (3.3) Replacing E j by f (z) |K jk | |(f [k j ] ,H j )| and making use of Lemma 3.2, for R = Q \ S there is a subset R o = {j 1 , . . . , j rk(R) } ⊂ R such that |R o | = rk({j l } rk(R) l=1 ) = rk(R) = n + 1 and j∈R f (z) |K jk | |(f [k j ] , H j )| ω(j) ≤ j∈R o f (z) |K jk | |(f [k j ] , H j )| . (3.4) Since f is linearly non-degenerate over P c , by Lemma 3.3 we get C(f 0 , . . . , f n ) ≡ 0. Since {H j } q j=1 are in N-subgeneral position, there exists a non-singular matrix B depending on {H j } j∈R o such that C(((f, H j ), j ∈ R o )) = C ((f 0 , f 1 , . . . , f n )B) = C(f 0 , f 1 , . . . , f n ) × det B. So, C(((f, H j ), j ∈ R o )) ≡ 0. Hence, there is a positive constant K R o de- pending on H j such that |K R o | |C(((f, H j ), j ∈ R o ))| |C(f 0 , . . . , f n )| = 1. (3.5)
For the above R o , R, S, Q, we may rewrite their elements as follows:
Q = {1, 2, . . . , q} := {t 0 , t 1 . . . , t q }, R o = {t 0 , t 1 , . . . , t n }, R = {t 0 , t 1 , . . . , t n , t n+1 , . . . , t N }. Denote g [k] j := (f [k] , H j ), j ∈ Q.
Then it follows from (3.1) and (3.2) that for
any z ∈ G := C m \ {z ∈ C m : t j ∈R |g [j] t j | ω(t j ) · j∈S |(f, H j )| ω(j) = 0} ∪ I(f ) , j∈S 1 |K j | 2 ω(j) ≤ j∈S (f, H j )(z) f (z) |K j | ω(j) ≤ t j ∈R f (z) |K t j | |g [j] t j | ω(t j ) · t j ∈R |g [j] t j | ω(t j ) · j∈S |(f, H j )| ω(j) ( f (z) | min{K 1 , . . . , K q }|) j∈Q ω(j) .
Then together with (3.4), the above inequality becomes
j∈S 1 |K j | 2 ω(j) ≤ t j ∈R o f (z) |K t j | |g [j] = t j ∈R o |K t j | |g t 0 g t 1 · · · g [n] tn | · t j ∈R |g [j] t j | ω(t j ) · j∈S |(f, H j )| ω(j) ( f (z) | min{K 1 , . . . , K q }|)ω (q−2N +n−1) .
By (3.5), the last line in the above inequalities is equal to
|K R o | t j ∈R o |K t j | | min{K 1 , . . . , K q }|ω (q−2N +n−1) · 1 f (z) ω(q−2N +n−1) · t j ∈R |g [j] t j | ω(t j ) · j∈S |(f, H j )| ω(j) |C(f 0 , . . . , f n )| · |C(((f, H j ), j ∈ R o ))| |g t 0 g t 1 · · · g [n]
tn | .
So, we get from the above discussion that for any z ∈ G,
f (z) ω(q−2N +n−1) ≤ |K R o | t j ∈R o |K t j | j∈S |K j | 2ω(j) | min{K 1 , . . . , K q }|ω (q−2N +n−1) · t j ∈R |g [j] t j | ω(t j ) · j∈S |(f, H j )| ω(j) |C(f 0 , . . . , f n )| · |C(((f, H j ), j ∈ R o ))| |g t 0 g t 1 · · · g [n]
tn | Therefore, the inequality in the assertion of this lemma is obtained immediately by setting
K = |K R o | t j ∈R o |K t j | j∈S |K j | 2ω(j) | min{K 1 , . . . , K q }|ω (q−2N +n−1)
which is a positive constant depending on {H j } j∈Q .
The following result is a difference analogue of the lemma on the logarithmic derivative in several complex variables. It generalizes the one dimensional results [ ]. In [7] a difference analogue of the lemma on the logarithmic derivatives was obtained for meromorphic functions in several variables of hyperorder strictly less than 2/3. The following lemma extends this result to the case hyperorder < 1.

Lemma 3.5. Let $f$ be a nonconstant meromorphic function on $\mathbb{C}^m$ such that $f(0) \ne 0, \infty$, and let $\varepsilon > 0$. If $\zeta_2(f) := \zeta < 1$, then
\[
\int_{S_m(r)} \log^+\Big|\frac{f(z+c)}{f(z)}\Big|\,\eta_m(z) = o\Big(\frac{T_f(r)}{r^{1-\zeta-\varepsilon}}\Big)
\]
for all $r > 0$ outside of a possible exceptional set $E \subset [1, \infty)$ of finite logarithmic measure $\int_E \frac{dt}{t} < \infty$.

Proof. Recall that the proximity function of a meromorphic function $\varphi$ on $\mathbb{C}^m$ is defined ([18, Definition 5.5]) by $m(r, \varphi) := \int_{S_m(r)} \log^+|\varphi|\,\eta_m(z)$. Let $E_1$ be the set of all points $\xi \in S_m(1)$ such that $\{z = u\xi : |u| < +\infty\} \subset I(f)$, which is of measure zero in $S_m(1)$. Now let the constant $c := \tilde c\,\xi$, where $\tilde c \in \mathbb{C}^1 \setminus \{0\}$. For any $\xi \in S_m(1) \setminus E_1$, considering the meromorphic function $f_\xi(u) := f(\xi u)$ of $\mathbb{C}^1$, we get from (3.7) that
\[
m\Big(r, \frac{f(z+c)}{f(z)}\Big) = \int_{S_m(r)} \log^+\Big|\frac{f(z+c)}{f(z)}\Big|\,\eta_m(z)
= \int_{S_m(1)} \Big[\frac{1}{2\pi}\int_0^{2\pi} \log^+\Big|\frac{f_\xi(re^{i\theta}+\tilde c)}{f_\xi(re^{i\theta})}\Big|\,d\theta\Big]\,\eta_m(z),
\]
where we denote z = uξ for any ξ ∈ S m (1). By [6, Lemma 8.2], we get that for all r > 0, δ ∈ (0, 1) and α > 1,
m r, f ξ (re iθ +c) f ξ (re iθ ) = 1 2π 2π 0 log + f ξ (re iθ +c) f ξ (re iθ ) dθ ≤ K(α, δ,c) r δ T f ξ (α(r + |c|)) + log + 1 |f ξ (0)| ,
where K(α, δ,c) = 4|c| δ (4α+αδ+δ) δ(1−δ)(α−1) , r = |u| = uξ = z . Therefore, together with (3.6), it follows from the two inequalities above that
m r, f (z + c) f (z) = Sm(r) log + f (z + c) f (z) η m (z) ≤ Sm(1) K(α, δ,c) r δ T f ξ (α(r + |c|)) + log + 1 |f ξ (0)| η m (z) = K(α, δ,c) r δ Sm(1) T f ξ (α(r + |c|))η m (z) + O(1), namely, m r, f (z + c) f (z) ≤ K(α, δ,c) r δ T f (α(r + |c|)) + O(1). (3.8)
The following part of the proof is dealt with similarly as in [6, Theorem 5.1]. Choose p(r) := r, h(x) := (log x) 1+ ε 3 and α := 1 + p(r + | c|) (r + | c|)h(T f (r + | c|)) , and thus
ρ = α(r + | c|) = r + | c| + r + | c| (log T f (r + | c|)) 1+ ε 3 .
By [19, Lemma 4] we have
T f (ρ) = T f s + p(s) h(T f (s)) ≤ KT f (s) (s = r + | c|) (3.9)
for all s outside of a set E satisfying
E∩[s 0 ,R] ds p(s) ≤ 1 log K T f (R) e dx xh(x) + O(1) < ∞
where R < +∞ and K is a positive real constant. Since ς = ς 2 (f ) < 1, by [6,Lemma 8.3] we have
T f (r + | c|) = T f (r) + o T f (r) r 1−ς−ε (3.10)
for all r > 0 outside of a possible exceptional set F ⊂ [1, ∞) of finite logarithmic measure F dt t < ∞. We can choose suitable δ ∈ (0, 1) such that
T f (s) r δ = o T f (r + | c|) r 1−ς−ε for all r ∈ F ∪ E.
Hence it follows from (3.8), (3.9) and (3.10) that
m r, f (z + c) f (z) = o T f (r) r 1−ς−ε for all r > 0 outside of a possible exceptional set, still say E ⊂ [1, ∞), of finite logarithmic measure E dt t < ∞. Since f (z) f (z + c) = f [(z + c) − c] f (z + c) , f [k] f (z) = f [k] f [k−1] · f [k−1] f [k−2] · · · f f (z) (k ∈ N),
it follows immediately from Lemma 3.5 that
Sm(r) log + f (z) f (z + c) η m (z) = o T f (r) r 1−ζ−ε , Sm(r) log + f [k] f (z) η m (z) = o T f (r) r 1−ζ−ω(q − 2N + n − 1) log f ≤ t j ∈R ω(t j ) log |(f [j] , H t j )| + j∈S ω(j) log |(f, H j )| − log |C(f 0 , f 1 , . . . , f n )| + log C(((f, H j ), j ∈ R o )) |(f, H t 0 )(f , H t 1 ) · · · (f [n] , H tn )| + O(1) for some subsets R o , R, S of Q such that R o = {t 0 , t 1 , . . . , t n } ⊂ R = {t 0 , t 1 , . . . , t n , t n+1 , . . . , t N } ⊂ Q \ S.
Integrating both sides of this inequality, we havẽ
ω(q − 2N + n − 1) Sm(r) log f σ m (z) ≤ t j ∈R ω(t j ) Sm(r) log |(f [j] , H t j )|η m (z) + j∈S ω(j) Sm(r) log |(f, H j )|η m (z) − Sm(r) log |C(f 0 , f 1 , . . . , f n )|η m (z) + Sm(r) log + C(((f, H j ), j ∈ R o )) |(f, H t 0 )(f , H t 1 ) · · · (f [n] , H tn )| η m (z) + O(1).
By the definition of the characteristic function of f and together with the Jensen's theorem,
ω(q − 2N + n − 1)T f (r) ≤ t j ∈R ω(t j )N(r, ν 0 (f [j] ,Ht j ) ) + j∈S ω(j)N(r, ν 0 (f,H j ) ) − N(r, ν 0 C(f 0 ,f 1 ,...,fn) ) + Sm(r) log + C(((f, H j ), j ∈ R o )) |(f, H t 0 )(f , H t 1 ) · · · (f [n] , H tn )| η m (z) + O(1) ≤ t j ∈R ω(t j )N(r + j|c|, ν 0 (f,Ht j ) ) + j∈S ω(j)N(r, ν 0 (f,H j ) ) − N(r, ν 0 C(f 0 ,f 1 ,...,fn) ) + Sm(r) log + C(((f, H j ), j ∈ R o )) |(f, H t 0 )(f , H t 1 ) · · · (f [n] , H tn )| η m (z) + O(1).
By the Jensen's theorem and the definition of characteristic function, we have N(r, ν 0 (f,Ht j ) ) =
Sm(r) log |(f, H t j )|η m (z) + O(1) ≤ Sm(r) log f η m (z) + O(1) = T f (r) + O(1).
Thus the hyperorder of N(r, ν 0 (f,H j ) ) satisfies
λ t j := lim sup r→∞ log log N(r, ν 0 (f,Ht j ) ) log r ≤ ζ 2 (f ) := ζ < 1.
Then by [6,Lemma 8.3] we obtain
N(r + j|c|, ν 0 (f,Ht j ) ) ≤ N(r, ν 0 (f,Ht j ) ) + o N(r, ν 0 (f,Ht j ) ) r 1−λt j −ε ≤ N(r, ν 0 (f,H j ) ) + o T f (r) r 1−ζ−ε .
So, it follows that
ω(q − 2N + n − 1)T f (r) ≤ j∈R ω(j)N(r, ν 0 (f,H j ) ) + j∈S ω(j)N(r, ν 0 (f,H j ) ) − N(r, ν 0 C(f 0 ,f 1 ,...,fn) ) + Sm(r) log + C(((f, H j ), j ∈ R o )) |(f, H t 0 )(f , H t 1 ) · · · (f [n] , H tn )| η m (z) + o T f (r) r 1−ζ−ε ≤ j∈Q ω(j)N(r, ν 0 (f,H j ) ) − N(r, ν 0 C(f 0 ,f 1 ,...,fn) ) + Sm(r) log + C(((f, H j ), j ∈ R o )) |(f, H t 0 )(f , H t 1 ) · · · (f [n] , H tn )| η m (z) + o T f (r) r 1−ζ−ε . Denote g [j] t j := (f [j] , H t j ), t j ∈ R o . We have C(((f, H j ), j ∈ R o )) |(f, H t 0 )(f , H t 1 ) · · · (f [n] , H tn )| = g t 0 g t 1 · · · g tn g t 0 g t 1 · · · g tn . . . . . . . . . . . . g [n] t 0 g [n] t 1 · · · g [n] tn |g t 0 g t 1 · · · g [n] tn | = 1 gt 1 gt 0 · · · gt n gt 0 1 g t 1 g t 0 · · · g tn g t 0 . . . . . . . . . . . . 1 g [n] t 1 g [n] t 0 · · · g [n] tn g [n] t 0 | g t 1 g t 0 · · · g [n] tn g n t 0 | = 1 1 · · · 1 1 g t 1 g t 0 / gt 1 gt 0 · · · g tn g t 0 / gt n gt 0 . . . . . . . . . . . . 1 g [n] t 1 g [n] t 0 / gt 1 gt 0 · · · g [n] tn g [n] t 0 / gt n gt 0 g t 1 g t 0 / gt 1 gt 0 · · · g [n] tn g [n] t 0 / gt n gt 0 .
By the definition of the characteristic function, one can deduce (or by [20,21]), for i = j
T (f,H i ) (f,H j ) (r) ≤ T f (r) + O(1),
and thus ζ 2 ( (f,H i ) (f,H j ) ) ≤ ζ 2 (f ) := ζ < 1. Hence by Lemma 3.5 we have
Sm(r) log + C(((f, H j ), j ∈ R o )) |(f, H t 0 )(f , H t 1 ) · · · (f [n] , H tn )| η m (z) = n j=1 o T g t j g t 0 (r) r 1−ζ 2 ( g t j g t 0 )−ε = o T f (r) r 1−ζ−ε
for all r > 0 outside of a possible exceptional set E ⊂ [1, ∞) of finite logarithmic measure E dt t < ∞. Therefore, the above inequalities implies that
(q − 2N + n − 1)T f (r) ≤ j∈Q ω(j) ω N(r, ν 0 (f,H j ) ) − 1 ω N(r, ν 0 C(f 0 ,f 1 ,...,fn) ) + o T f (r) r 1−ζ−ε for all r > 0 outside of a possible exceptional set E ⊂ [1, ∞) of finite loga- rithmic measure E dt t < ∞. From Lemma 3.1,ω = max j∈Q {ω(j)} ≤ n N . Then it follows that (q − 2N + n − 1)T f (r) ≤ j∈Q N(r, ν 0 (f,H j ) ) − N n N(r, ν 0 C(f 0 ,f 1 ,...,fn) ) + o T f (r) r 1−ζ−ε .
Thus Theorem 1.2 is proved.
Defect relation
The defects δ(f, H) and δ W (f ) of a meromorphic mapping f : C m → P n (C) for a hyperplane H in P n (C) are defined by
\[
\delta(f, H) = 1 - \limsup_{r\to\infty} \frac{N(r, \nu^0_{(f,H)})}{T_f(r)},
\qquad
\delta_{W(f)} = \liminf_{r\to\infty} \frac{N(r, \nu^0_{W(f_0,\ldots,f_n)})}{T_f(r)}.
\]
From Chen's version of the second main theorem (Theorem 1.1), there exists a defect relation such that
\[
\frac{N+1}{n+1}\,\delta_{W(f)} + \sum_{j=1}^{q}\delta(f, H_j) \le 2N - n + 1
\]
for $q$ hyperplanes $\{H_j\}_{j=1}^q$. Similarly, the difference defect $\delta_{C(f)}$ of a meromorphic mapping $f : \mathbb{C}^m \to \mathbb{P}^n(\mathbb{C})$ with reduced representation $[f_0, \ldots, f_n]$ is defined by
\[
\delta_{C(f)} = \liminf_{r\to\infty} \frac{N(r, \nu^0_{C(f_0,\ldots,f_n)})}{T_f(r)}.
\]
Hence, by Theorem 1.2 we obtain a defect relation as follows, which is an extension of [5,Corollary 3.4].
Uniqueness of meromorphic mappings
The uniqueness problem for meromorphic mappings under some conditions on the inverse images of divisors was first investigated by R. Nevanlinna. He [22] proved that if two nonconstant meromorphic functions f and g on the complex plane C 1 have the same inverse images ignoring multiplicities for five distinct values in P 1 (C), then f ≡ g. In 1975, H. Fujimoto [23] generalized Nevanlinna's five-value theorem to the case of higher dimension by showing that if two linearly nondegenerate meromorphic mappings f, g : C m → P n (C) have the same inverse images counted with multiplicities for q ≥ 3n + 2 hyperplanes in general position in P n (C), then f ≡ g. For basic results in the uniqueness theory of meromorphic functions and mappings, we refer to two books [24,25].
By considering the uniqueness problem for f (z) and f (z + c) intersecting hyperplanes in N-subgeneral position, we obtain the following uniqueness theorem. We say that the pre-image of (f, H) for a meromorphic mapping f :
C m → P n (C) intersecting a hyperplane H of P n (C) is forward invariant with respect to the translation τ = z + c if τ ((f, a) −1 ) ⊂ (f, a) −1 where τ ((f, a) −1 )
and (f, a) −1 are considered to be multi-sets in which each point is repeated according to its multiplicity. By this definition the (empty and thus forward invariant) pre-images of the usual Picard exceptional values become special cases of forward invariant pre-images. Then Theorem 5.1 is an extension of the Picard's theorem under the growth condition "hyperorder < 1". Actually, Theorem 5.1 is proved from a generalized Picard-type theorem which will be shown in the next section.
Difference analogue of a generalized Picard-type theorem
Fujimoto [10] and Green [11] gave a natural generalization of the Picard's theorem by showing that if f : C → P n (C) omits n+p hyperplanes in general position where p ∈ {1, . . . , n + 1}, then the image of f is contained in a linear subspace of dimension at most [ n p ]. Recently, Halburd, Korhonen and Tohge [6] proposed a difference analogue of the general Picard-type theorem for homomorphic curves with hyperorder strictly less than one.
(z) = z + c. If f i f j ∈ P λ c for all i, j ∈ {0, .
. . , n} such that i = j, then f is linearly nondegenerate over P λ c . Proof. Assume that the conclusion is not true, that is there exist A 0 , . . . , A n ∈ P λ c such that A 0 f 0 + · · · + A n−1 f n−1 = A n f n and such that not all A j are identically zero. Without loss of generality we may assume that none of A j are identically zero. Since all zeros of f 0 , . . . , f n are forward invariant with respect to the translation τ (z) = z + c and since A 0 , . . . , A n ∈ P λ c , we can choose a meromorphic function F on C m such that F A 0 f 0 , . . . , F A n f n are holomorphic functions on C m without common zeros and such that the preimages of all zeros of F A 0 f 0 , . . . , F A n f n are forward invariant with respect to the translation τ (z) = z + c. Then we have lim sup
r→∞ log + log + (N(r, ν 0 F ) + N(r, ν ∞ F )) log r < λ ≤ 1 (6.1)
and F A 0 f 0 , . . . , F A n−1 f n−1 cannot have any common zeros. Denote g j := F A j f j for 0 ≤ j ≤ n. Then T G (r) is well defined for G = [g 0 , . . . , g n−1 ] which is a holomorphic mapping from C m into P n−1 (C). Then by the definition of characteristic function and the Jensen's theorem we have
T G(r) = Sm(r) log G η m (z) + O(1) ≤ Sm(r) log |F |η m (z) + Sm(r) log( n−1 j=0 |f j | 2 ) 1 2 η m (z) + n−1 j=0 Sm(r) log + |A j |η m (z) + O(1) ≤ N(r, ν 0 F ) − N(r, ν ∞ F ) + T f (r) + n−1 j=0 T A j (r)
which together with (6.1) imply that the hyperorder satisfies ζ 2 (G) < λ ≤ 1. Assume that the meromorphic mapping G : C m → P n−1 (C) is linearly nondegenerate over P λ c (⊂ P c ). Then by Lemma 3.3, it follows that C(g 0 , . . . , g n−1 ) ≡ 0. Define the following hyperplanes
\[
H_0 : w_0 = 0,\quad H_1 : w_1 = 0,\quad \ldots,\quad H_{n-1} : w_{n-1} = 0,\quad H_n : w_0 + w_1 + \cdots + w_{n-1} = 0,
\]
where [w 0 , . . . , w n−1 ] is a homogeneous coordinate system of P n−1 (C). So, (G, H j ) = g j for 0 ≤ j ≤ n − 1 and (G, H n ) = g 0 + . . . + g n−1 = F A n f n = g n . Obviously, the q = n + 1 hyperplanes H 0 , . . . , H n are in (n − 1)-subgeneral position of P n−1 (C). Hence by Theorem 1.2 we have
T G (r) = ((n + 1) − 2(n − 1) + (n − 1) − 1) T G (r) ≤ n j=0 N(r, ν 0 g j ) − N(r, ν 0 C(g 0 ,...,g n−1 ) ) + o(T G (r))
for all r outside of a possible exceptional set of finite logarithmic measure. Then using the same discussion as in the proof of [6, Lemma 3.3] we have n j=1 N(r, ν 0 g j ) ≤ N(r, ν 0 C(g 0 ,...,g n−1 ) ).
Hence, it follows T G (r) = o(T G (r)) which is an contradiction. Therefore, the meromorphic mapping G : C m → P n−1 (C) is linearly degenerate over P λ c , and thus there exist B 0 , . . . , B n−1 ∈ P λ c such that B 0 f 0 + · · · + B n−2 f n−2 = B n−1 f n−1 and such that not all B j are identically zero. By repeating similar discussions as above it follows that there exist L i , L j ∈ P λ c such that
L i f i = L j f j
for some i = j and not all L i and L j are identically zero. This contradicts the condition that f i f j ∈ P λ c for all {i, j} ⊂ {0, . . . , n}. Therefore, the proof is complete.
The following lemma is an extension of the difference analogue of Borel's theorem [6, Theorem 3.1]. Proof. Suppose that i ∈ S k , k ∈ {0, . . . , l}. Then by the condition of the lemma, f i = A i,j k f j k for some A i,j k ∈ P λ c whenever the indexes i and j k are in the same class S k . This implies that
0 = n k=0 f k = l k=1 i∈S k A i,j k f j k = l k=1 B k f j k where B k = i∈S k A i,j k ∈ P λ c .
This says that f j 1 , . . . , f j l are linearly degenerate over P λ c . Hence by Lemma 6.3 all B k (k = 1, . . . , l) are identically zero. Thus it follows where [w 0 , . . . , w n ] is a homogeneous coordinate system of P n (C). Since {H j } j∈Q are in N-subgeneral position of P n (C), any N +2 of H j satisfy a linear relation with nonzero coefficients in C 1 . By conditions of the theorem, holomorphic functions g j := (f, H j ) = h j0 f 0 + . . . + h jn f n satisfy {τ (g −1 j ({0}))} ⊂ {g −1 j ({0})} for all j ∈ Q, where {·} denotes a multiset with counting multiplicities of its elements. We say that i ∼ j if g i = αg j for some α ∈ P 1 c \ {0}. Hence Q = l j=1 S j for some l ∈ Q. Firstly, assume that the complement of S k has at least N + 1 elements for some k ∈ {1, . . . l}. Choose an element s 0 ∈ S k , and denote U = (Q \ S k ) ∪ {s 0 }. Then U contains at least N + 2 elements, and thus there is a subset U 0 ⊂ U such that U 0 ∩ S k = {s 0 } and |U 0 | = N + 2. Therefore there exists α j ∈ C \ {0} such that j∈U 0 α j H j = 0.
Hence,
j∈U 0 α j g j = j∈U 0 α j (f, H j ) = j∈U 0 α j H j (f ) = 0.
Without loss of generality, we may assume that U 0 = {s 1 , . . . , s N +1 } ∪ {s 0 }. It is easy to see from above discussion that all of zeros of α j g j (j ∈ U 0 ) are forward invariant with respect to the translation τ (z) = z + c, and G := [α s 0 g s 0 : α s 1 g s 1 : · · · : α s N+1 g s N+1 ] is a meromorphic mapping from C m into P N +1 (C) with its hyperorder ζ 2 (G) ≤ ζ 2 (f ) < 1. Furthermore, α i g i αs 0 gs 0 ∈ P 1 c for any i ∈ U 0 \ {s 0 }, thus i ∼ s 0 . Hence by Lemma 6.4 we have α s 0 g s 0 = 0, and thus (f, H s 0 ) ≡ 0. This means that the image f (C m ) is included in the hyperplane H s 0 of P n (C).
Secondly, assume that the set Q \ S k has at most N elements. Then S k has at least n + p − N elements for all k = 1, . . . , l. This implies that l ≤ n + p n + p − N .
Let V be any subset of Q with |V | = N + 1. Then {H j } j∈V are linearly independent. Denote V k := V ∩ S k . Then we have
V = l k=1 V k .
Since each set V k gives raise to |V k − 1| equations over the field P 1 c , it follows that there are at least
NSFC(no.11461042), CPSF(no.2014M551865), CSC(no.201308360070), PSF of Jiangxi(no.2013KY10). The second author was supported in part by the Academy of Finland grant(#286877) and (#268009).
log + T f (r) log r and ζ 2 (f ) := lim sup r→∞ log + log + T f (r) log r ,
Lemma 3. 5 .
5Let f be a nonconstant meromorphic function on C m such that f (0) = 0, ∞, and let ε > 0. If ζ 2 (f ) := ζ < 1, then
for all r > 0
0outside of a possible exceptional set E ⊂ [1, ∞) of finite logarithmic measure E dt t < ∞. Proof. Let E 1 be the set of all points ξ ∈ S m (1) such that {z = uξ : |u| < +∞} ⊂ I(f ) which is of measure zero in S m (1). For any ξ ∈ S m (1) \ E 1 , considering the meromorphic function f ξ (u) := f (ξu) of C 1 , we have T f ξ (r|f ξ (re iθ )|dθ − log |f (0)|,and thus by [17, Lemmas 1.1-1.2] it follows (see also [18, pages 33-34]) the proximity function of a meromorphic function φ on C m is defined ([18, Definition 5.5]) by m(r, φ) := Sm(r) log + |φ| η m (z).Let E 2 be the set of all points ξ ∈ S m (1) such that {uξ : |u| < +∞} ⊂ I(φ) which is of measure zero in S m (1). For any ξ ∈ S m (1) \ E 2 , considering the meromorphic function φ ξ (u) := φ(ξu) of C 1 , we have + |φ ξ (re iθ )|dθ.
Theorem 4. 1 .
1Under the conditions of Theorem 1.2 we have the defect ref, H) ≤ 2N − n + 1.
Theorem 5 . 1 .
51Let f be a meromorphic mapping with hyper-order ς(f ) < 1 from C m into P n (C), and let τ (z) = z + c, where c ∈ C m . If τ ((f, H j ) −1 ) ⊂ (f, H j ) −1 (counting multiplicity) hold for n + p distinct hyperplanes {H j } n+p j=1in N-subgeneral position in P n (C), and if p > N N −n+1 + N − n, then f (z) = f (z + c).
.
Let f : C → P n (C) be a holomorphic curve such that hyperorder ζ 2 (f ) < 1, let c ∈ C, and let p ∈ {1, . . . , n+1}. If p+n hyperplanes in general position in P n (C) have forward invariant preimages under f with respect to the translation τ (z) = z + c, then the image of f is contained in a projective linear subspace over P 1 c of dimension ≤ [ n p ]. In this section we extend Theorem 6.1 to the case of meromorphic mappings f : C m → P n (C) of hyperorder strictly less than one and hyperplanes in N-subgeneral position.Theorem 6.2. Let c ∈ C m , let p ∈ {1, . . . , N N −n+1 +N −n+1}, n ≤ N < n+p.Assume that f is a meromorphic mapping from C m into P n (C) such that hyperorder ζ 2 (f ) < 1.If p + n hyperplanes in N-subgeneral position in P n (C) have forward invariant preimages under f with respect to the translation τ (z) = z + c, then the image of f is contained in a projective linear subspace over P 1 c of dimension ≤ [ N n+p−N − N + n]. Before proving Theorem 6.2, we need two lemmas as follows. The first one is an extension of [6, Lemma 3.3]. Lemma 6.3. Let c ∈ C m , and f = [f 0 , . . . , f n ] be a meromorphic mapping from C m into P n (C) such that hyperorder ζ 2 (f ) < λ ≤ 1 and all zeros of f 0 , . . . , f n are forward invariant with respect to the translation τ
Lemma 6 . 4 .
64Let c ∈ C m , and f = [f 0 , . . . , f n ] be a meromorphic mapping from C m into P n (C) such that hyperorder ζ 2 (f ) < λ ≤ 1 and all zeros of f 0 , . . . , f n are forward invariant with respect to the translation τ (z) = z + c.Let S 1 ∪ · · · ∪ S l be the partition of {0, 1, . . . , n} formed in such a way that i and j are in the same class S k if and only if f i f j ∈ P λ c . If f 0 + . . . + f n = 0, then j∈S k f j = 0 for all k ∈ {1, . . . , l}.
,j k f j k = B k f j k ≡ 0 for all k = {1, . . . , l}.Proof of Theorem 6.2. Denote Q = {1, . . . , n + p}. Let H j be defined by H j : h j0 (z)w 0 + . . . + h jn (z)w n = 0 (j ∈ Q)
(
|V k | − 1) = N + 1 − l ≥ N + 1 − n + p n + p − N = n − (n − N + N n + p − N )linear independent relations over the field P 1 c . This means that the image of f is contained in a linear subspace over P 1c of dimension ≤ [ N n+p−N − N + n].The proof of the theorem is complete. Proof of Theorem 5.1. By Theorem 6.2, the image of f is contained in a projective linear subspace over P 1 c of dimension ≤ [ N n+p−N − N + n]. By the assumption p > N N −n+1 + N − n it follows [ N n+p−N − N + n] = 0. Hence f (z) = f (z + c). The proof of Theorem 5.1 is thus complete.
6, Theorem 5.1], [4, Theorem 2.1] and the high dimensional result [7, Theorem 3.1]. In
Proof of Theorem 1.2. Assume that q > 2N − n + 1. Let Q = {1, , 2 . . . , q}. By Lemma 3.4, for r > 1 we havẽε
for all r > 0 outside of a possible exceptional set E ⊂ [1, ∞) of finite loga-
rithmic measure E
dt
t < ∞.
t j | · t j ∈R |g [j] t j | ω(t j ) · j∈S |(f, H j )| ω(j) ( f (z) | min{K 1 , . . . , K q }|) j∈Q ω(j)for any z ∈ G. Then by (3.3), we get from the above inequality that for any z ∈ G,j∈S 1 |K j | 2 ω(j) ≤ t j ∈R o f (z) |K t j | |g [j] t j | · t j ∈R |g [j] t j | ω(t j ) · j∈S |(f, H j )| ω(j) ( f (z) | min{K 1 , . . . , K q }|)ω (q−2N +n−1)+n+1
Zur Theorie der Meromorphen Funktionen. R Nevanlinna, 10.1007/BF02543858Acta Math. 461-2R. Nevanlinna, Zur Theorie der Meromorphen Funktionen, Acta Math. 46 (1-2) (1925) 1-99. doi:10.1007/BF02543858. URL http://dx.doi.org/10.1007/BF02543858
Defect relations for degenerate meromorphic maps. W X Chen, 10.2307/2001251Trans. Amer. Math. Soc. 3192W. X. Chen, Defect relations for degenerate meromorphic maps, Trans. Amer. Math. Soc. 319 (2) (1990) 499-515. doi:10.2307/2001251. URL http://dx.doi.org/10.2307/2001251
Sur lés zeros des combinaisons linéaires de p fonctions holomorphes données. H Cartan, Mathematica Cluj. 7H. Cartan, Sur lés zeros des combinaisons linéaires de p fonctions holo- morphes données, Mathematica Cluj 7 (1933) 5-31.
Nevanlinna theory for the difference operator. R G Halburd, R J Korhonen, Ann. Acad. Sci. Fenn. Math. 312R. G. Halburd, R. J. Korhonen, Nevanlinna theory for the difference operator, Ann. Acad. Sci. Fenn. Math. 31 (2) (2006) 463-478.
A second main theorem on P n for difference operator. P.-M Wong, H.-F Law, P P W Wong, 10.1007/s11425-009-0213-5Sci. China Ser. A. 5212P.-M. Wong, H.-F. Law, P. P. W. Wong, A second main theorem on P n for difference operator, Sci. China Ser. A 52 (12) (2009) 2751-2758. doi:10.1007/s11425-009-0213-5. URL http://dx.doi.org/10.1007/s11425-009-0213-5
Holomorphic curves with shiftinvariant hyperplane preimages. R Halburd, R Korhonen, K Tohge, 10.1090/S0002-9947-2014-05949-7Trans. Amer. Math. Soc. 3668R. Halburd, R. Korhonen, K. Tohge, Holomorphic curves with shift- invariant hyperplane preimages, Trans. Amer. Math. Soc. 366 (8) (2014) 4267-4298. doi:10.1090/S0002-9947-2014-05949-7. URL http://dx.doi.org/10.1090/S0002-9947-2014-05949-7
A difference Picard theorem for meromorphic functions of several variables. R Korhonen, 10.1007/BF03321831Comput. Methods Funct. Theory. 121R. Korhonen, A difference Picard theorem for meromorphic functions of several variables, Comput. Methods Funct. Theory 12 (1) (2012) 343- 361. doi:10.1007/BF03321831. URL http://dx.doi.org/10.1007/BF03321831
Difference analogues of the second main theorem for meromorphic functions in several complex variables. T.-B Cao, 10.1002/mana.201200234Math. Nachr. 287T.-B. Cao, Difference analogues of the second main theorem for mero- morphic functions in several complex variables, Math. Nachr. 287 (5-6) (2014) 530-545. doi:10.1002/mana.201200234. URL http://dx.doi.org/10.1002/mana.201200234
A note on entire pseudo-holomorphic curves and the proof of Cartan-Nochka's theorem, Kodai Math. J Noguchi, 10.2996/kmj/1123767014J. 282J. Noguchi, A note on entire pseudo-holomorphic curves and the proof of Cartan-Nochka's theorem, Kodai Math. J. 28 (2) (2005) 336-346. doi:10.2996/kmj/1123767014. URL http://dx.doi.org/10.2996/kmj/1123767014
On holomorphic maps into a taut complex space. H Fujimoto, Nagoya Math. J. 46H. Fujimoto, On holomorphic maps into a taut complex space, Nagoya Math. J. 46 (1972) 49-61.
Holomorphic maps into complex projective space omitting hyperplanes. M L Green, Trans. Amer. Math. Soc. 169M. L. Green, Holomorphic maps into complex projective space omitting hyperplanes, Trans. Amer. Math. Soc. 169 (1972) 89-103.
On meromorphic maps into the complex projecive space. H Fujimoto, J. Math. Soc. Japan. 26H. Fujimoto, On meromorphic maps into the complex projecive space, J. Math. Soc. Japan 26 (1974) 272-288.
H Fujimoto, 10.1007/978-3-322-80271-2Value distribution theory of the Gauss map of minimal surfaces in R m. Braunschweig21Friedr. Vieweg & SohnH. Fujimoto, Value distribution theory of the Gauss map of minimal surfaces in R m , Aspects of Mathematics, E21, Friedr. Vieweg & Sohn, Braunschweig, 1993. doi:10.1007/978-3-322-80271-2. URL http://dx.doi.org/10.1007/978-3-322-80271-2
On the theory of meromorphic functions. E Nochka, Sov. Math., Dokl. 27E. Nochka, On the theory of meromorphic functions., Sov. Math., Dokl. 27 (1983) 377-381.
Nonintegrated defect relation for meromorphic maps of complete Kähler manifolds into P N 1 (C)×· · ·×P N k (C). H Fujimoto, Japan. J. Math. (N.S.). 112H. Fujimoto, Nonintegrated defect relation for meromorphic maps of complete Kähler manifolds into P N 1 (C)×· · ·×P N k (C), Japan. J. Math. (N.S.) 11 (2) (1985) 233-264.
A A Goldberg, I V Ostrovskii, Value distribution of meromorphic functions. Alexandre Eremenko and James K. LangleyProvidence, RIAmerican Mathematical Society236A. A. Goldberg, I. V. Ostrovskii, Value distribution of meromorphic functions, Vol. 236 of Translations of Mathematical Monographs, Amer- ican Mathematical Society, Providence, RI, 2008, translated from the 1970 Russian original by Mikhail Ostrovskii, With an appendix by Alexandre Eremenko and James K. Langley.
Normal families of non-negative divisors. W Stoll, Math. Z. 84W. Stoll, Normal families of non-negative divisors, Math. Z. 84 (1964) 154-218.
On families of meromorphic maps into the complex projective space. H Fujimoto, Nagoya Math. J. 54H. Fujimoto, On families of meromorphic maps into the complex pro- jective space, Nagoya Math. J. 54 (1974) 21-51.
A sharp form of Nevanlinna's second fundamental theorem. A Hinkkanen, 10.1007/BF02100617Invent. Math. 1083A. Hinkkanen, A sharp form of Nevanlinna's second fundamental theo- rem, Invent. Math. 108 (3) (1992) 549-574. doi:10.1007/BF02100617. URL http://dx.doi.org/10.1007/BF02100617
Uniqueness problem with truncated multiplicities in value distribution theory. H Fujimoto, Nagoya Math. J. 152H. Fujimoto, Uniqueness problem with truncated multiplicities in value distribution theory, Nagoya Math. J. 152 (1998) 131-152. URL http://projecteuclid.org/euclid.nmj/1118766414
Nevanlinna theory and its relation to Diophantine approximation. M Ru, 10.1142/9789812810519World Scientific Publishing Co., IncRiver Edge, NJM. Ru, Nevanlinna theory and its relation to Diophantine approxi- mation, World Scientific Publishing Co., Inc., River Edge, NJ, 2001. doi:10.1142/9789812810519. URL http://dx.doi.org/10.1142/9789812810519
Einige Eindeutigkeitssätze in der Theorie der Meromorphen Funktionen. R Nevanlinna, 10.1007/BF02565342Acta Math. 483-4R. Nevanlinna, Einige Eindeutigkeitssätze in der Theorie der Meromorphen Funktionen, Acta Math. 48 (3-4) (1926) 367-391. doi:10.1007/BF02565342. URL http://dx.doi.org/10.1007/BF02565342
The uniqueness problem of meromorphic maps into the complex projective space. H Fujimoto, Nagoya Math. J. 58H. Fujimoto, The uniqueness problem of meromorphic maps into the complex projective space, Nagoya Math. J. 58 (1975) 1-23.
C.-C Yang, H.-X Yi, 10.1007/978-94-017-3626-8Uniqueness theory of meromorphic functions. DordrechtKluwer Academic Publishers Group557of Mathematics and its ApplicationsC.-C. Yang, H.-X. Yi, Uniqueness theory of meromorphic functions, Vol. 557 of Mathematics and its Applications, Kluwer Academic Publishers Group, Dordrecht, 2003. doi:10.1007/978-94-017-3626-8. URL http://dx.doi.org/10.1007/978-94-017-3626-8
P.-C Hu, P Li, C.-C Yang, 10.1007/978-1-4757-3775-2of Advances in Complex Analysis and its Applications. DordrechtKluwer Academic Publishers1Unicity of meromorphic mappingsP.-C. Hu, P. Li, C.-C. Yang, Unicity of meromorphic mappings, Vol. 1 of Advances in Complex Analysis and its Applications, Kluwer Academic Publishers, Dordrecht, 2003. doi:10.1007/978-1-4757-3775-2. URL http://dx.doi.org/10.1007/978-1-4757-3775-2
|
[] |
[
"Probabilistic estimation of the rank 1 cross approximation accuracy",
"Probabilistic estimation of the rank 1 cross approximation accuracy"
] |
[
"Osinsky A I \nInstitute of Numerical Mathematics RAS\nGubkina str., 8MoscowRussia\n\nInstitutsky per\nMoscow Institute of Physics and Technology\nDolgoprudny\n\nRussia\n"
] |
[
"Institute of Numerical Mathematics RAS\nGubkina str., 8MoscowRussia",
"Institutsky per\nMoscow Institute of Physics and Technology\nDolgoprudny",
"Russia"
] |
[] |
In the construction of low-rank matrix approximation and maximum element search it is effective to use maxvol algorithm[5]. Nevertheless, even in the case of rank 1 approximation the algorithm does not always converge to the maximum matrix element, and it is unclear how often close to the maximum element can be found. In this article it is shown that with a certain degree of randomness in the matrix and proper selection of the starting column, the algorithm with high probability in a few steps converges to an element, which module differs little from the maximum. It is also shown that with more severe restrictions on the error matrix no restrictions on the starting column need to be introduced.
| null |
[
"https://arxiv.org/pdf/1706.10285v1.pdf"
] | 119,151,308 |
1706.10285
|
9846fbb5615d4229409587ea064feceef70bf98c
|
Probabilistic estimation of the rank 1 cross approximation accuracy
Osinsky A I
Institute of Numerical Mathematics RAS
Gubkina str., 8MoscowRussia
Institutsky per
Moscow Institute of Physics and Technology
Dolgoprudny
Russia
Probabilistic estimation of the rank 1 cross approximation accuracy
AMS classification: 65F30, 65F99, 65D05 Keywords: Low rank approximationsPseudoskeleton approximationsMaximum volume principle
In the construction of low-rank matrix approximation and maximum element search it is effective to use maxvol algorithm[5]. Nevertheless, even in the case of rank 1 approximation the algorithm does not always converge to the maximum matrix element, and it is unclear how often close to the maximum element can be found. In this article it is shown that with a certain degree of randomness in the matrix and proper selection of the starting column, the algorithm with high probability in a few steps converges to an element, which module differs little from the maximum. It is also shown that with more severe restrictions on the error matrix no restrictions on the starting column need to be introduced.
approximations [1].
The accuracy of the skeleton and associated pseudoskeleton CGR decompositions is guaranteed in case of submatrix close to maximum volume [1,3] or, more generally, maximum projective volume [4]. However, in general case, the search of the maximum volume submatrix is a NP-hard problem, and these estimates are not directly applicable.
One of the most popular methods for constructing cross low-rank approximation is the algorithm maxvol [5]. In the particular case of rank 1 approximation it finds the maximum in modulus element in a randomly chosen column, then in the corresponding row (with the maximum element), and so on. Finally the resulting element is maximal in modulus element of its row and column. Unfortunately, this does not guarantee that it is maximal in the whole matrix (or even close to it). Therefore we can not guarantee that the obtained approximation will be accurate enough.
However, in practice, the obtained by maxvol algorithm cross approximation is often a good approximation of the original matrix. This probably means that if the matrix elements are in some sense random, the element found with the help of maxvol is likely to be close to the maximum.
Estimates even for the rank 1 particular case are very important, since, for example, we can construct an approximation of rank k by applying an algorithm k times.
In Section 2 some probability estimates are obtained. In Section 3 we prove the theorems that guarantee a high probability of obtaining sufficiently accurate approximation of rank 1. Finally, in Section 4 the results of numerical experiments with random matrices are shown and analized.
Probability estimates for some important distributions
First of all, we need some general propositions about random variables. Proposition 1. Let the random variable x have distribution χ 2 with n > 2 degrees of freedom. Then for a constant c (the relevant values of c will be determined later) the following holds
P(x > n − 2 + 2 c(n − 2) ln n) αn −c , α = 1 π(n − 2) + 1 2 √ cπ ln n e 4 c 3 ln 3 n n−2 3 .
Proof. We use the expression for the density distribution χ 2 and integrate, evaluating the Gamma function from below using Stirling's formula (n! √ 2πn n e n ): Thus, the probability is of the order n −c only if c ln n n−2 1. However, the condition on c is pretty weak, so we further suppose c to be large enough (e.g. c > 1).
P(x > n − 2 + 2 c(n − 2) ln n) = ∞ n−2+2 √ c(n−2) ln n x n 2 −1 e − x 2 2 n 2 Γ n 2 dx ∞ n−2+2 √ c(n−2) ln n x n 2 −1 e − x 2 2 n 2 π(n − 2) n−2 2e n 2 −1 dx = /y = x − n + 2/ = ∞ 2 √ c(n−2) ln n (n − 2 + y) n 2 −1 e − n−2 2 − y 2 2 π(n − 2) n−2 e n 2 −1 dy = ∞ 2 √ c(n−2) ln n 1 + y n−2 n−2 2 e − y 2 2 π(n − 2) dy = /z = y − (n − 2) ln 1 + y n − 2 , dy = 1 + y n−2 y n−2 dz/ = ∞ 2 √ c(n−2) ln n−(n−2) ln 1+2 c ln n n−2 1 + y n−2 e − z
Using Proposition 1 we can prove the following Lemma.
Lemma 1. Let the random vector v be uniformly distributed on the sphere in the space C n . Then, with probability 1 − αn −c − β k there is at least one among any k preselected elements that is not less in absolute value than τ , with
α = 1 π(n − 2) + 1 2 √ cπ ln n e 4 3 c 3 ln 3 n n−2 , β = 2τ 2 n − 2 + 2 c(n − 2) ln n π .
Proof. Such a vector can be obtained by taking normally distributed random variables, choosing a random rotation of each component in C and normalizing. Thus, if as a basis we take the values x i , then |x i | 2 ∼ χ 2 (1), and
|v i | 2 = |x i | 2 n j=1 |x j | 2 .
From proposition 1 we see that
P( n i=1 |x i | 2 > n − 2 + 2 c(n − 2) ln n) αn −c ,
Besides,
P |x i | 2 < t 2 = P (|x i | < t) = 2 t 0 e − x 2 2 √ 2π dx 2t 2 π . P |x i | 2 < t 2 , i = 1, k 2t 2 π k 2 .
Eventually, by choosing t = τ n − 2 + 2 c(n − 2) ln n, we find that
P |v i | < τ, i = 1, k αn −c + 2τ 2 n − 2 + 2 c(n − 2) ln n π k 2 .
3. Theorems on the probability of receiving a good low-rank approximations Theorem 1. Let A = σuv * + E, σ > 0, u 2 = v 2 = 1, A ∈ C m×n . Let vector v be uniformly distributed on the sphere in C n , n > 2. Let us denote
δ = E C . Let ε = E C A − E C = E C σ u ∞ v ∞ 1 8 . (1) Let α = 1 π(n − 2) + 1 2 √ cπ ln n eβ v = 1 − √ 1 − 8ε v ∞ n − 2 + 2 c(n − 2) ln n √ 2π .
Let the algorithm maxvol [5], on the first step of which we choose a maximal element among the first k columns of the matrix return element a on the intersection of the row r and column c.
Then with probability 1 − αn −c − β k v A − ca −1 r C 8δ 1 + ε 1 + √ 1 − 8ε − 2ε .(2)
Proof. Consider an arbitrary element of the matrix A:
a ij = σu i v j + e ij .
Fix the corresponding j-th column. Let
µ = |v j | v ∞ .
Consider the maximum in modulus element a sj in this column. It is easy to see that
|a sj | σµ u ∞ v ∞ − δ.
(This estimate can be obtained by taking into account the row s 0 with |u s0 | = u ∞ ), and |a sj | |a s0j |). We will find conditions on µ, which guarantee that the following inequality holds
|u s | > µ u ∞ .
If this is not true (if |u s | µ u ∞ ), we can get the inequality
|a sj | σµ 2 u ∞ v ∞ + δ.
From the above two estimations for |a sj | we get the condition
σµ 2 u ∞ v ∞ + δ < σµ u ∞ v ∞ − δ µ 2 − µ + 2ε < 0.
Solving the quadratic equation, we obtain the following condition on µ:
µ 1 = 1 − √ 1 − 8ε 2 < µ < 1 + √ 1 − 8ε 2 = µ 2 . 5 If µ µ 2 , then |u s | µ 2 u ∞ .
Indeed, in this case it is necessary to verify the inequality
σµµ 2 u ∞ v ∞ + δ σµ u ∞ v ∞ − δ, µµ 2 + ε µ − ε
Noting that µ 2 + µ 1 = 1, we get µµ 1 2ε.
Since
µ 1 µ 2 = 2ε,
the resulting inequality is equivalent to the following:
1 µ µ 2 ,
which is true when µ µ 2 . We will get the same conditions, if we swap rows and columns.
These estimates allow us to understand the conditions of halting the algorithm maxvol . Indeed, let a ij be an element of the matrix A, which is the maximum in the i-th row and j-th column of A (the element on which the algorithm maxvol stops).
Denote
µ u = |ui| u ∞ and µ v = |vj | v ∞ .
We prove that µ u and µ v at the same time satisfy one of two conditions: they both are either not greater than µ 1
µ u µ 1 , µ v µ 1 , or not less than µ 2 µ u µ 2 , µ v µ 2 .
First, suppose, for example, µ 1 < µ v < µ 2 . Then, as proved earlier for the element u i corresponding to the maximum in modulus element of the column, the following inequality is satisfied
|u i | = µ u u ∞ > µ v u ∞ .
Since a ij is also maximal in the row, by repeating the reasoning, we come to the contradiction
|v j | = µ v v ∞ > µ u v ∞ > µ v v ∞ .
Thus, neither µ u , nor µ v can be inside the interval (µ 1 , µ 2 ) . It remains to prove the impossibility of the fact that µ u and µ v are separated by the interval (µ 1 , µ 2 ) . Assume, for example, that µ u µ 1 and µ v µ 2 . Since a ij is the maximum in j-th column, and µ v µ 2 , then, as proved earlier, µ u µ 2 , which contradicts the assumption.
From this we conclude that if at the first step we got to the element with |v j | > µ 1 v ∞ , then the value of µ will increase and eventually will not be less than µ 2 .
Let's call the columns (rows) with |v j | > µ 1 v ∞ (|u i | µ 1 u ∞ ) "good", and the others "bad".
By Lemma 1 with τ = µ 1 v ∞ there is at least one "good" column among the first k columns with high probability. We will show that in this case the maximum in modulus element among these k columns needs to belong to a "good" column. 6
In any "good" column (with |v j0 | > µ 1 v ∞ ) there is an element corresponding to
|u i0 | = u ∞ . Thus |a i0j0 | > σµ 1 u ∞ v ∞ − δ,
Then for the maximum in modulus element a ij among these k columns the inequality holds even more so:
|a ij | σµ 1 u ∞ v ∞ − δ,(3)
From the equation for µ 1 ,
σµ 1 u ∞ v ∞ − δ = σµ 2 1 u ∞ v ∞ + δ,
we substitute the right-hand side in (3) to get
|a ij | > σµ 2 1 u ∞ v ∞ + δ.(4)
A consequence of (4) is the inequality on the product of µ u and µ v
µ u · µ v > µ 2 1 .
Thus, if a ij does not belong to the "good" column (µ v µ 1 ), then µ u > µ 1 and a ij belongs to the "good" row.
However, in this case due to the fact that a ij is the maximum in modulus in its column, then, as proved earlier, either µ v µ 2 > µ 1 , or µ v > µ u > µ 1 , and, on the contrary to the initial assumption, the column containing a ij is "good".
Thus, as a result of the procedure maxvol, we get the element with modulus not less than
σµ 2 u ∞ v ∞ − δ = σµ 2 2 u ∞ v ∞ + δ. Consider a submatrix 2 × 2 of A:Â = a b c d ,
where a is the element found with maxvol. For the absolute values of the elements a and d following estimates hold
|d| σ u ∞ v ∞ + δ = σ u ∞ v ∞ (1 + ε) , |a| σµ 2 u ∞ v ∞ − δ = σ u ∞ v ∞ (µ 2 − ε) .
Using Theorem 1 from [1], we get that even if |d| > |a|
d − ba −1 c = a − bd −1 c |d| |a| 4δ 1 + ε µ 2 − ε .
Substituting the expression for µ 2 and taking into account that submatrix is arbitrary, we obtain the estimate (2).
Remark 1. The theorem remains true with probability 1 − γ k , if the vector v has no more than γn elements, which differ from the maximum more than µ 1 times. This allows us to use the result for different distributions of v. 7 Corollary 1. Under the conditions of Theorem 1:
1.
A − ca −1 r C 4δ(1 + 16ε) 12δ.
2.
β
v 8ε v ∞ n − 2 + 2 c(n − 2) ln n √ 2π .
3. In order to make the error satisfy (2) with the probability not exceeding (α + 1)n −c , it is sufficient to take k = c ln n ln 1 βv .
Corollary 2. If in Theorem 1 the matrix is real, then
A − ca −1 r C 4δ(1 + 4ε) 6δ.
Proof. Changes to the proof can be made when considering the submatrix
A = a b c d .
The result can be more than 4δ only if |d| > |a|, but in this case, as µ 2 > ε, the matrix E does not affect the sign of a and, more so, the signs on b, c and d. Therefore
sign(d) = sign(ba −1 c). Finally, |d − ba −1 c| |d| − |ba −1 c| σ u ∞ v ∞ (1 + ε) − σ u ∞ v ∞ (µ 2 − ε) 2δ + σ u ∞ v ∞ (1 − µ 2 ) 4δ(1 + 4ε) 6δ.
Theorem 2. Let under the conditions of Theorem 1
β v = 8ε v ∞ n − 2 + 2 c(n − 2) ln n √ 2π .
Let the algorithm maxvol perform just 4 steps, and return an element regardless of whether it is maximal in its column, or not. Then
A − ca −1 r C 4δ (1 + 16ε) .
Proof. It turns out that with the probability
1 − β k v |v j | > 4ε v ∞ = ν 1 v ∞ ,
(it is easy to obtain by replacing µ 1 by ν 1 ).
Let the element found in column (row) be ν k of the maximum in u (v). Then, if the next found element is ν k+1 from maximum in column (row), the following inequality must be satisfied
ν k ν k+1 σ u ∞ v ∞ + δ ν k σ u ∞ v ∞ − δ. Therefore ν k+1 1 − 2ε ν k .
Substituting ν 1 = 4ε, we find that
ν 2 > 1 2 , ν 3 > 1 − 4ε.
After the fourth step, both elements will be at least ν 3 . Indeed, µ 2 1−4ε 4ε µ 1 , so, as shown in the proof of Theorem 1, when µ = 1 − 4ε is between µ 1 and µ 2 , every next coefficient cannot be less. Analogously to the Theorem 1, we estimate the error of approximation in an arbitrary submatrix of A. This will give us the desired estimate for C-norm of the error:
|d − ba −1 c| 4δ 1 + ε ν 3 − ε = 4δ 1 + ε 1 − 5ε 4δ(1 + ε)(1 + 40 3 ε) 4δ(1 + 16ε).
Here we have taken into account that ε 1 8 .
Thus, in order to find the element, which is close to the maximum, with the prescribed probability, it suffices to compare only (k + 1)m + 2n elements of A.
In all the above estimates u ∞
1 √ m , v ∞ 1 √ n .
For upper bounds we can again use a probabilistic approach.
Definition 1.
Vector v ∈ C n is called µ-coherent with the parameter µ > 0, when v ∞ µ n .
Proposition 2. Let random vector v be uniformly distributed on the sphere in C n , n > 1.
Then with probability 1 − n −c(1− 1 n ) √ c ln n it is µ-coherent with the parameter µ = 2c ln n. Proof. We construct the vector v as in the Proposition 1. Then
P(|v i | 2 < t) = P |x i | 2 n j=1 |x j | 2 < t = P |x i | 2 n j=1 j =i |x j | 2 < t 1 − t , P(|v i | 2 < t, i = 1, n) nP |x 1 | 2 n j=2 |x j | 2 < t 1 − t .
Random value |x1| 2 n j=2 |xj | 2 has Fisher distribution with degrees of freedom 1 and n − 1. Now we can estimate the probability using the density function:
P( v 2 ∞ < t) n ∞ (n−1) t 1−t x(n−1) n−1 (x+n−1) n xB 1 2 , n−1 2 dx = n ∞ (n−1) t 1−t (n−1) n−1 (x+n−1) n √ xB 1 2 , n−1 2 dx /x 0 = (n − 1) t 1 − t / n ∞ x0 √ x 0 + n − 1 (n−1) n−1 (x+n−1) n x 0 (x + n − 1)B 1 2 , n−1 2 dx = = n(n − 1) n−1 2 √ x 0 + n − 1 √ x 0 B 1 2 , n−1 2 ∞ x0 dx (x + n − 1) n+1 2 n(n − 1) n−1 2 √ x 0 + n − 1 √ x 0 n−1 2 √ π 2 n − 1 (x 0 + n − 1) − n−1 2 = = 2 π n(n − 1) n−2 2 √ x 0 (x 0 + n − 1) − n−2 2 = = 2 π n(n − 1) n−2 2 (n − 1) t 1−t (n − 1) t 1 − t + n − 1 − n−2 2 = = 2 π n (n − 1)t (1 − t) n−1 2 2 π n (n − 1)t e −t n−1 2 2 π n µ(n − 1) e − µ 2 (1− 1 n ) = n πc(n − 1) ln n n −c(1− 1 n ) n −c(1− 1 n ) √ c ln n .
Condition of µ-coherence can be used in case of hard-to-estimate C-norm of the matrix E. Even if we demand its fulfilment for all rows and columns of a random unitary matrix, we can ensure with high probability that µ ∼ ln n.
Corollary 3. Let the conditions of Theorem 1 be fulfilled. Also let the rows U ∈ C m×m and columns V ∈ C n×n from singular value decomposition A = U ΣV be µ-coherent, and σ = σ 1 (A) be the maximum singular value of the matrix, with the corresponding singular vectors u and v. Then
δ µ √ mn min(m,n) j=2 σ j (A).(5)
If U is a random unitary matrix, then with the probability
1 − nm −c(1− 1 m ) √ c ln m , δ 2c ln m m σ 2 (A).(6)
Proof.
E C = max i,k min(m,n) j=2 u ij σ j v jk max i,j |u ij | max j,k |v jk | min(m,n) j=2 σ j µ √ mn min(m,n) j=2 σ j ,
which proves the inequality (5).
To prove (6) consider the vector e with components e k = σ j v jk , e 1 = 0. Its Euclidean norm is not greater than σ 2 (A). By selecting it as one of the basis vectors (with any other orthonormal vectors), we get in the above product simply an element of a random vector u i , but in the new basis. We need to apply the condition on mn different components, which is equivalent to n uses of µ-coherence. As a result, we obtain the required inequality.
In addition, from the probability estimate of µ-coherence it is clear that in order to guarantee, that the value of β v is less than 1, it is required that ε ∼ 1 √ ln n . And, although after entering the "good" column, it will require very few steps to get a good estimate, it may be necessary to view a lot of columns to ensure that the column or row will actually be "good".
In practice, of course, algorithm is used without viewing the columns. This is, firstly, due to the fact that each step of the algorithm is roughly equivalent to increasing k by 1.
In addition, selecting an element corresponding to large value in σuv * is more probable than selecting the one for a smaller value. However, analysis of such probabilities is much more difficult. Nevertheless, it can be done by imposing additional restrictions on the matrix E. and v are uniformly distributed on the sphere in R n ,
β u = 1 − √ 1 − 8ε u ∞ n − 2 + 2 c(n − 2) ln n √ 2π ,
and the matrix E consists of independent (including the u and v) random variables with uniformly distributed on the interval [0; δ] modules. Suppose that at the beginning algorithm maxvol instead of viewing k random columns at least k steps are made, and the maximum element among the viewed ones is selected (if there are less than k steps, the algorithm continues with the next to the maximum element).
Then the estimate (2) holds with the probability Thus, each step of the algorithm reduces the probability of error in almost n times when c 0 is large enough (about µ in the case of µ-coherence).
1 − 2αn −c − α 0 n −γk − c 0 ln n n k . where γ = 1 − β − 2ε u ∞ v ∞ c 0 · 2 n − 2 + 2 c (n − 2) ln n π , α = 1 π(n − 2) + 1 2 √ cπ ln n e
Proof. Firstly, we immediately note that due to the independence of the elements of σuv and E the probability to get to the large elements in σuv * after each step of the algorithm is not lower than simply by browsing a random column or row. Thus, not less beneficial is just to make n steps than to seek at the beginning maximum in n rows or columns. This "benefit" can be evaluated quantitatively.
Taking into account that the elements of the matrices σuv * and E are independent random variables, consider for each pair of indices (i, j) four conditions
|v j | > µ 0 ,(7)(σuv * ) ij E ij > 0,(8)σ|u i |µ 0 σµ 1 u ∞ µ 0 + ε 0 δ,(9)|E ij | (1 − ε 0 ) δ,(10)
where µ 0 > 0 and 0 < ε 0 < 1 -are some parameters that will be determined later.
Let us make some observations. Firstly, if the element a ij = (σuv * ) ij + E ij satisfies the conditions (7) -(10), then this element definitely belongs to the "good" row. Indeed, from the following chain of equalities and inequalities (9))
|a ij | = |(σuv * ) ij + E ij | (from (8)) = |(σuv * ) ij | + |E ij | (from (10)) |σ|u i ||v j | + (1 − ε 0 ) δ = σ|u i |µ 0 |vj | µ0 + (1 − ε 0 ) δ (fromσµ 1 u ∞ |v j | + |vj | µ0 ε 0 δ + (1 − ε 0 ) δ (from (7)) > σµ 1 u ∞ |v j | + δ,
it follows that |u i | > µ 1 u ∞ , and the row number i is "good".
Secondly, if the column has at least one element that matches the conditions (7) -(10), then the maximum in modulus element of this column belongs to the "good" row too. Indeed, suppose that a is the maximum in modulus element in column j, and a ij satisfies (7) -(10). Then
|a| |a ij | > σµ 1 u ∞ |v j | + δ,
which is equivalent to the fact that a belongs to the "good" row. At the same time, generally speaking, for the maximum in modulus element in the column all of the conditions need not to be fulfilled.
Finally, we note that (8) -(9) define independent events on the set of matrix elements. Let us fix the index j. Suppose, that the condition (7) holds for a column number j (that is, |v j | µ 0 ). We estimate the probability that at least one element in this column fulfils the other three conditions (8) -(10). In view of the above, this assessment will also estimate the probability that the maximum element of j-th column belongs to the "good" row or equivalently that one step of the maxvol algorithm gives an element in a "good" row.
First of all, we estimate the probability that exactly k elements in the j-th column of the matrix E are within ε 0 δ of the maximum. For any element this probability is not less then ε 0 . So, for a set of k elements we can take
P k = C k n (ε 0 ) k (1 − ε 0 ) n−k .
From the independence of matrix elements in σuv * and E, we get the probability of fulfilling (8) equal to 1 2 . Let the number of elements that satisfy the condition (9) be equal to l. Under this condition, the probability (in fact this is a conditional probability) of fulfilment (9) is not less than l n P (σ|u i |µ 0 σµ 1 u ∞ µ 0 + ε 0 δ) l n .
Thus, for the considered random realizations the probability of an arbitrary element of j-th column to simultaneously satisfy the conditions (8) and (9) is not less than l 2n , and the probability of violation of at least one of them is not more than 1 − l 2n .
for u and v. The steps for the rows and columns also should be considered separately due to their different sizes.
It can also be generalized to the complex case: it is enough to take the value of ε 0 a little more and replace (8) by the condition on the smallness of the phase.
Numerical experiments
Before proceeding to the calculations, it is important to understand what happens if the inequality (1) is not satisfied. In this case, the error will be about C-norm of the whole matrix. In the worst case, it is
|d − ba −1 c| |d| + |a| σ u ∞ v ∞ + δ + |a|.
If |a| is sufficiently large, then, as we already know, the error can not exceed |d| |a| 4δ. Taking the minimum of 4δ |d| |a| and |d| + |a| and substituting the estimate for d, we find that the error will not exceed
1 + δ + (1 + δ)(1 + 17δ) 2 .(12)
To verify the accuracy of the estimates, the calculations were carried out for the random matrices. Namely, the matrix was set by its singular value decomposition. The left and right singular vectors were randomly selected, the first singular value was selected from the equation
x = σ u ∞ v ∞ δ ,
and all the rest singular values were set to 1.
The value of x was placed on the horizontal axis. If x 8, then we verified that the column is "good" before applying maxvol. Figure 1 illustrates the relationship between the found element and the maximum element of the matrix. Figure 2 shows the approximation error. Figure 3 shows the probability of hitting a "bad" column.
It is seen that the maximum error is different from our estimate about 2 times. This is probably due to the fact that the selected matrix E is the best approximation in 2-norm,
but not in C-norm.
The last figure shows that the probability of not getting into the "good" column almost vanishes after the application of the algorithm. This means that the matrix of . We show the mean, and the minimum for 1000 matrix generations, and the lower bound estimate which is equal to σµ 2 2 u ∞ v ∞ + δ.
the best rank 1 approximation and its error are not closely related, and even if we start from a "bad" column there is a great chance to get eventually to a "good" one.
Conclusion
We proved that in the important particular case of rank one approximations algorithm maxvol applied for the random matrices finds the element close to the maximum with the probability close to 1. This guarantees the high accuracy of the cross approximations.
Acknowledgments
The work was supported by the Russian Science Foundation, Grant 14-11-00806. . We show the mean, and the minimum for 1000 matrix generations, and the estimate of the error. If 1 ε < 8 then the expression (12) was used. Figure 3: The dependence of the Value = probability to get into the "bad" column from the Ratio
Bibliography
σ u ∞ v ∞ δ .
The results are obtained for the 1000 matrix generations. The probabilities are shown for randomly selected columns and the columns obtained after applying the algorithm.
Theorem 3 .
3Let under the conditions of Theorem 2 for a matrix A ∈ R n×n , vectors u
the arbitrary constants c and c 0 .
Figure 1 :
1The dependence of the Value = ratio between the found element and the maximum from the Ratio = σ u ∞ v ∞ δ
Figure 2 :
2The dependence of the Value = ratio between approximation error with respect to δ from the Ratio = σ u ∞ v ∞ δ
Now we evaluate the probability P 1 that the column with number j has no elements satisfying all the conditions. This assessment can be in obvious way written asAfter k steps, this probability will not exceedNow we need to sum this value for all values of l, thereby calculating the total probability. Denoting by γ the probability of satisfying (9) independently for all elements (analogue to 1 − β), we get thatThe probabilities for rows and columns can be calculated independently, so as a result, since it is still all raised to the power k, no matter how many steps has been done in rows and how many in columns.To complete the proof it remains to estimate the probability P 2 that at least half of the column elements satisfy (9). The latter is equivalent to the assertion that half of the components of the uniformly distributed on the sphere vector u satisfy the inequalityWe use lemma 1 to get the estimates on P 2 . For this purpose, we define variables τ and γ as14 By lemma 1 for an arbitrary set of k elements, the probability that all the absolute values are less thanμ does not exceed γ k . In addition, we need to take into account the requirement |v j | µ 0 . This can be done by simply adding the probability of the opposite. Even if a few steps have been made, this probability cannot reduce: if we have already reached a good column or row, this probability is just zero, so adding the condition that until now such a row or column is not found, does not change the distribution. And the fact that the column was chosen not randomly, but using the algorithm, as shown above, only reduces the probability of the opposite. As for the fact that some elements might have already been viewed, we can ignore them: if a "good" row or column has not been previously found, then they are "bad", and discarding them from the consideration only increases the probability of finding a "good" one (we evaluate the probability under this particular condition). Now we can choose ε 0 and µ 0 ε 0 = 2 ln n nThen, firstly,secondly, P 2 2α 0 n −γk and thirdly, we estimate the value of γPutting together all the probabilities, we find that the probability to get to the "good" element after k steps of the algorithm is bounded from above byIt is easy to see that for the probability of order n −k the number of steps does not depend on n.For simplicity we have taken a square matrix, but the claim is easily generalized to the case m = n: for this case, instead of just multiplying by 2 we can take separate terms
TT-cross approximation for multidimensional arrays. I V Oseledets, E E Tyrtyshnikov, Linear Algebra and Its Applications. 4327078I.V. Oseledets, E.E. Tyrtyshnikov, TT-cross approximation for multidimensional arrays, Linear Algebra and Its Applications, 432, (2010), pp. 7078.
Quasioptimality of skeleton approximation of a matrix in the chebyshev norm. S A Goreinov, E E Tyrtyshnikov, Doklady Mathematics. 83312S.A. Goreinov, E.E. Tyrtyshnikov, Quasioptimality of skeleton approximation of a matrix in the chebyshev norm, Doklady Mathematics, vol. 83, no. 3, (2011), pp. 12.
The maximal-volume concept in approximation by low-rank matrices. S A Goreinov, E E Tyrtyshnikov, Contemporary Mathematics. 2684751S.A. Goreinov, E.E. Tyrtyshnikov, The maximal-volume concept in approximation by low-rank matrices, Contemporary Mathematics, vol. 268, (2001), pp. 4751.
New accuracy estimates for pseudoskeleton approximations of matrices. N L Zamarashkin, A I Osinskiy, Doklady Mathematics. 94313N.L. Zamarashkin, A.I. Osinskiy, New accuracy estimates for pseudoskeleton approximations of matrices, Doklady Mathematics, vol. 94, no. 3, (2016), pp. 13.
How to find a good submatrix. S A Goreinov, I V Oseledets, D V Savostyanov, Matrix Methods: Theory, Algorithms, Applications. V. Olshevsky, E. TyrtyshnikovWorld Scientific PublishingS.A. Goreinov, I.V. Oseledets, D.V. Savostyanov et al., How to find a good submatrix, Matrix Meth- ods: Theory, Algorithms, Applications, Ed. by V. Olshevsky, E. Tyrtyshnikov. World Scientific Publishing, 2010, pp. 247-256.
|
[] |
[
"WEIGHTED C k ESTIMATES FOR A CLASS OF INTEGRAL OPERATORS ON NON-SMOOTH DOMAINS",
"WEIGHTED C k ESTIMATES FOR A CLASS OF INTEGRAL OPERATORS ON NON-SMOOTH DOMAINS"
] |
[
"Dariush Ehsani "
] |
[] |
[] |
We apply integral representations for (0, q)-forms, q ≥ 1, on nonsmooth strictly pseudoconvex domains, the Henkin-Leiterer domains, to derive weighted C k estimates for a given (0, q)-form, f , in terms of C k norms of∂f , and∂ * f . The weights are powers of the gradient of the defining function of the domain.
|
10.1307/mmj/1291213958
|
[
"https://arxiv.org/pdf/0903.4082v1.pdf"
] | 18,645,241 |
0903.4082
|
57e3f124a7af7463942a37f0172680af86584572
|
WEIGHTED C k ESTIMATES FOR A CLASS OF INTEGRAL OPERATORS ON NON-SMOOTH DOMAINS
24 Mar 2009
Dariush Ehsani
WEIGHTED C k ESTIMATES FOR A CLASS OF INTEGRAL OPERATORS ON NON-SMOOTH DOMAINS
24 Mar 2009
We apply integral representations for (0, q)-forms, q ≥ 1, on nonsmooth strictly pseudoconvex domains, the Henkin-Leiterer domains, to derive weighted C k estimates for a given (0, q)-form, f , in terms of C k norms of∂f , and∂ * f . The weights are powers of the gradient of the defining function of the domain.
Introduction
Let X be an n-dimensional complex manifold, equipped with a Hermitian metric, and D ⊂⊂ X a strictly pseudoconvex domain with defining function r. Here we do not assume the non-vanishing of the gradient, dr, thus allowing for the possibility of singularities in the boundary, ∂D of D. We refer to such domains as Henkin-Leiterer domains, as they were first systematically studied by Henkin and Leiterer in [2].
We shall make the additional assumtion that r is a Morse function. Let γ = |∂r|. In [1] the author established an integral representation of the form Theorem 1.1. There exist integral operatorsT q : L 2 (0,q+1) (D) → L 2 (0,q) (D) 0 ≤ q < n = dim X such that for f ∈ L 2 (0,q) ∩ Dom(∂) ∩ Dom(∂ * ) one has γ 3 f =T q∂ f +T * q−1∂ * f + error terms for q ≥ 1.
Theorem 1.1 is valid under the assumption we are working with the Levi metric. With local coordinates denoted by ζ 1 , . . . , ζ n , we define a Levi metric in a neighborhood of ∂D by
ds 2 = j,k ∂ 2 r ∂ζ j , ∂ζ k (ζ).
A Levi metric on X is a Hermitian metric which is a Levi metric in a neighborhood of ∂D. From what follows we will be working with X equipped with a Levi metric. The author then used properties of the operators in the representation to establish the estimates Theorem 1.2. For f ∈ L 2 0,q (D) ∩ Dom(∂) ∩ Dom(∂ * ), q ≥ 1,
γ 3(n+1) f L ∞ γ 2∂ f ∞ + γ 2∂ * f ∞ + f 2 .
2000 Mathematics Subject Classification. Primary 32A25, 32W05; Secondary 35B65. Partially supported by the Alexander von Humboldt Stiftung.
In this paper, we examine the operators in the integral representation, derive more detailed properties of such operators under differentiation, and use the properties to establish C k estimates. Our main theorem is Theorem 1.3. Let f ∈ L 2 0,q (D) ∩ Dom(∂) ∩ Dom(∂ * ), q ≥ 1, and α < 1/4. Then for N (k) large enough we have
γ N (k) f C k+α γ k+2∂ f C k + γ k+2∂ * f C k + f 2 .
We show we may take any N (k) > 3(n + 6) + 8k.
Our results are consistent with those obtained by Lieb and Range in the case of smooth strictly pseudoconvex domains [4], where we may take γ = 1. In, [4], an estimate as in Theorem 1.3 with γ = 1 and α < 1/2 was given.
In a separate paper we look establish C k estimates for f ∈ L 2 (D) ∩ Dom(∂), as the functions used in the construction of the integral kernels in the case q = 0 differ from those in the case q ≥ 1.
One of the difficulties in working on non-smooth domains is the problem of the choice of frame of vector fields with which to work. In the case of smooth domains a special boundary chart is used in which ω n = ∂r is part of an orthonormal frame of (1, 0)-forms. When ∂r is allowed to vanish, the frame needs to be modified. We get around this difficulty by defining a (1, 0)-form, ω n by ∂r = γω n . In the dual frame of vector fields we are then faced with factors of γ in the expressions of the vector fields with respect to local coordinates, and we deal with these terms by multiplying our vector fields by a factor of γ. This ensures that when vector fields are commuted, there are no error terms which blow up at the singularity.
We organize our paper as follows. In Section 2 we define the types of operators which make up the integral representation established in [1]. Section 3 contains the most essential properties used to obtain our results. In Section 3 we consider the properties of our integral operators under differentiation. Lastly, in Section 4 we apply the properties from Section 3 to obtain our C k estimates.
The author extends thanks to Ingo Lieb with whom he shared many fruitful discussions over the ideas presented here, and from whom he originally had the idea to extend results on smooth domains to Henkin-Leiterer domains.
Admissible operators
With local coordinates denoted by ζ 1 , . . . , ζ n , we define a Levi metric in a neighborhood of ∂D by
ds 2 = j,k ∂ 2 r ∂ζ j , ∂ζ k (ζ)dζ j dζ k .
A Levi metric on X is a Hermitian metric which is a Levi metric in a neighborhood of ∂D. We thus equip X with a Levi metric and we take ρ(x, y) to be a symmetric, smooth function on X × X which coincides with the geodesic distance in a neighborhood of the diagonal, Λ, and is positive outside of Λ.
For ease of notation, in what follows we will always work with local coordinates, ζ and z.
Since D is strictly pseudoconvex and r is a Morse function, we can take r ǫ = r +ǫ for epsilon small enough. Then r ǫ will be defining functions for smooth, strictly pseudoconvex D ǫ . For such r ǫ we have that all derivatives of r ǫ are indpendent of ǫ. In particular, γ ǫ (ζ) = γ(ζ) and ρ ǫ (ζ, z) = ρ(ζ, z).
Let F be the Levi polynomial for D ǫ :
F (ζ, z) = n j=1 ∂r ǫ ∂ζ j (ζ)(ζ j − z j ) − 1 2 n j,k=1 ∂ 2 r ǫ ∂ζ j ζ k (ζ j − z j )(ζ k − z k ).
We note that F (ζ, z) is independent of ǫ since derivatives of r ǫ are.
For ǫ small enough we can choose δ > 0 and ε > 0 and a patching function ϕ(ζ, z), independent of ǫ, on C n × C n such that
ϕ(ζ, z) = 1 for ρ 2 (ζ, z) ≤ ε 2 0 for ρ 2 (ζ, z) ≥ 3 4 ε, and defining S δ = {ζ : |r(ζ)| < δ}, D −δ = {ζ : r(ζ) < δ}, and φ ǫ (ζ, z) = ϕ(ζ, z)(F ǫ (ζ, z) − r ǫ (ζ)) + (1 − ϕ(ζ, z))ρ 2 (ζ, z),
we have the following
Lemma 2.1. On D ǫ × D ǫ S δ × D −δ , |φ ǫ | | ∂r ǫ (z), ζ − z | + ρ 2 (ζ, z),
where the constants in the inequalities are independent of ǫ.
We at times have to be precise and keep track of factors of γ which occur in our integral kernels. We shall write E j,k (ζ, z) for those double forms on open sets U ⊂ D × D such that E j,k is smooth on U and satisfies
(2.1) E j,k (ζ, z) ξ k (ζ)|ζ − z| j ,
where ξ k is a smooth function in D with the property
|γ α D α ξ k | γ k ,
for D α a differential operator of order α. We shall write E j for those double forms on open sets U ⊂ D × D such that E j is smooth on U , can be extended smoothly to D × D, and satisfies E j (ζ, z) |ζ − z| j . E * j,k will denote forms which can be written as E j,k (z, ζ).
For N ≥ 0, we let R N denote an N -fold product, or a sum of such products, of first derivatives of r(z), with the notation R 0 = 1.
Here
P ǫ (ζ, z) = ρ 2 (ζ, z) + r ǫ (ζ) γ(ζ) r ǫ (z) γ(z) . Definition 2.2. A double differential form A ǫ (ζ, z) on D ǫ × D ǫ is an admissible kernel, if it has the following properties: i) A ǫ is smooth on D ǫ × D ǫ − Λ ǫ ii) For each point (ζ 0 , ζ 0 ) ∈ Λ ǫ there is a neighborhood U × U of (ζ 0 , ζ 0 ) on which A ǫ or A ǫ has the representation (2.2) R N R * M E j,α E * k,β P −t0 ǫ φ t1 ǫ φ t2 ǫ φ * t3 ǫ φ * t4
ǫ r l ǫ r * m ǫ with N, M, α, β, j, k, t 0 , . . . , m integers and j, k, t 0 , l, m ≥ 0, −t = t 1 + · · · + t 4 ≤ 0, N, M ≥ 0, and N + α, M + β ≥ 0.
The above representation is of smooth type s for
s = 2n + j + min{2, t − l − m} − 2(t 0 + t − l − m).
We define the type of A ǫ (ζ, z) to be
τ = s − max{0, 2 − N − M − α − β}.
A ǫ has smooth type ≥ s if at each point (ζ 0 , ζ 0 ) there is a representation (2.2) of smooth type ≥ s. A ǫ has type ≥ τ if at each point (ζ 0 , ζ 0 ) there is a representation (2.2) of type ≥ τ . We shall also refer to the double type of an operator (τ, s) if the operator is of type τ and of smooth type s.
The definition of smooth type above is taken from [5]. Here and below (r ǫ (x)) * = r ǫ (y), the * having a similar meaning for other functions of one variable.
Let A ǫ j be kernels of type j. We denote by A j the pointwise limit as ǫ → 0 of A ǫ j and define the double type of A j to be the double type of the A ǫ j of which it is a limit. We also denote by A ǫ j to be operators with kernels of the form A ǫ j . A j will denote the operators with kernels A j . We use the notation A ǫ (j,k) (resp. A (j,k) ) to denote kernels of double type (j, k).
We let E i j−2n (ζ, z) be a kernel of the form
E i j−2n (ζ, z) = E m,0 (ζ, z) ρ 2k (ζ, z) j ≥ 1,
where m − 2k ≥ j − 2n. We denote by E j−2n the corresponding isotropic operator. From [1], we have
Theorem 2.3. For f ∈ L 2 (0,q) (D) ∩ Dom(∂) ∩ Dom(∂ * )
, there exist integral operators T q , S q , and P q such that
γ(z) 3 f (z) = γ * T q∂ γ 2 f + γ * S q∂ * γ 2 f + γ * P q γ 2 f .
T q , S q , and P q have the form
T q = E 1−2n + A 1 S q = E 1−2n + A 1 P q = 1 γ A ǫ (−1,1) + 1 γ * A ǫ (−1,1)
Estimates
We begin with estimates on the kernels of a certain type. In [1], we proved the Proposition 3.1. Let A j be an operator of type j. Then
A j : L p (D) → L s (D) 1 s > 1 p − j 2n + 2 .
We describe what we shall call tangential derivatives on the Henkin-Leiterer domain, D. A non-vanishing vector field, T , in R 2n will be called tangential if T r = 0 on r = 0. Near a boundary point, we choose a coordinate patch on which we have an orthogonal frame ω 1 , . . . , ω n of (1, 0)-forms with ∂r = γω n . Let L 1 , . . . , L n denote the dual frame. L 1 , . . . , L n−1 , L 1 , . . . , L n−1 , and Y = L n −L n are tangential vector fields. N = L n + L n is a normal vector field. We say a given vector field X is a smooth tangential vector field if it is a tangential field and if near each boundary point X is a combination of such vector fields L 1 , . . . , L n−1 , L 1 , . . . , L n−1 , Y , and rN with coefficients in C ∞ (D). We make the important remark here that in the coordinate patch of a critical point, the smooth tangential vector fields are not smooth combinations of derivatives with respect to the coordinate system described in Lemma 3.9. In fact, they are combinations of derivatives with respect to the coordinates of Lemma 3.9 with coefficients only in C 0 (D) due to factors of γ which occur in the denominators of such coefficients. In general a k th order derivative of such coefficients is in E 0,−k . Thus, when integrating by parts, special attention has to be paid to these non-smooth terms.
Definition 3.2. We say an operator with kernel, A, is of commutator type j if A is of type j, and if in the representation of A in (2.2) we have t 1 t 3 ≥ 0, t 2 t 4 ≥ 0,
and (t 1 + t 3 )(t 2 + t 4 ) ≤ 0. Definition 3.3. Let W be a smooth tangential vector field on D. We call W allowable if for all ζ ∈ ∂D W ζ ∈ T 1,0 ζ (∂D) ⊕ T 0,1 ζ (∂D).
The following theorem is obtained by a modification of Theorem 2.20 in [4] (see also [3]). The new details which come about from the fact that here we do not assume |∂r| = 0 require careful consideration and so we go through the calculations below.
Theorem 3.4. Let A 1 be an admissible operator of commutator type ≥ 1 and X a smooth tangential vector field. Then
γ * X z A 1 = −A 1X ζ γ + A (0) 1 + l ν=1 A (ν) 1 W ζ ν γ,
whereX is the adjoint of X, the W ν are allowable vector fields, and the A (ν) j are admissible operators of commutator type ≥ j.
Proof. We use a partition of unity and suppose X has arbitrarily small support on a coordinate patch near a boundary point in which we have an orthogonal frame ω 1 , . . . , ω n of (1, 0)-forms with ∂r = γω n , as described above with L 1 , . . . , L n comprising the dual frame. We have L 1 , . . . , L n−1 , L 1 , . . . , L n−1 , and Y = L n − L n as tangential vector fields, and N = L n + L n a normal vector field.
We have the decomposition of the tangential vector field X
X = n−1 j=0 a j L j + n−1 j=0 b j L j + aY + brN,
where the a j , b j , a, and b are smooth with compact support. We then prove the theorem for each term in the decomposition. Case 1). X = a j L j or b j L j , j ≤ n − 1, or aY . We write
γ * X z A 1 = −γX ζ A 1 + (γX ζ + γ * X z )A 1 .
Then an integration by parts gives
γ * X z A 1 f = −A 1 ( X ζ γf ) + (f, (γX ζ + γ * X z )A 1 ).
We now use the following relations
(γX ζ + γ * X z )E j,α = E j,α (3.1) (γX ζ + γ * X z )E * j,β = E * j,β (γX ζ + γ * X z )P = E 2,0 + rr * γγ * E 0,0 = E 0,0 P + E 2,0 (γX ζ + γ * X z )φ = E 1,1 + E 2,0 .
Any type 1 kernel
(3.2) A 1 (ζ, z) = R N R * M E j,α E * k,β P −t0 φ t1 φ t2 φ * t3 φ * t4 r l r * m
can be decomposed into terms
A 1 = A ′ 1 + A 2
where A ′ 1 is of pure type, meaning it has a representation as in (3.2) but with t 3 = t 4 = 0 and t 1 t 2 ≤ 0, [4]. From the relations (3.1) we have
(γX ζ + γ * X z )A 2 = γA 1 + A 2 .
In calculating (γX ζ + γ * X z )A ′ 1 , we find the term that is not immediately seen to be of type A 1 is that which results from the operator γX ζ + γ * X z falling on φ t1 , in which case we obtain the term of double type (0, 0)
B := R N R * M E j+1,α+1 E * k,β P −t0 φ t1−1 φ t2 r l r * m ,
where N + α ≥ 2, plus a term which is A 1 . We follow [3] to reduce to the case where B can be written as a sum of terms B σ such that B σ or B σ is of the form
γ 2 φ σ (φ + φ) τ1+τ2−σ R N R * M E j+1,α−1 E * k,β P −t0 r l r * m , where τ 1 + τ 2 ≤ −3 and τ 1 ≤ σ ≤ τ 1 + τ 2 or τ 2 ≤ σ ≤ τ 1 + τ 2 .
We fix a point z and choose local coordinates ζ such that
dζ j (z) = ω j (z).
Working in a neighborhood of a singularity in the boundary (where we can use a coordinate system as in (3.11) below), we see ∂ ∂ζn is a combination of derivatives with coefficients of the form ξ 0 (z), while L n is a combination of derivatives with coefficients of the form ξ 0 (ζ), where ξ 0 is defined in (2.1). We have Λ n − ∂ ∂zn is a sum of terms of the form
(ξ 0 (z) − ξ 0 (ζ))Λ ǫ = E 1,−1 Λ ǫ ,
where Λ is a first order differential operator, and the equality follows from
1 γ(ζ) − 1 γ(z) = γ(z) − γ(ζ) γ(ζ)γ(z) = 1 γ(z) γ 2 (z) − γ 2 (ζ) γ(ζ)(γ(ζ) + γ(z)) = 1 γ(z) ξ 1 (ζ)E 1 γ(ζ)(γ(ζ) + γ(z)) = 1 γ(z) E 1,0 (γ(ζ) + γ(z)) 1 γ(z) E 1,0 γ(z) = E 1,−2 .
Using these special coordinates, we note
Y φ = γ + E 1,0 + E 2,−1 Y φ = −γ + E 1,0 + E 2,−1 Y P = E 1,0 + E 0,0 γ (P + E 2,0 )
and write
B σ =γ 2 φ σ (φ + φ) τ1+τ2−σ R N R * M E j+1,α−1 E * k,β P −t0 r l r * m =γY φ σ+1 (φ + φ) τ1+τ2−σ R N R * M E j+1,α−1 E * k,β P −t0 r l r * m + γφ σ (φ + φ) τ1+τ2−σ R N R * M E j+2,α−1 E * k,β P −t0 r l r * m + γφ σ (φ + φ) τ1+τ2−σ R N R * M E j+3,α−2 E * k,β P −t0 r l r * m + γφ σ+1 (φ + φ) τ1+τ2−σ−1 R N R * M E j+2,α−1 E * k,β P −t0 r l r * m + γφ σ+1 (φ + φ) τ1+τ2−σ−1 R N R * M E j+3,α−2 E * k,β P −t0 r l r * m + γφ σ+1 (φ + φ) τ1+τ2−σ R N −1 R * M E j+1,α−1 E * k,β P −t0 r l r * m + γφ σ+1 (φ + φ) τ1+τ2−σ R N R * M E j,α−1 E * k,β P −t0 r l r * m + γφ σ+1 (φ + φ) τ1+τ2−σ R N R * M E j+2,α−1 E * k,β P −t0−1 r l r * m + φ σ+1 (φ + φ) τ1+τ2−σ R N R * M E j+1,α−1 E * k,β P −t0 r l r * m + φ σ+1 (φ + φ) τ1+τ2−σ R N R * M E j+3,α−1 E * k,β P −t0−1 r l r * m . Thus B σ = γY A (1,2) + A ′ 1 .
By the strict pseudoconvexity of D there exists allowable vector fields W 1 , W 2 , and W 3 , and a function ϕ, smooth on the interior of D which satisfies
Φ k ϕ = E 0,1−k ,
where Φ is a first order differential operator, such that Y can be written Y = ϕ[W_1, W_2] + W_3. Thus
γY A^{(1,2)} = γϕ[W_1, W_2]A^{(1,2)} + γW_3 A^{(1,2)} = γ[W_1, W_2]ϕA^{(1,2)} + A′_1,
with A′_1 of commutator type ≥ 1. An integration by parts gives
(f, γ[W_1, W_2](ϕA^{(1,2)})) = (W̄_1 γf, W_2(ϕA^{(1,2)})) − (W̄_2 γf, W_1(ϕA^{(1,2)})).
W_1 and W_2 are allowable vector fields, while W_2(ϕA^{(1,2)}) and W_1(ϕA^{(1,2)}) are of the form A′_1 where A′_1 is of commutator type, proving the theorem for Case 1.

Case 2). X = E_0 rN.
We use
rγN ζ + r * γ * N z E j,α = E j,α (3.3) rγN ζ + r * γ * N z P = E 2,0 + r γ r * γ * E 0,0 = E 2,0 + P E 0,0 rγN ζ + r * γ * N z φ = rE 0,0 + r * E 0,0 . Thus γ * XA 1 f =(E 0 r * f, γ * N z A 1 ) =(−E 0 rf, γN ζ A 1 ) + (f, E 0 rγN ζ + r * γ * N z A 1 ) =(− N ζ (E 0 rγf ), A 1 ) + (f, E 0 rγN ζ + r * γ * N z A 1 ). We have N ζ (E 0 rγf ) = E 0,0 f + E 0 r N ζ γf
and E 0 r N ζ is an allowable vector field. The relations in (3.3) show that
rγN ζ + r * γ * N z A 1
is of commutator type ≥ 1. Case 2 therefore follows.
Below we use a criterion for Hölder continuity given by Schmalz (see Lemma 4.1 in [6]) which states
Lemma 3.5 (Schmalz). Let D ⊆ R m , m ≥ 1 be an open set and let B(D) denote the space of bounded functions on D. Suppose r is a C 2 function on R m , m ≥ 1, such that D := {r < 0} ⊆ R m .
Then there exists a constant C < ∞ such that the following holds: If a function u ∈ B(D) satisfies, for some 0 < α ≤ 1/2 and for all z, w ∈ D, the estimate
|u(z) − u(w)| ≤ |z − w|^α + max_{y=z,w} |∇r(y)| |z − w|^{1/2+α} / |r(y)|^{1/2},
then |u(z) − u(w)| ≤ C|z − w|^α for all z, w ∈ D.
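Let us indicate, for orientation only, how Lemma 3.5 will be used. Since ∂r = γω^n, the weight γ(y) is comparable to |∇r(y)| for the normalized frames considered here, so whenever the arguments below produce a bound of the form
|u(z) − u(w)| ≲ |z − w|^α + γ(w) |z − w|^{1/2+α} / |r(w)|^{1/2}
(or the analogous bound with w replaced by z), the hypothesis of Lemma 3.5 is verified after replacing u by a suitable constant multiple, and hence u ∈ Λ_α. Estimates of exactly this shape occur repeatedly in the proof of Theorem 3.10 i) below.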
We will also refer to a lemma of Schmalz (Lemma 3.2 in [6]) which provides a useful coordinate system in which to prove estimates.
Lemma 3.6. Define x j by ζ j = x j +ix j+n for 1 ≤ j ≤ n. Let E δ (z) := {ζ ∈ D : |ζ− z| < δγ(z)} for δ > 0.
Then there is a constant c and numbers l, m ∈ {1, . . . , 2n} such that for all z ∈ D,
{−r(ζ), Im φ(·, z), x_1, …, x̂_l, …, x̂_m, …, x_{2n}},
where x_l and x_m are omitted, forms a coordinate system in E_c(z). We have the estimate
dV ≲ γ(z)^{−2} dr(ζ) ∧ d Im φ(·, z) ∧ dx_1 ∧ … ∧ dx_{2n} (with dx_l and dx_m omitted) on E_c(z),
where dV is the Euclidean volume form on R 2n .
We define the function spaces with which we will be working.
Definition 3.7. Let 0 ≤ β and 0 ≤ δ. We define
‖f‖_{L^{∞,β,δ}(D)} = sup_{ζ∈D} |f(ζ)| γ^β(ζ) |r(ζ)|^δ.

Definition 3.8. For 0 < α < 1 we set
Λ_α(D) = {f ∈ L^∞(D) : ‖f‖_{Λ_α} := ‖f‖_{L^∞} + sup_{ζ≠z} |f(ζ) − f(z)| / |ζ − z|^α < ∞}.
We also define the spaces Λ_{α,β} by
Λ_{α,β} := {f : ‖f‖_{Λ_{α,β}} = ‖γ^β f‖_{Λ_α} < ∞}.
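Equivalently (this is only a restatement of the definitions, recorded for later use): f ∈ L^{∞,β,δ}(D) precisely when |f(ζ)| ≲ γ(ζ)^{−β} |r(ζ)|^{−δ} on D, so that larger β and δ permit stronger blow-up at the critical points of r and at the boundary, respectively; and f ∈ Λ_{α,β} means that γ^β f is bounded and Hölder continuous of order α, while f itself is allowed to blow up like γ^{−β}.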
From [1], we have the
Lemma 3.9. r_ε/γ ∈ C^1(D_ε)
with C 1 -estimates independent of ǫ.
For our C k estimates later, we will need the following properties.
Theorem 3.10. Let T be a smooth first order tangential differential operator on D. For A an operator of type 1 we have
i) A : L^{∞,2+ǫ,0}(D) → Λ_{α,2−ǫ′}(D), 0 < ǫ, ǫ′, α + ǫ + ǫ′ < 1/4;
ii) γ* T A : L^{∞,2+ǫ,0}(D) → L^{∞,ǫ′,δ}(D), 1/2 < δ < 1, ǫ < ǫ′ < 1;
iii) A : L^{∞,ǫ,δ}(D) → L^{∞,ǫ′,0}(D), ǫ < ǫ′, δ < 1/2 + (ǫ′ − ǫ)/2.
Proof. i). We will prove i) in the cases that A, the kernel of A, is of double type (1, 1) satisfying the inequality
|A| ≲ γ(ζ)^2 P^{−(n−1/2−µ)} |φ|^{−(µ+1)}, µ ≥ 1,
and A is of double type (1, 2) satisfying
|A| ≲ γ(ζ) P^{−(n−1−µ)} |φ|^{−(µ+1)}, µ ≥ 1,
all other cases being handled by the same methods. Case a). A, the kernel of A, is of double type (1, 1).
We estimate (3.4) D 1 γ ǫ (ζ) γ(z) 2−ǫ ′ (φ(ζ, z)) µ+1 P (ζ, z) n−1/2−µ − γ(w) 2−ǫ ′ (φ(ζ, w)) µ+1 P (ζ, w) n−1/2−µ dV (ζ).
Then the integral in (3.4) is bounded by
D 1 γ ǫ (ζ) γ(z) 2−ǫ ′ (φ(ζ, w)) µ+1 − γ(w) 2−ǫ ′ (φ(ζ, z)) µ+1 (φ(ζ, w)) µ+1 (φ(ζ, z)) µ+1 P (ζ, z) n−1/2−µ dV (ζ) + D γ(w) 2−ǫ ′ γ ǫ (ζ) P (ζ, z) n−1/2−µ − P (ζ, w) n−1/2−µ (φ(ζ, w)) µ+1 P (ζ, z) n−1/2−µ P (ζ, w) n−1/2−µ dV (ζ) = I + II. In I we use (φ(ζ, w)) µ+1 − (φ(ζ, z)) µ+1 = µ l=0 (φ(ζ, w)) µ−l (φ(ζ, z)) l (φ(ζ, w) − φ(ζ, z)) and φ(ζ, w) − φ(ζ, z) = O γ(ζ) + |ζ − z| |z − w|. Therefore I µ l=0 D γ(z) 2−ǫ ′ γ ǫ (ζ) (γ(ζ) + |ζ − z|)|z − w| |φ(ζ, z)| µ+1−l |φ(ζ, w)| l+1 |ζ − z| 2n−1−2µ dV (ζ) + D 1 γ ǫ (ζ) |γ(z) 2−ǫ ′ − γ(w) 2−ǫ ′ | |φ(ζ, w)| µ+1 |ζ − z| 2n−1−2µ dV (ζ) µ l=0 D γ(z) 3−ǫ ′ γ ǫ (ζ) |z − w| |φ(ζ, z)| µ+1−l |φ(ζ, w)| l+1 |ζ − z| 2n−1−2µ dV (ζ) + µ l=0 D γ(z) 2−ǫ ′ γ ǫ (ζ) |z − w| |φ(ζ, z)| µ+1−l |φ(ζ, w)| l+1 |ζ − z| 2n−2−2µ dV (ζ) + D 1 γ ǫ (ζ) |γ(z) 2−ǫ ′ − γ(w) 2−ǫ ′ | |φ(ζ, w)| µ+1 |ζ − z| 2n−1−2µ dV (ζ) = I a + I b + I c
For the integral I a we break the region of integration into two parts: {|ζ − w| ≤ |ζ − z|} and {|ζ − z| ≤ |ζ − w|}, and by symmetry we need only consider the region {|ζ − z| ≤ |ζ − w|}.
We first consider the region E c , where c is chosen as in Lemma 3.5. Without loss of generality we can choose c sufficiently small so that γ(z) γ(ζ) holds in E c (z). We thus estimate
(3.5) D∩Ec |ζ−z|≤|ζ−w| γ(z) 3−ǫ ′ −ǫ |z − w| |φ(ζ, z)| µ+1−l |φ(ζ, w)| l+1 |ζ − z| 2n−1−2µ dV (ζ).
We use γ(z) γ(w) + |z − w| and
|z − w| β |ζ − z| β + |ζ − w| β (3.6)
|ζ − w| β for β > 0 to bound the integral in (3.5) by a constant times
|z − w| 1/2+α D∩Ec |ζ−z|≤|ζ−w| γ(z) 2 γ(w)|ζ − w| 1/2−α |φ(ζ, z)| µ+1−l |φ(ζ, w)| l+1 |ζ − z| 2n−1−2µ+ǫ+ǫ ′ dV (ζ) +|z − w| α D∩Ec |ζ−z|≤|ζ−w| γ(z) 2 |ζ − w| 2−α |φ(ζ, z)| µ+1−l |φ(ζ, w)| l+1 |ζ − z| 2n−1−2µ+ǫ+ǫ ′ dV (ζ). (3.7)
We use a coordinate system s 1 , s 2 , t 1 , . . . , t 2n−2 as given by Lemma 3.6 with s 1 = −r(ζ) and s 2 = Imφ, and the estimate (3.6) on the volume element
(3.8) dV(ζ) ≲ t^{2n−3} γ(z)^{−2} |ds_1 ds_2 dt|
where t = t 2 1 + · · · + t 2 2n−2 , and the second line follows from γ(ζ) γ(z) on E c (z). We have the estimates
φ(ζ, z) s 1 + |s 2 | + t 2 φ(ζ, w) −r(w) + s 1 + t 2 .
After redefining s 2 to be positive, we bound the first integral of (3.7) by
|z − w| 1/2+α |r(w)| 1/2 γ(w)× (3.9) V |ζ − w| 1/2−α (s 1 + s 2 + t 2 ) µ+1−l (s 1 + |ζ − w| 2 ) l+1/2 t 2n−1−2µ+ǫ+ǫ ′ t 2n−3 ds 1 ds 2 dt |z − w| 1/2+α |r(w)| 1/2 γ(w) V t 2µ−2−ǫ−ǫ ′ (s 1 + s 2 + t 2 ) µ+1−l (s 1 + t 2 ) l+1/4+α/2 ds 1 ds 2 dt |z − w| 1/2+α |r(w)| 1/2 γ(w) V 1 s 7/8 1 (s 1 + s 2 )t 3/4+α+ǫ+ǫ ′ ds 1 ds 2 dt |z − w| 1/2+α |r(w)| 1/2 γ(w) V 1 s 15/16 1 s 15/16 2 t 3/4+α+ǫ+ǫ ′ ds 1 ds 2 dt |z − w| 1/2+α |r(w)| 1/2 γ(w), where V is a bounded subset of R 3 .
The second integral of (3.7) can be bounded by a constant times
|z − w| α V |ζ − w| 2−α (s 1 + s 2 + t 2 ) µ+1−l (s 1 + |ζ − w| 2 ) l+1 t 2n−1−2µ+ǫ+ǫ ′ t 2n−3 ds 1 ds 2 dt |z − w| α V t 2µ−2−ǫ−ǫ ′ (s 1 + s 2 + t 2 ) µ+1−l (s 1 + t 2 ) l+α/2 ds 1 ds 2 dt |z − w| α ,
where again V is a bounded subset of R 3 . The last line follows by the estimates in (3.9).
In estimating the integrals of I a over the region D \ E c , we write
D\Ec |ζ−z|≤|ζ−w| 1 γ ǫ (ζ) |z − w| |φ(ζ, z)| µ+1−l |φ(ζ, w)| l+1 |ζ − z| 2n−4−2µ+ǫ ′ dV (ζ) |z − w| α D\Ec |ζ−z|≤|ζ−w| 1 γ ǫ (ζ) |ζ − w| 1−α |φ(ζ, z)| µ+1−l |φ(ζ, w)| l+1 |ζ − z| 2n−4−2µ+ǫ ′ dV (ζ) |z − w| α × D\Ec |ζ−z|≤|ζ−w| 1 γ ǫ (ζ) 1 |φ(ζ, z)| µ+1−l |φ(ζ, w)| l+1/2+α/2 |ζ − z| 2n−4−2µ+ǫ ′ dV (ζ) |z − w| α D\Ec 1 γ ǫ (ζ) 1 |ζ − z| 2n−1+α+ǫ ′ dV (ζ).
(3.10)
We denote the critical points of r by p 1 , . . . , p k , and take ε small enough so that in each
U 2ε (p j ) = {ζ : D ∩ |ζ − p j | < 2ε}, for j = 1, . . . , k, there are coordinates u j1 , . . . , u jm , v jm+1 , . . . , v j2n such that (3.11) − r(ζ) = u 2 j1 + · · · + u 2 jm − v 2 jm+1 − · · · − v 2 j2n , with u jα (p j ) = v j β (p j ) = 0 for all 1 ≤ α ≤ m and m + 1 ≤ β ≤ 2n, from the Morse Lemma. Let U ε = k j=1 U ε (p j ).
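Let us note explicitly a comparison that will be used repeatedly, in both directions, below: by (3.11) the gradient of r in the Morse chart has length 2(u_{j1}^2 + ··· + u_{jm}^2 + v_{jm+1}^2 + ··· + v_{j2n}^2)^{1/2}, and since the chart is a diffeomorphism with differential bounded above and below, we have
γ(ζ) ≃ (u_{j1}^2 + ··· + u_{jm}^2 + v_{jm+1}^2 + ··· + v_{j2n}^2)^{1/2} on U_{2ε}(p_j),
with constants depending only on the charts; this comparison is implicit in the estimates γ(z) ≳ |w(z)| and |w(ζ)| ≲ γ(ζ) appearing in the subcases that follow.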
We break the problem of estimating (3.10) into subcases depending on whether z ∈ U ε .
Suppose z ∈ U ε (p j ). Define w 1 , . . . , w 2n by
(3.12) w α = u jα for 1 ≤ α ≤ m v jα for m + 1 ≤ α ≤ 2n.
Let x 1 , . . . , x 2n be defined by ζ α = x α + ix n+α . From the Morse Lemma, the Jacobian of the transformation from coordinates x 1 , . . . , x 2n to w 1 , . . . , w 2n is bounded from below and above and thus we have
|ζ − z| ≃ |w(ζ) − w(z)| for ζ, z ∈ U_{2ε}(p_j). From (3.11) we have γ(z) ≳ |w(z)|, and thus, since |ζ − z| ≳ γ(z) outside E_c(z),
|w(ζ) − w(z)| ≃ |ζ − z| ≳ γ(z) ≳ |w(z)| ≥ |w(ζ)| − |w(ζ) − w(z)|,
and we obtain
|w(ζ)| ≲ |w(ζ) − w(z)| ≃ |ζ − z|.
Using |w(ζ)| ≲ γ(ζ), we estimate, using the coordinates above
|z − w| α Uε\Ec 1 γ ǫ (ζ) 1 |ζ − z| 2n−1+α+ǫ ′ dV (ζ) |z − w| α V u m−1 v 2n−m−1 (u + v) 2n−1+α+ǫ ′ +ǫ |z − w| α , where we use u = u 2 j1 + · · · + u 2 jm , v = v 2 jm+1 + · · · + v 2 j2n
, and V is a bounded set.
In integrating over the region D \ U ε we have
|z − w| α (D\Uε)\Ec 1 γ ǫ (ζ) 1 |ζ − z| 2n−1+α+ǫ ′ dV (ζ) |z − w| α (D\Uε)\Ec 1 γ ǫ (ζ) dV (ζ) |z − w| α ,
which follows by using the coordinates w 1 , . . . , w 2n above. Subcase b). Suppose z / ∈ U ε . We have |ζ − z| γ(z), but γ(z) is bounded from below, since z / ∈ U ε . We therefore have to estimate
D 1 γ ǫ (ζ) dV (ζ),
which is easily done by working with the coordinates w 1 , . . . , w 2n above. The region in which |ζ − w| ≤ |ζ − z| is handled in the same manner, and thus we are finished bounding I a .
We now estimate I b , and again, we only consider the region |ζ − z| ≤ |ζ − w|. We first estimate the integrals of I b over the region E c (z), where c is chosen as in Lemma 3.6, and sufficiently small so that |ζ − z| γ(ζ). As we chose coordinates for the integrals in I a , we choose a coordinate system in which s 1 = −r(ζ) and s 2 = Imφ and we use the estimate on the volume element given by (3.8). We thus write
D∩Ec |ζ−z|≤|ζ−w| γ(z) 2 |z − w| |φ(ζ, z)| µ+1−l |φ(ζ, w)| l+1 |ζ − z| 2n−2−2µ+ǫ+ǫ ′ dV (ζ) (3.13) |z − w| α × D∩Ec |ζ−z|≤|ζ−w| γ(z) 2 1 |φ(ζ, z)| µ+1−l |φ(ζ, w)| l+1/2+α/2 |ζ − z| 2n−2−2µ+ǫ+ǫ ′ dV (ζ) |z − w| α V t 2n−3 (s 1 + s 2 + t 2 ) µ+1−l (s 1 + t 2 ) l+1/2+α/2 t 2n−2−2µ+ǫ+ǫ ′ ds 1 ds 2 dt |z − w| α M 0 N 0 t 2µ−1−ǫ−ǫ ′ (s 1 + t 2 ) µ−l (s 1 + t 2 ) l+1/2+α/2 ds 1 dt |z − w| α M 0 N 0 1 s 7/8 1 t 1/4+α+ǫ+ǫ ′ dsdt |z − w| α ,
where we have redefined the coordinate s 2 to be positive, V is a bounded subset of R 3 , and M, N > 0 are constants.
The integrals of I b over the region D \ E c are estimated by (3.10) above.
For the integral I c we use
|γ(w) 2−ǫ ′ − γ(z) 2−ǫ ′ | |z − w| γ(w) 1−ǫ ′ + γ(z) 1−ǫ ′ and estimate (3.14) D |ζ−z|≤|ζ−w| 1 γ ǫ (ζ) |z − w| γ(w) 1−ǫ ′ + γ(z) 1−ǫ ′ |φ(ζ, w)| µ+1 |ζ − z| 2n−1−2µ dV (ζ).
Let us first consider the case γ(w) ≤ γ(z) and integrate (3.14) over the region E c . We use a coordinate system s, t 1 , . . . , t 2n−1 , with s = −r and the estimate
dV (ζ) t 2n−2 γ(z) dsdt
for t = t 2 1 + · · · + t 2 2n−1 . We thus bound (3.14) by
D∩Ec |ζ−z|≤|ζ−w| |z − w|γ(z) 1−ǫ ′ |φ(ζ, w)| µ+1 |ζ − z| 2n−1−2µ+ǫ dV (ζ) (3.15) |z − w| α D∩Ec |ζ−z|≤|ζ−w| γ(z) |φ(ζ, w)| µ+1/2+α/2 |ζ − z| 2n−1−2µ+ǫ+ǫ ′ dV (ζ) |z − w| α V t 2n−2 (s + t 2 ) µ+1/2+α/2 t 2n−1−2µ+ǫ+ǫ ′ dsdt |z − w| α V 1 s 3/4 t 1/2+ǫ+ǫ ′ +α/2 dsdt |z − w| α , where V is here a bounded region of R 2 .
Over the complement of E c , (3.14) is bounded by
|z − w| α D |ζ−z|≤|ζ−w| 1 γ ǫ (ζ) 1 |φ(ζ, w)| µ+1/2+α/2 |ζ − z| 2n−2−2µ+ǫ ′ dV (ζ) |z − w| α D 1 γ ǫ (ζ) 1 |ζ − z| 2n−1+ǫ ′ +α dV (ζ) |z − w| α ,
which follows from the estimates of (3.10) above.
For the case γ(z) ≤ γ(w) we estimate (3.14) over the region E c using coordinates as above by
D∩Ec |ζ−z|≤|ζ−w| 1 γ ǫ (ζ) |z − w|γ(w) 1−ǫ ′ |φ(ζ, w)| µ+1 |ζ − z| 2n−1−2µ dV (ζ) (3.16) |z − w| 1/2+α/2 |r(w)| 1/2 γ(w)× D∩Ec |ζ−z|≤|ζ−w| 1 γ ǫ (ζ) 1 |φ(ζ, w)| µ+1/4+α/2 |ζ − z| 2n−1−2µ+ǫ ′ dV (ζ) |z − w| 1/2+α/2 |r(w)| 1/2 γ(w) V t 2n−2 (s + t 2 ) µ+1/4+α/2 t 2n−2µ+ǫ+ǫ ′ dsdt |z − w| 1/2+α/2 |r(w)| 1/2 γ(w),
where the last line follows as above. While over the complement of E c we use γ(w) |ζ − w| to bound (3.14) by
|z − w| α D |ζ−z|≤|ζ−w| 1 γ ǫ (ζ) 1 |φ(ζ, w)| µ+ǫ ′ /2+α/2 |ζ − z| 2n−1−2µ dV (ζ) |z − w| α D 1 γ ǫ (ζ) 1 |ζ − z| 2n−1+ǫ ′ +α dV (ζ) |z − w| α .
We are now done with integral I.
For integral II above we again break the integral into regions |ζ − z| ≤ |ζ − w| and |ζ − w| ≤ |ζ − z|, and we only consider the region |ζ − z| ≤ |ζ − w|, the other case being handled similarly.
We write
P (ζ, z) 1/2 2n−1−2µ − P (ζ, w) 1/2 2n−1−2µ = 2n−2µ−2 l=0 P (ζ, z) 1/2 2n−2−2µ−l P (ζ, w) 1/2 l P (ζ, z) 1/2 − P (ζ, w) 1/2 , and use P (ζ, z) 1/2 − P (ζ, w) 1/2 = |P (ζ, z) − P (ζ, w)| P (ζ, z) 1/2 + P (ζ, w) 1/2 |ζ − z| + |r(ζ)| γ(ζ) |ζ − z| |z − w| |ζ − w| + |r(w)| γ(w) |ζ − z| |z − w|,
which follows from Lemma 3.9.
We thus estimate
D |ζ−z|≤|ζ−w| γ(w) 2−ǫ ′ γ ǫ (ζ) P (ζ, z) n−1/2−µ − P (ζ, w) n−1/2−µ (φ(ζ, w)) µ+1 P (ζ, z) n−1/2−µ P (ζ, w) n−1/2−µ dV (ζ) 2n−2µ−2 l=0 D |ζ−z|≤|ζ−w| γ(w) 2−ǫ ′ γ ǫ (ζ) |z − w| |ζ − z| + |r(w)| γ(w) dV (ζ) |φ(ζ, w)| µ+1 P (ζ, z) 1/2 l+1 P (ζ, w) 1/2 2n−1−2µ−l |ζ − z| D |ζ−z|≤|ζ−w| γ(w) 2−ǫ ′ γ ǫ (ζ) |z − w| |φ(ζ, w)| µ+1 |ζ − z| 2n−2µ dV (ζ) + D |ζ−z|≤|ζ−w| γ(w) 1−ǫ ′ γ ǫ (ζ) |r(w)||z − w| |φ(ζ, w)| µ+1 |ζ − z| 2n+1−2µ dV (ζ) =II a + II b .
For II a , we break the integral into the regions E c (z) and its complement. We first consider
D\Ec |ζ−z|≤|ζ−w| γ(w) 2−ǫ ′ γ ǫ (ζ) |z − w| |φ(ζ, w)| µ+1 |ζ − z| 2n−2µ dV (ζ) (3.17) |z − w| α D\Ec |ζ−z|≤|ζ−w| 1 γ ǫ (ζ) 1 |φ(ζ, w)| µ−1/2+α/2+ǫ ′ /2 |ζ − z| 2n−2µ dV (ζ) |z − w| α D 1 γ ǫ (ζ) 1 |ζ − z| 2n−1+α+ǫ ′ dV (ζ) |z − w| α ,
where we use γ(w) |ζ − w| and the estimates for (3.10).
We then bound the integral II a over the region E c (z) by considering the different cases γ(w) ≤ γ(z) and γ(z) ≤ γ(w). In the case γ(w) ≤ γ(z), we use a coordinate system, s, t 1 , . . . , t 2n−1 , in which s = −r(ζ), and using the estimate
(3.18) dV (ζ) t 2n−2 γ(z) dsdt, we have D∩Ec |ζ−z|≤|ζ−w| γ(w) 2−ǫ ′ γ ǫ (ζ) |z − w| |φ(ζ, w)| µ+1 |ζ − z| 2n−2µ dV (ζ) (3.19) |z − w| 1/2+α |r(w)| 1/2 γ(w) D∩Ec |ζ−z|≤|ζ−w| γ(z) 1−ǫ ′ dV (ζ) |φ(ζ, w)| µ+1/4+α/2 |ζ − z| 2n−2µ+ǫ |z − w| 1/2+α |r(w)| 1/2 γ(w) V t 2µ−2−ǫ−ǫ ′ (s + t 2 ) µ+1/4+α/2 dsdt |z − w| 1/2+α |r(w)| 1/2 γ(w) V 1 s 7/8 t 3/4+α+ǫ+ǫ ′ dsdt |z − w| 1/2+α
|r(w)| 1/2 γ(w).
In the case γ(z) ≤ γ(w), we estimate as above
D∩Ec |ζ−z|≤|ζ−w| γ(w) 2−ǫ ′ γ ǫ (ζ) |z − w| |φ(ζ, w)| µ+1 |ζ − z| 2n−2µ dV (ζ) |z − w| 1/2+α |r(w)| 1/2 γ(w) D∩Ec |ζ−z|≤|ζ−w| γ(w) dV (ζ) |φ(ζ, w)| µ+1/4+α/2 |ζ − z| 2n−2µ+ǫ+ǫ ′ |z − w| 1/2+α |r(w)| 1/2 γ(w) D∩Ec |ζ−z|≤|ζ−w| (γ(z) + |ζ − w|)dV (ζ) |φ(ζ, w)| µ+1/4+α/2 |ζ − z| 2n−2µ+ǫ+ǫ ′ .
The integral involving γ(z) is estimated exactly as above. We thus have to deal with
|z − w| 1/2+α |r(w)| 1/2 γ(w) D∩Ec |ζ−z|≤|ζ−w| |ζ − w|dV (ζ) |φ(ζ, w)| µ+1/4+α/2 |ζ − z| 2n−2µ+ǫ+ǫ ′ ,
which we estimate using the coordinates s, t 1 , . . . , t 2n−1 above by
|z − w| 1/2+α |r(w)| 1/2 γ(w) D∩Ec |ζ−z|≤|ζ−w| |ζ − w|dV (ζ) |φ(ζ, w)| µ+1/4+α/2 |ζ − z| 2n−2µ+ǫ+ǫ ′ |z − w| 1/2+α |r(w)| 1/2 γ(w) V t 2n−2 dsdt (s + t 2 ) µ−1/4+α/2 (s + t) 2n−2µ+1+ǫ+ǫ ′ |z − w| 1/2+α |r(w)| 1/2 γ(w) V 1 s 3/4+α/2+ǫ+ǫ ′ +δ t 1−δ dsdt |z − w| 1/2+α |r(w)| 1/2 γ(w),
where 0 < δ < 1/4 − (α/2 + ǫ + ǫ ′ ).
For II b we first estimate
D\Ec |ζ−z|≤|ζ−w| γ(w) 1−ǫ ′ γ ǫ (ζ) |r(ζ)||z − w| |φ(ζ, w)| µ+1 |ζ − z| 2n+1−2µ dV (ζ) |z − w| α D\Ec |ζ−z|≤|ζ−w| 1 γ ǫ (ζ) |ζ − w| 2−α−ǫ ′ |φ(ζ, w)| µ |ζ − z| 2n+1−2µ dV (ζ) |z − w| α D 1 γ ǫ (ζ) 1 |ζ − z| 2n−1+α+ǫ ′ dV (ζ) |z − w| α ,
where c is chosen as in Lemma 3.6 and we use γ(w) |ζ − w| on D \ E c (z).
We now finish the estimates for II b . We have
D∩Ec |ζ−z|≤|ζ−w| γ(w) 1−ǫ ′ γ ǫ (ζ) |r(ζ)||z − w| |φ(ζ, w)| µ+1 |ζ − z| 2n+1−2µ dV (ζ) |z − w| α D∩Ec |ζ−z|≤|ζ−w| γ(w) 1−ǫ ′ 1 |φ(ζ, w)| µ−1/2+α/2 |ζ − z| 2n+1−2µ+ǫ dV (ζ).
(3.20)
We again consider the different cases γ(w) ≤ γ(z) and γ(z) ≤ γ(w) separately. With γ(w) ≤ γ(z), we use coordinates s, t 1 , . . . , t 2n−1 as above with the volume estimate (3.18) to estimate (3.20) by
|z − w| α V t 2n−2 (s + t 2 ) µ−1/2+α/2 (s + t) 2n+1−2µ+ǫ+ǫ ′ dsdt |z − w| α V 1 s 1/2+α/2+ǫ+ǫ ′ +δ t 1−δ dsdt |z − w| α ,
where 0 < δ < 1/2 − (α/2 + ǫ + ǫ ′ ), and V again denotes a bounded subset of R 2 .
In the case γ(z) ≤ γ(w), we write γ(w) γ(z) + |ζ − w|, and estimate (3.20) by
|z − w| α D∩Ec |ζ−z|≤|ζ−w| γ(z) + |ζ − w| |φ(ζ, w)| µ−1/2+α/2 |ζ − z| 2n+1−2µ+ǫ+ǫ ′ dV (ζ).
The integral involving γ(z) is handled exactly as above, so we estimate
|z − w| α D∩Ec |ζ−z|≤|ζ−w| |ζ − w| |φ(ζ, w)| µ−1/2+α/2 |ζ − z| 2n+1−2µ+ǫ+ǫ ′ dV (ζ) |z − w| α D∩Ec |ζ−z|≤|ζ−w| 1 |φ(ζ, w)| µ−1+α/2 |ζ − z| 2n+1−2µ+ǫ+ǫ ′ dV (ζ).
The case of µ = 1 is trivial so we assume µ ≥ 2 and using the coordinates s, t 1 , . . . , t 2n−1 , we estimate
|z − w| α V t 2n−2 (s + t 2 ) µ−1+α/2 (s + t) 2n+2−2µ+ǫ+ǫ ′ dsdt |z − w| α V 1 s 3/4+α/2+ǫ+ǫ ′ t 1/2 dsdt |z − w| α .
Case b). A is of double type (1, 2).
Following the arguments above we see we need to estimate
D 1 γ(ζ) 1+ǫ γ(z) 2−ǫ ′ (φ(ζ, w)) µ+1 − γ(w) 2−ǫ ′ (φ(ζ, z)) µ+1 (φ(ζ, w)) µ+1 (φ(ζ, z)) µ+1 P (ζ, z) n−1−µ dV (ζ) + D γ(w) 2−ǫ ′ γ(ζ) 1+ǫ P (ζ, z) n−1−µ − P (ζ, w) n−1−µ (φ(ζ, w)) µ+1 P (ζ, z) n−1−µ P (ζ, w) n−1−µ dV (ζ) = III + IV.
Following the calculations for integral I in case a) we estimate integral III by the integrals
µ l=0 D γ(z) 2−ǫ ′ γ(ζ) ǫ |z − w| |φ(ζ, z)| µ+1−l |φ(ζ, w)| l+1 |ζ − z| 2n−2−2µ dV (ζ) + µ l=0 D γ(z) 2−ǫ ′ γ(ζ) 1+ǫ |z − w| |φ(ζ, z)| µ+1−l |φ(ζ, w)| l+1 |ζ − z| 2n−3−2µ dV (ζ) + D 1 γ(ζ) 1+ǫ |γ(z) 2−ǫ ′ − γ(w) 2−ǫ ′ | |φ(ζ, w)| µ+1 |ζ − z| 2n−2−2µ dV (ζ) = III a + III b + III c .
Estimates for the integral III a are given by I b in case a).
For the integrals of III b , we consider separately the regions E c (z) and its complement. We also only consider the case |ζ − z| ≤ |ζ − w|.
In the region D ∩ E c (z), we use a coordinate system in which s = −r(ζ) is a coordinate, and we use the estimate on the volume element in E c (z) given by (3.18). We can also assume that c is sufficiently small to guarantee that |ζ − z| γ(ζ) in E c .
The integrals
D∩Ec |ζ−z|≤|ζ−w| γ(z) 2−ǫ ′ γ(ζ) 1+ǫ |z − w| |φ(ζ, z)| µ+1−l |φ(ζ, w)| l+1 |ζ − z| 2n−3−2µ dV (ζ)
can thus be bounded by
|z − w| 1/2+α |r(z)| 1/2 γ(z)× V |ζ − w| 1/2−α (s + |ζ − z| 2 ) µ+1/2−l (s + |ζ − w| 2 ) l+1 |ζ − z| 2n−2−2µ+ǫ+ǫ ′ t 2n−2 dsdt |z − w| 1/2+α |r(z)| 1/2 γ(z) V t 2n−2 (s + |ζ − z| 2 ) µ+5/4+α/2 |ζ − z| 2n−2−2µ+ǫ+ǫ ′ dsdt |z − w| 1/2+α |r(z)| 1/2 γ(z) V t 2µ−ǫ−ǫ ′ (s + t 2 ) µ+5/4+α/2 dsdt |z − w| 1/2+α |r(z)| 1/2 γ(z) V 1 s 7/8 t 3/4+α+ǫ+ǫ ′ dsdt |z − w| 1/2+α |r(z)| 1/2 γ(z),
where V is a bounded subset of R 2 . We now estimate
D\Ec |ζ−z|≤|ζ−w| γ(z) 2−ǫ ′ γ(ζ) 1+ǫ |z − w| |φ(ζ, z)| µ+1−l |φ(ζ, w)| l+1 |ζ − z| 2n−3−2µ dV (ζ) |z − w| α D\Ec |ζ−z|≤|ζ−w| 1 γ(ζ) 1+ǫ 1 |φ(ζ, w)| l+1/2+α/2 |ζ − z| 2n−3−2l+ǫ ′ dV (ζ).
We use coordinates u j1 , . . . , u jm , v jm+1 , . . . , v j2n as in (3.11) and the neighborhoods U 2ε (p j ) defined above. We break the problem into subcases depending on whether z ∈ U ε . Subcase a). Suppose z ∈ U ε (p j ). As we did above above define w 1 , . . . , w 2n by
w α = u jα for 1 ≤ α ≤ m v jα for m + 1 ≤ α ≤ 2n,
and let x_1, …, x_{2n} be defined by ζ_α = x_α + i x_{n+α}. Recall that we have |w(ζ)| ≲ |ζ − z| and |w(ζ)| ≲ γ(ζ). Thus we estimate, using the coordinates above,
|z − w| α D\Ec |ζ−z|≤|ζ−w| 1 γ(ζ) 1+ǫ 1 |φ(ζ, w)| l+1/2+α/2 |ζ − z| 2n−3−2l+ǫ ′ dV (ζ) (3.21) |z − w| α D\Ec |ζ−z|≤|ζ−w| 1 γ(ζ) 1+ǫ 1 |ζ − z| 2n−2+α+ǫ ′ dV (ζ) |z − w| α V u m−1 v 2n−m−1 (u + v) 2n−1+α+ǫ+ǫ ′ dudv |z − w| α V 1 u 1/2 v 1/2+α+ǫ+ǫ ′ dudv |z − w| α , where we use u = u 2 j1 + · · · + u 2 jm , v = v 2 jm+1 + · · · + v 2 j2n
, and V is a bounded set.
Subcase b). Suppose z / ∈ U ε . We have |ζ − z| γ(z), but γ(z) is bounded from below, since z / ∈ U ε . We therefore have to estimate
D 1 γ(ζ) 1+ǫ dV (ζ),
which is easily done by working with the coordinates w 1 , . . . , w 2n above.
We now estimate integral III c . We use
|γ(z) 2−ǫ ′ − γ(w) 2−ǫ ′ | |z − w| γ(z) 1−ǫ ′ + γ(w) 1−ǫ ′ to write III c D |ζ−z|≤|ζ−w| γ(z) 1−ǫ ′ + γ(w) 1−ǫ ′ γ(ζ) 1+ǫ |z − w| |φ(ζ, w)| µ+1 |ζ − z| 2n−2−2µ dV (ζ).
We first assume γ(w) ≤ γ(z). Then we estimate
(3.22) D |ζ−z|≤|ζ−w| γ(z) 1−ǫ ′ γ(ζ) 1+ǫ |z − w| |φ(ζ, w)| µ+1 |ζ − z| 2n−2−2µ dV (ζ).
by breaking the integral into the regions E c and D \ E c . In E c , again assuming c is sufficiently small so that |ζ − z| γ(ζ), (3.22) is bounded by
D |ζ−z|≤|ζ−w| γ(z) 1−ǫ ′ |z − w| |φ(ζ, w)| µ+1 |ζ − z| 2n−1−2µ+ǫ dV (ζ),
which we showed to be bounded by |z − w| α in (3.15). In the region D \ E c , we estimate
D\Ec |ζ−z|≤|ζ−w| γ(z) 1−ǫ ′ γ(ζ) 1+ǫ |z − w| |φ(ζ, w)| µ+1 |ζ − z| 2n−2−2µ dV (ζ) |z − w| α D\Ec |ζ−z|≤|ζ−w| 1 γ(ζ) 1+ǫ 1 |φ(ζ, w)| µ+1/2+α/2 |ζ − z| 2n−3−2µ+ǫ ′ dV (ζ) |z − w| α ,
where the last line follows from (3.21) above.
We therefore now consider the case γ(z) ≤ γ(w) so that
III c D |ζ−z|≤|ζ−w| γ(w) 1−ǫ ′ γ(ζ) 1+ǫ |z − w| |φ(ζ, w)| µ+1 |ζ − z| 2n−2−2µ dV (ζ).
In the region E c we estimate
D∩Ec |ζ−z|≤|ζ−w| γ(w) 1−ǫ ′ γ(ζ) 1+ǫ |z − w| |φ(ζ, w)| µ+1 |ζ − z| 2n−2−2µ dV (ζ) D∩Ec |ζ−z|≤|ζ−w| γ(w) |z − w| |φ(ζ, w)| µ+1 |ζ − z| 2n−1−2µ+ǫ+ǫ ′ dV (ζ) |z − w| 1/2+α |r(w)| 1/2 γ(w) D∩Ec |ζ−z|≤|ζ−w| 1 |φ(ζ, w)| µ+1/4+α/2 |ζ − z| 2n−1−2µ+ǫ+ǫ ′ dV (ζ).
Using the coordinate system s = −r(ζ), t 1 . . . , t 2n−2 with volume estimate (3.18) as above we can estimate
|z − w| 1/2+α |r(w)| 1/2 γ(w) V t 2n−2 (s + t 2 ) µ+1/4+α/2 t 2n−2µ+ǫ+ǫ ′ dsdt |z − w| 1/2+α |r(w)| 1/2 γ(w)
by (3.19).
In the region D \ E c , we use γ(w) |ζ − w| to estimate
D\Ec |ζ−z|≤|ζ−w| γ(w) 1−ǫ ′ γ(ζ) 1+ǫ |z − w| |φ(ζ, w)| µ+1 |ζ − z| 2n−2−2µ dV (ζ) (3.23) D\Ec |ζ−z|≤|ζ−w| 1 γ(ζ) 1+ǫ |ζ − w| 1−ǫ ′ |z − w| |φ(ζ, w)| µ+1 |ζ − z| 2n−2−2µ dV (ζ) |z − w| α D\Ec |ζ−z|≤|ζ−w| 1 γ(ζ) 1+ǫ 1 |φ(ζ, w)| µ+ǫ ′ /2+α/2 |ζ − z| 2n−2−2µ dV (ζ) |z − w| α D\Ec |ζ−z|≤|ζ−w| 1 γ(ζ) 1+ǫ 1 |ζ − z| 2n−2+ǫ ′ +α dV (ζ) |z − w| α V u m−1 v 2n−1−m (u + v) 2n−1+ǫ+ǫ ′ +α dudv |z − w| α ,
where the coordinates u and v are defined as in (3.21), and where the last line follows from (3.21). We are now done estimating integral III and we turn to IV .
As in case a) for integral II we estimate IV by the integrals
D |ζ−z|≤|ζ−w| γ(w) 2−ǫ ′ γ(ζ) 1+ǫ |z − w| |φ(ζ, w)| µ+1 |ζ − z| 2n−1−2µ dV (ζ) + D |ζ−z|≤|ζ−w| γ(w) 2−ǫ ′ γ(ζ) 2+ǫ |r(ζ)||z − w| |φ(ζ, w)| µ+1 |ζ − z| 2n−2µ dV (ζ) =IV a + IV b .
To estimate IV a we break the region of integration in E c and D \ E c . In the region D \ E c we use γ(w) |ζ − w| and estimate
D\Ec |ζ−z|≤|ζ−w| 1 γ(ζ) 1+ǫ |z − w| |φ(ζ, w)| µ+ǫ ′ /2 |ζ − z| 2n−1−2µ dV (ζ) |z − w| α D\Ec |ζ−z|≤|ζ−w| 1 γ(ζ) 1+ǫ 1 |φ(ζ, w)| µ+ǫ ′ /2−1/2+α/2 |ζ − z| 2n−1−2µ dV (ζ) |z − w| α D\Ec |ζ−z|≤|ζ−w| 1 γ(ζ) 1+ǫ 1 |ζ − z| 2n−2−2µ+ǫ ′ +α dV (ζ) |z − w| α ,
where the last line follows from (3.23).
In the region E c we consider the different cases γ(w) ≤ γ(z) and γ(z) ≤ γ(w) separately. In the case γ(w) ≤ γ(z), we write
D∩Ec |ζ−z|≤|ζ−w| γ(w) 2−ǫ ′ γ(ζ) 1+ǫ |z − w| |φ(ζ, w)| µ+1 |ζ − z| 2n−1−2µ dV (ζ) γ(w) D∩Ec |ζ−z|≤|ζ−w| γ(z) γ(ζ) 1+ǫ |z − w| |φ(ζ, w)| µ+1 |ζ − z| 2n−1−2µ+ǫ ′ dV (ζ) |z − w| 1/2+α |r(w)| 1/2 γ(w) D∩Ec |ζ−z|≤|ζ−w| γ(z) 1 |φ(ζ, w)| µ+1/4+α/2 |ζ − z| 2n−2µ+ǫ+ǫ ′ dV (ζ)
and we choose a coordinate system in which s = −r(ζ) and we use the estimate on the volume element given by (3.18) to reduce the estimate to
|z − w| 1/2+α |r(w)| 1/2 γ(w) V t 2µ−2−ǫ−ǫ ′ (s + t 2 ) µ+1/4+α/2 dsdt |z − w| 1/2+α |r(w)| 1/2 γ(w)
which follows from (3.19).
In the case γ(z) ≤ γ(w) we have
D∩Ec |ζ−z|≤|ζ−w| γ(w) 2−ǫ ′ γ(ζ) 1+ǫ |z − w| |φ(ζ, w)| µ+1 |ζ − z| 2n−1−2µ dV (ζ) |z − w| 1/2+α |r(w)| 1/2 γ(w) D∩Ec |ζ−z|≤|ζ−w| γ(w) 1 |φ(ζ, w)| µ+1/4+α/2 |ζ − z| 2n−2µ+ǫ+ǫ ′ dV (ζ).
We then write γ(w) γ(z) + |ζ − w|, and we bound
|z − w| 1/2+α |r(w)| 1/2 γ(w) D∩Ec |ζ−z|≤|ζ−w| γ(z) 1 |φ(ζ, w)| µ+1/4+α/2 |ζ − z| 2n−2µ+ǫ+ǫ ′ dV (ζ) |z − w| 1/2+α |r(w)| 1/2 γ(w)
by (3.16) and then consider (3.24) |z − w| 1/2+α |r(w)| 1/2 γ(w) D∩Ec |ζ−z|≤|ζ−w| 1 |φ(ζ, w)| µ−1/4+α/2 |ζ − z| 2n−2µ+ǫ+ǫ ′ dV (ζ).
The case µ = 1 is trivial so we assume µ ≥ 2 in which case we use coordinates s = −r(ζ), t 1 , . . . , t 2n−1 and bound (3.24) by
|z − w| 1/2+α |r(w)| 1/2 γ(w) V t 2µ−3−ǫ−ǫ ′ (s + t 2 ) µ−1/4+α/2 dsdt |z − w| 1/2+α |r(w)| 1/2 γ(w) V 1 s 7/8 t 3/4+α+ǫ+ǫ ′ dsdt |z − w| 1/2+α |r(w)| 1/2 γ(w).
To estimate IV_b we use |r(ζ)|/γ(ζ)^2 ≲ 1, which follows by working in the coordinates of (3.11) near a critical point, and thus we have
(3.25) IV b D |ζ−z|≤|ζ−w| γ(w) 2−ǫ ′ γ(ζ) ǫ |z − w| |φ(ζ, w)| µ+1 |ζ − z| 2n−2µ dV (ζ).
We break the regions of integration in (3.25) into E c and D \ E c . The estimates for IV b in the region E c are handled in the manner as was done for IV a . In the region D \ E c we use γ(w) |ζ − w| to bound (3.25) by
D\Ec |ζ−z|≤|ζ−w| γ(w) 2−ǫ ′ γ(ζ) ǫ |z − w| |φ(ζ, w)| µ+1 |ζ − z| 2n−2µ dV (ζ) |z − w| α D\Ec |ζ−z|≤|ζ−w| 1 γ(ζ) ǫ 1 |φ(ζ, w)| µ−1/2+ǫ ′ /2+α/2 |ζ − z| 2n−2µ dV (ζ) |z − w| α D\Ec |ζ−z|≤|ζ−w| 1 γ(ζ) ǫ 1 |ζ − z| 2n−1+ǫ ′ +α dV (ζ) |z − w| α .
ii). For T z a smooth first order tangential differential operator on D, with respect to the z variable, we have
T z r = 0 T z r * = E 0,0 r T z P = E 1,0 + E 0,0 r γ r * (γ * ) 2 = E 1,0 + E 0,0 γ * (P + E 2,0 ) T z φ = E 0,1 + E 1,0 .
We consider first the case in which the kernel of A is of double type (1, 3), of the form A (3) (ζ, z), where the subscript (3) refers to the smooth type.
Thus we write
(3.26) γ * T z A (3) = γ * A (1) γ + γ * A (2) + A (3) ,
and estimate integrals involving the various forms the integral kernels of different types assume.
We insert (3.26) into
γ * T A (3) f = D f (ζ)γ * T z A (3) (ζ, z)dV (ζ)
and we change the factors of γ * through the equality γ(z) = γ(ζ) + E 1,0 . ii) will then follow in this case by the estimates
D γ ǫ ′ (z) γ ǫ (ζ) |A (1) (ζ, z)|dV (ζ) 1 |r(z)| δ D γ ǫ ′ (z) γ 1+ǫ (ζ) |A (2) (ζ, z)|dV (ζ) 1 |r(z)| δ D γ ǫ ′ (z) γ 2+ǫ (ζ) |A (3) (ζ, z)|dV (ζ) 1 |r(z)| δ . (3.27)
We will prove the case of (3.27) in which A (3) satisfies
|A_{(3)}| ≲ P^{−(n−3/2−µ)} |φ|^{−(µ+1)}, µ ≥ 1.
The other cases are handled similarly. Using the notation from i) above, we choose coordinates u j1 , . . . , u jm , v jm+1 , . . . , v j2n such that −r(ζ) = u 2 j1 + · · · + u 2 jm − v 2 jm+1 − · · · − v 2 j2n , and let U ε = k j=1 U ε (p j ). We break the problem into subcases depending on whether z ∈ U ε . Subcase a). Suppose z ∈ U ε (p j ). We estimate (3.28)
U2ε(pj ) γ ǫ ′ (z) γ 2+ǫ (ζ) 1 |φ| µ+1 P n−3/2−µ dV (ζ) and (3.29) Dǫ\U2ε γ ǫ ′ (z) γ 2+ǫ (ζ) 1 |φ| µ+1 P n−3/2−µ dV (ζ).
We break up the integral in (3.28) into integrals over E c (z) and its complement, where c is as in Lemma 3.6. We also choose c < 1 so that we also have the estimate |ζ − z| γ(ζ).
We set θ = −r(z).
In the case U 2ε (p j ) ∩ E c (z), we use a coordinate system, s = −r(ζ), t 1 , . . . , t 2n−1 , and estimate
U2ε(pj )∩Ec(z) γ ǫ ′ (z) γ 2+ǫ (ζ) 1 |φ| µ+1 P n−3/2−µ dV (ζ) V t 2n−2 γ 1−ǫ ′ (z)(θ + s + t 2 ) µ+1 (s + t) 2n−1−2µ+ǫ dsdt V t 2µ−2+ǫ ′ −ǫ (θ + s + t 2 ) µ+1 dsdt 1 θ δ V t 2µ−2+ǫ ′ −ǫ (s + t 2 ) µ+1−δ dsdt 1 θ δ M 0 1 s 3/2−δ ds ∞ 0t 2µ−2+ǫ ′ −ǫ (1 +t 2 ) µ+1−δ dt 1 θ δ ,
where M > 0 is some constant, and we make the substitution t = s 1/2t .
We now estimate the integral
(3.30) U2ε(pj )\Ec(z) γ ǫ ′ (z) γ 2+ǫ (ζ) 1 |φ| µ+1 P n−3/2−µ dV (ζ). Defining u = u 2 j1 + · · · + u 2 jm , v = v 2 jm+1 + · · · + v 2 j2n
, and using the estimates from above
|w(ζ)| ≲ |ζ − z|, |w(ζ)| ≲ γ(ζ),
where w(ζ) is defined as in (3.12), we can bound the integral in (3.30) by
U2ε(pj )\Ec(z) γ ǫ ′ (z) γ 2+ǫ (ζ) 1 |φ| µ+1 P n−3/2−µ dV (ζ) (3.31) V u m−1 v 2n−m−1 (u + v) 2n−1+ǫ−ǫ ′ (θ + u 2 + v 2 ) dudv V 1 (u + v) 1+ǫ−ǫ ′ (θ + u 2 + v 2 ) dudv 1 θ δ V 1 (u + v) 3−2δ+ǫ−ǫ ′ dudv 1 θ δ ,
where V is a bounded region. We have therefore bounded (3.28), and we turn now to (3.29).
In D \ U 2ε we have that |ζ − z| and γ(ζ) are bounded from below so
D\U2ε γ ǫ ′ (z) γ 2+ǫ (ζ) 1 |φ| µ+1 P n−3/2−µ dV (ζ) 1.
This finishes subcase a).
Case b). Suppose z / ∈ U ε . We divide D into the regions D ∩ E c (z) and D \ E c (z).
In D ∩ E c (z) the same coordinates and estimates work here as in establishing the estimates for the integral in (3.31).
In D \ E c (z) we have |ζ − z| γ(z), but γ(z) is bounded from below, since z / ∈ U ε . We therefore have to estimate
D 1 γ 2+ǫ (ζ) dV (ζ),
which is easily done by working with the coordinates w 1 , . . . , w 2n above. iii). The proof of iii) follows the same steps as those in the proof of ii), and we leave the details to the reader.
Theorem 3.11. Let X be a smooth tangential vector field. Then
γ* X_z E_{1−2n} = −E_{1−2n} X̄_ζ γ + E^{(0)}_{1−2n} + ∑_{ν=1}^{l} E^{(ν)}_{1−2n},
where X̄ is the adjoint of X and the E^{(ν)}_{1−2n} are isotropic operators.

Proof. The proof follows the line of argument used in proving Case 1) of Theorem 3.4, and makes use of (γX_ζ + γ* X_z)E^i_{1−2n} = E^i_{1−2n}.

Theorem 3.12. Let T be a smooth tangential vector field. Set E to be an operator with kernel of the form E^i_{1−2n}(ζ, z)R_1(ζ) or E^i_{2−2n}(ζ, z). Then we have the following properties:
i) E_{1−2n} : L^p(D) → L^s(D) for any 1 ≤ p ≤ s ≤ ∞ with 1/s > 1/p − 1/2n;
ii) E : L^{∞,2+ǫ,0}(D) → Λ_{α,2−ǫ′}(D), 0 < ǫ, ǫ′, α + ǫ + ǫ′ < 1;
iii) γ* T E : Λ_{α,2+ǫ}(D) → L^{∞,ǫ′,0}(D), ǫ < ǫ′;
iv) E : L^{∞,ǫ,δ}(D) → L^{∞,ǫ′,0}(D), ǫ < ǫ′, δ < 1/2 + (ǫ′ − ǫ)/2.
Proof. i) is presented in [3]. The proof of ii) follows that of Theorem 3.10 i).
For iii) we let E(ζ, z) be the kernel of E, and we calculate
(γ * ) 1+ǫ ′ T Ef = D f (ζ)γ * T z E(ζ, z)dV (ζ) = D (γ * ) ǫ ′ γ 2+ǫ f (ζ) γ * T z E(ζ, z) γ 2+ǫ dV (ζ) = D (γ * ) ǫ ′ (γ 2+ǫ f (ζ) − (γ * ) 2+ǫ f (z)) γ * T z E(ζ, z) γ 2+ǫ dV (ζ) (3.32) + (γ * ) 2+ǫ f (z) D (γ * ) ǫ ′ γ * T z E(ζ, z) γ 2+ǫ dV (ζ).
We use Theorem 3.11 in the last integral to bound the last term of (3.32) by
(γ * ) 2+ǫ f (z) D E 1−2n,0 (γ * ) ǫ ′ 1 γ 1+ǫ + E 1,0 γ 2+ǫ dV (ζ) (γ * ) 2+ǫ |f (z)| f L ∞,2+ǫ,0 , f Λα,2+ǫ ,
where the first inequality can be proved by breaking the integrals into the regions U 2ε and D \ U 2ε and in the region D \ U 2ε using the same coordinates as in the proof of Theorem 3.10 ii).
For the first integral in (3.32), we note if f ∈ Λ α then γ 2+ǫ f ∈ Λ α . We have
D (γ * ) ǫ ′ γ 2+ǫ f (ζ) − (γ * ) 2+ǫ f (z) γ * T z E(ζ, z) γ 2+ǫ dV (ζ) γ 2+ǫ f Λα D |ζ − z| α (γ * ) ǫ ′ γ * T z E(ζ, z) γ 2+ǫ dV (ζ) γ 2+ǫ f Λα .
The proof of iv) follow as in the case of Theorem 3.10 iii).
C k estimates
We define Z 1 operators to be those which take the form
Z 1 = A (1,1) + E 1−2n • γ,
and we write Theorem 2.3 as
(4.1) γ 3 f = Z 1 γ 2∂ f + Z 1 γ 2∂ * f + Z 1 f.
We define Z j operators to be those operators of the form
Z j = j times Z 1 • · · · • Z 1 .
We establish mapping properties for Z j operators Lemma 4.1. For 1 < p < ∞ and j ≥ 1
(4.2) ‖Z_j f‖_p ≲ ‖f‖_{L^{p,jp}},
and for 0 < ǫ ′ < ǫ
(4.3) ‖Z_j f‖_{L^{∞,ǫ,0}} ≲ ‖f‖_{L^{∞,j+ǫ′,0}}.
Proof. We prove (4.2) for kernels of the form A (1,1) (ζ, z), where A (1,1) is a kernel of double type (1, 1). We show below that A (1,1) (ζ, z) satisfies
(4.4) sup_{z∈Ω} ∫ γ(ζ)^{−1} |A_{(1,1)}(ζ, z)| |r(ζ)|^{−δ} |r(z)|^{δ} dV(ζ) < ∞
for δ < 1. The lemma then follows from the generalized Young's inequality. We further restrict our proof to the cases in which A (1,1) satisfies
i) γ(ζ)^{−1} |A_{(1,1)}| ≲ γ(ζ) P^{−(n−1/2−µ)} |φ|^{−(µ+1)}, µ ≥ 1;
ii) γ(ζ)^{−1} |A_{(1,1)}| ≲ P^{−(n−1−µ)} |φ|^{−(µ+1)}, µ ≥ 1;
iii) γ(ζ)^{−1} |A_{(1,1)}| ≲ γ(ζ)^{−1} P^{−(n−3/2−µ)} |φ|^{−(µ+1)}, µ ≥ 1.
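Before treating these cases we recall, purely for the reader's convenience, the weighted Schur-test form of the generalized Young inequality that is invoked above; the auxiliary weight h and the constants C_1, C_2 are local notation for this remark only. If K ≥ 0 and there exist a measurable function h > 0 and constants C_1, C_2 with
∫ K(ζ, z) h(ζ)^{p′} dV(ζ) ≤ (C_1 h(z))^{p′} for a.e. z, and ∫ K(ζ, z) h(z)^{p} dV(z) ≤ (C_2 h(ζ))^{p} for a.e. ζ,
then Tg(z) = ∫ K(ζ, z) g(ζ) dV(ζ) satisfies ‖Tg‖_{L^p} ≤ C_1 C_2 ‖g‖_{L^p} for 1 < p < ∞, 1/p + 1/p′ = 1. Applying this with K(ζ, z) = γ(ζ)^{−1} |A_{(1,1)}(ζ, z)| and h = |r|^{−δ/p′}, the first condition amounts to (4.4), and the companion condition, with the roles of ζ and z interchanged and δ replaced by δp/p′ (which is again less than 1 for δ small enough), can be checked by the same computation.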
We will prove the more difficult case iii), as cases i) and ii) follow similar arguments, and we leave the details of those cases to the reader. We use the same notation as in Theorem 3.10 iii). As in Theorem 3.10 iii) we divide the estimates into subcases depending on whether z ∈ U ε . Subcase a). Suppose z ∈ U ε (p j ). We estimate (4.5)
U2ε(pj ) 1 γ(ζ)|φ| µ+1 P n−3/2−µ |r(ζ)| δ dV (ζ)
and (4.6) Dǫ\U2ε 1 γ(ζ)|φ| µ+1 P n−3/2−µ |r(ζ)| δ dV (ζ).
We break up the integral in (4.5) into integrals over E c (z) and its complement, where c is as in Lemma 3.6, and we choose c < 1. Thus, in E c (z), we have |ζ − z| γ(ζ).
In the case U 2ε (p j ) ∩ E c (z), we use a coordinate system, s = −r(ζ), t 1 , . . . , t 2n−1 , and estimate
U2ε(pj )∩Ec(z) 1 γ(ζ)|φ| µ+1 P n−3/2−µ |r(ζ)| δ dV (ζ) (4.7) R 2 + t 2n−2 γ(ζ)γ(z)s δ (θ + s + t 2 ) µ+1 (s + t) 2n−3−2µ dsdt R 2 + t 2µ−1 s δ (θ + s + t 2 ) µ+1 dsdt R 2 + 1 s δ (θ 1/2 + s 1/2 + t) 3 dsdt ∞ 0 1 s δ (θ + s) ds 1 θ δ ,
where we use the notation R j + = j times R + × · · · × R + . We now estimate the integral (4.8)
U2ε(pj )\Ec(z) 1 γ(ζ)|φ| µ+1 P n−3/2−µ |r(ζ)| δ dV (ζ).
Recall from above that with the coordinates u j1 , . . . , u jm , v jm+1 , . . . , v j2n so that around the critical point, p j we have
−r(ζ) = u 2 j1 + · · · + u 2 jm − v 2 jm+1 − · · · − v 2 j2n
and with w 1 , . . . , w 2n defined by
w α = u jα for 1 ≤ α ≤ m v jα for m + 1 ≤ α ≤ 2n,
we have |w(ζ)| |ζ − z| and |w(ζ)| γ(ζ) for ζ, z ∈ U 2ε (p j ).
We can therefore bound the integral in (4.8) by
U2ε(pj )\Ec(z) 1 γ(ζ)|φ| µ+1 P n−3/2−µ |r(ζ)| δ dV (ζ) V u m−1 v 2n−m−1 (u + v) 2n−2 (θ + u 2 + v 2 )(u 2 − v 2 ) δ dudv V 1 (θ + u 2 )(u 2 − v 2 ) δ dudv, (4.9)
where V is a bounded region. We make the substitution v =ṽu, since v 2 < u 2 , and write (4.9) as
M 0 1 u 2δ−1 (θ + u 2 ) du 1 0 1 (1 −ṽ 2 ) δ dṽ 1 θ δ M 0 1 u 2δ−1 (1 + u 2 ) du 1 θ δ ,
where M > 0 is some constant. We have therefore bounded (4.5), and we turn now to (4.6).
In D \ U 2ε we have that |ζ − z| and γ(ζ) are bounded from below so
D\U2ε 1 γ(ζ)|φ| µ+1 P n−3/2−µ |r(ζ)| δ dV (ζ) D\U2ε 1 |r(ζ)| δ dV (ζ) 1,
the last inequality following because in D \ U 2ε r can be chosen as a coordinate since γ(ζ) is bounded from below. This finishes subcase a).
Subcase b). Suppose z / ∈ U ε . We divide D into the regions D ∩ E c (z) and D \ E c (z).
In D ∩ E c (z) the same coordinates and estimates work here as in establishing the estimates for the integral in (4.7).
In D \ E c (z) we have |ζ − z| γ(z), but γ(z) is bounded from below, since z / ∈ U ε . We therefore have to estimate D 1 γ(ζ)|r(ζ)| δ dV (ζ), which is easily done by working with the coordinates w 1 , . . . , w 2n above.
(4.3) is proved similarly.

Lemma 4.2. Let T be a tangential vector field and ε > 0. For ǫ > 0 sufficiently small
i) Z_{n+2} : L^2(D) → L^∞(D);
ii) ‖γT Z_4 f‖_{C^{1/4−ε}} ≲ ‖f‖_{L^{∞,3+ǫ,0}}.
Proof. For i) apply Corollary 3.1 and Theorem 3.12 i), n + 2 times. For ii) we let α < 1/4, and apply the commutator theorem, Theorem 3.4, and consider the two compositions Z 1 • Z 1 • γT A 1 • Z 1 , and Z 1 • Z 1 • γT E • Z 1 . From Theorems 3.10 and 3.12 we can find ǫ 1 , . . . , ǫ 4 such that 0 < ǫ j+1 < ǫ j and such that in the first case we have
Z 1 • Z 1 • γT A 1 • Z 1 f Λα Z 1 • γT A 1 • Z 1 f L ∞,ǫ 1 ,0 γT A 1 • Z 1 f L ∞,ǫ 2 ,δ Z 1 f L ∞,2+ǫ 3 ,0 f L ∞,3+ǫ 4 ,0
and, in the second,
Z 1 • Z 1 • γT E • Z 1 f Λα Z 1 • γT E • Z 1 f L ∞,ǫ 1 ,0 γT E • Z 1 f L ∞,1+ǫ 2 ,0 Z 1 f Λα,3+ǫ 3 f L ∞,3+ǫ 4 ,0 ,
where the second and third inequalities are proved in the same way as Theorem 3.12 ii) and iii).
We now iterate (4.1) to get γ 3j f =(Z 1 γ 3(j−1)+2 + Z 2 γ 3(j−2)+2 + · · · + Z j γ 2 )∂f (4.10)
+ (Z 1 γ 3(j−1)+2 + Z 2 γ 3(j−2)+2 + · · · + Z j γ 2 )∂ * f + Z j f.
Then we can prove
Theorem 4.3. For f ∈ L^2_{0,q}(D) ∩ Dom(∂̄) ∩ Dom(∂̄*), q ≥ 1, and ε > 0,
‖γ^{3(n+3)} f‖_{C^{1/4−ε}} ≲ ‖γ^2 ∂̄f‖_∞ + ‖γ^2 ∂̄*f‖_∞ + ‖f‖_2.
Proof. Use Theorems 3.10 i) and 3.12 ii) and Lemma 4.2 i) in (4.10) with j = n + 3
We use the notation D k to denote a k th order differential operator, which is a sum of terms which are composites of k vector fields.
We define
Q_k(f) = ∑_{j=0}^{k} ‖γ^{j+2} D^j ∂̄f‖_∞ + ∑_{j=0}^{k} ‖γ^{j+2} D^j ∂̄*f‖_∞ + ‖f‖_2.
T k will be used for a k-th order tangential differential operator, which is a sum of terms which are composites of k tangential vector fields.
Lemma 4.4. Let T^k be a tangential operator of order k. For ε, ǫ > 0,
‖γ^{3(n+6)+8k+ǫ} T^k f‖_{C^{1/4−ε}} ≲ Q_k(f).
Proof. We first prove (4.11) γ 3(n+2)+9+8k+ǫ T k f L ∞ Q k (f ).
The proof is by induction in which the first step is proved as was Theorem 4.3. We choose j = 3 in (4.10) and then apply (4.10) to γ 3(n+2)+7k f to get
γ 3(n+2)+9+7k f = Z 1 γ 2∂ f + Z 1 γ 2∂ * f + Z 3 γ 3(n+2)+7k f.
We then apply γ ǫ (γT ) k , where T is a tangential operator. We use the commutator theorem, Theorem 3.4, to show γ 3(n+2)+9+8k+ǫ T k f =γ ǫ k−1 j=0 Z 3 γ 3(n+2)+7k+j T j f + γ ǫ γT Z 3 γ 3(n+2)+8k−1 T k−1 f (4.12) + γ ǫ k j=0 Z 1 γ j+2 T j∂ f + γ ǫ k j=0 Z 1 γ j+2 T j∂ * f.
By Lemma 4.1 and the induction hypothesis, we conclude the L ∞ norm of the first term on the right hand side of (4.12) is bounded by Q k−1 (f ).
In the same way we proved Lemma 4.2, we have γT Z 3 : L ∞,3+ǫ ′ ,0 (D) → L ∞,ǫ,0 (D), for some 0 < ǫ ′ < ǫ and so the L ∞ norm of the second term is bounded by
γ 3(n+2)+8k+2+ǫ ′ T k−1 f L ∞ γ 3(n+2)+9+8(k−1) T k−1 f L ∞ Q k−1 (f ).
The last two terms on the right side of (4.12) are obviously bounded by Q k (f ), and thus we are done with the proof of (4.11).
To finish the proof of the lemma, we follow the proof of (4.11), and choose k = 4 in (4.10), then apply (4.10) to γ 3(n+2)+7k f , and again apply the operators γ ǫ (γT ) k , where T is a tangential operator. In this way, we show
γ 3(n+2)+12+8k+ǫ T k f =γ ǫ k−1 j=0 Z 4 γ 3(n+2)+7k+j T j f + γ ǫ γT Z 4 γ 3(n+2)+8k−1 T k−1 f (4.13) + γ ǫ k j=0 Z 1 γ j+2 T j∂ f + γ ǫ k j=0 Z 1 γ j+2 T j∂ * f.
By Theorems 3.10 i) and 3.12 ii), for some ǫ ′ > 0, the first sum on the right hand side of (4.13) has its C 1/4−ε norm bounded by Z 3 γ 3(n+2)+7k+ǫ ′ +j T j f L ∞ Q k−1 (f ) from above. We can use Lemma 4.2 ii) to show the C 1/4−ε norm of the second term is bounded by γ 3(n+2)+10+8(k−1)+ǫ ′ T k−1 f ∞ Q k−1 (f ) as above.
The last two terms on the right hand side of (4.13) are easily seen to be bounded by Q k (f ), and this finishes Lemma 4.4.
In order to generalize Lemma 4.4 to include non-tangential operators, we use the familiar argument of utilizing the ellipticity of∂ ⊕∂ * to express a normal derivative of a component of a (0, q)-form, f , in terms of tangential operators acting on components of f and components of∂f and∂ * f . With the (0, q)-form f written f = |J|=q f Jω J locally, we have the decomposition in the following form:
(4.14) γN f J = jK a JjK γT j f K + L b JL f L + M c JM γ(∂f ) M + P d JP γ(∂ * f ) P ,
where N = L n +L n is the normal vector field, and T 1 , . . . , T 2n−1 are the tangential fields as described in Section 3. The coefficients a JjK , b JL , c JM , and d JP are all of the form E 0,0 and the index sets are strictly ordered with J, K, L, M, P ⊂ {1, . . . , n}, |J| = |K| = |L| = q, |M | = q + 1, |P | = q − 1, j = 1, . . . , 2n − 1. The decomposition is well known in the smooth case (see [3]) and to verify (4.14) in a neighborhood of γ = 0, one may use the coordinates u j1 , . . . , u jm , v jm+1 , . . . , v j2n as in (3.11) above. For instance, integrating by parts to compute∂ * f leads to terms of the form E 0,−1 f J , whereby multiplication by γ allows us to absorb these terms into b JL .
It is then straightforward how to generalize Lemma 4.4. Suppose D k is a k th order differential operator which contains the normal field at least once. In γ k D k we commute γN with terms of the form γT , where T is tangential, and we consider the operator D k = D k−1 • γN , where D k−1 is of order k − 1. The error terms due to the commutation involve differential operators of order ≤ k − 1. From (4.14) we just have to consider D k−1 γT f , D k−1∂ f , and D k−1∂ * f . The last two terms are bounded by Q k−1 (f ), and we repeat the process with D k−1 γT f , until we are left with k tangential operators for which we can apply Lemma 4.4.
We thus obtain the weighted C^k estimates.

Theorem 4.5. Let f ∈ L^2_{0,q}(D) ∩ Dom(∂̄) ∩ Dom(∂̄*), q ≥ 1, α < 1/4, and ǫ > 0. Then
‖γ^{3(n+6)+8k+ǫ} f‖_{C^{k+α}} ≲ Q_k(f).
As an immediate consequence we obtain weighted C k estimates for the canonical solution to the∂-equation.
Corollary 4.6. Let q ≥ 2 and let N_q denote the ∂̄-Neumann operator for (0, q)-forms. Let f be a ∂̄-closed (0, q)-form. Then for α < 1/4 and ǫ > 0, the canonical solution u = ∂̄* N_q f to ∂̄u = f satisfies
‖γ^{3(n+6)+8k+ǫ} u‖_{C^{k+α}} ≲ ‖γ^{k+2} f‖_{C^k} + ‖f‖_2.
References

[1] D. Ehsani. Integral representations on non-smooth domains. Preprint.
[2] G. Henkin and J. Leiterer. Theory of Functions on Complex Manifolds. Monographs in Math. Birkhäuser, Basel.
[3] I. Lieb and J. Michel. The Cauchy-Riemann complex, volume E 34 of Aspects of Mathematics. Vieweg, Wiesbaden, 2002.
[4] I. Lieb and R. Range. Estimates for a class of integral operators and applications to the ∂̄-Neumann problem. Invent. Math., 85:415-438, 1986.
[5] I. Lieb and R. Range. Integral representations and estimates in the theory of the ∂̄-Neumann problem. Ann. Math., 123:265-301, 1986.
[6] G. Schmalz. Solution of the ∂̄-equation with uniform estimates on strictly q-convex domains with non-smooth boundary. Math. Z., 202:409-430, 1989.
Domain Specific Modeling (DSM) as a Service for the Internet of Things & Services

Amir H. Moin
fortiss, An-Institut Technische Universität München, Munich, Germany

Abstract. In this paper, we propose a novel approach for developing Sense-Compute-Control (SCC) applications for the Internet of Things and Services (IoTS) following the Model-Driven Software Engineering (MDSE) methodology. We review the recent approaches to MDSE and argue that Domain Specific Modeling (DSM) suits our needs very well. However, in line with the recent trends in cloud computing and the emergence of the IoTS, we believe that both DSM creation tools and DSM solutions that are created via those tools should also be provided to their respective users in a service-oriented fashion through the cloud in the IoTS. In this work, we concentrate on the latter, i.e., DSM solutions that are created via a DSM creation tool. We argue that it makes sense for the owners of a DSM solution in a domain to provide their DSM solution as a service, following the well known Software as a Service (SaaS) model, to the interested customers through the IoTS. Our proposed approach concentrates on such a DSM solution for developing SCC applications in the IoTS. However, the idea could be applied to DSM solutions in other domains as well.

Keywords: internet of things and services, model-driven software engineering, domain specific modeling, development as a service, cloud computing
Introduction
Similar to the rapid spread of the Internet among human users in the 1990s, the Internet Protocol (IP) is currently rapidly spreading into new domains, where constrained embedded devices such as sensors and actuators also play an important role. This expanded version of the Internet is referred to as the Internet of Things (IoT) [1]. On the other hand, the convergence between Web 2.0 and Service Oriented Architecture (SOA), has led to the creation of a global SOA on top of the World Wide Web (WWW), known as the Internet of Services (IoS) [2]. The combination of the IoT and the IoS is referred to as the Internet of Things and Services (IoTS). This emerging vision together with Cyber Physical Systems (CPS), in which the physical world merges with the virtual world of cyberspace [3], are believed to have sufficient power to trigger the next (i.e., fourth) industrial revolution [4]. However, with this great power often comes an enormous degree of complexity as well as an extremely high cost of design, development, test, deployment and maintenance for software systems too. One of the main reasons is the multidisciplinary nature of the field of the IoTS. A number of major challenges in this field are scalability, heterogeneity of things and services, variety of protocols, communication among stakeholders and fast pace of technological advances.
In particular, here we are interested in Sense-Compute-Control (SCC) applications [5], a typical group of applications in the IoTS. A SCC application senses the environment (e.g., temperature, humidity, light, UV radiation, etc.) through sensors, performs some computation (often decentralized, i.e., distributed) and finally prompts to take one or more actions through actuators (very often sort of control) in the environment. There exist two main differences between these applications in the IoTS and the similar ones in the field of Wireless Sensor and Actuator Networks (WSAN), a predecessor of the field of the IoTS. First, the scale of the network is quite different. While WSANs typically have several hundreds or thousands of nodes, SCC applications in the IoTS may have several millions or billions of nodes. Second, the majority of nodes in a WSAN are more or less similar to each other. However, here in the IoTS we have a wide spectrum of heterogeneous devices, ranging from tiny sensor motes with critical computational, memory and energy consumption constraints to highly capable servers for cloud computing. Heterogeneity is a property inherited from another predecessor field, known as Pervasive (Ubiquitous) computing. [6] A recent trend in software engineering for dealing with complexity through raising the level of abstraction is Model-Driven Software Engineering (MDSE). In this paper, we advocate Domain Specific Modeling (DSM), a state-of-theart approach to the MDSE methodology, for addressing the above mentioned challenges. DSM not only provides a very high level of abstraction by letting domain experts model the design specifications in their own technical jargon (i.e., the domain vocabulary), but also lets complete code generation in a fully automated manner.
The paper makes three main contributions:
1. It reviews the three mainstream recent approaches to the MDSE methodology. 2. It proposes MDSE in general, and DSM in particular, for addressing the above mentioned challenges and increasing the development productivity in the domain of SCC applications in the IoTS. 3. In line with the recent trends in cloud computing, i.e., Software as a Service (SaaS), Platform as a Service (PaaS), Infrastructure as a Service (IaaS), data model as a service, etc., and with the emergence of the Internet of Things and Services (IoTS), also Sensor/Actuator as a Service, Integrated Development Environments (IDEs) as well as integrated Model-Driven Software Engineering (MDSE) tools have also got the opportunity to expose themselves to users in a service-oriented fashion through the cloud, called Development as a Service (DaaS), IDE as a Service or Modeling tool as a Service. We propose DSM solution as a Service.
The rest of this paper is structured as follows: Section 2 reviews the three major recent approaches to the Model-Driven Software Engineering (MDSE) methodology. In Section 3, we propose our novel approach for addressing the above mentioned challenges. This is followed by a brief literature review in Section 4. Finally, we conclude and mention our future work in Section 5.
Model-Driven Software Engineering (MDSE)
In fact, the idea of applying models with different levels of abstraction in order to develop software according to the design intent rather than the underlying computing environment [7], a field that used to be better known as modelbased software engineering (or model-based development), has a long tradition in software engineering [8], which dates back to over five decades ago. Computer-Aided Software Engineering (CASE) tools of the 1980s and the 1990s are the famous examples of such efforts. However, although those tools were attractive and interesting at their time, in practice, they did not really affect the software industry too much. One major reason was that they mapped poorly to the underlying platforms. Moreover, they were not scalable enough to support large projects [7]. Also, their modeling language as well as code generators were all hard-coded (fixed) by tool vendors. Thus, the user of those tools had no control on the modeling language nor on the generators in order to adapt them to his or her needs and evolving requirements. Unfortunately, this is true for many existing CASE tools in the present time too.
In this section, we briefly review the three major recent approaches to the MDSE methodology.
Model-Driven Architecture (MDA)
In 2001, the Object Management Group (OMG) adopted a standard architectural framework for MDSE known as Model-Driven Architecture (MDA) [9], which was a key step towards standardization and dissemination of MDSE. MDA defines three default levels of abstraction on a system in order to address Separation of Concerns (SoC) among the stakeholders. Firstly, a Computation Independent Model (CIM) defines the business logic independent of any kind of computational details and system implementation concerns. Secondly, a Platform Independent Model (PIM) is created based on the CIM. The model transformation from CIM to PIM is often done manually or in a semi-automated manner by information technology and computer science experts. A PIM defines a set of parts and services of the system independently of any specific technology and platform. Finally, a Platform Specific Model (PSM) defines the concrete implementation of the system on a specific platform and is generated from a PIM by means of model-to-model transformations in an automated manner. Later, a number of (model-to-model and) model-to-text transformations generate the implementation including the source code out of the PSM for that specific platform. The generated implementation is usually not complete and still needs some manual development.
MDA uses Meta-Object Facility (MOF) for its metamodeling architecture. The modeling languages that are used on the PIM and PSM levels are either UML (or UML profiles) or other MOF-or EMOF-based Domain Specific Modeling Languages (DSMLs).
Although the separation of concerns through different levels of abstraction for models in MDA is very interesting, in practice iterative model refinements are typical and model transformations are very often not bidirectional, i.e., one cannot automatically propagate modifications made to models on the PSM level back up to the PIM. As a result, we easily end up with inconsistencies in the models and serious maintenance problems in the long term. Moreover, another drawback of the MDA approach is that the generated code is often not complete and still needs manual development in order to become the final usable product.
Model-Driven Software Development (MDSD)
Model-Driven Software Development (MDSD) [10] prevents the crucial maintenance problem that we mentioned for MDA by avoiding iterative model refinements. In other words, no round-trip engineering is performed. A model in MDSD should have all required platform-specific details (for one or more platforms), so that it could be directly transformed to the source code through model-to-code transformations. Moreover, any modifications to the system should be done on the model level.
The main concentration of a model in MDSD is on the architecture of the software. However, the business logic is implemented manually in handwritten code rather than being generated out of the model. Following this approach could lead to about 60% to 80% of automatically generated code [11]. Furthermore, the source code of the final product in the MDSD approach consists of three main parts [11]:
1. Generic code: This part of the source code is specific to each platform. The idea is to generate this part automatically for each platform. 2. Schematic code: This part of the source code is generated out of the platformindependent architecture model of an application through model-to-text transformations. Depending on the target platform, the model is transformed differently. 3. Individual code: This part is specific to each application and contains its business logic. This part should be written by developers manually.
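To make this separation concrete, the following is a minimal, purely illustrative Java sketch (the class and method names are invented for this example and are not taken from any particular MDSD tool): the schematic code is a generated base class derived from the architecture model, the individual business logic lives in a handwritten subclass, and the generic code would be the platform libraries used underneath. Keeping the two classes in separate files means that regenerating the model never overwrites the manual code.

// File TemperatureAlarmBase.java: schematic code, generated from the architecture model
// and regenerated on every model change.
public abstract class TemperatureAlarmBase {
    protected double thresholdCelsius = 30.0; // value taken from the model

    // Generic, platform-specific code would deliver the sensor readings passed in here.
    public final void onReading(double celsius) {
        if (celsius > thresholdCelsius) {
            handleAlarm(celsius); // hook for the individual code
        }
    }

    // Individual code: implemented manually by the developer.
    protected abstract void handleAlarm(double celsius);
}

// File TemperatureAlarm.java: handwritten business logic, never touched by the generator.
public class TemperatureAlarm extends TemperatureAlarmBase {
    @Override
    protected void handleAlarm(double celsius) {
        System.out.println("Alarm: temperature " + celsius + " exceeds threshold");
    }
}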
Similar to the MDA approach, the MDSD approach could not lead to 100% code generation either. Therefore, one needs to keep the generated code separated from the handwritten code.
Domain Specific Modeling (DSM)
About five decades ago, software development was mainly shifted from the Assembly language to high-level third-generation programming languages (3GLs) like BASIC. This shift led to about 450% of productivity leap on average. However, the migration from BASIC to Java has only caused an improvement of about 20% in the development productivity on average. This is due to the fact that almost all the third-generation programming languages such as BASIC, FORTRAN, PASCAL, C, C++, Java, etc. are more or less on the same level of abstraction. Furthermore, although models are abstract representations that should hide complexity and one expects modeling languages to provide a higher level of abstraction than programming languages, however, in practice, generalpurpose modeling languages such as the Unified Modeling Language (UML) often have a one-to-one correspondence between modeling elements and code elements. Therefore, they cannot hide the complexity so much. In fact, with both programming languages and general-purpose modeling languages, developers must first try to solve the problem in the problem domain using the domain's terminology, then they should map the domain concepts to development concepts, i.e., to source code elements in case of programming languages and to modeling elements in case of general-purpose modeling languages without any tool support. [12] In contrast, Domain Specific Modeling (DSM) is based upon two pillars: domain specificity and automation. The first one means DSM lets one specify the solution in a language that directly uses concepts and rules from a specific problem domain rather than the programming concepts and rules (i.e., concepts and rules of the solution domain). Thus, it tremendously increases the productivity. According to many industrial reports, employing DSM solutions in various domains has led to an average productivity leap of between 3 to 10 times (i.e., 300% to 1000%) comparing the general-purpose modeling and manual programming approaches. The second one means complete and automated generation of the final product, including the source code in a programming language, out of models without any need for further development and manual modifications. This is somehow analogous to the role that compilers play for 3GLs. Unlike domain specificity, which is also the case in some other MDSE approaches (e.g., in some MDA-and MDSD-based approaches which use DSMLs instead of generalpurpose modeling languages such as UML), automation is an essential property of DSM that distinguishes it from other MDSE approaches. [12]
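As a toy illustration of these two pillars, consider the following Java sketch of a model-to-text transformation; it is not taken from any existing DSM framework, and the model element SccRule, the generator class and all names appearing in the generated text are invented for this example. A real DSM solution would define the language in a metamodeling framework (e.g., EMF) and use far richer generators, but the principle is the same: the domain expert edits only instances of SccRule, and all code is produced automatically.

// A miniature, hypothetical "model" of an SCC rule: sense a value, compare, act.
record SccRule(String sensor, double threshold, String actuator) {}

public class SccGenerator {
    // Model-to-text transformation: every rule in the model becomes a small handler class.
    static String generate(SccRule rule) {
        return "public class Handle" + rule.sensor() + " {\n"
             + "    public void check(double value, Actuators actuators) {\n"
             + "        if (value > " + rule.threshold() + ") {\n"
             + "            actuators.trigger(\"" + rule.actuator() + "\");\n"
             + "        }\n"
             + "    }\n"
             + "}\n";
    }

    public static void main(String[] args) {
        // The "model" a domain expert would edit, hard-coded here for brevity.
        java.util.List<SccRule> model = java.util.List.of(
                new SccRule("Temperature", 30.0, "Fan"),
                new SccRule("Humidity", 70.0, "Dehumidifier"));
        model.forEach(rule -> System.out.println(generate(rule)));
    }
}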
DSM as a Service
Recalling the major challenges for developing SCC applications for the IoTS mentioned in Section 1 (i.e., scalability, heterogeneity of things and services, variety of protocols, communication among stakeholders, and the fast pace of technological advances), we believe that DSM is the best choice to address them. Firstly, because it provides full automation and complete code generation (not only source code, but also other artifacts such as documentation, build scripts, configuration files, etc.), it helps a great deal in dealing with the scalability challenges. Secondly, different code generators (i.e., model-to-text transformations) can generate the implementation and APIs for different heterogeneous hardware and software platforms, as well as various communication protocols, out of the same model. The code generators are developed once, but can be used as many times as one needs to generate the implementation out of a model. Furthermore, since the modeling languages use the terms, concepts and rules of the problem domain instead of software development (i.e., the solution domain's) terms, concepts and rules, communication among stakeholders becomes much easier compared with using general-purpose modeling or programming languages. Last but not least, to cope with the fast pace of technological advances, one needs to maintain the code generators over time, adapting the old ones or creating new ones that can generate the implementation for new platforms, protocols, etc. As mentioned, DSM creation tools give their users full control over their modeling languages and code generators in order to adapt them to their evolving needs. This latter feature is also a property of MDA and MDSD.
Complete and automated code generation is possible in DSM mainly for two reasons. First, DSM is specific to a very narrow problem domain. For instance, the automotive domain is too broad to serve as a domain in DSM, whereas the infotainment system of a particular car manufacturer could be a proper candidate. Second, unlike the CASE tools of the 1980s and 1990s (and many other existing ones at present), the metamodeling tools used to create DSM solutions (e.g., the free open source Eclipse Modeling Framework (EMF)) let their users have full control over their modeling languages as well as their code generators, whereas in CASE tools both were hardcoded (i.e., fixed) by the tool vendors in advance. Hence, in DSM's view, there is no one-size-fits-all solution. Instead, every organization should come up with its own solution, either by developing it from scratch or by tailoring an existing one (if it is publicly available) to its needs. Moreover, as time goes on, changes in the requirements, business logic, technologies, etc. mean that the DSM solution has to be maintained and adapted. [12]

However, not all organizations can afford the cost of building their own DSM solution from scratch. One option would be to reuse a free open source one, if any exists in the exact narrow target domain, and tailor it to one's needs. But this is not really a realistic option, since companies that own a good DSM solution consider it a key asset and therefore often do not disclose it. In this paper, we propose a novel idea to address this issue. We argue that it actually makes sense for the owners of a DSM solution in a domain to provide their DSM solution as a service, following the well-known Software as a Service (SaaS) model (more specifically Development as a Service, IDE as a Service and Modeling tool as a Service), to interested customers through the IoTS. This will not only help the customer of the DSM solution save costs, but will also let the DSM solution owner make money out of it. Moreover, compared with traditional (i.e., non-SaaS) DSM solutions, this will be much easier to use (no need for installation, configuration, etc.) and also more friendly to collaboration via model repositories, since everything is basically stored on the cloud.
However, the key point here is that if the provided DSM solution service does not allow its users to access the modeling languages or change the code generators in order to adapt them to their needs, then we are, unfortunately, back to the traditional CASE tools, where the modeling languages and code generators were hardcoded (i.e., fixed) by tool vendors. It is clear that such a tool can support complete and fully automated code generation only in very rare cases, where the narrow application domains have 100% overlap with each other. Therefore, in order to make the service more valuable and useful to a broader range of audience, the service must at least allow the users to write new code generators of their own on demand. Of course, this requires having (read) access to the metamodel of the modeling language.
This way, users may either use the DSM solution, as it is, as a service, or they could write their own code generators and may still use some of the provided ones as services. Moreover, one could compose these services from different DSM solution service providers. In any case, the service provider does not have to disclose the source code of the code generators.
Related Work
The general concept of providing software that is used for creating other software as a service on the cloud already exists in a number of web-based tools, ranging from web-based IDEs such as the Cloud9 IDE 1 , Arvue 2 , etc. to various web-based tools for creating composite SOA applications, web-based mashup development tools, etc. Similarly, the idea has recently also been proposed for Model-Driven Engineering (MDE) tools, e.g., (data) Model as a Service (MaaS) 3 or (software) modeling as a service (a.k.a. MDE in the cloud) 4 . Most recently, the idea has also been applied to DSM creation tools [13]. The essential difference between that contribution and our proposed approach is that our work is about DSM solutions that have been created via a DSM creation tool for particular narrow domains, e.g., SCC applications in the IoTS, whereas their work is about providing the DSM creation tool itself as a service for creating DSM solutions.
Conclusion & Future Work
In this paper, we proposed a novel approach for developing Sense-Compute-Control (SCC) applications for the Internet of Things and Services (IoTS) based on the Model-Driven Software Engineering (MDSE) methodology, a paradigm in software engineering for dealing with complexity. First, we briefly reviewed the three main recent approaches to MDSE. Second, based on the challenges and requirements for developing SCC applications for the IoTS, we advocated Domain Specific Modeling (DSM) among the MDSE approaches. Finally, we proposed the idea of providing DSM solutions as services through the IoTS. Implementation and validation of the proposed ideas remain as future work.
1 https://c9.io/
2 http://www.cloudsw.org/under-review/31a7a63b-856a-488f-9ce1-1ed5e6cfe63e/designing-ide-as-a-service/at download/file
3 http://cloudbestpractices.wordpress.com/2012/10/21/maas/
4 http://modeling-languages.com/maas-modeling-service-or-mde-cloud/
[1] I. Ishaq, D. Carels, G. K. Teklemariam, J. Hoebeke, F. V. den Abeele, E. D. Poorter, I. Moerman, and P. Demeester, "IETF Standardization in the Field of the Internet of Things (IoT): A Survey," Journal of Sensor and Actuator Networks, 2013.
[2] J. Cardoso, K. Voigt, and M. Winkler, "Service Engineering for the Internet of Services," in Enterprise Information Systems, ser. Lecture Notes in Business Information Processing, J. Filipe and J. Cordeiro, Eds. Springer Berlin Heidelberg, 2009, vol. 19, pp. 15-27.
[3] M. Broy, "Cyber Physical Systems (Part 1)," it-Information Technology, vol. 54, no. 6, pp. 255-256, 2012.
[4] "Zukunftsprojekt Industrie 4.0," Website of the German Federal Ministry of Education and Research (Bundesministerium für Bildung und Forschung, BMBF): http://www.bmbf.de/de/19955.php.
[5] P. Patel, B. Morin, and S. Chaudhary, "A Model-driven Development Framework for Developing Sense-compute-control Applications," in Proceedings of the 1st International Workshop on Modern Software Engineering Methods for Industrial Automation, ser. MoSEMInA 2014. New York, NY, USA: ACM, 2014, pp. 52-61.
[6] P. Patel, A. Pathak, T. Teixeira, and V. Issarny, "Towards Application Development for the Internet of Things," in Proceedings of the 8th Middleware Doctoral Symposium, ser. MDS '11. New York, NY, USA: ACM, 2011, pp. 5:1-5:6.
[7] D. C. Schmidt, "Guest Editor's Introduction: Model-Driven Engineering," IEEE Computer, vol. 39, no. 2, pp. 25-31, 2006.
[8] B. Schaetz, "10 Years Model-Driven - What Did We Achieve?" in Engineering of Computer Based Systems, IEEE Eastern European Conference on the, vol. 0, p. 1, 2011.
[9] J. Miller and J. Mukerji, "MDA Guide Version 1.0.1," Object Management Group (OMG), Tech. Rep., 2003.
[10] M. Völter, T. Stahl, J. Bettin, A. Haase, S. Helsen, K. Czarnecki, and B. von Stockfleth, Model-Driven Software Development: Technology, Engineering, Management, ser. Wiley Software Patterns Series. Wiley, 2013.
[11] J. Küster, "The Model-Driven Software Engineering (MDSE) Lecture Slides," IBM Research - Zurich.
[12] S. Kelly and J. Tolvanen, Domain-Specific Modeling: Enabling Full Code Generation. Wiley, 2008.
[13] S. Hiya, K. Hisazumi, A. Fukuda, and T. Nakanishi, "clooca: Web based tool for Domain Specific Modeling," in Demos/Posters/StudentResearch@MoDELS, 2013, pp. 31-35.
|
[] |
[
"Quantum Chromodynamics",
"Quantum Chromodynamics",
"Quantum Chromodynamics",
"Quantum Chromodynamics"
] |
[
"Thomas Schäfer \nDepartment of Physics\nNorth Carolina State University\n27695RaleighNCUSA\n",
"Thomas Schäfer \nDepartment of Physics\nNorth Carolina State University\n27695RaleighNCUSA\n"
] |
[
"Department of Physics\nNorth Carolina State University\n27695RaleighNCUSA",
"Department of Physics\nNorth Carolina State University\n27695RaleighNCUSA"
] |
[] |
We present a brief introduction to QCD, the QCD phase diagram, and non-equilibrium phenomena in QCD. We emphasize aspects of the theory that can be addressed using computational methods, in particular euclidean path integral Monte Carlo, fluid dynamics, kinetic theory, classical field theory and holographic duality.
| null |
[
"https://arxiv.org/pdf/1608.05459v2.pdf"
] | 118,728,642 |
1608.05459
|
7df759b3d4521243cc2a5aca1707fbb91ef76b84
|
Quantum Chromodynamics
Thomas Schäfer
Department of Physics
North Carolina State University
27695RaleighNCUSA
Quantum Chromodynamics
Chapter 1
We present a brief introduction to QCD, the QCD phase diagram, and non-equilibrium phenomena in QCD. We emphasize aspects of the theory that can be addressed using computational methods, in particular euclidean path integral Monte Carlo, fluid dynamics, kinetic theory, classical field theory and holographic duality.
Introduction
The goal of this chapter is to provide a brief summary of Quantum Chromodynamics (QCD) and the QCD phase diagram, and to give an introduction to computational methods that are being used to study different aspects of QCD. Quantum Chromodynamics is a remarkable theory in many respects. QCD is an almost parameter free theory. Indeed, in the context of nuclear physics QCD is completely characterized by the masses of the up, down, and strange quark, and a reasonable caricature of nuclear physics emerges in the even simpler case in which the up and down quark are taken to be massless, and the strange quark is infinitely heavy. QCD nevertheless accounts for the incredible richness of the phase diagram of strongly interacting matter. QCD describes finite nuclei, normal and superfluid states of nuclear matter, color superconductors, hadronic gases, quark gluon plasma, and many other states. This rich variety of states is reflected in the large number of computational methods that have been brought to bear on problems in QCD. This includes a large number of methods for the structure and excitations of finite Fermi systems, quantum Monte Carlo methods, and a variety of tools for equilibrium and non-equilibrium statistical mechanics.
The bulk of this book is devoted to the study of few and many nucleon systems. Summarizing everything else in one brief chapter is obviously out of the question, both because of limitations of space and because of my limited expertise. I will therefore be very selective, and focus on a number of very simple yet powerful ideas. This reflects, in part, my background, which is not primarily in computational physics. It also reflects my conviction that progress in computational physics is unfortunately often reflected in increasingly complicated codes that obscure the simplicity of the underlying methods.
Path integrals and the Metropolis algorithm
Consider a simple quantum mechanical problem, the motion of a particle in a one-dimensional potential. In order to be specific I will focus on the double well potential V (x) = λ (x 2 − η 2 ) 2 , where η and λ are parameters. The Hamiltonian is
H = \frac{p^2}{2m} + \lambda\,(x^2-\eta^2)^2 .    (1.1)
Using a change of variables I can set 2m = λ = 1. This implies that there is only one physical parameter in this problem, the barrier separation η. The regime η ≫ 1 corresponds to the limit in which the system has two almost degenerate minima that are split by semi-classical tunneling events. The energy eigenstates and wave functions are solutions of the eigenvalue problem H|n⟩ = E_n|n⟩. Once the eigenstates are known I can compute all possible correlation functions

\Pi_n(t_1, t_2, \ldots, t_n) = \langle 0| x(t_1)\, x(t_2) \cdots x(t_n) |0\rangle ,    (1.2)

by inserting complete sets of states. An alternative to the Hamiltonian formulation of the problem is the Feynman path integral [1]. The path integral for the anharmonic oscillator is given by
\langle x_1| e^{-iHt_f} |x_0\rangle = \int_{x(0)=x_0}^{x(t_f)=x_1} Dx\, e^{iS} , \qquad S = \int_0^{t_f} dt \left[ \frac{1}{4}\dot{x}^2 - (x^2-\eta^2)^2 \right] .    (1.3)

This expression contains a rapidly oscillating phase factor e^{iS}, which prohibits any direct numerical attempt at computing the path integral. The standard approach is based on analytic continuation to imaginary time τ = it. This is also referred to as Euclidean time, because the Minkowski interval dx² − dt² turns into the Euclidean expression dx² + dτ². In the following I will consider the euclidean partition function
Z(T) = \int Dx\, e^{-S_E} , \qquad S_E = \int_0^{\beta} d\tau \left[ \frac{1}{4}\dot{x}^2 + (x^2-\eta^2)^2 \right] ,    (1.4)

where β = 1/T is the inverse temperature and we assume periodic boundary conditions x(0) = x(β). To see that equ. (1.4) is indeed the partition function we can use equ. (1.3) to express the path integral in terms of the eigenvalues of the Hamiltonian, Z(T) = \sum_n \exp(-E_n/T). In the following I will describe numerical simulations using a discretized version of the euclidean action. For this purpose I discretize the euclidean time coordinate τ_j = ja, j = 1, …, n, where a = β/n is the length of a time interval. The discretized action is given by
S = \sum_{i=1}^{n} \left[ \frac{1}{4a}(x_i - x_{i-1})^2 + a\,(x_i^2-\eta^2)^2 \right] ,    (1.5)

where x_i = x(τ_i). I consider periodic boundary conditions x_0 = x_n. The discretized euclidean path integral is formally equivalent to the partition function of a statistical system of (continuous) "spins" x_i arranged on a one-dimensional lattice. This statistical system can be studied using standard Monte-Carlo sampling methods. In the following I will use the Metropolis algorithm [2]. Detailed numerical studies of the euclidean path integral can be found in [3][4][5][6]. The Metropolis algorithm generates an ensemble of configurations {x_i}^{(k)}, where k labels the configuration. In each step a trial update x_i^{(k)} → x_i^{(k)} + δx is performed for every lattice site. Here, δx is a random number. The trial update is accepted with probability
P\left( x_i^{(k)} \to x_i^{(k+1)} \right) = \min\left\{ \exp(-\Delta S),\, 1 \right\} ,    (1.7)

where ΔS is the change in the action equ. (1.5). This ensures that the configurations {x_i}^{(k)} are distributed according to the "Boltzmann" distribution exp(−S). The distribution of δx is arbitrary as long as the trial update is micro-reversible, i.e. it is equally likely to change x_i^{(k)} to x_i^{(k+1)} and back. The initial configuration is arbitrary. In order to study equilibration it is useful to compare an ordered (cold) start with {x_i}^{(0)} = {η} to a disordered (hot) start {x_i}^{(0)} = {r_i}, where r_i is a random variable.
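As a quick consistency check (added here; it is a standard property of the algorithm rather than something spelled out in the text), the acceptance rule (1.7) satisfies detailed balance with respect to exp(−S):

\frac{P(x \to x')}{P(x' \to x)} = \frac{\min\{e^{-\Delta S},1\}}{\min\{e^{+\Delta S},1\}} = e^{-\Delta S} = \frac{e^{-S(x')}}{e^{-S(x)}} ,

so the Markov chain has exp(−S) as its stationary distribution, provided the proposal of δx is symmetric in ±δx.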
The advantage of the Metropolis algorithm is its simplicity and robustness. The only parameter to adjust is the distribution of δx. A simple choice is to take δx to be a Gaussian random number, and to choose the width of the distribution so that the average acceptance rate for the trial updates is around 50%. Observables ⟨O⟩ are measured as averages of O({x_i}^{(k)}) over the ensemble of configurations, and fluctuations of O provide an estimate of the error in ⟨O⟩.
The uncertainty is given by
\Delta \langle O\rangle = \sqrt{ \frac{ \langle O^2\rangle - \langle O\rangle^2 }{ N_{\rm conf} } } .    (1.8)

This requires some care, because the error estimate is based on the assumption that the configurations are statistically independent. In practice this can be monitored by computing the auto-correlation "time" in successive measurements O({x_i}^{(k)}).
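A simple way to take autocorrelations into account is to bin successive measurements before applying equ. (1.8). The following sketch (not part of the original code; the routine name and interface are hypothetical, written in the same Fortran style as the snippets below) illustrates the idea.

subroutine binned_error(o, nconf, nbin, err)
  implicit none
  integer, intent(in)  :: nconf, nbin     ! number of measurements and bins (nbin > 1)
  real(8), intent(in)  :: o(nconf)        ! time series of measurements O({x_i}^(k))
  real(8), intent(out) :: err             ! binned error estimate of the mean
  real(8) :: ob(nbin), mean
  integer :: i, lb
  lb = nconf/nbin                         ! measurements per bin
  do i = 1, nbin                          ! average within each bin
     ob(i) = sum(o((i-1)*lb+1:i*lb))/lb
  end do
  mean = sum(ob)/nbin
  ! error of the mean computed from the bin averages, i.e. eq. (1.8) applied to bins
  err = sqrt(sum((ob-mean)**2)/(nbin*(nbin-1)))
end subroutine binned_error

Increasing the bin length until the error stops growing gives an estimate that is insensitive to autocorrelations.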
I have written a simple fortran code that implements the Metropolis algorithm for euclidean path integrals [6]. The most important part of that code is a sweep through the lattice with a Metropolis update on every site τ_j:

      do j = 1, n-1
         nhit = nhit + 1
c        local action for the current value of x(j)
         xpm  = (x(j) - x(j-1))/a
         xpp  = (x(j+1) - x(j))/a
         t    = 1.0/4.0*(xpm**2 + xpp**2)
         v    = (x(j)**2 - f**2)**2
         sold = a*(t + v)
c        trial update and local action after the update
         xnew = x(j) + delx*(2.0*ran2(iseed) - 1.0)
         xpm  = (xnew - x(j-1))/a
         xpp  = (x(j+1) - xnew)/a
         t    = 1.0/4.0*(xpm**2 + xpp**2)
         v    = (xnew**2 - f**2)**2
         snew = a*(t + v)
         dels = snew - sold
c        accept/reject step, equ. (1.7)
         p    = ran2(iseed)
         if (exp(-dels) .gt. p) then
            x(j) = xnew
            nacc = nacc + 1
         endif
      enddo

Here, sold is the local action corresponding to the initial value of x(j), and snew is the action after the trial update. The trial update is accepted if exp(-dels) is greater than the random variable p. The function ran2() generates a random number between 0 and 1, and nacc/nhit measures the acceptance rate. A typical path is shown in Fig. 1.1. An important feature of the paths in the double well potential is the presence of tunneling events. Indeed, in the semi-classical regime η ≫ 1, a typical path can be understood as Gaussian fluctuations superimposed on a series of tunneling events (instantons). The path integral method does not provide direct access to the eigenvalues of the Hamiltonian, but it can be used to compute imaginary time correlation functions
\Pi^E_n(\tau_1, \ldots, \tau_n) = \langle x(\tau_1) \cdots x(\tau_n) \rangle .    (1.9)
Note that the average is carried out with respect to the partition function in equ. (1.4). In the limit β → ∞ this corresponds to the ground state expectation value. A very important observable is the two-point function Π^E(τ) ≡ Π^E_2(0, τ). The euclidean correlation function is related to the eigenstates of the Hamiltonian via a spectral representation. This representation is obtained by inserting a complete set of states into equ. (1.9). The result is
\Pi^E(\tau) = \sum_n |\langle 0|x|n\rangle|^2 \exp(-(E_n-E_0)\tau) ,    (1.10)

where E_n is the energy of the state |n⟩. This can be written as

\Pi^E(\tau) = \int dE\, \rho(E)\, \exp(-(E-E_0)\tau) ,    (1.11)

where ρ(E) is the spectral function. In the case of the double well potential there are only bound states and the spectral function is a sum of delta-functions. Equ.
(1.10) shows that the euclidean correlation function is easy to construct once the energy eigenvalues and eigenfunctions are known. The inverse problem is well defined in principle, but numerically much more difficult. The excitation energy of the first excited state ∆ E 1 = E 1 − E 0 is easy to extract from the exponential decay of the two-point functions, but higher states are more difficult to compute. A technique for determining the spectral function from euclidean correlation functions is the maximum entropy image reconstruction method, see [7,8].
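In practice the gap E₁ − E₀ is conveniently extracted from an "effective mass". This standard estimator is not written out in the text; with lattice spacing a it reads

\Delta E_{\rm eff}(\tau) = \frac{1}{a} \log \frac{\Pi^E(\tau)}{\Pi^E(\tau+a)} ,

which approaches E₁ − E₀ at large τ (and β → ∞), where the contributions of higher states in equ. (1.10) have died out.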
The calculation of correlation functions in a Monte Carlo simulation is very straightforward. All I need to do is multiply the values of x(τ i ) for a given path, and then average over all paths:
      do ic = 1, nc
         ncor = ncor + 1
c        pick a random starting point on the lattice
         ip0  = int((n-np)*ran2(iseed))
         x0   = x(ip0)
c        accumulate <x(0)x(tau)> for np time separations
         do ip = 1, np
            x1    = x(ip0+ip)
            xcor  = x0*x1
            x2cor = xcor**2
            xcor_sum(ip)  = xcor_sum(ip)  + xcor
            xcor2_sum(ip) = xcor2_sum(ip) + xcor**2
         enddo
      enddo
The advantages of this method are that it is extremely robust, that it requires no knowledge (or preconceived notion) of what the wave function looks like, and that it can explore a very complicated configuration space. On the other hand, in the case of one-dimensional quantum mechanics, the Metropolis method is very inefficient. Using direct diagonalization in a finite basis it is not difficult to compute the energies of the first several states in the potential in equ. (1.1) with very high accuracy, ΔE/E₀ ∼ O(10⁻⁶) or better. On the other hand, using the Monte Carlo method, it is quite difficult to achieve an accuracy of O(10⁻²) for observables other than (E₁ − E₀)/E₀. The advantage of the Monte Carlo method is that the computational cost scales much more favorably in high dimensional systems, such as quantum mechanics of many particles, or quantum field theory.
The Monte Carlo method also does not directly provide the ground state energy, or the partition function and free energy at finite temperature. In quantum mechanics we can compute the ground state energy from the expectation value of the Hamiltonian H = T + V in the limit β → ∞. The expectation value of the kinetic energy is singular as a → 0, but this problem can be overcome by using the Virial theorem
\langle H \rangle = \left\langle \frac{x}{2}\,\frac{\partial V}{\partial x} + V \right\rangle .    (1.12)
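For the double well potential this estimator takes a simple explicit form (worked out here under the reading of equ. (1.12) given above, with V' = dV/dx): since V = (x²−η²)², one has xV'/2 = 2x²(x²−η²) and therefore

\langle H \rangle = \left\langle 2x^2(x^2-\eta^2) + (x^2-\eta^2)^2 \right\rangle = \left\langle (x^2-\eta^2)(3x^2-\eta^2) \right\rangle ,

which can be measured directly on the Monte Carlo paths without reference to the divergent kinetic term.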
There is no simple analog of this method in quantum field theory. A method for computing the free energy which does generalize to quantum field theory is the adiabatic switching technique. The idea is to start from a reference system for which the free energy is known and calculate the free energy difference to the real system using Monte Carlo methods. For this purpose I write the action as
S α = S 0 + α∆ S , (1.13)
where S 0 is the action of the reference system, ∆ S is defined by ∆ S = S − S 0 where S is the full action, and α can be viewed as a coupling constant. The action S α interpolates between the physical system for α = 1 and the reference system for α = 0. Integrating the relation
\partial \log Z(\alpha)/\partial\alpha = -\langle \Delta S\rangle_\alpha , I find

\log(Z(\alpha=1)) = \log(Z(\alpha=0)) - \int_0^1 d\alpha\, \langle \Delta S \rangle_\alpha ,    (1.14)
where ⟨·⟩_α is computed using the action S_α. In the case of the anharmonic oscillator it is natural to use the harmonic oscillator as a reference system. In that case the reference partition function is
Z(\alpha=0) = \sum_n \exp(-\beta E^0_n) = \frac{\exp(-\beta\omega_0/2)}{1-\exp(-\beta\omega_0)} ,    (1.15)
where ω₀ is the oscillator constant. Note that the free energy F = −T log(Z) of the anharmonic oscillator should be independent of the reference frequency ω₀. The integral over the coupling constant α can be calculated in a Monte Carlo simulation by slowly changing α from 0 to 1 during the simulation. Free energy calculations of this type play an important role in quantum chemistry, and more efficient methods for determining ΔF have been developed [9].
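In the simplest implementation the coupling constant integral in equ. (1.14) is replaced by a sum over a grid of couplings (a standard discretization, written out here for concreteness): with α_k = k/K, k = 0, …, K,

\log Z(1) \simeq \log Z(0) - \sum_{k=0}^{K-1} \frac{1}{2K} \left[ \langle \Delta S\rangle_{\alpha_k} + \langle \Delta S\rangle_{\alpha_{k+1}} \right] ,

where each expectation value is measured in a Metropolis run with action S_{α_k}. Comparing runs in which α is ramped up with runs in which it is ramped down provides a check that the switching is adiabatic.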
Quantum Chromodynamics
QCD at zero temperature and density
The rich phenomenology of strongly interacting matter is encoded in a deceptively simple Lagrangian. The fundamental fields in the Lagrangian are quark fields q^c_{αf} and gluon fields A^a_µ. Here, α = 1, …, 4 is a Dirac spinor index, c = 1, …, N_c with N_c = 3 is a color index, and f = up, down, strange, charm, bottom, top is a flavor index. Interactions in QCD are governed by the color degrees of freedom. The gluon field A^a_µ is a vector field labeled by an index a = 1, …, N_c² − 1 in the adjoint representation. The N_c² − 1 gluon fields can be combined into a matrix valued field A_µ = A^a_µ λ^a/2, where λ^a is a set of traceless, Hermitian, N_c × N_c matrices. The QCD Lagrangian is
\mathcal{L} = -\frac{1}{4} G^a_{\mu\nu} G^a_{\mu\nu} + \sum_{f}^{N_f} \bar{q}_f \left( i\gamma^\mu D_\mu - m_f \right) q_f ,    (1.16)
where G^a_{µν} is the QCD field strength tensor defined by

G^a_{\mu\nu} = \partial_\mu A^a_\nu - \partial_\nu A^a_\mu + g f^{abc} A^b_\mu A^c_\nu ,    (1.17)

the covariant derivative acting on the quark fields is

iD_\mu q = \left( i\partial_\mu + g A^a_\mu \frac{\lambda^a}{2} \right) q ,    (1.18)

and m_f is the mass of the quarks. The terms in equ. (1.16) describe the interaction between quarks and gluons, as well as nonlinear three and four-gluon interactions. Note that, except for the number of flavors and their masses, the structure of the QCD Lagrangian is completely fixed by the local SU(N_c) color symmetry.
A natural starting point for studying the phase diagram of hadronic matter is to consider the light flavors (up, down, and strange) as approximately massless, and the heavy flavors (charm, bottom, top) as infinitely massive. In this limit the QCD Lagrangian is completely characterized by two integer valued parameters, the number of colors N c = 3 and flavors N f = 3, and a single dimensionless coupling constant g. Quantum fluctuations cause the coupling constant to become scale dependent [10,11]. At one-loop order the running coupling constant is
g^2(q^2) = \frac{16\pi^2}{b_0 \log(q^2/\Lambda^2_{\rm QCD})} , \qquad b_0 = \frac{11}{3}N_c - \frac{2}{3}N_f ,    (1.19)
where q is a characteristic momentum and N f is the number of active flavors. The scale dependence of the coupling implies that, as a quantum theory, QCD is not governed by a dimensionless coupling but by a dimensionful scale, the QCD scale parameter Λ QCD . This phenomenon is known as dimensional transmutation [12].
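As a rough numerical illustration of equ. (1.19) (the numbers are chosen here for orientation and are not quoted from the text): for q = 10 GeV, Λ_QCD = 200 MeV and N_f = 5 active flavors, b₀ = 23/3 and

\alpha_s(q) = \frac{g^2(q^2)}{4\pi} = \frac{4\pi}{b_0 \log(q^2/\Lambda^2_{\rm QCD})} = \frac{4\pi}{(23/3)\,\log(2500)} \approx 0.21 ,

so the coupling is weak at large momenta, while the one-loop expression blows up as q approaches Λ_QCD and perturbation theory breaks down.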
A crucial aspect of the scale dependence of the coupling in QCD is that the effective interaction decreases as the energy or momentum scale is increased. This feature of QCD is called asymptotic freedom [10,11]. It implies that high energy interactions can be analyzed using perturbative QCD. The flip side of asymptotic freedom is anti-screening, or confinement: The effective interaction between quarks increases with distance, and quarks are permanently confined into hadrons. The absence of colored states in the spectrum implies that the use of perturbation theory is subtle, even at high energy. Quantities that can be computed perturbatively either involve a sum over many hadronic states, or allow for a factorization of perturbative interactions and non-perturbative matrix elements.
If quarks are massless then QCD observables are dimensionless ratios like m_p/Λ_QCD, where m_p is the mass of the proton. This implies that the QCD scale is not a parameter of the theory, but reflects a choice of units. In the real world QCD is part of the standard model, quarks acquire masses by electroweak symmetry breaking, and the QCD scale is fixed by the value of the coupling constant at the weak scale. Experiments determine the value of the QCD fine structure constant α_s = g²/(4π) at the position of the Z boson pole, α_s(m_Z) = 0.1184 ± 0.0007 [13]. The numerical value of Λ_QCD depends on the renormalization scheme used in computing quantum corrections to the coupling constant. Physical observables, as well as the value of b₀, are independent of this choice. In the modified minimal subtraction (MS-bar) scheme the scale parameter is Λ_QCD ≃ 200 MeV [13].
A schematic phase diagram of QCD is shown in Fig. 1.2. In this figure I show the phases of strongly interacting matter as a function of the temperature T and the baryon chemical potential µ. The chemical potential µ controls the baryon density ρ, defined as 1/3 times the number density of quarks minus the number density of anti-quarks. In the following I will explain that the basic structure of the phase diagram is determined by asymptotic freedom and the symmetries of QCD. For more detailed reviews see [14][15][16].
At small temperature and chemical potential the interaction between quarks is dominated by large distances and the effective coupling is strong. This implies that quarks and gluons are permanently confined in color singlet hadrons, with masses of order Λ_QCD. The proton, for example, has a mass of m_p = 935 MeV. A simplistic view of the structure of the proton is that it is a bound state of three constituent quarks with effective masses m_Q ≃ m_p/3 ∼ Λ_QCD.
These masses should be compared to the bare up and down quark masses which are of the order 10 MeV.
As a consequence of strong interactions between virtual quarks and anti-quarks in the QCD ground state a vacuum condensate of q̄q pairs is generated, ⟨q̄q⟩ ≃ −Λ³_QCD [17][18][19]. This condensate spontaneously breaks the approximate SU(3)_L × SU(3)_R chiral symmetry of the QCD Lagrangian, and the corresponding Goldstone boson excitations in the spectrum of QCD are the π, K and η mesons. The SU(3)_L × SU(3)_R symmetry is explicitly broken by quark masses, and the mass of the charged pion is m_π = 139 MeV. This scale can be compared to the mass of the lightest non-Goldstone particle, the rho meson, which has a mass m_ρ = 770 MeV.

Fig. 1.2 Schematic phase diagram of QCD as a function of temperature T and baryon chemical potential µ. The quark gluon plasma phase is labeled QGP, and CFL refers to the color superconducting phase that is predicted to occur at asymptotically large chemical potential. The critical endpoints of the chiral and nuclear liquid-gas phase transitions are denoted by red and black points, respectively. The chiral pseudo-critical line associated with the crossover transition at low temperature is shown as a dashed line. The green arrows indicate the regions of the phase diagram that can be studied by the experimental heavy ion programs at RHIC and the LHC.
At low energy Goldstone bosons can be described in terms of an effective field theory in which composite π, K and η particles are treated as fundamental fields. The Goldstone boson field can be parametrized by unitary matrices
\Sigma = \exp(i\lambda^a \phi^a / f_\pi) ,    (1.20)

where λ^a are the Gell-Mann matrices for SU(3) flavor and f_π = 93 MeV is the pion decay constant. For example, π⁰ = φ^3 and π± = (φ^1 ± iφ^2)/2 describe the neutral and charged pion. Other components of φ^a describe the neutral and charged kaons, as well as the eta. The eta prime, which is the SU(3)_F singlet meson, acquires a large mass because of the axial anomaly, and is not a Goldstone boson. The axial anomaly refers to the fact that the flavor singlet axial current, which is conserved in massless QCD at the classical level, is not conserved if quantum effects are taken into account. The divergence of the axial current A_µ = q̄γ_µγ_5 q is
\partial_\mu A^\mu = \frac{g^2 N_f}{32\pi^2}\, \epsilon^{\mu\nu\alpha\beta} G^a_{\mu\nu} G^a_{\alpha\beta} .    (1.21)
The right hand side is the topological charge density, which I will discuss in more detail in Sect. 1.4.3. At low energy the effective Lagrangian for the chiral field can be organized as a derivative expansion in gradients of Σ . Higher derivative terms describe interactions that scale as either the momentum or the energy of the Goldstone boson. Since Goldstone bosons are approximately massless, the energy is of the same order of magnitude as the momentum. We will see that the expansion parameter is p/(4π f π ). At leading order in (∂ / f π ) there is only one possible term which is consistent with chiral symmetry, Lorentz invariance and the discrete symmetries C, P, T . This is the Lagrangian of the non-linear sigma model
\mathcal{L} = \frac{f_\pi^2}{4} {\rm Tr}\left[ \partial_\mu \Sigma\, \partial^\mu \Sigma^\dagger \right] + \left[ B\, {\rm Tr}(M\Sigma^\dagger) + h.c. \right] + \ldots ,    (1.22)
where the term proportional to B takes into account explicit symmetry breaking. Here, M = diag(m u , m d , m s ) is the quark mass matrix and B is a low energy constant that I will fix below. First, I will show that the parameter f π controls the pion decay amplitude. For this purpose I have to gauge the weak SU(2) L symmetry of the non-linear sigma model. As usual, this is achieved by promoting the derivative to a gauge covariant operator ∇ µ Σ = ∂ µ Σ + ig w W µ Σ where W µ is the charged weak gauge boson and g w is the weak coupling constant. The gauged non-linear sigma model gives a pion-W boson interaction
\mathcal{L} = g_w f_\pi\, W^\pm_\mu\, \partial^\mu \pi^\mp .    (1.23)

This term contributes to the amplitude A for the decay π± → W± → e±ν_e. I get A = g_w f_π q_µ, where q_µ is the momentum of the pion. This result can be compared to the standard definition of f_π in terms of the weak axial current matrix element of the pion, ⟨0|A^a_µ|π^b⟩ = f_π q_µ δ^{ab}. This comparison shows that the coefficient of the kinetic term in the non-linear sigma model is indeed the weak decay constant of the pion.
In the ground state Σ = 1 and the ground state energy is E_vac = −2B Tr[M]. Using the relation ⟨q̄q⟩ = ∂E_vac/(∂m) we find ⟨q̄q⟩ = −2B. Fluctuations around Σ = 1 determine the masses of the Goldstone bosons. The pion mass satisfies the Gell-Mann-Oakes-Renner relation (GMOR) [17]

m_\pi^2 f_\pi^2 = -(m_u + m_d)\, \langle \bar{q}q \rangle ,    (1.24)

and analogous relations exist for the kaon and eta masses. This result shows the characteristic non-analytic dependence of the pion mass on the quark masses, m_π ∼ √m_q.
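Equ. (1.24) also provides a quick estimate of the size of the condensate (an order-of-magnitude check added here, using the numbers quoted in the surrounding text): with m_π = 139 MeV, f_π = 93 MeV and m_u + m_d of order 10 MeV,

\langle \bar{q}q\rangle = -\frac{m_\pi^2 f_\pi^2}{m_u+m_d} \approx -\frac{(139\ {\rm MeV})^2\,(93\ {\rm MeV})^2}{10\ {\rm MeV}} \approx -(250\ {\rm MeV})^3 ,

consistent with the statement ⟨q̄q⟩ ≃ −Λ³_QCD made above.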
QCD at finite temperature
The structure of QCD at high temperature can be analyzed using the assumption that quarks and gluons are approximately free. We will see that this assumption is internally consistent, and that it is confirmed by lattice calculations. If the temperature is large then quarks and gluons have thermal momenta p ∼ T ≫ Λ_QCD. Asymptotic freedom implies that these particles are weakly interacting, and that they form a plasma of mobile color charges, the quark gluon plasma (QGP) [20,21]. The pressure of a gas of quarks and gluons is
P = \frac{\pi^2 T^4}{90} \left[ 2\left(N_c^2-1\right) + 4 N_c N_f \cdot \frac{7}{8} \right] .    (1.25)

This is the Stefan-Boltzmann law, where 2(N_c² − 1) is the number of bosonic degrees of freedom, and 4N_c N_f is the number of fermionic degrees of freedom. The factor 7/8 takes into account the difference between Bose and Fermi statistics. The pressure of a QGP is parametrically much bigger than the pressure of a pion gas, indicating that the QGP at high temperature is thermodynamically stable.
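To make the comparison quantitative (a small worked example added here, treating the pion gas as three massless degrees of freedom): for N_c = 3 and N_f = 3, equ. (1.25) counts 2(N_c² − 1) = 16 gluon and 4N_cN_f · 7/8 = 31.5 effective quark degrees of freedom, so

P_{\rm QGP} \simeq 47.5\,\frac{\pi^2 T^4}{90} \approx 5.2\, T^4 , \qquad P_{\pi} \simeq 3\,\frac{\pi^2 T^4}{90} \approx 0.33\, T^4 ,

a factor of roughly 16 in the pressure at the same temperature.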
The argument that the QGP at asymptotically high temperature is weakly coupled is somewhat more subtle than it might appear at first glance. If two quarks or gluons in the plasma interact via large angle scattering then the momentum transfer is large, and asymptotic freedom implies that the effective coupling is weak. However, the color Coulomb interaction is dominated by small angle scattering, and it is not immediately clear why the effective interaction that governs small angle scattering is weak. The basic observation is that in a high temperature plasma there is a large thermal population (n ∼ T³) of mobile color charges that screen the interaction at distances beyond the Debye length r_D ∼ 1/(gT). We also note that even in the limit T ≫ Λ_QCD the QGP contains a non-perturbative sector of static magnetic color fields [22]. This sector of the theory, corresponding to energies below the magnetic screening scale m_M ≲ g²T, is strongly coupled, but it does not contribute to thermodynamic or transport properties of the plasma in the limit T → ∞.
The quark gluon plasma exhibits neither color confinement nor chiral symmetry breaking. This implies that the high temperature phase must be separated from the low temperature hadronic phase by a phase transition. The order of this transition is very sensitive to the values of the quark masses. In QCD with massless u, d and infinitely massive s, c, b, t quarks the transition is second order [23]. In the case of massless (or sufficiently light) u, d, s quarks the transition is first order. Lattice simulations show that for realistic quark masses, m_u ≃ m_d ≃ 10 MeV and m_s ≃ 120 MeV, the phase transition is a rapid crossover [24,25]. The transition temperature, defined in terms of the chiral susceptibility, is T_c ≃ 151 ± 3 ± 3 MeV [26,27], which is consistent with the result 154 ± 9 MeV reported in [25,28].
The phase transition is expected to strengthen as a function of chemical potential, so that there is a critical baryon chemical potential µ at which the crossover turns into a first order phase transition [29]. This critical point is the endpoint of the chiral phase transition. Because of the fermion sign problem, which I will discuss in Sect. 1.4.4, it is very difficult to locate the critical endpoint using simulations on the lattice. Model calculations typically predict the existence of a critical point, but do not constrain its location. A number of exploratory lattice calculations have been performed [30][31][32][33][34][35], but at the time I am writing these notes it has not been demonstrated conclusively that the transition strengthens with increasing baryon chemical potential [36]. The critical endpoint is important because, with the exception of the endpoint of the nuclear liquid-gas transition, it is the only thermodynamically stable point in the QCD phase diagram at which the correlation length diverges. This means that the critical endpoint may manifest itself in heavy ion collisions in terms of enhanced fluctuation observables [37].
High baryon density QCD
The origin of the phase diagram, T = µ = 0, corresponds to the vacuum state of QCD. If we stay on the T = 0 line and increase the chemical potential µ then there is no change initially. At zero temperature the chemical potential µ is the energy required to add a baryon to the system, and QCD has a large mass gap for baryonic states. The first non-vacuum state we encounter along the T = 0 axis of the phase diagram is nuclear matter, a strongly correlated superfluid composed of approximately non-relativistic neutrons and protons. Nuclear matter is self-bound, and the baryon density changes discontinuously at the onset transition, from ρ = 0 to nuclear matter saturation density ρ = ρ₀ ≃ 0.15 fm⁻³. The discontinuity decreases as nuclear matter is heated, and the nuclear liquid-gas phase transition ends in a critical point at T ≃ 18 MeV and ρ ≃ ρ₀/3 [38][39][40]. Hot hadronic matter can be described quite accurately as a weakly interacting gas of hadronic resonances. Empirically, the density of states for both mesons and baryons grows exponentially. A system of this type is called a Hagedorn gas, and it is known that a Hagedorn gas has a limiting temperature. It is also known that an exponential density of states can be realized using the string model of hadronic resonances.
In the regime µ ≫ Λ_QCD we can use arguments similar to those in the limit T ≫ Λ_QCD to establish that quarks and gluons are weakly coupled. At low temperature non-interacting quarks form a Fermi surface, where all states below the Fermi energy E_F ≃ µ/3 are filled, and all states above the Fermi energy are empty. Interactions take place near the Fermi surface, and the corresponding interaction is weak. The main difference between cold quark matter and the hot QGP is that the large density of states near the quark Fermi surface implies that even weak interactions can cause qualitative changes in the ground state of dense matter. In particular, attractive interactions between pairs of quarks (p_F, −p_F) on opposite sides of the Fermi surface lead to color superconductivity and the formation of a ⟨qq⟩ diquark condensate.
Since quarks carry many different quantum numbers, color, flavor, and spin, a variety of superconducting phases are possible. The most symmetric of these, known as the color-flavor locked (CFL) phase, is predicted to exist at asymptotically high density [41,42]. In the CFL phase the diquark order parameter is
\langle q^A_{\alpha f}\, q^B_{\beta g} \rangle = (C\gamma_5)_{\alpha\beta}\, \epsilon^{ABC} \epsilon_{fgh}\, \delta^h_C\, \Phi ,    (1.26)

where Cγ₅ is an anti-symmetric (spin zero) Dirac matrix, and Φ determines the magnitude of the gap near the Fermi surface. This order parameter has a number of interesting properties.
It breaks the U(1) symmetry associated with baryon number, leading to superfluidity, and it breaks the chiral SU(3) L × SU(3) R symmetry. Except for Goldstone modes the spectrum is fully gapped. Fermions acquire a BCS-pairing gap, and gauge fields are screened by the color Meissner effect. This implies that the CFL phase, even though it is predicted to occur in a very dense liquid of quarks, exhibits many properties of superfluid nuclear matter.
The CFL order parameter describes equal pair-condensates ⟨ud⟩ = ⟨us⟩ = ⟨ds⟩ of all three light quark flavors. As the density is lowered effects of the non-zero strange quark mass become important, and less symmetric phases are predicted to appear [14]. Phases that have been theoretically explored include Bose condensates of pions and kaons, hyperon matter, states with inhomogeneous quark-anti-quark or diquark condensates, and less symmetric color superconducting phases. The regime of moderate baryon chemical potential in the phase diagram shown in Fig. 1.2 is largely conjecture. Empirical evidence shows that at low µ there is a nuclear matter phase with broken chiral symmetry and zero strangeness, and weak coupling calculations indicate that at high µ we find the CFL phase with broken chiral symmetry but non-zero strangeness. In principle the two phases could be separated by a single onset transition for strangeness [43,44], but model calculations support a richer picture in which one or more first order transitions intervene, as indicated in Fig. 1.2.
Lattice QCD
The Wilson action
Symmetry arguments and perturbative calculations can be used to establish general features of the QCD phase diagram, but quantitative results can only be obtained using numerical calculations based on lattice QCD. The same is true for the masses of hadrons, their proper-ties, and interactions. Lattice QCD is based on the euclidean path integral representation of the partition function, see the contribution by Hatsuda and [45][46][47][48][49] for introductions. More detailed reviews of the lattice field theory approach to hot and dense QCD can be found in [50,51]. The euclidean partition function for QCD is
Z(T, \mu, V) = \int DA_\mu\, D\bar{q}_f\, Dq_f\, \exp(-S_E) ,    (1.27)

where S_E is the euclidean action
S_E = -\int_0^\beta d\tau \int_V d^3x\, \mathcal{L}_E ,    (1.28)

\mathcal{L}_E(\mu) = \mathcal{L}_E(0) + \mu\, \bar{q}_f \gamma_0 q_f .    (1.29)

In his pioneering work Wilson proposed to discretize the action on a N_τ × N_σ³ space-time lattice with lattice spacings a_τ and a_σ [52]. In many cases a_σ = a_τ = a, but we will encounter an exception in Sect. 1.5.4 when we discuss the Hamiltonian formulation of the theory. At finite temperature we have to ensure that the spatial volume is larger than the inverse temperature, L > β. Here, β = N_τ a_τ, L = N_σ a_σ, and V = L³ is the volume. Thermodynamic quantities are determined by taking derivatives of the partition function. The energy and baryon density are given by
E = -\frac{1}{V} \left. \frac{\partial \log Z}{\partial \beta} \right|_{\beta\mu} ,    (1.30)

\rho = \frac{1}{\beta V} \left. \frac{\partial \log Z}{\partial \mu} \right|_{\beta} .    (1.31)
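For orientation on the scales involved (numbers chosen here purely for illustration): at T = 150 MeV the inverse temperature is β = 1/T ≃ 1.3 fm, so a lattice with temporal spacing a_τ = 0.1 fm needs

N_\tau = \frac{1}{a_\tau T} \simeq \frac{1.3\ {\rm fm}}{0.1\ {\rm fm}} \simeq 13

time slices, and the condition L > β then requires N_σ a_σ to exceed about 1.3 fm.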
The discretized action for the gauge fields originally suggested by Wilson is given by
S_W = -\frac{2}{g^2} \sum_n \sum_{\mu<\nu} {\rm Re\,Tr} \left[ W_{\mu\nu}(n) - 1 \right] ,    (1.32)
where W µν (n) is the plaquette, the product of gauge links around an elementary loop on the lattice,
W_{\mu\nu}(n) = U_\mu(n)\, U_\nu(n+\hat\mu)\, U_{-\mu}(n+\hat\mu+\hat\nu)\, U_{-\nu}(n+\hat\nu) .    (1.33)
Here, n = (n_τ, n_i) labels lattice sites and µ̂ is a unit vector in the µ-direction. The gauge links U_µ(n) are SU(N_c) matrices. We can think of the gauge links as line integrals
U_\mu(n) = \exp(i a A_\mu(n)) ,    (1.34)

and of the plaquettes as fluxes

W_{\mu\nu}(n) = \exp(i a^2 G_{\mu\nu}(n)) ,    (1.35)

but the fundamental variables in the path integral are the (compact) group variables U_µ, not the (non-compact) gauge potentials A_µ. In particular, the path integral in pure gauge QCD takes the form
Z = \int \prod_{n,\mu} dU_\mu(n)\, \exp(-S_W) ,    (1.36)

where dU is the Haar measure on SU(N_c). The Haar measure describes the correct integration measure for the gauge group. Some group integrals are discussed by Hatsuda, but part of the beauty of the Metropolis method is that we never have to explicitly construct dU_µ(n).
Using equ. (1.34) we can check that the Wilson action reduces to continuum pure gauge theory in the limit a → 0. We note that the gauge invariance of QCD is maintained exactly, even on a finite lattice, but that Lorentz invariance is only restored in the continuum limit. We also observe that classical scale invariance implies that the massless QCD action is independent of a. The continuum limit is taken by adjusting the bare coupling at the scale of the lattice spacing according to asymptotic freedom, see equ.
(1. 19). In practice the lattice spacing is not small enough to ensure the accuracy of this method, and more sophisticated scale setting procedures are used [50,51]. Monte Carlo simulations of the path integral equ. (1.36) can be performed using the Metropolis algorithm explained in Sect. 1.2:
• Initialize the link variables with random SU(N c ) matrices. A simple algorithm is based on writing U in terms of N c complex row vectors u i . Take each vector to be random unit vector and then use the Gram-Schmidt method to orthogonalize the different vectors, u i · u * j = δ i j . This ensures that U is unitary and distributed according to the SU(N c ) Haar measure [53].
• Update the links with Metropolis steps and measure expectation values of Wilson loops ⟨W(C)⟩. The heavy quark potential is obtained from

V(R) = -\lim_{T\to\infty} \frac{1}{T} \log\left[ \langle W(\mathcal{C})\rangle \right] ,    (1.38)

where R × T is the area of a rectangular loop C.
• Tune to the continuum limit a → 0 by adjusting the coupling constant according to the asymptotic freedom formula equ. (1.19). Note that the Lambda parameter for the lattice regulator is quite small, Λ_lat = Λ_MS-bar/28.8 [54]. Also observe that we have to increase N_σ, N_τ to keep the physical volume constant as a → 0. Indeed, once the continuum limit a → 0 is reached we have to study the infinite volume (thermodynamic) limit V → ∞. This is more difficult than it appears, because a → 0, corresponding to g → 0, is a critical point of the partition function (1.36), and simulations exhibit critical slowing down.
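The initialization of the link variables described in the first bullet above can be sketched as follows. The routine below is a hypothetical helper, not part of the code quoted in this chapter; note also that for a genuinely Haar-uniform matrix the components of the initial vectors should be drawn isotropically (e.g. as Gaussian random numbers) rather than uniformly, as is done here only to keep the sketch short.

subroutine random_su3(u)
  implicit none
  complex(8), intent(out) :: u(3,3)
  real(8)    :: re(3), im(3)
  complex(8) :: proj
  integer    :: i
  ! draw two random complex row vectors (uniform components; see caveat above)
  do i = 1, 2
     call random_number(re)
     call random_number(im)
     u(i,:) = cmplx(2.d0*re-1.d0, 2.d0*im-1.d0, kind=8)
  end do
  ! normalize the first row
  u(1,:) = u(1,:)/sqrt(real(sum(u(1,:)*conjg(u(1,:)))))
  ! Gram-Schmidt: orthogonalize the second row against the first, then normalize
  proj   = sum(u(2,:)*conjg(u(1,:)))
  u(2,:) = u(2,:) - proj*u(1,:)
  u(2,:) = u(2,:)/sqrt(real(sum(u(2,:)*conjg(u(2,:)))))
  ! third row fixed by unitarity and det(u) = 1
  u(3,1) = conjg(u(1,2)*u(2,3) - u(1,3)*u(2,2))
  u(3,2) = conjg(u(1,3)*u(2,1) - u(1,1)*u(2,3))
  u(3,3) = conjg(u(1,1)*u(2,2) - u(1,2)*u(2,1))
end subroutine random_su3

In an actual simulation such a routine would be called once per link to set up the initial (hot) gauge configuration.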
Metropolis simulations with the pure gauge Wilson action are very simple and robust.
As an illustration I provide a simple Z₂ lattice gauge theory code written by M. Creutz in the appendix. Reasonable results for the heavy quark potential can be obtained on fairly coarse lattices, for example an 8⁴ lattice with a spacing a ≃ 0.25 fm [55]. However, accurate results with controlled error bars require significant computational resources. In practice the perturbative relation between a and g² is only valid on very fine lattices, and the scale setting has to be done non-perturbatively. Also, determining the spectrum of pure gauge theory is difficult. Purely gluonic states, glueballs, are quite heavy, with masses in the range m ≃ 1.6 GeV and higher. This implies that gluonic correlation functions are short range, requiring a resolution a ≃ 0.1 fm or better. Finally, simulations on fine lattices are affected by critical slowing down. Indeed, finding an efficient method for updating gauge fields on very fine lattices, analogous to the cluster algorithms for spin models [56], is an important unsolved problem.
Fermions on the lattice
The main difficulty in lattice QCD is related to the presence of light fermions. The fermion action is of the form
S_F = a^4 \sum_{m,n} \bar{q}(m)\, D_{mn}\, q(n) .    (1.39)

Formally, the integration over the fermion fields can be performed exactly, resulting in the determinant of the Dirac operator det(D(A_µ, µ)). Several methods exist for discretizing the Dirac operator D, and for sampling the determinant. Different discretization schemes differ in the degree to which chiral symmetry is maintained on a finite lattice. The original formulation due to Wilson [52] preserves no chiral symmetry, the staggered fermion scheme [57] maintains a subset of the full chiral symmetry, while the domain wall [58] and overlap methods [59] aim to preserve the full chiral symmetry on a discrete lattice.
The central difficulty in implementing these methods is that the fermion determinant is a very non-local object. While updating a single gauge link only requires recalculating a small number of plaquettes (6 in d = 4 dimensions) in the Wilson action, recalculating the fermion action requires computing the determinant of a (very sparse) matrix of size (N_τ N_σ³) × (N_τ N_σ³) or larger. This is clearly impractical. Fermion algorithms rely on a number of tricks. The first is the observation that the Dirac operator has a property called γ₅-hermiticity, γ₅Dγ₅ = D†, which implies that det(D) is real. The determinant of a two-flavor theory is then real and positive.
This allows us to rewrite the fermion determinant as a path integral over a bosonic field with a non-local but positive action
\det(D_u)\, \det(D_d) = \det(DD^\dagger) = \int D\phi\, D\phi^\dagger\, \exp\left( -\phi^\dagger (DD^\dagger)^{-1} \phi \right) .    (1.40)
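Equ. (1.40) is just the standard Gaussian integral for a complex bosonic field, recalled here for completeness: for a positive definite matrix M = DD†,

\int D\phi\, D\phi^\dagger\, \exp\left( -\phi^\dagger M^{-1} \phi \right) \;\propto\; \det(M) ,

so the non-local determinant is traded for a bosonic integral whose action involves (DD†)⁻¹; the price is that every evaluation of this action requires solving a large sparse linear system.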
The path integral over the pseudofermion field φ can be sampled using a combination of deterministic methods like molecular dynamics and stochastic methods such as the Metropolis algorithm. These combined algorithms are known as Hybrid Monte Carlo (HMC) methods.
Codes that implement the HMC algorithm for pseudofermions are significantly more complicated than the Metropolis algorithm for the pure gauge Wilson action discussed above, and I refer the interested reader to the more specialized literature [60]. I also note that since these algorithms involve the calculation of D⁻¹ the computational cost increases as the quark masses are lowered. The calculation of correlation functions also differs from the bosonic case. Consider, for example, an operator with the quantum numbers of a charged pion, J_π(x) = ū_a(x)γ₅d_a(x). Since the fermion action is quadratic the correlation function in a given gauge configuration can be computed exactly in terms of the fermion propagator. The full correlation function is
\Pi_\pi(x) = \langle J_\pi(x)\, J_\pi(0)\rangle = \left\langle {\rm Tr}\left[ S(x,0)\, \gamma_5\, S(0,x)\, \gamma_5 \right] \right\rangle ,    (1.41)
where S(x,y) = ⟨x|D⁻¹|y⟩ is the fermion propagator, and we have assumed exact isospin symmetry so that the propagator of the up quark is equal to the propagator of the down quark. Note that the interaction between quarks is encoded in the average over all gauge fields. The proton correlation function can be constructed in the same way from a baryon current J_α(x) = ε_{abc} (u_a(x) Cγ_µ u_b(x)) (γ^µγ₅ d_c(x))_α. The correlation function is

\Pi_{\alpha\beta}(x) = 2\,\epsilon_{abc}\,\epsilon_{a'b'c'} \left\langle
   \left( \gamma_\mu\gamma_5\, S_{cc'}(0,x)\, \gamma_\nu\gamma_5 \right)_{\alpha\beta}\,
   {\rm Tr}\left[ \gamma^\mu S_{aa'}(0,x)\, \gamma^\nu C (S_{bb'}(0,x))^T C \right]
\right\rangle .    (1.42)
Note that the meson correlation function involves one forward and one backward going propagator, whereas the propagators in the baryon correlation function are all forward going.
A difficulty arises when we consider flavor singlet q̄q currents such as J_η = (ū_a(x)γ₅u_a(x) + d̄_a(x)γ₅d_a(x))/√2, which has the quantum numbers of the η meson. We find

\Pi_\eta(x) = \langle J_\eta(x)\, J_\eta(0)\rangle
 = \left\langle {\rm Tr}\left[ S(x,0)\gamma_5 S(0,x)\gamma_5 \right] \right\rangle
 - 2 \left\langle {\rm Tr}\left[ S(x,x)\gamma_5 \right] {\rm Tr}\left[ S(0,0)\gamma_5 \right] \right\rangle ,    (1.43)

which involves propagators S(x,x) that loop back to the same point. These contributions are known as quark-line disconnected diagrams, and are difficult to treat numerically, see [61] for a recent discussion.
The QCD vacuum
It is natural to hope that lattice QCD can provide us with an intuitive picture of what the QCD vacuum looks like, similar to the picture of the quantum mechanical ground state shown in Fig. 1.1. This turns out to be more complicated, for a number of reasons. The first is that the field in QCD is a SU(3) matrix, which is hard to visualize. The second, more important, problem is related to quantum fluctuations. In QCD there is no obvious separation of scales that would allow us to clearly separate perturbative fluctuations from large semi-classical fluctuations. This has led to the idea to eliminate short range fluctuations by some kind of filtering or smoothing algorithm. The simplest of these is known as cooling [63]. In the cooling method we modify the Metropolis algorithm so that only updates that reduce the action are accepted. Since the update algorithm is local, this will tend to eliminate small structures but preserve larger objects. A modern version of cooling is gradient flow [64]. In the gradient flow method we continue the gauge fields to a 5th "time" dimension. In this direction the fields satisfy a differential equation
\partial_\tau A_\mu = D_\nu G_{\mu\nu} ,    (1.44)

where A_µ(τ = 0) is the original gauge field configuration. Under the flow, short distance fluctuations are smoothed out while extended semi-classical structures survive much longer. A quantity of particular interest in this context is the topological charge

Q_{\rm top} = \int d^4x\, q(x) , \qquad q(x) = \frac{g^2}{64\pi^2}\, \epsilon^{\mu\nu\alpha\beta} G^a_{\mu\nu} G^a_{\alpha\beta} .    (1.45)

The simplest configurations carrying non-zero topological charge are instantons, semi-classical solutions of the euclidean equations of motion with Q_top = ±1.
Exact higher charge solutions exist, but the QCD vacuum is dominated by configurations with both instantons and anti-instantons. These gauge field configurations are only approximate solutions of the equations of motion [66]. Under cooling or gradient flow instantons and anti-instantons will eventually annihilate and evolve to an exact multi-instanton solution with Q_top = N_I − N_A, where N_{I,A} are the numbers of (anti)instantons. However, the N_I + N_A topological objects are preserved for flow times that are much longer than the decay time of ordinary quantum fluctuations, and the total number of well separated instantons and anti-instantons can be determined. The average topological charge is zero, but the pure gauge vacuum is characterized by a non-zero topological susceptibility
\chi_{\rm top} = \frac{1}{V} \langle Q^2_{\rm top} \rangle ,    (1.46)

where V is the euclidean four-volume. The topological charge can be determined using the naive lattice discretization of equ. (1.45), but this operator is very noisy, and in general not an integer. This problem can be addressed using the cooling or gradient flow algorithms discussed above. Recent lattice calculations based on these methods give χ_top = (190 ± 5 MeV)⁴ [67,68]. A simple picture of the QCD vacuum which is consistent with this value is the dilute instanton liquid model, which assumes that the topological susceptibility is determined by Poisson fluctuations in an ensemble of instantons and anti-instantons with an average density
(N_I + N_A)/V ≃ 1 fm⁻⁴ [66]. This is an approximate picture, and more complicated configurations involving monopoles and fractional charges are needed to understand the large N_c limit and the role of confinement [69]. Another important development is the use of fermionic methods to analyze the vacuum structure of QCD. In a given gauge configuration the quark propagator can be written as
S(x,y) = \sum_\lambda \frac{\psi_\lambda(x)\, \psi^\dagger_\lambda(y)}{\lambda + im} ,    (1.47)
where ψ λ is an eigenvector of the Dirac operator with eigenvalue λ : Dψ λ = (λ + im)ψ λ . Note that this is not how propagators are typically determined in lattice QCD, because the calculation of the complete spectrum is numerically very expensive. Gamma five hermiticity implies that eigenvalues come in pairs ±λ . The quark condensate is given by
\langle \bar{q}q \rangle = -i \int d^4x\, {\rm Tr}\left[ S(x,x) \right] = -\sum_{\lambda>0} \frac{2m}{\lambda^2+m^2} .    (1.48)
Here, I have ignored the contribution from exact zero modes because the density of zero modes is suppressed by m^{N_f}. This factor comes from the determinant in the measure. If we were to ignore the determinant (this is called the quenched approximation), then the quark condensate would diverge as 1/m. We observe that a finite value of the quark condensate in the chiral limit m → 0 requires an accumulation of eigenvalues near zero. This can be made more explicit by introducing the density of states
\rho(\nu) = \sum_{\lambda\ge 0} \delta(\lambda-\nu) .    (1.49)

The chiral condensate in the thermodynamic and chiral limits is given by
\langle \bar{q}q \rangle = -\pi\, \rho(0) .    (1.50)

This is known as the Banks-Casher relation [70]. Note that it is essential to take the thermodynamic V → ∞ limit before the chiral limit m → 0.
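The step from equ. (1.48) to equ. (1.50) can be made explicit (a short derivation added here, following directly from the definitions above): replacing the eigenvalue sum by an integral over the density of states (1.49),

\langle \bar{q}q\rangle = -\int_0^\infty d\nu\, \rho(\nu)\, \frac{2m}{\nu^2+m^2} \;\longrightarrow\; -\pi\,\rho(0) \qquad (m\to 0) ,

since ∫₀^∞ dν 2m/(ν²+m²) = π and the weight concentrates at ν = 0 as m → 0. This is only meaningful if the V → ∞ limit has first produced a smooth density of eigenvalues near zero, which is the reason for the order of limits noted above.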
Exact zero modes of the Dirac operator are related to topology. The Dirac operator has one left handed zero mode in the field of an instanton, and a right handed zero mode in the field of an anti-instanton. This is consistent with the Atiyah-Singer index theorem, which states that the topological charge is equal to the index of the Dirac operator, the difference between the number of left and right handed zero modes, Q top = N f (n L − n R ). These results suggest that it is possible to give a purely fermionic definition of the topological charge density.
On the lattice, this can be achieved for a class of Dirac operators that satisfy the Ginsparg-Wilson relation [71]

   D γ_5 + γ_5 D = a D γ_5 D ,
(1.51)
where a is the lattice spacing. In the continuum limit we recover the relation Dγ_5 + γ_5 D = 0 for the massless Dirac operator. The important observation is that the fermionic topological density

   q_f(n) = (1/(2a³)) tr_CD[ γ_5 D(n, n) ] ,   (1.52)

where tr_CD is a color-Dirac trace, satisfies the index theorem

   Q_top = a⁴ Σ_n q_f(n)   (1.53)

on a discrete lattice. Fig. 1.3 shows the absolute square of q_f(x) constructed from low lying eigenmodes of the Dirac operator. We observe that fermionic operators can indeed be used to probe the topological content of the QCD vacuum directly, without the need for filtering or smoothing.
The existence of zero modes implies that the topological susceptibility is zero if at least one quark flavor is massless. This is because the path integral measure contains the fermion determinant, which vanishes if m = 0 and Q_top ≠ 0. We can be more precise using the chiral lagrangian equ. (1.22). In order to keep track of topology we add to the QCD action a topological term S_θ = iθ Q_top. Then the topological susceptibility is given by the second derivative of the free energy with respect to θ. Since every zero mode of the Dirac operator contributes a factor det(M) to the partition function we know that θ enters the effective lagrangian in the combination θ + arg(det(M)). The vacuum energy is determined by

   V = −B Tr[ M e^{iθ/N_f} Σ† + h.c. ] ,   (1.54)

It is tempting to think that exact zero modes, governed by topology, and approximate zero modes, connected to chiral symmetry breaking, are related. This is the basis of the instanton liquid model [66]. In the instanton liquid model we consider an ensemble of instantons and anti-instantons with no (or small) net topology. The exact zero modes of individual instantons are lifted, and form a zero mode zone. The density of eigenvalues in the zero mode zone determines the chiral condensate via the Banks-Casher relation. This model predicts the correct order of magnitude of ⟨q̄q⟩, but the calculation cannot be systematically improved because chiral symmetry breaking requires strong coupling. Recently, we showed that the connection of chiral symmetry breaking, instantons and monopoles can be made precise in a certain limit of QCD. The idea is to compactify QCD on R³ × S¹, where the size of the circle is much smaller than Λ⁻¹_QCD, and the fermions satisfy non-thermal (twisted) boundary conditions [72].
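Returning to the θ dependence of the vacuum energy in equ. (1.54), the following small Python sketch (my own parametrization: two flavors, Σ = diag(e^{iφ}, e^{−iφ}), units with B = 1) minimizes the vacuum energy over φ for each θ and extracts the topological susceptibility from the curvature at θ = 0. It demonstrates numerically that χ_top follows the reduced quark mass and vanishes when one quark becomes massless.

import numpy as np
from scipy.optimize import minimize_scalar

def vacuum_energy(theta, mu, md):
    """Minimum over phi of V = -2B[ mu cos(theta/2 - phi) + md cos(theta/2 + phi) ], B = 1."""
    def V(phi):
        return -2.0 * (mu * np.cos(theta / 2 - phi) + md * np.cos(theta / 2 + phi))
    return minimize_scalar(V, bounds=(-np.pi, np.pi), method='bounded').fun

def chi_top(mu, md, eps=1e-3):
    """Second derivative of the vacuum energy at theta = 0 (finite differences)."""
    e0 = vacuum_energy(0.0, mu, md)
    return (vacuum_energy(eps, mu, md) - 2 * e0 + vacuum_energy(-eps, mu, md)) / eps**2

for mu, md in [(0.05, 0.05), (0.02, 0.05), (0.0, 0.05)]:
    analytic = 2 * mu * md / (mu + md)      # leading-order chiral Lagrangian result
    print(f"m_u={mu:.3f} m_d={md:.3f}  chi_top={chi_top(mu, md):.5f}  2 m_u m_d/(m_u+m_d)={analytic:.5f}")

The printed values agree with 2B m_u m_d/(m_u + m_d) and go to zero for m_u → 0, which is the statement that χ_top vanishes in the presence of a massless quark.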
Lattice QCD at finite baryon density
In section 1.4.2 I discussed some of the difficulties that appear when we discretize the Dirac operator. A separate, more serious, issue with fermions is that for µ ≠ 0 the Dirac operator does not satisfy γ_5-hermiticity. This implies that the fermion determinant is no longer real, and that standard importance sampling methods fail. This is the "sign" problem already mentioned in Sect. 1.3.2. Writing the complex determinant as det(D) = |det(D)| e^{iϕ}, expectation values can be rewritten as ⟨O⟩ = ⟨O e^{iϕ}⟩_pq / ⟨e^{iϕ}⟩_pq, where ⟨·⟩_pq refers to a phase quenched average. This average can be computed using the Metropolis (or HMC) algorithm. The problem is that the average phase ⟨e^{iϕ}⟩_pq is very small. This follows from the fact that the average phase can be expressed as the ratio of two partition functions
   ⟨e^{iϕ}⟩_pq = ∫dU det(D) e^{−S} / ∫dU |det(D)| e^{−S} = Z / Z_pq = e^{−V ΔF} ,   (1.57)
where ∆ F is the free energy density difference, and V is the volume of the system. This shows that the phase is exponentially small, and that the ratio equ. (1.56) is very difficult to compute. As a specific example consider QCD with two degenerate flavors, up and down, and a baryon chemical potential µ u = µ d = µ B /3. Then det(D) = det(D u ) det(D d ) and | det(D)| = det(D u ) det(D d ) * . The phase quenched partition function Z pq can be interpreted as the partition function of QCD with a non-zero isospin chemical potential µ u = −µ d = µ I /2. The small µ behavior of both the isospin and baryon number theories at T = 0 is easily understood. The isospin theory has a second order phase transition at µ I = m π which corresponds to the onset of pion condensation. The baryon theory has a first order transition at µ B = m p − B, where B 15 MeV is the binding energy of infinite nuclear matter. This implies that for µ > m π the partition functions Z and Z pq describe very different physical systems, and the sign problem is severe. The sign problem may manifest itself in different ways. Consider, for example, an attempt to study the correlation function of A nucleons in a QCD ensemble generated at µ B = 0. For large A this correlation function determines the binding energy of nuclear matter. There are two difficulties with this approach. The first is that the operator contains 3A quark fields, so that the correlator has up to (3A)! contractions. This is not prohibitive, because the number of contractions can be reduced using symmetries and iterative algorithms. Indeed, correlators for medium mass nuclei have been computed [49]. The second, more serious, problem is the signal-to-noise ratio. The variance of the correlator C is
   var(C) = ⟨C C†⟩ − ⟨C⟩² .
(1.58)
The A nucleon correlator C contains 3A forward going quark propagators, and CC† consists of 3A forward and 3A backward propagators. This implies that CC† couples to a state of 3A mesons. Since the lightest meson is the pion and the lightest baryon is the proton the signal-to-noise of an A nucleon correlation function is

   S/N ∼ exp( −A (m_p − 3m_π/2) τ ) .
(1.59)
In order to resolve the ground state with a given A we have to make τ sufficiently large so that excited states with the same A are suppressed. For A = 1 there is a πN continuum starting at an excitation energy ΔE = m_π, and the first resonance at ΔE = m_Δ − m_N ≈ 300 MeV. This means that we have to consider τ ≳ 1 fm. For multi-nucleon states the situation is more complicated, because there are many closely spaced multi-nucleon states in a finite volume. The problem is studied, for example, in [73]. The conclusion is that different bound and scattering states are separated by 10s of MeV, requiring τ ≳ 4 fm. It may be possible to improve on this estimate by using variationally improved sources, but even for τ ≈ 2 fm the signal to noise is extremely poor for A ≳ 4. This shows that in simulations with fixed A the sign problem manifests itself as a noise problem. This is not surprising. One way to think about the sign problem is to view it as an overlap problem. The configurations that contribute to Z_pq have poor overlap with those that contribute to Z. The same phenomenon is at work here. Configurations generated at µ_B = 0 reflect vacuum physics, and the lightest fermionic fluctuation is a pion. Large cancellations are required to explore the physics of multi-baryon states.

There are many attempts to find direct solutions to the sign problem, but at this time the only regime in which controlled calculations are feasible is the regime of small µ and high T. In this region the partition function can be expanded in a Taylor series in µ/T. The corresponding expansion coefficients are generalized susceptibilities that can be determined from lattice simulations at zero chemical potential. The susceptibilities not only determine the equation of state at finite baryon density, but also control fluctuations of conserved charges.
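The exponential nature of the problem in equ. (1.57) is easy to demonstrate with a toy reweighting experiment. In the sketch below (my own model: Gaussian-distributed phases, for which ⟨e^{iϕ}⟩ = e^{−σ²/2} plays the role of e^{−VΔF}) the average phase is estimated from a finite sample, and the number of configurations needed for a unit signal-to-noise ratio grows like e^{2VΔF}.

import numpy as np

rng = np.random.default_rng(1)
nsamples = 100_000

for sigma2 in [1.0, 4.0, 16.0, 36.0]:
    phi = rng.normal(0.0, np.sqrt(sigma2), nsamples)
    est = np.mean(np.exp(1j * phi)).real
    err = np.std(np.exp(1j * phi).real) / np.sqrt(nsamples)
    exact = np.exp(-sigma2 / 2)           # stands in for exp(-V dF)
    print(f"sigma^2 = {sigma2:5.1f}  exact = {exact:.2e}  "
          f"estimate = {est:+.2e} +/- {err:.1e}  "
          f"samples needed for S/N ~ 1: ~{np.exp(sigma2):.1e}")

For strongly fluctuating phases the sample estimate is consistent with zero: the signal is buried under statistical noise, which is exactly the overlap problem described above.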
In addition to methods that are restricted to the regime µ ≲ πT, a number of proposals to explore QCD at high baryon density are being pursued. This includes new approaches, like integration over Lefschetz thimbles [74,75], as well as novel variants of old approaches, like the complex Langevin method [76,77], or the use of dual variables [78]. The ultimate promise of these methods is still unclear, but the central importance of the sign problem to computational physics continues to attract new ideas.
Real time properties
The basic trick in lattice QCD is the continuation of the path integral to imaginary time. This makes it possible to calculate the path integral by importance sampling, but it implies that we only have direct access to imaginary time correlation functions. For many observables this is not a serious problem. Thermodynamic observables, for example, are static quantities and no analytic continuation is necessary. The ground state contribution to a hadron correlation function is Π(τ) ∼ e^{−m_H τ}, which is trivially continued to Π(t) ∼ e^{−i m_H t}. However, difficulties arise if one studies excited states, in particular resonances, the interaction between hadrons, or the real time response of many body systems at finite temperature and density. Significant progress has been made in studying scattering processes, at least in the elastic regime. This is discussed in some of the later chapters of this book. Here, I will concentrate on the calculation of real time response functions. The prototypical example is the calculation of the shear viscosity of a QCD plasma using the retarded correlation function of the stress tensor T^{xy},

   G^{xy,xy}_R(ω, k) = −i ∫ dt d³x e^{i(ωt − k·x)} Θ(t) ⟨[ T^{xy}(x, t), T^{xy}(0, 0) ]⟩ ,
(1.60)
The associated spectral function is defined by ρ(ω, k) = − Im G R (ω, k). The imaginary part of the retarded correlator is a measure of dissipation. This relation can be made more precise using fluid dynamics, which is an effective theory of the response function in the low energy, small momentum limit [79,80]. Linearized fluid dynamics shows that the static response function is determined by the pressure of the fluid, and that the leading energy and momentum dependence is governed by transport coefficients. These relations can be used to derive Kubo formulas, expressions for the transport coefficients in terms of retarded correlation functions. The Kubo relation for the shear viscosity is
   η = lim_{ω→0} lim_{k→0} ρ^{xy,xy}(ω, k) / ω ,   (1.61)

and similar results hold for the bulk viscosity, the thermal conductivity, and heavy quark diffusion constants. The spectral function contains information about the physical excitations that carry the response. The euclidean path integral does not provide direct access to the retarded correlator or the spectral function. Lattice calculations are based on the relation between the spectral function and the imaginary energy (Matsubara) correlation function
   G_E(iω_n) = ∫ (dω/2π) ρ(ω) / (ω − iω_n) ,   (1.62)

where ω_n = 2πnT is the Matsubara frequency. The imaginary time correlation function is

   G_E(τ) = ∫ (dω/2π) K(ω, τ) ρ(ω) ,   (1.63)

where the kernel K(ω, τ) is given by

   K(ω, τ) = cosh[ω(τ − 1/(2T))] / sinh[ω/(2T)] = [1 + n_B(ω)] e^{−ωτ} + n_B(ω) e^{ωτ} ,   (1.64)

and n_B(ω) is the Bose distribution function. The imaginary time correlation function equ. (1.63) was studied on the lattice in [81-84]. The basic idea for calculating transport coefficients is to numerically compute G_E(τ), invert the integral transform in equ. (1.63) to obtain the spectral function ρ(ω), and then study the limit ω → 0. The problem is that G_E(τ) is computed on a small number of discrete lattice sites, and that the imaginary time correlator at distances on the order of β/2 is not very sensitive to the slope of the spectral function at low energy. Recent attempts to address these problems and to obtain numerically stable spectral functions and reliable error estimates are based on Bayesian methods such as the maximum entropy method mentioned in Sect. 1.2, see [85,86]. It is also possible to optimize the contribution from the transport peak by measuring the correlation functions of conserved charges, such as energy and momentum density, at nonzero spatial momentum [87,88].

A possible issue with lattice calculations is that effects of poor resolution tend to bias the result towards small values of η/s, where s is the entropy density and η(ω) = ρ(ω)/ω. The finite temperature spectral function satisfies a sum rule [89] for (2/π) ∫ dω [η(ω) − η_{T=0}(ω)]. On the lattice it is difficult to resolve sharp features in the spectral function. Roughly, the resolution is limited by the lowest Matsubara frequency πT. I will therefore assume that the thermal part of the spectral function is a Lorentzian with width πT,

   η(ω) − η_{T=0}(ω) ≈ η(0) (πT)² / (ω² + (πT)²) .   (1.66)

The integral (2/π) ∫ dω [η(ω) − η_{T=0}(ω)] on the left hand side of the sum rule is then equal to η(0)πT, and the sum rule predicts η/s ∼ 3/(10π), quite close to η/s = 1/(4π). The lesson is that it is easy to obtain small values of η/s, and much more difficult to obtain large values of η/s, as predicted by perturbative QCD [90].
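A small numerical experiment makes this insensitivity explicit. The Python sketch below (model spectral functions of my own choosing, in units T = 1, with 16 time slices) evaluates equ. (1.63) with the kernel (1.64) for two spectral functions that share the same high-frequency behavior but differ drastically in the transport peak; in this toy model the euclidean correlator changes by far less, in relative terms, than η(ω → 0) does.

import numpy as np

T = 1.0
beta = 1.0 / T
ntau = 16
tau = (np.arange(ntau) + 0.5) * beta / ntau

omega = np.linspace(1e-4, 400.0, 400_000)
domega = omega[1] - omega[0]

def kernel(w, t):
    # K(w, tau) = cosh[w(tau - beta/2)]/sinh[w beta/2], rewritten with decaying
    # exponentials to avoid overflow at large w.
    return (np.exp(-w * t) + np.exp(-w * (beta - t))) / (1.0 - np.exp(-w * beta))

def G_E(rho):
    return np.array([np.sum(kernel(omega, t) * rho) * domega / (2 * np.pi) for t in tau])

rho_uv = omega**3 / (8 * np.pi)                     # common high-frequency part (toy choice)
peak   = 0.2 * omega * 0.3 / (omega**2 + 0.3**2)    # narrow transport peak, eta(0) ~ 0.67

G_with, G_without = G_E(rho_uv + peak), G_E(rho_uv)

print("tau/beta   G_with      G_without   rel. diff.")
for t, a, b in zip(tau, G_with, G_without):
    print(f"{t/beta:8.3f}  {a:10.4e}  {b:10.4e}  {abs(a - b)/a:8.1%}")

Removing the transport peak changes η(ω → 0) from about 0.67 to zero, while the printed euclidean correlators change only at the level of a modest fraction, which is why the inversion of equ. (1.63) requires prior information.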
The first calculation of the shear viscosity on the lattice was performed by Karsch and Wyld [81]. More recently, the problem of computing the shear and bulk viscosity in a pure gauge plasma near T_c was revisited by Meyer [82,88]. He obtains η/s = 0.102(56) and ζ/s = 0.065(17) at T = 1.24T_c. Shear viscosity is only weakly dependent on temperature, but bulk viscosity is strongly peaked near T_c. The value of η/s is consistent with experimental results, and with the prediction from holographic duality, η/s = 1/(4π) [91].
Nonequilibrium QCD
In the remainder of this chapter I will discuss a number of coarse grained approaches to the non-equilibrium dynamics of QCD. These methods are relevant to the study of nuclear collisions, in particular in the ultra-relativistic regime. This regime is explored experimentally at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory and the Large Hadron Collider (LHC) at CERN. A rough time line of a heavy ion collision is shown in Fig. 1.4. Initial nucleon-nucleon collisions release a large number of quarks and gluons. This process is described by the full non-equilibrium quantum field theory, but there are a number of approximate descriptions that may be useful in certain regimes. The first is a classical field theory description in terms of highly occupied classical gluon fields. The second is a kinetic theory in terms of quark and gluon quasi-particles. Finally, there is a new approach, which is a description in terms of a dual gravitational theory.
Theories of the initial state demonstrate that there is a tendency towards local equilibration. If local equilibrium is achieved then a simpler theory, fluid dynamics, is applicable. Fluid dynamics is very efficient in the sense that it deals with a small number of variables, the conserved densities of particle number, energy and momentum, and that it has very few parameters, an equation of state and a set of transport coefficients. The fluid dynamic stage of a heavy ion collision has a finite duration. Eventually the density becomes too low and local equilibrium can no longer be maintained. At this point kinetic theory is again relevant, now formulated in terms of hadronic quasi-particles. All the theories we have mentioned, fluid dynamics, kinetic theory, classical field theory, and holography, have reached a high degree of sophistication and I will point to textbooks and reviews for detailed introductions. Nevertheless, the basic ideas are quite simple, and I will provide some examples in the following sections.
Fluid Dynamics
I begin with fluid dynamics, because it is the most general and in some ways the simplest non-equilibrium theory. It is important to remember, however, that fluid dynamics is a very rich framework, both mathematically and in terms of the range of phenomena that one may encounter. In the following I will focus on the non-relativistic theory. There is no fundamental difference between the relativistic and non-relativistic theories, but some simplifications appear in the non-relativistic regime. Non-relativistic fluid dynamics is used in many areas of physics, including the physics of cold atomic Fermi gases and neutron stars. The relativistic theory is relevant to high energy heavy ion collisions and supernova explosions. Introductions to relativistic fluid dynamics can be found in [92][93][94].
Fluid dynamics reduces the complicated non-equilibrium many-body problem to equations of motion for the conserved charges. The reason that this is possible is the separation of scales between the microscopic collision time τ_micro, and the relaxation time τ_macro of hydrodynamic variables. A generic perturbation of the system decays on a time scale on the order of τ_micro, irrespective of the typical length scale involved. Here, τ_micro is determined by microscopic time scales, such as the typical collision time between quasi-particles. A fluctuation of a conserved charge, on the other hand, cannot decay locally and has to relax by diffusion or propagation. The relevant time scale τ_macro increases with the length scale of the perturbation. As a consequence, when we focus on sufficiently large scales we can assume τ_macro ≫ τ_micro, and focus on the evolution of conserved charges.
In a simple non-relativistic fluid the conserved charges are the mass density ρ, the momentum density π, and the energy density E. The momentum density can be used to define the fluid velocity, u = π/ρ. By Galilean invariance the energy density can then be written as the sum of the internal energy density and the kinetic energy density, E = E_0 + (1/2)ρu². The conservation laws are

   ∂ρ/∂t = −∇·π ,   (1.67)
   ∂π_i/∂t = −∇_j Π_{ij} ,   (1.68)
   ∂E/∂t = −∇·j^ε .   (1.69)

In order for these equations to close we have to specify constitutive relations for the stress tensor Π_{ij} and the energy current j^ε. Since fluid dynamics is an effective long wavelength theory we expect that the currents can be systematically expanded in gradients of the hydrodynamic variables ρ, u and E_0. At leading order the stress tensor contains no derivatives and the structure is completely fixed by rotational symmetry and Galilean invariance. We have
   Π_{ij} = ρ u_i u_j + P δ_{ij} + δΠ_{ij} ,   (1.70)

where P = P(ρ, E_0) is the equation of state and δΠ_{ij} contains gradient terms. The approximation δΠ_{ij} = 0 is called ideal fluid dynamics, and the equation of motion for π is known as the Euler equation. Ideal fluid dynamics is time reversal invariant and the entropy is conserved. If gradient terms are included then time reversal invariance is broken and the entropy increases. We will refer to δΠ_{ij} as the dissipative stresses. At first order in the gradient expansion δΠ_{ij} can be written as δΠ_{ij} = −η σ_{ij} − ζ δ_{ij} σ with

   σ_{ij} = ∇_i u_j + ∇_j u_i − (2/3) δ_{ij} σ ,   σ = ∇·u .   (1.71)
The dissipative stresses are determined by two transport coefficients, the shear viscosity η and the bulk viscosity ζ . The energy current is given by
   j^ε = u w + δj^ε ,   (1.72)

where w = P + E is the enthalpy. At leading order in the gradient expansion

   δj^ε_i = u_j δΠ_{ij} − κ ∇_i T ,   (1.73)
where κ is the thermal conductivity. The second law of thermodynamics implies that η, ζ and κ must be positive. The equation of motion for π at first order in gradients is known as the Navier-Stokes equation, and equ. (1.73) is Fourier's law of heat conduction. It is sometimes useful to rewrite the fluid dynamic equations using the comoving derivative D_t = ∂_t + u·∇. The equations are

   D_t ρ = −ρ ∇·u ,   (1.74)
   D_t u_i = −(1/ρ) ∇_j ( δ_{ij} P + δΠ_{ij} ) ,   (1.75)
   D_t ε = −(1/ρ) ∇_i ( u_i P + δj^ε_i ) ,   (1.76)
where ε = E /ρ is the energy per mass. This is called the Lagrangian form of the equations, in contrast to the Eulerian form given above. The Eulerian form is more naturally implemented on a fixed space-time lattice, whereas the Lagrangian form lends itself to a discretization where the computational cell is dragged along with the fluid.
Computational fluid dynamics
The fluid dynamic equations form a set of partial differential equations (PDEs) that can be solved in a variety of ways. I will focus here on grid based methods. The main difficulties that a numerical method needs to address are: i) The existence of surfaces of discontinuity (shocks), ii) the need to implement global conservation laws exactly, even on a coarse lattice, iii) the existence of instabilities (turbulence), and the need to deal with solutions that involve many different length scales.
In the following I will discuss a numerical scheme that addresses these issues in a fairly efficient way, the PPM algorithm of Collela and Woodward [95], as implemented in the VH1 code by Blondin and Lufkin [96] and extended to viscous fluids in [97]. The first observation is that it is sufficient to construct a 1-d algorithm. Higher dimensional methods can be constructed by combining updates in different directions. Note that the coordinate system can be curvilinear, for example 3-d spherical or cylindrical coordinates, or the Milne coordinate system that is used for longitudinally expanding quark gluon plasmas.
The basic 1-d algorithm consists of a Lagrangian time step followed by a remap onto an Eulerian grid. I will denote the 1-d velocity by u and write the equation of mass conservation in terms of a mass variable m. In terms of the mass coordinate the conservation laws read

   ∂τ/∂t − ∂u/∂m = 0 ,   (1.77)

   ∂u/∂t + ∂P/∂m = 0 ,   ∂ε/∂t + ∂(uP)/∂m = 0 ,   (1.80)

To put these equations on a grid I use cell averaged quantities U_j = (1/Δm_j) ∫ dm U, where U is any of the hydrodynamic variables (τ, u, ε), Δm_j is the mass contained in the cell j, and m_{j+1/2} = Σ_{k≤j} Δm_k. We can now integrate the conservation laws (1.80, 1.77). The result is

   u^{n+1}_j = u^n_j + (Δt/Δm_j) ( P̄_{j−1/2} − P̄_{j+1/2} ) ,   (1.83)

   ε^{n+1}_j = ε^n_j + (Δt/Δm_j) ( ū_{j−1/2} P̄_{j−1/2} − ū_{j+1/2} P̄_{j+1/2} ) ,   (1.84)
where I have introduced the cell face averages ū_{j±1/2} and P̄_{j±1/2}. These quantities can be obtained by parabolic interpolation from the cell integrated values. The PPM scheme introduced in [95] uses a method for constructing cell face averages which conserves the cell integrated variables. This scheme addresses the second issue mentioned above. The first issue, the existence of shocks, can be taken into account by refining the method for calculating the cell face averages. The observation is that one can make use of the exact solution of the equations of fluid dynamics in the case of piecewise constant one-dimensional flows, known as the Riemann problem. We can view ū_{j+1/2} and P̄_{j+1/2} as the solution of a Riemann problem with left state u_j, P_j and right state u_{j+1}, P_{j+1}. The PPM code contains a simple iterative Riemann solver described in [95]. Using ū_{j±1/2} and P̄_{j±1/2} the Lagrange step is given by:

   do n = nmin-3, nmax+3
     ! density evolution. lagrangian code, so all we have to do is watch the
     ! change in the geometry.
     r(n) = r(n) * ( dvol1(n) / dvol(n) )
     r(n) = max( r(n), smallr )
     ! velocity evolution due to pressure acceleration and forces.
     uold(n) = u(n)
     u(n) = u(n) - dtbdm(n)*(pmid(n+1)-pmid(n))*0.5*(amid(n+1)+amid(n)) &
            + 0.5*dt*( fict0(n) + fict1(n) )
     ! total energy evolution
     e(n) = e(n) - dtbdm(n)*( amid(n+1)*upmid(n+1) - amid(n)*upmid(n) )
     q(n) = e(n) - 0.5*( u(n)**2 + v(n)**2 + w(n)**2 )
     p(n) = max( r(n)*q(n)*gamm, smallp )
   enddo

Here, r(n) is the density, u(n) is the velocity, and e(n) is the energy per mass. The transverse components of the velocity are v(n), w(n). In cartesian coordinates the volume and area factors dvol(n), amid(n) are equal to unity, and the fictitious forces fict(n) vanish.
After the Lagrange step the hydrodynamic variables have to be remapped onto a fixed Eulerian grid. This can be done using the parabolic interpolation mentioned above. The advantage of the remap step is that it is simple to change the grid resolution in the process. Finally, we have to specify the time step and grid resolution. The grid resolution is determined by the requirement that (Δx)∇_x U ≪ U, where Δx is the cell size, and U is any of the hydrodynamic variables. Note that there is no need to worry about discontinuities, because shocks are captured by the Riemann solver. Also note that the PPM scheme has at least second order accuracy, so that relatively coarse grids can be used. The time step is determined by the Courant criterion cΔt ≤ Δx, where c is the maximum of the speed of sound and the local flow velocity. This criterion ensures that the domain of dependence of any physical variable does not exceed the cell size.
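As a concrete illustration of the time step criterion, the short Python fragment below (my own schematic, not part of the VH1/PPM code) computes the Courant-limited time step for an ideal-gas state on a 1-d grid; the grid size, safety factor, and initial state are assumptions chosen for the example.

import numpy as np

gamma = 5.0 / 3.0
nx, dx = 200, 0.01
courant = 0.5                                   # safety factor < 1

# a Sod-like initial state: jump in density and pressure at the midpoint
rho = np.where(np.arange(nx) < nx // 2, 1.0, 0.125)
P   = np.where(np.arange(nx) < nx // 2, 1.0, 0.1)
u   = np.zeros(nx)

cs = np.sqrt(gamma * P / rho)                   # ideal-gas sound speed
dt = courant * dx / np.max(cs + np.abs(u))      # Courant criterion c*dt <= dx
print(f"max signal speed = {np.max(cs + np.abs(u)):.3f}, dt = {dt:.4f}")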
In general, modern hydro codes are very fast and efficient. The main difficulty is that 3 + 1 dimensional simulations may require a lot of memory, and that some physical phenomena, such as turbulent convection and shock instabilities in supernovae, require very high resolution. One of the frontiers of numerical hydrodynamics is the problem of dealing with systems that transition from fluid dynamics to ballistic behavior at either early or late times, or systems in which the density varies by a very large factor. Problems of this type arise in the early and late time dynamics of heavy ion collisions, the dilute corona of cold atomic gases, and the transition from hydrodynamics to free streaming in the neutrino transport in a supernova explosions. Recent progress in this direction includes the development of the anisotropic hydrodynamics method [98][99][100][101], and applications of the lattice Boltzmann method to problems in nuclear and atomic physics [102,103].
In the relativistic regime recent progress includes the development of stable and causal viscous fluid dynamics codes [92,94]. The problem with a naive implementation of the relativistic Navier-Stokes equation derived by Landau is that viscous stresses are determined by the instantaneous value of the shear strain ∇ i u j . This leads to acausal propagation of shear waves and possible instabilities. This is not a fundamental problem with fluid dynamics. Acausal behavior occurs in the regime of high wave numbers in which fluid dynamics is not expected to be reliable. However, high wave number instabilities prohibit numerical implementations. The solution is to go to next order in the gradient expansion, which includes the finite relaxation time of viscous stresses. In practice, second order fluid dynamics codes are usually based on the idea of transient fluid dynamics. In this method, the shear stresses δ Π i j are promoted to fluid dynamic variables, which satisfy separate fluid dynamic equations, see [92,94].
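The idea behind these transient (Israel-Stewart type) formulations can be illustrated with a single mode: instead of setting the dissipative stress equal to its Navier-Stokes value instantaneously, one lets it relax towards that value on a time scale τ_π. The toy Python sketch below (my own illustration, not a production hydro code; the parameter values are arbitrary) integrates dδΠ/dt = −(δΠ − δΠ_NS)/τ_π for a shear rate switched on at a fixed time.

import numpy as np

eta, tau_pi = 0.2, 0.5          # shear viscosity and relaxation time (toy values)
dt, nsteps = 0.01, 400

def shear_rate(t):
    return 1.0 if t > 0.5 else 0.0       # switch on a constant shear at t = 0.5

pi_shear = 0.0
print(" t     delta_Pi    Navier-Stokes value")
for n in range(nsteps):
    t = n * dt
    pi_ns = -eta * shear_rate(t)                       # first-order (Navier-Stokes) stress
    pi_shear += dt * (-(pi_shear - pi_ns) / tau_pi)    # relax towards pi_ns
    if n % 80 == 0:
        print(f"{t:4.2f}  {pi_shear:9.4f}  {pi_ns:9.4f}")

Because the stress responds with a finite relaxation time rather than instantaneously, shear signals propagate causally, which is what makes stable numerical implementations possible.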
Kinetic theory
Fluid dynamics is based on the assumption of local thermal equilibrium and requires the mean free path to be small compared to the characteristic scales of the problem. When this condition is not satisfied a more microscopic approach to the non-equilibrium problem is required. The simplest method of this type is kinetic theory, which is based on the existence of well defined quasi-particles. This implies, in particular, that the width of a quasi-particle has to be small compared to its energy. In this case we can define the phase space density f(x, p, t) of quasi-particles. In general, there can be many different kinds of quasi-particles, labeled by their spin, charge, and other quantum numbers. The phase space distribution determines the conserved densities that enter the hydrodynamic description. For example, the mass density is given by

   ρ(x, t) = ∫ dΓ m f(x, p, t) ,   (1.85)

where dΓ = d³p/(2π)³. The momentum density is

   π(x, t) = ∫ dΓ m v_p f(x, p, t) ,   (1.86)

where v_p = ∇_p E_p is the quasi-particle velocity and E_p is the quasi-particle energy. In general, the quasi-particle energy can be a functional of the phase space distribution f(x, p, t). This takes into account possible in-medium modifications of particle properties. If E_p is a functional of f(x, p, t) then the total energy of the system is not just given by the integral of E_p f(x, p, t). Instead, we must construct an energy density functional E[f] that satisfies [104]

   E_p = δE / δf_p .
(1.87)
The equation of motion for the distribution function is the Boltzmann equation
   (∂_t + v·∇_x − F·∇_p) f(x, p, t) = C[f] ,   (1.88)

where, in the collision term, f_i = f(x, p_i, t).
The transition rate is given by
   w(1, 2; 3, 4) = (2π)⁴ δ(Σ_i E_i) δ(Σ_i p_i) |A|² ,   (1.90)

where A is the scattering amplitude. For non-relativistic s-wave scattering A = 4πa/m, where a is the scattering length.
The Boltzmann equation is a 6+1 dimensional partial integro-differential equation, and direct methods of integration, similar to those used in computational fluid dynamics, are impractical. Standard methods for solving the Boltzmann equation rely on sampling phase space using Monte Carlo methods. In nuclear physics the test particle method for solving the Boltzmann equation was popularized by Bertsch and Das Gupta [105]. Below, I will present a simple non-relativistic algorithm described by Lepers et al. [106].
The main idea is to represent the distribution as a sum of delta functions,

   f(x, p, t) = (N/N_t) Σ_{i=1}^{N_t} (2π)³ δ(p − p_i(t)) δ(x − x_i(t)) ,   (1.91)

where N is the number of particles, the integral of f(x, p, t) over phase space, and N_t is the number of test particles. In typical applications N_t ≫ N, but if N is already very large it is possible to run simulations with N_t < N. Phase space averages can be computed as averages over test particles,

   ⟨F⟩ = (1/N) ∫ d³x dΓ f(x, p, t) F(x, p) = (1/N_t) Σ_{i=1}^{N_t} F(x_i, p_i) .   (1.92)

In practice this requires some smoothing, and the delta functions are replaced by Gaussian distributions
δ (p − p i )δ (x − x i ) → g w p (p − p i )g w x (x − x i ) ,
(1. 93) where g w (x) is a normalized Gaussian with width w. The widths w x and w p are chosen such that the delta function singularities are smoothed out, but physical structures of the distribution function f (x, p,t) are preserved.
If there is no collision term the equation of motion for the distribution function is Hamilton's equation for the test particle positions and momenta
dx i dt = p i m , dp i dt = F i .
(1.94)
These equations can be solved with high accuracy using a staggered leapfrog algorithm
   v_i(t_{n+1/2}) = v_i(t_n) + a_i(t_n) Δt/2 ,   (1.95)
   r_i(t_{n+1}) = r_i(t_n) + v_i(t_{n+1/2}) Δt ,   (1.96)
   v_i(t_{n+1}) = v_i(t_{n+1/2}) + a_i(t_{n+1}) Δt/2 ,   (1.97)
where a i = F i /m is the acceleration of particle i, and ∆t = t n+1 − t n is the time step of the algorithm. The size of the time step depends on the specific problem, but a good check is provided by monitoring conservation of energy. The collision term is treated stochastically, by allowing the test particles to collide with the scaled cross section σ t = (N/N t )σ . To determine whether a collision occurs we go through all pairs of particles and compute the relative distance r i j = r i − r j and velocity v i j = v i − v j . We then determine whether on the current trajectory the time of closest approach will be reached during the next time step. This happens if t min = t n − r i j · v i j /v 2 i j satisfies |t min −t n | ≤ ∆t/2. In that case we compute
   r²_min = r²_{ij} − (r_{ij}·v_{ij})² / v²_{ij}   (1.98)
and check if πr 2 min < σ t . If this condition is satisfied then the collision is allowed to take place. For an s-wave elastic collision we propagate the particles to t min , randomize their relative velocity v i j , and then propagate them back to t n . Higher partial wave amplitudes are easy to implement by randomizing v i j with suitable probability distributions. After all pairs have been checked we perform the velocity and position update in equ. (1.95-1.97).
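A minimal Python version of the test particle update, assembled from equs. (1.95-1.97) and the geometric collision test (1.98), is sketched below. This is my own simplified illustration, not the algorithm of [106]: for brevity the particles are not propagated to t_min and back, and the external force is a harmonic trap. Parameter values and variable names are assumptions.

import numpy as np

rng = np.random.default_rng(0)
npart, dt, nsteps = 100, 0.01, 200
mass, omega_trap = 1.0, 1.0
sigma_t = 0.05                       # scaled cross section (N/N_t)*sigma

x = rng.normal(0.0, 1.0, (npart, 3))
v = rng.normal(0.0, 1.0, (npart, 3))

def accel(x):
    return -omega_trap**2 * x        # harmonic force per unit mass

def collide(x, v):
    """s-wave collisions: randomize the relative velocity at closest approach."""
    for i in range(npart):
        for j in range(i + 1, npart):
            rij, vij = x[i] - x[j], v[i] - v[j]
            v2 = np.dot(vij, vij)
            if v2 == 0.0:
                continue
            tmin = -np.dot(rij, vij) / v2            # time of closest approach (relative to t_n)
            if abs(tmin) > dt / 2:
                continue
            r2min = np.dot(rij, rij) - np.dot(rij, vij)**2 / v2   # eq. (1.98)
            if np.pi * r2min < sigma_t:
                direction = rng.normal(size=3)
                direction /= np.linalg.norm(direction)
                vrel = np.sqrt(v2) * direction
                vcm = 0.5 * (v[i] + v[j])
                v[i], v[j] = vcm + 0.5 * vrel, vcm - 0.5 * vrel

a = accel(x)
for step in range(nsteps):
    v += 0.5 * dt * a                # eq. (1.95)
    x += dt * v                      # eq. (1.96)
    a = accel(x)
    v += 0.5 * dt * a                # eq. (1.97)
    collide(x, v)

energy = 0.5 * mass * np.sum(v**2) + 0.5 * mass * omega_trap**2 * np.sum(x**2)
print(f"energy per particle after {nsteps} steps: {energy / npart:.3f}")

Monitoring the total energy, as suggested in the text, provides a simple check on the time step and on the elastic collision routine.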
There are a number of refinements that can be included. At low temperature Pauli-blocking has to be taken into account. This can be done by computing the phase space densities f (r i , p i ,t) for the collision products, and accepting the collision with probability (1 − f i )(1 − f j ).
At higher energies relativistic effects are important. Relativistic effects in the particle propagation are easy to incorporate, but the treatment of the collision term is more subtle. The problem is that a finite collision cross section, treated geometrically, will lead to instantaneous interactions at a distance. Additional difficulties arise from the treatment of resonances, pair production and annihilation, n-body processes, etc. There are a number of codes on the market that address these issues, and that have been tuned against existing data on pp, pA and AA interactions in the relativistic regime. Examples include UrQMD [107], GiBUU [108], HSD [109], and others.
At high energies the initial pp collisions are very inelastic, and one has to rely on Monte Carlo generators developed in the high energy physics community. A possible alternative is to use a purely partonic kinetic theory that involves scattering between quark and gluon quasiparticles. There are some subtleties with this approach, having to do with the problem of how to include screening and damping of the exchanged gluons, soft gluon radiation, etc. I will not attempt to discuss these issues here, and I refer the reader to the original literature [110,111].
Classical field theory
An interesting simplification occurs if the occupation numbers are large, f ≫ 1. This is argued to happen for the gluons in the initial state of a heavy ion collision [112]. In this limit the classical kinetic theory is equivalent to a classical field theory [113]. Indeed, if the occupation numbers are non-perturbative, f ≳ 1/g, the kinetic theory no longer applies, and we have to rely on classical field theory. In general the classical action is not known, but in the weak coupling limit the bare QCD action can be used. Classical QCD simulations have been used to study a number of issues, such as particle production from an overpopulated gluon field, and the possible approach to thermal equilibrium. Instabilities in the classical field evolution may play an important role in speeding up the equilibration process. Here, I will briefly describe a method for solving classical evolution equations on a space-time lattice, following the recent review [114].
In order to construct a Hamiltonian approach to lattice QCD I start from the Wilson action in Minkowski space with separate coupling constants β_0 and β_s in the temporal and spatial direction,

   S[U] = − (β_0/2N_c) Σ_x Σ_{i=1}^{3} Tr[ W_{0i}(x) + W†_{0i}(x) − 2 ] + (β_s/2N_c) Σ_x Σ_{i<j} Tr[ W_{ij}(x) + W†_{ij}(x) − 2 ] ,
(1.99)
In the continuum limit, we expect
   β_0 = 2N_c a / (g² Δt) ,   β_s = 2N_c Δt / (g² a) ,   (1.100)

where a and Δt are spatial and temporal lattice spacings. In order to construct a Hamiltonian we have to fix the gauge freedom of the theory. Here, I will use the temporal axial gauge, A_0 = 0. In this case the canonical variables are the spatial gauge potentials and the conjugate momenta are the electric fields. On the lattice the gauge A_0 = 0 corresponds to setting all temporal gauge links to the identity, U_0(x) = 1. The canonical variables are given by the spatial gauge links U_j(x), and the conjugate momenta are the temporal plaquettes W_{0j}(x). In the continuum limit
   A^a_j(x) = (2i/(a g)) Tr[ λ^a U_j(x) ] ,   (1.101)

   E^a_j(x) = (2i/(a g Δt)) Tr[ λ^a W_{0j}(x) ] .
(1.102)
Varying the action equ. (1.99) with respect to U j (x) gives an equation of motion for E j
   E^a_j(t + Δt, x) = E^a_j(t, x) + (iΔt/(g a³)) Σ_k { Tr[ λ^a U_j(x) U_k(x + ĵ) U†_j(x + k̂) U†_k(x) ] + Tr[ λ^a U_j(x) U†_k(x + ĵ − k̂) U†_j(x − k̂) U_k(x − k̂) ] } .   (1.103)
We note that E a j (t + ∆t, x) is determined by the electric fields and the spatial gauge links at time t. Using equ. (1.102) and the electric field E a j at time t + ∆t we can compute the temporal plaquette W 0 j (x) at t + ∆t. This result can be used to evolve the spacelike gauge links
   U_j(t + Δt, x) = W_{0j}(x) U_j(x) .   (1.104)

Gauss's law provides an important constraint on the numerical evolution,

   Σ_j [ E^a_j(x) − U†_j(x − ĵ) E^a_j(x − ĵ) U_j(x − ĵ) ] = 0 .
(1.105) This constraint is preserved by the evolution equations. The classical field equations are exactly scale invariant and there is no dependence on the coupling constant g. Physical quantities, like the energy momentum tensor, explicitly depend on g. In practice classical field simulations require a model for the initial conditions and the corresponding coupling. The initial conditions are typically an ensemble of gauge fields distributed according to some distribution, for example an anisotropic Gaussian in momentum space. The anisotropy is assumed to be a consequence of the strong longitudinal expansion of the initial state of a heavy ion collision. Physical observables are determined by averaging the evolved fields over the initial ensemble. Note that a purely classical field evolution does not thermalize. A thermal ensemble of classical fields would satisfy the equipartition law, and the total energy would be dominated by modes near the lattice cutoff. This is the Rayleigh-Jeans UV catastrophe. However, classical field evolution has interesting non-thermal fixed points [115], which may play a role in thermalization.
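The structure of such classical-statistical simulations, an ensemble of random initial conditions, leapfrog evolution of the classical equations of motion, and observables averaged over the ensemble, is easy to illustrate with an abelian toy model. The Python sketch below is my own example, a scalar φ⁴ field on a periodic 2-d lattice rather than the SU(N_c) link-and-electric-field system described above; all parameters are arbitrary choices.

import numpy as np

rng = np.random.default_rng(2)
N, dt, nsteps, lam = 32, 0.02, 500, 1.0

def laplacian(phi):
    return (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
            np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4.0 * phi)

def energy(phi, pi):
    gradx = phi - np.roll(phi, 1, 0)
    grady = phi - np.roll(phi, 1, 1)
    return np.sum(0.5 * pi**2 + 0.5 * (gradx**2 + grady**2) + 0.25 * lam * phi**4)

def evolve(phi, pi):
    for _ in range(nsteps):            # leapfrog, analogous to eqs. (1.103)-(1.104)
        pi += 0.5 * dt * (laplacian(phi) - lam * phi**3)
        phi += dt * pi
        pi += 0.5 * dt * (laplacian(phi) - lam * phi**3)
    return phi, pi

obs = []
for config in range(4):                # small ensemble of Gaussian initial conditions
    phi = rng.normal(0.0, 0.5, (N, N))
    pi = np.zeros((N, N))
    e0 = energy(phi, pi)
    phi, pi = evolve(phi, pi)
    obs.append(np.mean(phi**2))
    print(f"config {config}: relative energy drift {abs(energy(phi, pi) - e0) / e0:.2e}")

print(f"ensemble average of <phi^2>: {np.mean(obs):.4f}")

The conserved energy plays the role that Gauss's law and the energy momentum tensor play in the gauge theory case: a quantity that must be monitored to validate the discrete evolution.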
The classical field framework has been extended in a variety of ways. One direction is the inclusion of quantum fluctuations on top of the classical field [116]. Another problem is the inclusion of modes that are not highly populated. In the hard thermal loop approximation one can show that hard modes can be described as colored particles interacting with the classical field corresponding to the soft modes [117]. The equations of motion for the colored particles are known as Wong's equations [118]. Numerical studies can be found in [119].
Nonequilibrium QCD: Holography
A new approach to quantum fields in and out of equilibrium is provided by the AdS/CFT correspondence [120][121][122][123][124]. The AdS/CFT correspondence is a holographic duality. It asserts that the dynamics of a quantum field theory defined on the boundary of a higher dimensional space is encoded in boundary correlation functions of a gravitational theory in the bulk. The correspondence is simplest if the boundary theory is strongly coupled and contains a large number N of degrees of freedom. In this case the bulk theory is simply classical Einstein gravity. The partition function of the boundary quantum field theory (QFT) is
Z QFT [J i ] = exp (−S [ φ i | ∂ M = J i ]) ,
(1.106) where J_i is a set of sources in the field theory, S is the gravitational action, φ_i is a dual set of fields in the gravitational theory, and ∂M is the boundary of AdS_5. The fields φ_i satisfy classical equations of motion subject to boundary conditions on ∂M.
The original construction involves a black hole in AdS 5 and is dual to a relativistic fluid governed by a generalization of QCD known as N = 4 super Yang-Mills theory. This theory is considered in the limit of a large number of colors N c . The gravitational theory is Einstein gravity with additional matter fields that are not relevant here. The AdS 5 black hole metric is
   ds² = ((πT R_a)²/u) [ −f(u) dt² + dx² ] + (R_a²/(4u² f(u))) du² ,   (1.107)
where x,t are Minkowski space coordinates, and u is a "radial" coordinate where u = 1 is the location of the black hole horizon and u = 0 is the boundary. T is the temperature, R a is the AdS radius, and f (u) = 1 − u 2 .
It is instructive to check that this metric does indeed provide a solution to the Einstein equations with a negative cosmological constant. This can be done using a simple Mathematica script. I begin by defining the metric and its inverse. From the metric I compute the Christoffel symbols

   Γ^µ_{αβ} = (1/2) g^{µσ} ( ∂_α g_{σβ} + ∂_β g_{σα} − ∂_σ g_{αβ} ) ,   (1.108)

In the boundary theory the metric couples to the stress tensor Π_µν. Correlation functions of the stress tensor can be found by linearizing the bulk action around the AdS_5 solution, g_µν = g⁰_µν + δg_µν. Small oscillations of the off-diagonal strain δg^y_x are particularly simple, because the equation of motion for φ ≡ g^y_x is that of a minimally coupled scalar,

   (1/√−g) ∂_µ ( √−g g^{µν} ∂_ν φ ) = 0 .
(1.110)
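For readers who prefer an open-source cross-check of the Mathematica computation sketched above, the following Python/sympy fragment (my own translation; the variable names are not taken from the original script) verifies that the metric (1.107) is an Einstein metric, R_µν = −(4/R_a²) g_µν, which is the condition that follows from the vacuum Einstein equations with a negative cosmological constant in five dimensions.

import sympy as sp

t, x, y, z, u = sp.symbols('t x y z u', real=True)
T, Ra = sp.symbols('T Ra', positive=True)
coords = [t, x, y, z, u]
f = 1 - u**2
A = (sp.pi * T * Ra)**2 / u

g = sp.diag(-A * f, A, A, A, Ra**2 / (4 * u**2 * f))
ginv = g.inv()
n = len(coords)

# Christoffel symbols Gamma^a_{bc}, cf. eq. (1.108)
Gamma = [[[sp.simplify(sum(ginv[a, s] * (sp.diff(g[s, b], coords[c])
                                         + sp.diff(g[s, c], coords[b])
                                         - sp.diff(g[b, c], coords[s]))
                           for s in range(n)) / 2)
           for c in range(n)] for b in range(n)] for a in range(n)]

# Ricci tensor R_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ab} + Gamma*Gamma terms
def ricci(b, c):
    expr = 0
    for a in range(n):
        expr += sp.diff(Gamma[a][b][c], coords[a]) - sp.diff(Gamma[a][a][b], coords[c])
        for s in range(n):
            expr += Gamma[a][a][s] * Gamma[s][b][c] - Gamma[a][c][s] * Gamma[s][a][b]
    return sp.simplify(expr)

residual = [[sp.simplify(ricci(b, c) + 4 / Ra**2 * g[b, c]) for c in range(n)]
            for b in range(n)]
print(sp.Matrix(residual))   # all entries should simplify to zero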
The wave equation can be obtained using the metric coefficients defined above.
   (* \sqrt{-g} g^{\mu\nu} \partial_{\nu} \Phi(t, z, u) *)

   φ''_k(u) − ((1 + u²)/(u f(u))) φ'_k(u) + ((ω² − k² f(u))/((2πT)² u f(u)²)) φ_k(u) = 0 .
(1.111)
This differential equation has two linearly independent solutions. The retarded correlation function corresponds to picking a solution that is purely infalling at the horizon [121]. The retarded correlation function G_R(ω, k) defined in equ. (1.60) is determined by inserting the solution into the Einstein-Hilbert action, and then computing the variation with respect to the boundary value of δg^y_x. The infalling solution can be expressed as

   φ_k(u) = (1 − u)^{−iw/2} F_k(u) ,   (1.112)

where w = ω/(2πT) and the first factor describes the near horizon behavior. The function F_k(u) can be obtained as an expansion in w and k̄ = k/(2πT). At second order in w and k̄ the solution is [125]

   F_k(u) = 1 − (iw/2) log((1 + u)/2) + (w²/8) [ 8 − 8k̄²/w² + log((1 + u)/2) log((1 + u)/2) − 4 Li_2((1 − u)/2) ] .
(1.113)
In the opposite limit, w ≫ 1, the wave equation can be solved using a WKB approximation [126]. For k = 0 the result is
   φ_k(u) = πw² u √(1 − u²) [ i J_2(2w√u) − Y_2(2w√u) ] .
(1.114)

In the intermediate regime the wave equation can be solved numerically. A standard method is to start from the near horizon result given in equ. (1.112) and integrate outwards towards the boundary. The retarded correlation function is given by the variation of the boundary action with respect to the field. For this purpose we consider the quadratic part of the Einstein-Hilbert action and use the AdS/CFT correspondence to express Newton's constant in terms of gauge theory parameters. We find

   S = − (π² N² T⁴ / 8) ∫ du d⁴x [ (f(u)/u) (∂_u φ)² + . . . ] .

The resulting spectral function, shown in Fig. 1.5, is quite different from expectations at weak coupling. At weak coupling we expect the spectral function to show a narrow transport peak at zero energy [80]. So far we have only considered calculations very close to equilibrium, corresponding to small perturbations of the AdS_5 Schwarzschild solution. In order to address the problem of initial state dynamics and thermalization we have to consider initial conditions that mimic colliding nuclei. Recent work focuses on colliding shock waves in asymptotically AdS_5 spaces. In the strong coupling limit the evolution of the shock waves is a problem in numerical relativity. Special methods have been developed to deal with problems in AdS space [129]. These methods are quite different from the techniques employed in connection with black hole or neutron star mergers in asymptotically flat Minkowski space time. A typical result is shown in Fig. 1.6. The calculations demonstrate fast "hydrodynamization", that means a rapid decay of non-hydrodynamic modes. At somewhat longer time scales thermal equilibration is achieved. This corresponds to the formation of an event horizon in the bulk. In general, it was realized that there is a fluid-gravity correspondence, an equivalence between dynamic space times containing a horizon and solutions of the Navier-Stokes equation [130]. This correspondence can be used to study, both analytically and numerically, difficult problems in fluid dynamics.
Outlook and acknowledgments
I hope this brief review provides a flavor of the breadth of computational problems that are related to QCD. This includes many issues that are at the forefront of computational physics, like the sign problem in euclidean QCD at finite baryon density, and the challenge to extract real time correlation functions from the euclidean path integral. It also includes many problems that are of great interest to mathematicians. Both the Yang-Mills existence and mass gap as well as the Navier-Stokes existence and smoothness problems are among the Clay Millennium Prize problems [131,132]. Interesting work on the Boltzmann equation was recently recognized with a Fields medal [133], and gradient flow plays an important role in the proof of the Poincare conjecture [134].
double update(double beta){ }

/******************************/

int main(){
  double beta, dbeta, action;
  srand48(1234L);                 /* initialize random number generator */
  /* do your experiment here; this example is a thermal cycle */
  dbeta = .01;
  coldstart();
  /* heat it up */
  for (beta = 1; beta > 0.0; beta -= dbeta){
    action = update(beta);
    printf("%g\t%g\n", beta, action);
  }
  printf("\n\n");
  /* cool it down */
  for (beta = 0; beta < 1.0; beta += dbeta){
    action = update(beta);
    printf("%g\t%g\n", beta, action);
  }
  printf("\n\n");
  exit(0);
}
The
Metropolis method generates an ensemble of configurations {x i } (k) where i = 1, . . . , n labels the lattice points and k = 1, . . . , N con f labels the configurations. Quantum mechanical averages are computed by averaging observables over many configurations, O (k) is the value of the classical observable O in the configuration {x i } (k) . The configurations are generated using Metropolis updates {x i } (k) → {x i } (k+1) . The update consists of a sweep through the lattice during which a trial update x
Fig. 1.1 Typical euclidean path obtained in a Monte Carlo simulation of the discretized euclidean action of the double well potential for η = 1.4. The lattice spacing in the euclidean time direction is a = 0.05 and the total number of lattice points is N_τ = 800. The green curve shows the corresponding smooth path obtained by running 100 cooling sweeps on the original path.
vacuum expectation value spontaneously breaks the approximate chiral SU(3) L ×SU(3) R flavor symmetry of the QCD Lagrangian down to its diagonal subgroup, the flavor symmetry SU(3) V . Spontaneous chiral symmetry breaking implies the existence of Goldstone bosons, massless modes with the quantum numbers of the generators of the broken axial symmetry SU(3)
β = T⁻¹ is the inverse temperature and L_E is the euclidean Lagrangian, which is obtained by analytically continuing equ. (1.16) to imaginary time τ = it. As in the quantum mechanical example in equ. (1.4) the temperature enters via the boundary condition on the fields in the imaginary time direction. Gauge fields and fermions obey periodic and anti-periodic boundary conditions, respectively. The chemical potential enters through its coupling to the conserved baryon density
•
Sweep through the lattice and update individual link variables. For this purpose multiply the link variable by a random SU(N c ) matrix, U µ → RU µ . Compute the change in the Wilson action and accept the update with probability exp(−∆ S W ). • Compute physical observables. The simplest observable is the average plaquette W µν , which can be related to the equation of state, see equ. (1.30). More complicated observables include the correlation function between plaquettes, and the Wilson loop W (C ) = Tr [L(C )] , (C ) is the product of link variables around a closed loop. The average Wilson loop is related to the potential between two static charges in the fundamental representation
Fig. 1.3 Topological objects in lattice QCD (figure courtesy of S. Sharma, see [62]). This picture shows a slice through a low lying eigenstate of the Dirac operator in lattice QCD.

one-gluon exchange interaction, for example, corresponds to a perturbative fluctuation in the gauge field that modifies the two quark propagators. An operator with the quantum number of the proton is η_α
the four-dimensional gauge field and the rhs is computed from the gauge potentials evaluated at the flow time τ. The Lorentz indices remain four-dimensional. The rhs of the flow equations is the classical equation of motion, so that the gradient flow tends to drive gauge fields towards the closest classical solution. The only finite action solutions of the euclidean field equations on R 4 are instantons [65, 66]. Instantons and anti-instantons are characterized by integer values Q top = ±1 of the topological charge
observe that the topological susceptibility in QCD with degenerate quark masses is proportional to m qq . Note that equ. (1.54) is consistent with the vanishing of χ top for m u = 0. If m u = 0 and m d = 0 then equ. (1.54) is minimized by Σ = exp(iφ τ 3 ) with φ = θ /2, and the vacuum energy is independent of θ .
1.3.2. To understand the severity of the problem consider a generic expectation value O = dU det(D) O e −S dU det(D) e −S . (1.55) If the determinant is complex I can write this as O = dU | det(D)| Oe iϕ e −S dU | det(D)| e iϕ e
Fig. 1.4 Schematic time evolution of a heavy ion collision. Figure courtesy of S. Bass. CGC refers to the color glass condensate, a semi-classical model of the overpopulated gluon configuration in the initial state of a heavy ion collision. Glasma refers to the non-equilibrium evolution of this state into a locally equilibrated plasma. Hydrodynamics is the theory of the time evolution of a locally equilibrated fireball, and hadronic phase refers to the late time kinetic stage of the collision.
I restrict myself to flat coordinate systems. In curvilinear coordinates equ. (1.77) and (1.78) contain suitable volume factors. Equ. (1.77) is solved by dr dt = u (m(r),t) , (1.79) which is the equation for the Lagrange coordinate. In terms of the mass coordinate m(r) the momentum and energy equations are
have only written down the ideal contributions to the stress tensor and energy current. To put these equations on a grid I focus on the mass integrated quantities
F
= −∇ x E p is a force, and C[ f p ] is the collision term. For dilute systems the collision term is dominated by binary scattering and C[ f p ] = − ∏ i=2,3,4 dΓ i w(1, 2; 3, 4) ( f 1 f 2 − f 3 f 4 ) , (1.89)
equ. (1.103) and equ. (1.104) describe a staggered leapfrog algorithm, similar to equ. (1.95-1.97) above. An important constraint on the numerical evolution is provided by Gauss law. Varying the lattice action with respect to U 0 before imposing temporal axial gauge gives
metric = DiagonalMatrix[{-f[u]/u*(Pi*T*Ra)^2, (Pi*T*Ra)^2/u, (Pi*T*Ra)^2/u, (Pi*T*Ra)^2/u, Ra^2/(4*u^2*f[u])}]
inversemetric = Simplify[Inverse[metric]]

From the metric I compute the Christoffel symbols

(* Christoffel Symbols *)
(* ------------------- *)
affine := affine = Simplify[
can check the equation of motion, G µν = Λ 2 g µν , where the cosmological constant is determined by the AdS radius R.
(* -------------------------------------------- *)
SqrtG = Simplify[Sqrt[-Det[metric]], {Ra > 0, T > 0, u > 0}]
dnuPhi =
Fig. 1.5 Viscosity spectral function in a N = 4 SUSY Yang Mills plasma. The spectral function is computed in the large N_c limit of a strongly coupled plasma using the AdS/CFT correspondence. The figure in the left panel shows η(ω)/s (blue) and the zero temperature counterpart η_{T=0}(ω)/s (red) as a function of ω. The figure in the right panel shows the finite temperature part [η(ω) − η_{T=0}(ω)]/s. The figures were generated using the script described below equ. (1.116).

{ D[Phi[t, z, u], {z, 2}] -> -k^2*fp, D[Phi[t, z, u], {t, 2}] -> -w^2*fp, D[Phi[t, z, u], {u, 2}] -> fpPP, D[Phi[t, z, u], {u, 1}] -> fpP }
action follows after an integration by parts. The retarded Green function is determined by the second variational derivative with respect to the boundary value of the field [125,127],

Fig. 1.6 Energy density of colliding shock waves in AdS_5 space [128]. The figure shows the energy density E/µ⁴ on the boundary of AdS_5 as a function of the time coordinate v and the longitudinal direction z. The shocks are infinitely extended in the transverse direction. The parameter µ sets the overall scale.
(1.17)

and f_{abc} = 4i Tr([λ_a, λ_b] λ_c) are the SU(N_c) structure constants. The action of the covariant derivative on the quark fields is
the Riemann tensor

   R^µ_{ναβ} = ∂_α Γ^µ_{νβ} − ∂_β Γ^µ_{να} + Γ^ρ_{νβ} Γ^µ_{ρα} − Γ^ρ_{να} Γ^µ_{ρβ} ,   (1.109)

the Ricci tensor R_{αβ} = R^µ_{αµβ}, and the scalar curvature R = R^µ_µ. Finally, I compute the Einstein tensor G_{µν} = R_{µν} − (1/2) g_{µν} R.
Table[(1/2)*Sum[(inversemetric[[i, s]])*(D[metric[[s, j]], coord[[k]]] + D[metric[[s, k]], coord[[j]]] - D[metric[[j, k]], coord[[s]]]), {s, 1, n}], {i, 1, n}, {j, 1, n}, {k, 1, n}]]
Table[D[Phi[t, z, u], coord[[i]]], {i, 1, n}];
DnuPhi = SqrtG * inversemetric . dnuPhi;
(1.116)

Finally, the spectral function is given by η(ω, k) = −ω⁻¹ Im G_R(ω, k). Below is a short Mathematica script that determines the spectral function numerically.

(* equation of motion for minimally coupled scalar *)
(* with harmonic space and time dependence *)

The spectral function for k = 0 is shown in Fig. 1.5. This is an interesting result because it represents a systematic calculation of a real time observable in the strong coupling limit of a quantum field theory. As explained in Sect. 1.4.5 the corresponding lattice calculation is very difficult, and existing results are difficult to improve upon. We also note that the result is quite different from expectations at weak coupling.

Appendix: Z2 gauge theory

This is a simple Monte Carlo program for Z2 gauge theory written by M. Creutz [136].

Acknowledgements

The euclidean path integral simulation in quantum mechanics is described in [6], and the programs are available at https://www.physics.ncsu.edu/schaefer/physics/. A simple Z2 lattice gauge code can be found in the Appendix. You should be able to extend this code to SU(2) and SU(3). Modern lattice QCD tools can be found on the chroma website http://github.com/JeffersonLab/chroma.
There are a number of relativistic hydro codes on the web. An example is the VISHNU code [135] which is available at https://u.osu.edu/vishnu/. Dissipative and anisotropic versions are available on request. Both UrQMD http://urqmd.org/ and GiBUU https://gibuu.hepforge.org/ are also available online. The mathematica notebooks in Sect. 1.5.5 are adapted from notebooks available on Jim Hartle's website http://web.physics.ucsb.edu/~gravitybook/. Much more sophisticated tensor packages are easily found on the web. The simple script for solving the wave equation in AdS_5 is adapted from a notebook written by Matthias Kaminski. A set of lecture notes and mathematica notebooks for solving the Einstein equations numerically on asymptotically AdS spaces can be found on Wilke van der Schee's website https://sites.google.com/site/wilkevanderschee/ads-numerics. T. S. work is supported by the US Department of Energy grant DE-FG02-03ER41260.
References

1. R.P. Feynman, A.R. Hibbs, Quantum Mechanics and Path Integrals (McGraw-Hill, 1965)
. N Metropolis, A W Rosenbluth, M N Rosenbluth, A H Teller, E Teller, DOI10.1063/1.1699114J. Chem. Phys. 211087N. Metropolis, A.W. Rosenbluth, M.N. Rosenbluth, A.H. Teller, E. Teller, J. Chem. Phys. 21, 1087 (1953). DOI 10.1063/1.1699114
. M Creutz, B Freedman, 10.1016/0003-4916(81)90074-9Annals Phys. 132427M. Creutz, B. Freedman, Annals Phys. 132, 427 (1981). DOI 10.1016/0003-4916(81)90074-9
. E V Shuryak, O V Zhirov, DOI10.1016/0550-3213(84)90401-2Nucl. Phys. 242393E.V. Shuryak, O.V. Zhirov, Nucl. Phys. B242, 393 (1984). DOI 10.1016/0550-3213(84)90401-2
. E V Shuryak, DOI10.1016/0550-3213(88)90191-5Nucl. Phys. 302621E.V. Shuryak, Nucl. Phys. B302, 621 (1988). DOI 10.1016/0550-3213(88)90191-5
Instantons and Monte Carlo methods in quantum mechanics. T Schäfer, hep-lat/0411010T. Schäfer, Instantons and Monte Carlo methods in quantum mechanics (2004). ArXive:hep-lat/0411010
. M Jarrell, J E Gubernatis, DOI10.1016/0370-1573(95)00074-7Phys. Rept. 269133M. Jarrell, J.E. Gubernatis, Phys. Rept. 269, 133 (1996). DOI 10.1016/0370-1573(95)00074-7
. M Asakawa, T Hatsuda, Y Nakahara, DOI 10.1016/ S0146-6410(01Prog. Part. Nucl. Phys. 46M. Asakawa, T. Hatsuda, Y. Nakahara, Prog. Part. Nucl. Phys. 46, 459 (2001). DOI 10.1016/ S0146-6410(01)00150-8
. C Jarzynski, DOI10.1103/PhysRevLett.78.2690Physical Review Letters. 782690C. Jarzynski, Physical Review Letters 78, 2690 (1997). DOI 10.1103/PhysRevLett.78.2690
. D J Gross, F Wilczek, DOI10.1103/PhysRevLett.30.1343Phys.Rev.Lett. 301343D.J. Gross, F. Wilczek, Phys.Rev.Lett. 30, 1343 (1973). DOI 10.1103/PhysRevLett.30.1343
. H D Politzer, 10.1103/PhysRevLett.30.1346Phys.Rev.Lett. 301346H.D. Politzer, Phys.Rev.Lett. 30, 1346 (1973). DOI 10.1103/PhysRevLett.30.1346
. S R Coleman, E J Weinberg, DOI10.1103/PhysRevD.7.1888Phys.Rev. 71888S.R. Coleman, E.J. Weinberg, Phys.Rev. D7, 1888 (1973). DOI 10.1103/PhysRevD.7.1888
. K Nakamura, 10.1088/0954-3899/37/7A/075021J. Phys. 3775021K. Nakamura, et al., J. Phys. G37, 075021 (2010). DOI 10.1088/0954-3899/37/7A/075021
. M G Alford, A Schmitt, K Rajagopal, T Schäfer, 10.1103/ RevModPhys.80.1455Rev. Mod. Phys. 801455M.G. Alford, A. Schmitt, K. Rajagopal, T. Schäfer, Rev. Mod. Phys. 80, 1455 (2008). DOI 10.1103/ RevModPhys.80.1455
. A Adams, L D Carr, T Schäfer, P Steinberg, J E Thomas, 10.1088/ 1367-2630/14/11/115009New J. Phys. 14115009A. Adams, L.D. Carr, T. Schäfer, P. Steinberg, J.E. Thomas, New J. Phys. 14, 115009 (2012). DOI 10.1088/ 1367-2630/14/11/115009
. P Braun-Munzinger, V Koch, T Schäfer, J Stachel, DOI10.1016/j.physrep.2015.12.003Phys. Rept. 62176P. Braun-Munzinger, V. Koch, T. Schäfer, J. Stachel, Phys. Rept. 621, 76 (2016). DOI 10.1016/j.physrep. 2015.12.003
. M Gell-Mann, R J Oakes, B Renner, 10.1103/PhysRev.175.2195Phys. Rev. 1752195M. Gell-Mann, R.J. Oakes, B. Renner, Phys. Rev. 175, 2195 (1968). DOI 10.1103/PhysRev.175.2195
. S R Coleman, E Witten, 10.1103/PhysRevLett.45.100Phys. Rev. Lett. 45100S.R. Coleman, E. Witten, Phys. Rev. Lett. 45, 100 (1980). DOI 10.1103/PhysRevLett.45.100
. G Hooft, NATO Sci. Ser. B. 59135G. 't Hooft, NATO Sci. Ser. B 59, 135 (1980)
. E V Shuryak, Zh. Eksp. Teor. Fiz. 47212Sov. Phys. JETPE.V. Shuryak, Sov. Phys. JETP 47, 212 (1978). [Zh. Eksp. Teor. Fiz.74,408(1978)]
. E V Shuryak, DOI10.1016/0370-2693(78)90370-2.Phys. Lett. 78150Yad. Fiz.E.V. Shuryak, Phys. Lett. B78, 150 (1978). DOI 10.1016/0370-2693(78)90370-2. [Yad. Fiz.28,796(1978)]
. A D Linde, DOI10.1016/0370-2693(80)90769-8Phys. Lett. 96289A.D. Linde, Phys. Lett. B96, 289 (1980). DOI 10.1016/0370-2693(80)90769-8
. R D Pisarski, F Wilczek, DOI10.1103/PhysRevD.29.338Phys. Rev. 29338R.D. Pisarski, F. Wilczek, Phys. Rev. D29, 338 (1984). DOI 10.1103/PhysRevD.29.338
. Y Aoki, G Endrodi, Z Fodor, S D Katz, K K Szabo, DOI10.1038/nature05120Nature. 443675Y. Aoki, G. Endrodi, Z. Fodor, S.D. Katz, K.K. Szabo, Nature 443, 675 (2006). DOI 10.1038/nature05120
. A Bazavov, DOI10.1103/PhysRevD.85.054503Phys. Rev. 8554503A. Bazavov, et al., Phys. Rev. D85, 054503 (2012). DOI 10.1103/PhysRevD.85.054503
. Y Aoki, Z Fodor, S D Katz, K K Szabo, DOI10.1016/j.physletb.2006.10.021Phys. Lett. 64346Y. Aoki, Z. Fodor, S.D. Katz, K.K. Szabo, Phys. Lett. B643, 46 (2006). DOI 10.1016/j.physletb.2006.10.021
. Y Aoki, S Borsanyi, S Durr, Z Fodor, S D Katz, S Krieg, K K Szabo, 10.1088/1126-6708/2009/06/088JHEP. 0688Y. Aoki, S. Borsanyi, S. Durr, Z. Fodor, S.D. Katz, S. Krieg, K.K. Szabo, JHEP 06, 088 (2009). DOI 10.1088/1126-6708/2009/06/088
. A Bazavov, DOI10.1103/PhysRevD.90.094503Phys. Rev. 90994503A. Bazavov, et al., Phys. Rev. D90(9), 094503 (2014). DOI 10.1103/PhysRevD.90.094503
. M A Stephanov, Prog. Theor. Phys. Suppl. 153139M.A. Stephanov, Prog. Theor. Phys. Suppl. 153, 139 (2004)
. Z Fodor, S D Katz, JHEP. 0314Z. Fodor, S.D. Katz, JHEP 03, 014 (2002)
. C Allton, S Ejiri, S Hands, O Kaczmarek, F Karsch, 10.1103/PhysRevD.66.074507Phys.Rev. 6674507C. Allton, S. Ejiri, S. Hands, O. Kaczmarek, F. Karsch, et al., Phys.Rev. D66, 074507 (2002). DOI 10.1103/PhysRevD.66.074507
. F Karsch, C R Allton, S Ejiri, S J Hands, O Kaczmarek, E Laermann, C Schmidt, DOI10.1016/S0920-5632(03)02659-8Nucl. Phys. Proc. Suppl. 129614F. Karsch, C.R. Allton, S. Ejiri, S.J. Hands, O. Kaczmarek, E. Laermann, C. Schmidt, Nucl. Phys. Proc. Suppl. 129, 614 (2004). DOI 10.1016/S0920-5632(03)02659-8. [,614(2003)]
. Z Fodor, S Katz, DOI10.1088/1126-6708/2004/04/050JHEP. 040450Z. Fodor, S. Katz, JHEP 0404, 050 (2004). DOI 10.1088/1126-6708/2004/04/050
. R V Gavai, S Gupta, DOI10.1103/PhysRevD.78.114503Phys. Rev. 78114503R.V. Gavai, S. Gupta, Phys. Rev. D78, 114503 (2008). DOI 10.1103/PhysRevD.78.114503
. S Datta, R V Gavai, S Gupta, DOI10.1016/j.nuclphysa.2013.02.156Nucl. Phys. A904-905. 883S. Datta, R.V. Gavai, S. Gupta, Nucl. Phys. A904-905, 883c (2013). DOI 10.1016/j.nuclphysa.2013.02. 156
. P De Forcrand, O Philipsen, 10.1103/PhysRevLett.105.152001Phys. Rev. Lett. 105152001P. de Forcrand, O. Philipsen, Phys. Rev. Lett. 105, 152001 (2010). DOI 10.1103/PhysRevLett.105.152001
. M A Stephanov, K Rajagopal, E V Shuryak, Phys. Rev. Lett. 814816M.A. Stephanov, K. Rajagopal, E.V. Shuryak, Phys. Rev. Lett. 81, 4816 (1998)
. G Sauer, H Chandra, U Mosel, DOI10.1016/0375-9474(76)90429-2Nucl. Phys. 264221G. Sauer, H. Chandra, U. Mosel, Nucl. Phys. A264, 221 (1976). DOI 10.1016/0375-9474(76)90429-2
. J Pochodzalla, DOI10.1103/PhysRevLett.75.1040Phys. Rev. Lett. 751040J. Pochodzalla, et al., Phys. Rev. Lett. 75, 1040 (1995). DOI 10.1103/PhysRevLett.75.1040
. J B Elliott, P T Lake, L G Moretto, L Phair, 10.1103/PhysRevC.87.054622Phys. Rev. 87554622J.B. Elliott, P.T. Lake, L.G. Moretto, L. Phair, Phys. Rev. C87(5), 054622 (2013). DOI 10.1103/PhysRevC. 87.054622
. M G Alford, K Rajagopal, F Wilczek, DOI10.1016/S0550-3213(98)00668-3Nucl. Phys. 537443M.G. Alford, K. Rajagopal, F. Wilczek, Nucl. Phys. B537, 443 (1999). DOI 10.1016/S0550-3213(98) 00668-3
. T Schäfer, 10.1016/S0550-3213(00)00063-8Nucl. Phys. 575269T. Schäfer, Nucl. Phys. B575, 269 (2000). DOI 10.1016/S0550-3213(00)00063-8
. T Schäfer, F Wilczek, 10.1103/PhysRevLett.82.3956Phys. Rev. Lett. 823956T. Schäfer, F. Wilczek, Phys. Rev. Lett. 82, 3956 (1999). DOI 10.1103/PhysRevLett.82.3956
. T Hatsuda, M Tachibana, N Yamamoto, G Baym, 10.1103/ PhysRevLett.97.122001Phys. Rev. Lett. 97122001T. Hatsuda, M. Tachibana, N. Yamamoto, G. Baym, Phys. Rev. Lett. 97, 122001 (2006). DOI 10.1103/ PhysRevLett.97.122001
M Creutz, Quarks, Gluons, and Lattices. Cambridge University PressM. Creutz, Quarks, Gluons, and Lattices (Cambridge University Press, 1983)
I Montvay, G Münster, Quantum Fields on a Lattice. Cambridge University PressI. Montvay, G. Münster, Quantum Fields on a Lattice (Cambridge University Press, 1994)
J Smit, Introduction to Quantum Fields on a Lattice. Cambridge University PressJ. Smit, Introduction to Quantum Fields on a Lattice (Cambridge University Press, 2002)
C Gattringer, C B Lang, Quantum Chromodynamics on the Lattice. SpringerC. Gattringer, C.B. Lang, Quantum Chromodynamics on the Lattice (Springer, 2009)
H W Lin, H B Meyer, Lattice QCD for Nuclear Physics. SpringerH.W. Lin, H.B. Meyer, Lattice QCD for Nuclear Physics (Springer, 2014)
The Phase diagram of quantum chromodynamics. Z Fodor, S D Katz, ArXiv:0908.3341Z. Fodor, S.D. Katz, The Phase diagram of quantum chromodynamics (2009). ArXiv:0908.3341
H T Ding, F Karsch, S Mukherjee, ArXiv:1504.05274Thermodynamics of strong-interaction matter from Lattice QCD. H.T. Ding, F. Karsch, S. Mukherjee, Thermodynamics of strong-interaction matter from Lattice QCD (2015). ArXiv:1504.05274
. K G Wilson, DOI10.1103/PhysRevD.10.2445Phys. Rev. 1045K.G. Wilson, Phys. Rev. D10, 2445 (1974). DOI 10.1103/PhysRevD.10.2445. [,45(1974)]
How to generate random matrices from the classical compact groups. F Mezzadri, ArXiv:math- ph/0609050F. Mezzadri, How to generate random matrices from the classical compact groups (2006). ArXiv:math- ph/0609050
. A Hasenfratz, P Hasenfratz, 10.1016/0370-2693(80)90118-5Phys. Lett. 93241A. Hasenfratz, P. Hasenfratz, Phys. Lett. B93, 165 (1980). DOI 10.1016/0370-2693(80)90118-5. [,241(1980)]
G P Lepage, Strong interactions at low and intermediate energies. Proceedings, 13th Annual Hampton University Graduate Studies, HUGS'98. Newport News, USAG.P. Lepage, in Strong interactions at low and intermediate energies. Proceedings, 13th Annual Hampton University Graduate Studies, HUGS'98, Newport News, USA, May 26-June 12, 1998 (1998), pp. 49-90
. U Wolff, 10.1103/PhysRevLett.62.361Phys. Rev. Lett. 62361U. Wolff, Phys. Rev. Lett. 62, 361 (1989). DOI 10.1103/PhysRevLett.62.361
. J B Kogut, L Susskind, DOI10.1103/PhysRevD.11.395Phys. Rev. 11395J.B. Kogut, L. Susskind, Phys. Rev. D11, 395 (1975). DOI 10.1103/PhysRevD.11.395
. D B Kaplan, 10.1016/0370-2693(92)91112-M59.H.Neuberger10.1016/S0370-2693(97)01368-3Phys. Lett. 288141Phys. Lett.D.B. Kaplan, Phys. Lett. B288, 342 (1992). DOI 10.1016/0370-2693(92)91112-M 59. H. Neuberger, Phys. Lett. B417, 141 (1998). DOI 10.1016/S0370-2693(97)01368-3
M Luscher, Modern perspectives in lattice QCD: Quantum field theory and high performance computing. Proceedings, International School, 93rd Session. Les Houches, FranceM. Luscher, in Modern perspectives in lattice QCD: Quantum field theory and high performance comput- ing. Proceedings, International School, 93rd Session, Les Houches, France, August 3-28, 2009 (2010), pp. 331-399
. E Endress, C Pena, K Sivalingam, DOI10.1016/j.cpc.2015.04.017Comput. Phys. Commun. 19535E. Endress, C. Pena, K. Sivalingam, Comput. Phys. Commun. 195, 35 (2015). DOI 10.1016/j.cpc.2015. 04.017
. V Dick, F Karsch, E Laermann, S Mukherjee, S Sharma, 10.1103/PhysRevD.91.094504Phys. Rev. 91994504V. Dick, F. Karsch, E. Laermann, S. Mukherjee, S. Sharma, Phys. Rev. D91(9), 094504 (2015). DOI 10.1103/PhysRevD.91.094504
. M Teper, 10.1016/0370-2693Phys. Lett. 1718691004M. Teper, Phys. Lett. B171, 86 (1986). DOI 10.1016/0370-2693(86)91004-X
. M Lüscher, DOI10.1007/JHEP08(2010)071,10.1007/JHEP03(2014)092JHEP. 0871Erratum: JHEP03,092(2014)M. Lüscher, JHEP 08, 071 (2010). DOI 10.1007/JHEP08(2010)071,10.1007/JHEP03(2014)092. [Erratum: JHEP03,092(2014)]
. A A Belavin, A M Polyakov, A S Schwartz, Yu S Tyupkin, 10.1016/ 0370-2693(75Phys. Lett. 5990163A.A. Belavin, A.M. Polyakov, A.S. Schwartz, Yu.S. Tyupkin, Phys. Lett. B59, 85 (1975). DOI 10.1016/ 0370-2693(75)90163-X
. T Schäfer, E V Shuryak, 10.1103/RevModPhys.70.323Rev. Mod. Phys. 70323T. Schäfer, E.V. Shuryak, Rev. Mod. Phys. 70, 323 (1998). DOI 10.1103/RevModPhys.70.323
. L Debbio, L Giusti, C Pica, 10.1103/PhysRevLett.94.032003Phys. Rev. Lett. 9432003L. Del Debbio, L. Giusti, C. Pica, Phys. Rev. Lett. 94, 032003 (2005). DOI 10.1103/PhysRevLett.94. 032003
. M Ce, C Consonni, G P Engel, L Giusti, PoS. 2014353M. Ce, C. Consonni, G.P. Engel, L. Giusti, PoS LATTICE2014, 353 (2014)
. E Poppitz, T Schäfer, M , DOI10.1007/JHEP03(2013)087JHEP. 0387E. Poppitz, T. Schäfer, M. Ünsal, JHEP 03, 087 (2013). DOI 10.1007/JHEP03(2013)087
. T Banks, A Casher, 10.1016/0550-3213(80)90255-2Nucl. Phys. 169103T. Banks, A. Casher, Nucl. Phys. B169, 103 (1980). DOI 10.1016/0550-3213(80)90255-2
. P H Ginsparg, K G Wilson, DOI10.1103/PhysRevD.25.2649Phys. Rev. 252649P.H. Ginsparg, K.G. Wilson, Phys. Rev. D25, 2649 (1982). DOI 10.1103/PhysRevD.25.2649
A Cherman, T Schäfer, M , ArXiv:1604.06108Chiral Lagrangian from Duality and Monopole Operators in Compactified QCD. A. Cherman, T. Schäfer, M. Unsal, Chiral Lagrangian from Duality and Monopole Operators in Compact- ified QCD (2016). ArXiv:1604.06108
. S R Beane, P F Bedaque, A Parreno, M J Savage, DOI10.1016/j.physletb.2004.02.007Phys. Lett. 585106S.R. Beane, P.F. Bedaque, A. Parreno, M.J. Savage, Phys. Lett. B585, 106 (2004). DOI 10.1016/j.physletb. 2004.02.007
. M Cristoforetti, F Di Renzo, L Scorzato, DOI10.1103/PhysRevD.86.074506Phys. Rev. 8674506M. Cristoforetti, F. Di Renzo, L. Scorzato, Phys. Rev. D86, 074506 (2012). DOI 10.1103/PhysRevD.86. 074506
. G Aarts, L Bongiovanni, E Seiler, D Sexty, DOI10.1007/JHEP10(2014)159JHEP. 10159G. Aarts, L. Bongiovanni, E. Seiler, D. Sexty, JHEP 10, 159 (2014). DOI 10.1007/JHEP10(2014)159
. G Aarts, E Seiler, I O Stamatescu, 10.1103/PhysRevD.81.054508Phys. Rev. 8154508G. Aarts, E. Seiler, I.O. Stamatescu, Phys. Rev. D81, 054508 (2010). DOI 10.1103/PhysRevD.81.054508
. D Sexty, DOI10.1016/j.physletb.2014.01.019Phys. Lett. 729108D. Sexty, Phys. Lett. B729, 108 (2014). DOI 10.1016/j.physletb.2014.01.019
. T Kloiber, C Gattringer, PoS. 2013206T. Kloiber, C. Gattringer, PoS LATTICE2013, 206 (2014)
. T Schäfer, D Teaney, Rept , DOI10.1088/0034-4885/72/12/126001Prog. Phys. 72126001T. Schäfer, D. Teaney, Rept. Prog. Phys. 72, 126001 (2009). DOI 10.1088/0034-4885/72/12/126001
. T Schäfer, DOI10.1146/annurev-nucl-102313-025439Ann. Rev. Nucl. Part. Sci. 64125T. Schäfer, Ann. Rev. Nucl. Part. Sci. 64, 125 (2014). DOI 10.1146/annurev-nucl-102313-025439
. F Karsch, H W Wyld, DOI10.1103/PhysRevD.35.2518Phys. Rev. 352518F. Karsch, H.W. Wyld, Phys. Rev. D35, 2518 (1987). DOI 10.1103/PhysRevD.35.2518
. H B Meyer, DOI10.1103/PhysRevD.76.101701Phys. Rev. 76101701H.B. Meyer, Phys. Rev. D76, 101701 (2007). DOI 10.1103/PhysRevD.76.101701
. H B Meyer, 10.1103/PhysRevLett.100.162001Phys. Rev. Lett. 100162001H.B. Meyer, Phys. Rev. Lett. 100, 162001 (2008). DOI 10.1103/PhysRevLett.100.162001
. S Sakai, A Nakamura, 10.1063/1.2729742PoS. 2007221AIP ConfS. Sakai, A. Nakamura, PoS LAT2007, 221 (2007). DOI 10.1063/1.2729742. [AIP Conf.
. Proc. 893Proc.893,5(2007)]
. G Aarts, C Allton, J Foley, S Hands, S Kim, 10.1103/ PhysRevLett.99.022002Phys. Rev. Lett. 9922002G. Aarts, C. Allton, J. Foley, S. Hands, S. Kim, Phys. Rev. Lett. 99, 022002 (2007). DOI 10.1103/ PhysRevLett.99.022002
. G Aarts, PoS. 20071G. Aarts, PoS LAT2007, 001 (2007)
. G Aarts, C Allton, J Foley, S Hands, S Kim, DOI10.1016/j.nuclphysa.2006.11.148Nucl. Phys. 785202G. Aarts, C. Allton, J. Foley, S. Hands, S. Kim, Nucl. Phys. A785, 202 (2007). DOI 10.1016/j.nuclphysa. 2006.11.148
. H B Meyer, DOI10.1088/1126-6708/2008/08/031JHEP. 0831H.B. Meyer, JHEP 08, 031 (2008). DOI 10.1088/1126-6708/2008/08/031
. P Romatschke, D T Son, DOI10.1103/PhysRevD.80.065021Phys. Rev. 8065021P. Romatschke, D.T. Son, Phys. Rev. D80, 065021 (2009). DOI 10.1103/PhysRevD.80.065021
. P B Arnold, G D Moore, L G Yaffe, DOI10.1088/1126-6708/2000/11/001JHEP. 111P.B. Arnold, G.D. Moore, L.G. Yaffe, JHEP 11, 001 (2000). DOI 10.1088/1126-6708/2000/11/001
. P Kovtun, D T Son, A O Starinets, 10.1103/PhysRevLett.94.111601Phys. Rev. Lett. 94111601P. Kovtun, D.T. Son, A.O. Starinets, Phys. Rev. Lett. 94, 111601 (2005). DOI 10.1103/PhysRevLett.94. 111601
. P Romatschke, 10.1142/S0218301310014613Int. J. Mod. Phys. 191P. Romatschke, Int. J. Mod. Phys. E19, 1 (2010). DOI 10.1142/S0218301310014613
. L Rezzolla, O Zanotti, Relativistic Hydrodynamics, Oxford University PressL. Rezzolla, O. Zanotti, Relativistic Hydrodynamics (Oxford University Press, 2013)
. S Jeon, U Heinz, DOI10.1142/S0218301315300106Int. J. Mod. Phys. 24101530010S. Jeon, U. Heinz, Int. J. Mod. Phys. E24(10), 1530010 (2015). DOI 10.1142/S0218301315300106
. P Colella, P R Woodward, J. Comp. Phys. 54174P. Colella, P.R. Woodward, J. Comp. Phys. 54, 174 (1984)
. J M Blondin, E A Lufkin, Astrophys. J. Supp. Ser. 88589J.M. Blondin, E.A. Lufkin, Astrophys. J. Supp. Ser. 88, 589 (1993)
. T Schäfer, DOI10.1103/PhysRevA.82.063629Phys. Rev. 8263629T. Schäfer, Phys. Rev. A82, 063629 (2010). DOI 10.1103/PhysRevA.82.063629
. W Florkowski, R Ryblewski, DOI10.1103/PhysRevC.83.034907Phys. Rev. 8334907W. Florkowski, R. Ryblewski, Phys. Rev. C83, 034907 (2011). DOI 10.1103/PhysRevC.83.034907
. M Martinez, M Strickland, 10.1016/j.nuclphysa.2010.08.011Nucl. Phys. 848183M. Martinez, M. Strickland, Nucl. Phys. A848, 183 (2010). DOI 10.1016/j.nuclphysa.2010.08.011
. M Bluhm, T Schäfer, DOI10.1103/PhysRevA.92.043602Phys. Rev. 92443602M. Bluhm, T. Schäfer, Phys. Rev. A92(4), 043602 (2015). DOI 10.1103/PhysRevA.92.043602
. M Bluhm, T Schäfer, 10.1103/PhysRevLett.116.115301Phys. Rev. Lett. 11611115301M. Bluhm, T. Schäfer, Phys. Rev. Lett. 116(11), 115301 (2016). DOI 10.1103/PhysRevLett.116.115301
. P Romatschke, M Mendoza, S Succi, DOI10.1103/PhysRevC.84.034903Phys. Rev. 8434903P. Romatschke, M. Mendoza, S. Succi, Phys. Rev. C84, 034903 (2011). DOI 10.1103/PhysRevC.84. 034903
. J Brewer, M Mendoza, R E Young, P Romatschke, 10.1103/ PhysRevA.93.013618Phys. Rev. 93113618J. Brewer, M. Mendoza, R.E. Young, P. Romatschke, Phys. Rev. A93(1), 013618 (2016). DOI 10.1103/ PhysRevA.93.013618
L P Kadanoff, G Baym, Quantum Statistical Mechanics. W. A. BenjaminL.P. Kadanoff, G. Baym, Quantum Statistical Mechanics (W. A. Benjamin, 1962)
. G F Bertsch, S Das Gupta, DOI10.1016/0370-1573(88)90170-6Phys. Rept. 160189G.F. Bertsch, S. Das Gupta, Phys. Rept. 160, 189 (1988). DOI 10.1016/0370-1573(88)90170-6
. T Lepers, D Davesne, S Chiacchiera, M Urban, 10.1103/PhysRevA.82.023609Phys. Rev. 8223609T. Lepers, D. Davesne, S. Chiacchiera, M. Urban, Phys. Rev. A82, 023609 (2010). DOI 10.1103/PhysRevA. 82.023609
. S A Bass, 10.1016/S0146-6410(98)00058-1.Prog. Part. Nucl. Phys. 41225Prog. Part. Nucl. Phys.S.A. Bass, et al., Prog. Part. Nucl. Phys. 41, 255 (1998). DOI 10.1016/S0146-6410(98)00058-1. [Prog. Part. Nucl. Phys.41,225(1998)]
. O Buss, T Gaitanos, K Gallmeister, H Van Hees, M Kaskulov, O Lalakulich, A B Larionov, T Leitner, J Weil, U Mosel, DOI10.1016/j.physrep.2011.12.001Phys. Rept. 5121O. Buss, T. Gaitanos, K. Gallmeister, H. van Hees, M. Kaskulov, O. Lalakulich, A.B. Larionov, T. Leitner, J. Weil, U. Mosel, Phys. Rept. 512, 1 (2012). DOI 10.1016/j.physrep.2011.12.001
. W Ehehalt, W Cassing, 10.1016/0375-9474(96)00097-8Nucl. Phys. 602449W. Ehehalt, W. Cassing, Nucl. Phys. A602, 449 (1996). DOI 10.1016/0375-9474(96)00097-8
. K Geiger, B Muller, ; Z Xu, C Greiner, 10.1016/0550-3213(92)90280-O11110.1103/PhysRevC.71.064901Nucl. Phys. 36964901Phys. Rev.K. Geiger, B. Muller, Nucl. Phys. B369, 600 (1992). DOI 10.1016/0550-3213(92)90280-O 111. Z. Xu, C. Greiner, Phys. Rev. C71, 064901 (2005). DOI 10.1103/PhysRevC.71.064901
. L D Mclerran, R Venugopalan, DOI10.1103/PhysRevD.49.2233Phys. Rev. 492233L.D. McLerran, R. Venugopalan, Phys. Rev. D49, 2233 (1994). DOI 10.1103/PhysRevD.49.2233
. A H Mueller, D T Son, DOI10.1016/j.physletb.2003.12.047Phys. Lett. 582279A.H. Mueller, D.T. Son, Phys. Lett. B582, 279 (2004). DOI 10.1016/j.physletb.2003.12.047
. S Mrowczynski, B Schenke, M Strickland, S. Mrowczynski, B. Schenke, M. Strickland, (2016)
. J Berges, A Rothkopf, J Schmidt, DOI10.1103/PhysRevLett.101.041603Phys. Rev. Lett. 10141603J. Berges, A. Rothkopf, J. Schmidt, Phys. Rev. Lett. 101, 041603 (2008). DOI 10.1103/PhysRevLett.101. 041603
. K Dusling, T Epelbaum, F Gelis, R Venugopalan, DOI 10.1016/j. nuclphysa.2010.11.009Nucl. Phys. 85069K. Dusling, T. Epelbaum, F. Gelis, R. Venugopalan, Nucl. Phys. A850, 69 (2011). DOI 10.1016/j. nuclphysa.2010.11.009
. D F Litim, C Manuel, DOI10.1016/S0370-1573(02)00015-7Phys. Rept. 364451D.F. Litim, C. Manuel, Phys. Rept. 364, 451 (2002). DOI 10.1016/S0370-1573(02)00015-7
. S K Wong, Nuovo Cim, 10.1007/BF0289213465689S.K. Wong, Nuovo Cim. A65, 689 (1970). DOI 10.1007/BF02892134
. C R Hu, B Muller, DOI10.1016/S0370-2693(97)00851-4Phys. Lett. 409377C.R. Hu, B. Muller, Phys. Lett. B409, 377 (1997). DOI 10.1016/S0370-2693(97)00851-4
. J M Maldacena, 10.1023/A:1026654312961Adv. Theor. Math. Phys. 38231Int. J. Theor. Phys.J.M. Maldacena, Int. J. Theor. Phys. 38, 1113 (1999). DOI 10.1023/A:1026654312961. [Adv. Theor. Math. Phys.2,231(1998)]
. D T Son, A O Starinets, DOI10.1146/annurev.nucl.57.090506.123120Ann. Rev. Nucl. Part. Sci. 5795D.T. Son, A.O. Starinets, Ann. Rev. Nucl. Part. Sci. 57, 95 (2007). DOI 10.1146/annurev.nucl.57.090506. 123120
. S S Gubser, A Karch, DOI10.1146/annurev.nucl.010909.083602Ann. Rev. Nucl. Part. Sci. 59145S.S. Gubser, A. Karch, Ann. Rev. Nucl. Part. Sci. 59, 145 (2009). DOI 10.1146/annurev.nucl.010909. 083602
. J Casalderrey-Solana, H Liu, D Mateos, K Rajagopal, U A Wiedemann, J. Casalderrey-Solana, H. Liu, D. Mateos, K. Rajagopal, U.A. Wiedemann, (2011)
. O Dewolfe, S S Gubser, C Rosen, D Teaney, DOI10.1016/j.ppnp.2013.11.001Prog. Part. Nucl. Phys. 7586O. DeWolfe, S.S. Gubser, C. Rosen, D. Teaney, Prog. Part. Nucl. Phys. 75, 86 (2014). DOI 10.1016/j.ppnp. 2013.11.001
. G Policastro, D T Son, A O Starinets, DOI10.1088/1126-6708/2002/09/043JHEP. 0943G. Policastro, D.T. Son, A.O. Starinets, JHEP 09, 043 (2002). DOI 10.1088/1126-6708/2002/09/043
. D Teaney, DOI10.1103/PhysRevD.74.045025Phys. Rev. 7445025D. Teaney, Phys. Rev. D74, 045025 (2006). DOI 10.1103/PhysRevD.74.045025
. D T Son, A O Starinets, 10.1088/1126-6708/2006/03/052JHEP. 0352D.T. Son, A.O. Starinets, JHEP 03, 052 (2006). DOI 10.1088/1126-6708/2006/03/052
. P M Chesler, L G Yaffe, 10.1103/PhysRevLett.106.021601Phys. Rev. Lett. 10621601P.M. Chesler, L.G. Yaffe, Phys. Rev. Lett. 106, 021601 (2011). DOI 10.1103/PhysRevLett.106.021601
. P M Chesler, L G Yaffe, DOI10.1007/JHEP07(2014)086JHEP. 0786P.M. Chesler, L.G. Yaffe, JHEP 07, 086 (2014). DOI 10.1007/JHEP07(2014)086
. M Rangamani, DOI10.1088/0264-9381/26/22/224003Class. Quant. Grav. 26224003M. Rangamani, Class. Quant. Grav. 26, 224003 (2009). DOI 10.1088/0264-9381/26/22/224003
A Jaffe, E Witten, Quantum Yang Mills Theory, Official Description of Millenium Prize Problems. A. Jaffe, E. Witten, Quantum Yang Mills Theory, Official Description of Millenium Prize Problems (2000). www.claymath.org
Existence and smoothness of the Navier-Stokes Equation, Official Description of Millenium Prize Problems. C Fefferman, C. Fefferman, Existence and smoothness of the Navier-Stokes Equation, Official Description of Millenium Prize Problems (2000). www.claymath.org
. C Mouhot, C Villani, DOI10.1007/s11511-011-0068-9Acta Mathematica. 20729C. Mouhot, C. Villani, Acta Mathematica 207, 29 (2011). DOI 10.1007/s11511-011-0068-9
The entropy formula for the Ricci flow and its geometric applications. G Perelman, ArXiv:math/0211159math.DGG. Perelman, The entropy formula for the Ricci flow and its geometric applications (2002). ArXiv:math/0211159 [math.DG]
. C Shen, Z Qiu, H Song, J Bernhard, S Bass, U Heinz, 10.1016/j.cpc.2015.08.039Comput. Phys. Commun. 19961C. Shen, Z. Qiu, H. Song, J. Bernhard, S. Bass, U. Heinz, Comput. Phys. Commun. 199, 61 (2016). DOI 10.1016/j.cpc.2015.08.039
M Creutz, Computers in Science & Engineering. 80M. Creutz, Computers in Science & Engineering March/April 2004, 80 (2004)
|
[
"http://github.com/JeffersonLab/chroma."
] |
[
"Skyrmion Excitation in Two-Dimensional Spinor Bose-Einstein Condensate",
"Skyrmion Excitation in Two-Dimensional Spinor Bose-Einstein Condensate"
] |
[
"Hui Zhai \nCenter for Advanced Study\nTsinghua University\n100084BeijingChina\n",
"Wei Qiang Chen \nCenter for Advanced Study\nTsinghua University\n100084BeijingChina\n",
"Zhan Xu \nCenter for Advanced Study\nTsinghua University\n100084BeijingChina\n",
"Lee Chang \nCenter for Advanced Study\nTsinghua University\n100084BeijingChina\n"
] |
[
"Center for Advanced Study\nTsinghua University\n100084BeijingChina",
"Center for Advanced Study\nTsinghua University\n100084BeijingChina",
"Center for Advanced Study\nTsinghua University\n100084BeijingChina",
"Center for Advanced Study\nTsinghua University\n100084BeijingChina"
] |
[] |
We study the properties of coreless vortices(skyrmion) in spinor Bose-Einstein condensate. We find that this excitation is always energetically unstable, it always decays to an uniform spin texture. We obtain the skyrmion energy as a function of its size and position, a key quantity in understanding the decay process. We also point out that the decay rate of a skyrmion with high winding number will be slower. The interaction between skyrmions and other excitation modes are also discussed.
|
10.1103/physreva.68.043602
|
[
"https://arxiv.org/pdf/cond-mat/0210397v2.pdf"
] | 119,397,793 |
cond-mat/0210397
|
4114f535951f04f5ffa44622c02c89450735805e
|
Skyrmion Excitation in Two-Dimensional Spinor Bose-Einstein Condensate
13 Nov 2003
Hui Zhai
Center for Advanced Study
Tsinghua University
100084BeijingChina
Wei Qiang Chen
Center for Advanced Study
Tsinghua University
100084BeijingChina
Zhan Xu
Center for Advanced Study
Tsinghua University
100084BeijingChina
Lee Chang
Center for Advanced Study
Tsinghua University
100084BeijingChina
Skyrmion Excitation in Two-Dimensional Spinor Bose-Einstein Condensate
13 Nov 2003PACS numbers: 0375Lm 0375Mn
We study the properties of coreless vortices (skyrmions) in a spinor Bose-Einstein condensate. We find that this excitation is always energetically unstable; it always decays to a uniform spin texture. We obtain the skyrmion energy as a function of its size and position, a key quantity in understanding the decay process. We also point out that the decay rate of a skyrmion with high winding number will be slower. The interaction between skyrmions and other excitation modes is also discussed.
I. INTRODUCTION
Topological objects have been attracting interest from various fields of physics for several decades [1]. Roughly speaking, there are two kinds of topological excitations in two-dimensional space. The configuration of the first kind, such as a vortex or a monopole, has a natural singularity and depends on the azimuthal angle even at infinity. These excitations have infinite kinetic energy unless they are coupled to other fields vanishing at infinity. In order to obtain a topological structure with finite energy, the configuration of the second kind must be uniform at infinity, which means that the topological structure should be defined in a compactified space. The skyrmion is an example of the second kind of topological structure, and the skyrmion excitation in n-dimensional space exists when the nth homotopy group of the internal space is nontrivial.
Since its introduction in the 1960s in nuclear physics [2] and its application in QCD [3], skyrmions have been found in condensed matter systems such as the quantum Hall effect [4] and high temperature superconductivity. The achievement of Bose-Einstein condensates (BEc) in dilute Bose gases provides an opportunity to investigate many-body theory in a new system. So far some topological excitations, such as vortices and vortex rings in scalar BEc, have been observed in a number of labs [5]. Recently the realization of a spinor BEc [6], whose spin degrees of freedom are unfrozen, has generated interest in studying richer topological excitations in such systems [7][8][9].
The mean field description of a spin-1 BEc was proposed in the pioneering works of Ho and of Ohmi et al. [10]. The symmetry group of the ground state order parameter of an antiferromagnetic spin-1 BEc is found to be $U(1)\times S^{2}/Z_{2}$ [11], where $U(1)$ denotes the global phase angle and $S^{2}$ is a unit sphere denoting all orientations of the spin quantization axis. The additional $Z_{2}$ arises because a reflection of the spin quantization axis is equivalent to changing the global phase by π. The general form of such a spinor field is
$$\zeta = \frac{1}{\sqrt{2}}\begin{pmatrix} -m_{x} + i m_{y} \\ \sqrt{2}\, m_{z} \\ m_{x} + i m_{y} \end{pmatrix} \qquad (1)$$
where $\vec{m}$ is the Bloch vector. It should be pointed out that the second homotopy group of $S^{2}$ is isomorphic to the integer group; this not only tells us of the existence of monopole excitations in a three-dimensional antiferromagnetic BEc, which had been investigated by H.T.C. Stoof et al. [8], but also implies the existence of the skyrmion excitation in the two-dimensional case.
It is natural to ask whether this excitation mode is energetically stable or not. In this paper, we answer the question within the Thomas-Fermi approximation and a variational method, and we find that the skyrmion is always energetically unstable. In other words, in the presence of any energy dissipation mechanism, the spin texture will always decay. However, this result does not imply that the skyrmion cannot be created: a skyrmion with half topological charge has been successfully generated in a recent experiment by adiabatic deformation of the magnetic trap [12]. Although the half skyrmion created in this experiment is different from that discussed in this paper, it is believed that a skyrmion with integer winding number can also be created in the near future.
Furthermore, it is useful to discover whether the skyrmion decays by expanding to infinite size or shrinking to a tiny size after it is created, and to understand how its center of mass moves. To answer these two questions, we need to obtain the energy of the skyrmion as a function of its size and position; this energy function is the central result of our paper. We also find that the presence of a vortex influences the skyrmion's motion. Additionally, based on the results of Ref. [7], we will point out that the decay rate of skyrmions with high winding number will be much slower than that of skyrmions with winding number 1.
II. THE ENERGETIC STABILITY OF Q = 1 SKYRMION
When we focus only on the energetic properties of skyrmions, we can first neglect the interaction in the spin channel which disrupts the $S^{2}$ order parameter, and the energy functional for such a BEc can be simplified as follows:
$$K(\varphi,\zeta) = \int d^{2}r\left[\frac{\hbar^{2}}{2m}|\nabla\varphi|^{2} + \frac{\hbar^{2}}{2m}|\nabla\zeta|^{2}|\varphi|^{2} - \left(\mu - V_{\rm trap}(\vec{r})\right)|\varphi|^{2} + \frac{4\pi\hbar^{2} a_{sc} N}{m}|\varphi|^{4}\right] \qquad (2)$$
Here µ is the chemical potential and $V_{\rm trap}$ is the confining trap potential. In this section, we will use conformal mapping to construct a general skyrmion excitation with winding number Q = 1. These variational wave functions are used to show that such skyrmions are always energetically unstable. Equation (2) is a nonlinear sigma model coupled to a $\varphi^{4}$ model in an external potential. An n-skyrmion is an instanton solution of the 1+1 dimensional nonlinear sigma model [1]; this skyrmion minimizes the energy functional within each homotopy sector, and can be constructed with the help of a fractional linear mapping which maps the compactified complex plane into itself. The most general form of this mapping is
$$\Omega = f(z) = \prod_{i=1}^{N}\frac{a_{i} z + b_{i}}{c_{i} z + d_{i}} \qquad (3)$$
with the constraint $a_{i} d_{i} - b_{i} c_{i} = 1$,
where N is the winding number. Complex numbers Ω are mapped to vectors on the unit sphere via
$$m_{x} = \frac{\Omega + \bar{\Omega}}{1 + |\Omega|^{2}}, \qquad m_{y} = -i\,\frac{\Omega - \bar{\Omega}}{1 + |\Omega|^{2}}, \qquad m_{z} = \frac{|\Omega|^{2} - 1}{|\Omega|^{2} + 1} \qquad (4)$$
Owing to the conformal invariance of the nonlinear sigma model, its classical solutions are infinitely degenerate and the total energy is independent of the parameters in the analytical function f(z). The coupling between the spinor field ζ and the superfluid field ϕ breaks the conformal invariance, and skyrmions are not classical solutions to our model. The spin texture defined by equation (3) can however be used in a variational calculation. The texture of the spinor field contributes an effective potential, which changes the density profile. Given an effective potential, the minimum of the energy functional is a function of the parameters in the conformal mapping f(z). This function contains the information about the energetic stability of the skyrmion excitation.
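As an illustration of how a concrete spin texture follows from a given f(z), the short sketch below (a minimal numerical example; the Möbius-map parameters, grid and grid size are arbitrary choices, not values from the paper) evaluates a Q = 1 map on a grid and converts it to the Bloch-vector field of Eq. (4).

```python
import numpy as np

# Minimal sketch (not from the paper): a Q = 1 texture from a Mobius map with ad - bc = 1.
# The parameter values and the grid are arbitrary illustrative choices.
a, b, c, d = 2.0, 0.0, 0.0, 0.5          # f(z) = (a z + b) / (c z + d); here f(z) = 4 z

x = np.linspace(-3.0, 3.0, 201)
X, Y = np.meshgrid(x, x)
z = X + 1j * Y

omega = (a * z + b) / (c * z + d)

# Bloch vector of Eq. (4): each complex value of Omega is mapped onto the unit sphere.
norm = 1.0 + np.abs(omega) ** 2
m_x = (omega + np.conj(omega)).real / norm
m_y = (-1j * (omega - np.conj(omega))).real / norm
m_z = (np.abs(omega) ** 2 - 1.0) / norm

# Gradient energy density of the texture, 8 |f'(z)|^2 / (1 + |f(z)|^2)^2.
grad_zeta_sq = 8.0 / (np.abs(c * z + d) ** 2 + np.abs(a * z + b) ** 2) ** 2

# Sanity check: the Bloch vector has unit length everywhere.
assert np.allclose(m_x ** 2 + m_y ** 2 + m_z ** 2, 1.0)
print("peak of the texture gradient energy density:", grad_zeta_sq.max())
```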
We first consider the Q = 1 case, $\Omega = f(z) = \frac{az+b}{cz+d}$. It is not difficult to show that
$$|\nabla\zeta|^{2} = \frac{8\,|f'(z)|^{2}}{\left(1 + |f(z)|^{2}\right)^{2}} = \frac{8}{\left(|cz + d|^{2} + |az + b|^{2}\right)^{2}} \qquad (5)$$
Under the condition ad − bc = 1, the effective potential is non-singular, implying the density remains finite at the skyrmion core. This result is consistent with the coreless character of the skyrmion excitation. The effective potential has two barriers localized at z = −d/c and z = −b/a, with heights $|c|^{4}$ and $|a|^{4}$ respectively, where the density of the condensate should be smaller than in the surroundings. Recall that in the repulsive case the interaction constant g is positive; both the interaction energy and the kinetic energy favor a homogeneous density profile, and too much undulation will no doubt increase the energy, so the choice c = 0 helps to decrease the energy. This value of c also produces the most symmetric texture. The function f(z) is then reduced to $a^{2}(z - z_{1})$, where the parameters a and $z_{1}$ characterize the size and location of the skyrmion [13].
For simplicity, we first set $z_{1} = 0$, placing the skyrmion at the center of the trap. Non-zero $z_{1}$ will be discussed later. The effective potential from the spin texture is now $8/\left(|1/a|^{2} + |a|^{2} r^{2}\right)^{2}$, and the full potential is shown in Figure 1 for different a. Notice that the barrier height of the effective potential is proportional to $|a|^{4}$, and that integrating the potential energy over the whole two-dimensional space results in a constant. In the limit a → 0, the effective potential spreads uniformly throughout the whole space. In the opposite limit of infinite a, the effective potential becomes a δ function.
We now investigate the energetic stability of the skyrmion in the framework of the TF approximation, where the term $|\nabla\varphi|^{2}$ is neglected. In this approximation, the density profile is
$$n(\vec{r}) = \frac{1}{2g}\left[\mu - a_{HO}^{2} r^{2} - \frac{8}{\left(\frac{1}{a^{2}} + a^{2} r^{2}\right)^{2}}\right]\Theta\!\left(\mu - a_{HO}^{2} r^{2} - \frac{8}{\left(\frac{1}{a^{2}} + a^{2} r^{2}\right)^{2}}\right) \qquad (6)$$
in which $a_{HO}^{2} = m\omega^{2}/2$ and $g = 8\pi a_{sc} N$ (in units where ħ = 1), ω is the frequency of the harmonic trap, and Θ(x) is the step function.
The normalization condition and the constraint that the condensate density is non-negative give the following two equations:
$$\frac{1}{2g}\int_{0}^{x}\!\!\int_{0}^{2\pi} r\,dr\,d\theta\left[\mu - a_{HO}^{2} r^{2} - \frac{8}{\left(\frac{1}{a^{2}} + a^{2} r^{2}\right)^{2}}\right] = 1 \qquad (7)$$
$$\mu - a_{HO}^{2} x^{2} - \frac{8}{\left(\frac{1}{a^{2}} + a^{2} x^{2}\right)^{2}} = 0 \qquad (8)$$
where x is the value of r at which the Thomas-Fermi density vanishes. Solving the two equations we obtain the relationship between the chemical potential µ, the Thomas-Fermi radius x and the skyrmion's size a. One can substitute these relationships back into the energy functional and then obtain the minimal energy E as a function of the size a, which is plotted in Figure 2. Figure 2 shows that the minimal energy E has two minima occurring at zero and infinity, and there exists a critical size $a_{c}$ which corresponds to the maximal value of E. When a is quite large, the TF approximation fails, and we use a variational density profile to obtain a more accurate result [14], which is shown in Figure 3.
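The procedure just described is easy to reproduce numerically; the sketch below (illustrative only — the values of a_HO, g, the skyrmion size a and the root-bracketing interval are assumptions, not the parameters used for the figures) fixes µ from the normalization condition (7) and evaluates the Thomas-Fermi profile (6).

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Illustrative parameter choices (assumptions, not the values used in the figures).
a_HO, g = 1.0, 2.0

def V_eff(r, a):
    # Effective potential of the Q = 1 skyrmion texture entering Eq. (6).
    return 8.0 / (1.0 / a**2 + a**2 * r**2) ** 2

def density(r, mu, a):
    # Thomas-Fermi density of Eq. (6); the step function becomes max(..., 0).
    return max(mu - a_HO**2 * r**2 - V_eff(r, a), 0.0) / (2.0 * g)

def total_number(mu, a):
    # Left-hand side of the normalization condition, Eq. (7).
    r_max = np.sqrt(mu) / a_HO   # beyond this radius the trap term alone exceeds mu
    val, _ = quad(lambda r: 2.0 * np.pi * r * density(r, mu, a), 0.0, r_max, limit=200)
    return val

def chemical_potential(a):
    # Fix mu by requiring the density to integrate to one, Eq. (7).
    return brentq(lambda mu: total_number(mu, a) - 1.0, 1e-6, 1e3)

a = 1.0                                   # skyrmion size parameter (arbitrary choice)
mu = chemical_potential(a)
r = np.linspace(0.0, np.sqrt(mu) / a_HO, 400)
n = np.array([density(ri, mu, a) for ri in r])
print(f"mu = {mu:.4f}, central density n(0) = {n[0]:.4f}")
```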
Physically, we can understand the two figures in the following way. When the size parameter a is sufficiently small, the spin configuration within the TF radius becomes uniform, and the effective potential becomes flat inside the TF radius, amounting to a uniform shift of the chemical potential. So the energy approaches the ground state energy as a → 0. In the opposite limit, the barrier is quite high and thus the density almost vanishes at the center of the skyrmion, but at the same time the width of the barrier becomes narrow. The energy cost of the low density region is proportional to its volume and decreases as a becomes larger. In this case the skyrmion can be observed by imaging the density profile and finding the low density core. These figures tell us that the Q = 1 skyrmion is energetically unstable: in the presence of any weak dissipation, it will either expand to become unobservable, or shrink to an infinitesimal size. As a semi-classical object, the dissipative dynamics of a skyrmion depends on whether or not its initial size is larger than the critical size $a_{c}$.
For the same reason, the position $z_{1}$ of the skyrmion tends to move toward the edge of the condensate, where the density is lower and the energy cost $V_{\rm eff}|\varphi|^{2}$ is smaller.
III. DISSIPATION DYNAMICS
Although we believe that the energetically unstable skyrmion will decay in the presence of any weak dissipation, the exact description of the dynamics depends on the details of the dissipation mechanism. A possible source of dissipation is interaction with the non-condensed component. Since the skyrmion is a global topological object and the system is at quite low temperature, this may not be important. Another source of dissipation is the spontaneous emission of other excitation modes, such as phonon modes or vortices, similar to the spontaneous emission of an excited atom. This mechanism is observed in numerical studies of superfluid vortex reconnection [15]. Let us discuss the process in more detail. When a increases, the peak of the effective potential grows and atoms are pushed away from the center of the skyrmion. The kinetic energy of these atoms will lead to oscillatory motion of the condensate density. If the harmonic trap potential is anisotropic, the extracted atoms can also rotate to form a vortex. These modes take energy away from the skyrmion, leading to a change of the skyrmion size.
IV. INTERACTION BETWEEN A VORTEX AND A SKYRMION
We rewrite ϕ as $\Psi e^{i\theta}$. In equation (2) the phase field θ is not directly coupled to the spinor field ζ, but they both interact with the density field. Since both vortices and skyrmions have low-density cores, it is favorable for the skyrmion core to fit entirely within the vortex; thus one expects that the presence of a vortex changes the dominant trend of the evolution of the skyrmion from expanding to shrinking, and one should find the skyrmion attracted to the vortex center instead of moving toward the cloud's edge.
In principle, we can integrate out the Ψ field in the action and obtain the effective Hamiltonian describing the interaction between a vortex and a skyrmion. Unfortunately the $\Psi^{4}$ term makes this procedure difficult. However, in the TF approximation, where the derivative terms of Ψ are neglected, the action is a quadratic form of the density field n. We can integrate it out and obtain an effective Hamiltonian describing the interaction between θ and ζ:
$$H_{\rm int} = -\frac{1}{2g}\left(\frac{\hbar^{2}}{2m}\right)^{2}\int d^{2}r\, |\nabla\theta|^{2}\,|\nabla\zeta|^{2} \qquad (9)$$
The interaction energy will be smallest when the maxima of |∇θ| and |∇ζ| coincide. This result is consistent with the above picture.
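This overlap argument can be checked with a crude numerical sketch (below). It is purely illustrative: the skyrmion size, the vortex-core regularization and the grid are arbitrary assumptions, and only the trend of the overlap integral in Eq. (9) with the vortex-skyrmion separation d is meaningful.

```python
import numpy as np

# Illustrative evaluation of the overlap integral in Eq. (9); all parameters are assumptions.
a = 1.0            # skyrmion size parameter
core = 0.05        # regularization of the vortex core, |grad theta|^2 = 1 / |r - r_v|^2

x = np.linspace(-6.0, 6.0, 601)
X, Y = np.meshgrid(x, x)
dA = (x[1] - x[0]) ** 2

# |grad zeta|^2 of a Q = 1 skyrmion centered at the origin.
grad_zeta_sq = 8.0 / (1.0 / a**2 + a**2 * (X**2 + Y**2)) ** 2

def overlap(d):
    # |grad theta|^2 for a singly quantized vortex at (d, 0), regularized at the core.
    r2 = np.maximum((X - d) ** 2 + Y**2, core**2)
    return np.sum(grad_zeta_sq / r2) * dA

for d in [0.0, 1.0, 2.0, 4.0]:
    print(f"separation d = {d:.1f}: overlap integral = {overlap(d):.3f}")
# The overlap (and hence the magnitude of the attractive H_int) is largest at d = 0
# and decreases with the separation, i.e. the skyrmion is pulled toward the vortex.
```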
V. Q > 1 SKYRMION
In this section we discuss skyrmions with Q > 1, demonstrating their differences from the Q = 1 skyrmion. Following the logic of the second section, we choose the most symmetric case, $f(z) = a^{2} z^{n}$, finding the effective potential
$$V_{n} = \frac{8 n^{2} r^{2n-2}}{\left[\left(\frac{1}{a}\right)^{2} + a^{2} r^{2n}\right]^{2}} \qquad (10)$$
The barrier of V n (n > 2) lies on a circle
$$r = \left[\frac{2n-2}{2n+2}\,\frac{1}{a^{4}}\right]^{\frac{1}{2n}} \qquad (11)$$
with the height
$$V_{\max} = 2\left(n^{2}-1\right)\left(\frac{n+1}{n-1}\right)^{\frac{1}{n}} a^{\frac{4}{n}} \qquad (12)$$
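The peak position and height in Eqs. (11) and (12) follow from maximizing Eq. (10) over r; the short sketch below (an independent numerical check with arbitrarily chosen n and a, not code from the paper) compares the analytic expressions with a brute-force maximization.

```python
import numpy as np

def V_n(r, n, a):
    # Effective potential of a Q = n skyrmion, Eq. (10).
    return 8.0 * n**2 * r ** (2 * n - 2) / (1.0 / a**2 + a**2 * r ** (2 * n)) ** 2

for n, a in [(2, 1.0), (3, 1.0), (2, 2.0)]:
    r = np.linspace(1e-4, 5.0, 200001)
    v = V_n(r, n, a)
    i = np.argmax(v)
    r_peak = ((2 * n - 2) / (2 * n + 2) / a**4) ** (1.0 / (2 * n))                 # Eq. (11)
    v_peak = 2 * (n**2 - 1) * ((n + 1) / (n - 1)) ** (1.0 / n) * a ** (4.0 / n)    # Eq. (12)
    print(f"n={n}, a={a}: numeric (r*, Vmax) = ({r[i]:.4f}, {v[i]:.4f}), "
          f"analytic = ({r_peak:.4f}, {v_peak:.4f})")
```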
This structure is markedly different from $V_{1}$, whose peak is always localized at its center. At the center, the effective potential is always zero for any n ≥ 2. Figure 4 shows $V_{2}$ for different values of a. As a decreases, the location of the off-center peak of the potential approaches infinity as $(1/a^{4})^{1/(2n)}$ and its height decreases as $a^{4/n}$. Because the size of the condensate is always finite, this implies that it is energetically favorable for a skyrmion to increase to a larger size so that the peak of the effective potential lies outside the TF radius and the effective potential becomes small and uniform inside the condensate. What is different from the Q = 1 case is that the atoms must tunnel through the barrier of the effective potential as the size increases. Thus, the rate of the process will be characterized by the WKB tunnelling rate. This situation has been studied carefully in Ref. [7], which showed that the tunnelling time of such a process may be long enough to form a dynamically metastable skyrmion.

VI. CONCLUSION

In summary, we constructed and studied the skyrmion excitation mode of the spinor field and showed that any finite size skyrmion is always unstable. Through a variational study of the skyrmion excitation energy for different sizes and positions, we find that it is energetically favorable for the skyrmion to expand to infinite size or to shrink to infinitesimal size, resulting in a uniform spin texture in most of the condensate. We discussed the interplay between skyrmion modes and other excitation modes: (1) the emission of phonons can dissipate the skyrmion energy and change its size, and (2) a vortex has an attractive interaction with a skyrmion. We also showed that the behavior of Q > 1 skyrmions is quite different from that of the Q = 1 skyrmion, since the effective potentials induced by their configurations show some marked differences. However, all these discussions concentrate on properties of the skyrmion mode itself, and a qualitative prediction of the skyrmion dynamics is made from energetic considerations. To study the details of the skyrmion dynamics, it is important to consider the interaction in the spin channel, which causes spin fluctuations exceeding the $S^{2}$ internal space [8]. Finally, noticing that ζ represents local spin-gauge degrees of freedom, we remark that the model explored in this paper shares some features of Yang-Mills theory: the instanton solutions having the same winding number are saddle points of the Lagrangian and are energetically degenerate although they have different shapes and sizes, because of the local scaling invariance of the pure gauge theory. However, when the coupling to the matter field breaks the conformal symmetry, the instanton solutions are no longer classical solutions and have different actions. In addition, the dynamic term of the gauge field is absent in our model. The conclusion of energetic instability of any finite size skyrmion is determined by the above two features.
FIG. 1: The trap potential added to the effective potential induced by a Q = 1 skyrmion, in units of ω, plotted as a function of x/l for a/√a_HO = 1.
FIG. 2: The minimal energy E in units of ω vs. the size of the skyrmion a in units of L = √a_HO. The parameter g/a_HO is set to 2.
FIG. 3: The minimal energy E in units of ω vs. the size a in units of L; the result is obtained using the variational method.
FIG. 4: The trap potential added to the effective potential induced by a Q = 2 skyrmion, in units of ω, plotted as a function of x/l for a/√a_HO = 1, 3 and 5, respectively.
Acknowledgements: The authors thank Professor C.N. Yang for encouragement. We acknowledge Professor Tin-Lun Ho for helpful discussions, and Doctor Erich Mueller and Lü Rong for their many wonderful comments and suggestions. This work is supported by the National Natural Science Foundation of China (Grant No. 10247002 and 90103004).
See R. Rajaraman, Solitons and Instantons (North-Holland, Amsterdam, 1989).
T.H.R. Skyrme, Proc. R. Soc. London A 260, 127 (1961).
For example, Y.M. Cho, Phys. Rev. Lett. 87, 252001 (2001).
S.L. Sondhi, A. Karlhede, S.A. Kivelson and E.H. Rezayi, Phys. Rev. B 47, 16419 (1993).
J.R. Abo-Shaeer, C. Raman, J.M. Vogels, W. Ketterle, Science 292, 476 (2001).
M.R. Matthews, B.P. Anderson, P.C. Haljan, D.S. Hall, C.E. Wieman, and E.A. Cornell, Phys. Rev. Lett. 83, 2498 (1999).
U.A. Khawaja and H.T.C. Stoof, Nature 411, 918 (2001);
Phys. Rev. A 64, 043612 (2001).
H.T.C. Stoof, E. Vliegen, and U. Al Khawaja, Phys. Rev. Lett. 87, 120407 (2001).
R.A. Battye, N.R. Cooper and P.M. Sutcliffe, Phys. Rev. Lett. 88, 080401 (2002);
J. Ruostekoski and J.R. Anglin, Phys. Rev. Lett. 86, 3934 (2001);
J.P. Martikainen, A. Collin and K.A. Suominen, Phys. Rev. Lett. 88, 090404 (2002);
T. Mizushima, K. Machida, and T. Kita, Phys. Rev. Lett. 89, 030401 (2002).
T.L. Ho, Phys. Rev. Lett. 81, 742 (1998);
T. Ohmi and K. Machida, J. Phys. Soc. Jpn. 67, 1822 (1998).
F. Zhou, Phys. Rev. Lett. 87, 080401, and cond-mat/0108473.
A.E. Leanhardt, Y. Shin, D. Kielpinski, D.E. Pritchard, W. Ketterle, Phys. Rev. Lett. 90, 140403 (2003).
In fact, a small a corresponds to a large size skyrmion and vice versa.
The trial wave function used here takes the form $(a + br^{2})\,e^{-\omega^{2} r^{2}}$.
M. Leadbeater, T. Winiecki, D.C. Samuels, C.F. Barenghi and C.S. Adams, Phys. Rev. Lett. 86, 1410 (2001).
|
[] |
[
"Temperature scaling law for quantum annealing optimizers",
"Temperature scaling law for quantum annealing optimizers"
] |
[
"Tameem Albash \nInformation Sciences Institute\nMarina del Rey\nUniversity of Southern California\n90292CaliforniaUSA\n\nDepartment of Physics and Astronomy and Center for Quantum Information Science & Technology\nUniversity of Southern California\n90089Los AngelesCaliforniaUSA\n",
"Victor Martin-Mayor \nDepartamento de Física Teórica I\nUniversidad Complutense\n28040MadridSpain\n\nInstituto de Biocomputación y Física de Sistemas Complejos (BIFI)\nZaragozaSpain\n",
"Itay Hen \nInformation Sciences Institute\nMarina del Rey\nUniversity of Southern California\n90292CaliforniaUSA\n\nDepartment of Physics and Astronomy and Center for Quantum Information Science & Technology\nUniversity of Southern California\n90089Los AngelesCaliforniaUSA\n"
] |
[
"Information Sciences Institute\nMarina del Rey\nUniversity of Southern California\n90292CaliforniaUSA",
"Department of Physics and Astronomy and Center for Quantum Information Science & Technology\nUniversity of Southern California\n90089Los AngelesCaliforniaUSA",
"Departamento de Física Teórica I\nUniversidad Complutense\n28040MadridSpain",
"Instituto de Biocomputación y Física de Sistemas Complejos (BIFI)\nZaragozaSpain",
"Information Sciences Institute\nMarina del Rey\nUniversity of Southern California\n90292CaliforniaUSA",
"Department of Physics and Astronomy and Center for Quantum Information Science & Technology\nUniversity of Southern California\n90089Los AngelesCaliforniaUSA"
] |
[] |
Physical implementations of quantum annealing unavoidably operate at finite temperatures. We point to a fundamental limitation of fixed finite temperature quantum annealers that prevents them from functioning as competitive scalable optimizers and show that to serve as optimizers annealer temperatures must be appropriately scaled down with problem size. We derive a temperature scaling law dictating that temperature must drop at the very least in a logarithmic manner but also possibly as a power law with problem size. We corroborate our results by experiment and simulations and discuss the implications of these to practical annealers.Introduction.-Quantum computing devices are becoming sufficiently large to undertake computational tasks that are infeasible using classical computing[1][2][3][4][5][6][7]. The theoretical underpinning for whether such tasks exist with physically realizable quantum annealers remains lacking, despite the excitement brought on by recent technological breakthroughs that have made programmable quantum annealing (QA)[8][9][10][11][12]optimizers consisting of thousands of quantum bits commercially available. Thus far, no examples of practical relevance have been found to indicate a superiority of QA optimization, i.e., to find bit assignments that minimize the energy, or cost, of discrete combinatorial optimization problems, faster than possible classically[13][14][15][16][17][18][19][20]. Major ongoing efforts continue to build larger, more densely connected QA devices, in the hope that the capability to embed larger optimization problems would eventually reveal the coveted quantum speedup[21][22][23][24][25].Understanding the robustness of QA optimization to errors that reduce the final ground state probability is critical. In this work, we consider perhaps the most optimistic setting where the only source of error is due to nonzero temperature. We analyze the theoretical scaling performance of ideal fixed-temperature quantum annealers for optimization. We show that even in the case where annealers are assumed to thermalize instantly (rather than only in the infinite runtime limit), the energies, or costs, of their output configurations would be computationally trivial to achieve (in a sense that we explain). We further derive a scaling law for QA optimizers and provide corroboration of our analytical findings by experimental results obtained from the commercial D-Wave 2X QA processor[26][27][28][29][30] as well as numerical simulations (our results equally apply to ideal thermal annealing devices). We discuss the implications of our results for both past benchmarking studies and for the engineering requirements of future QA devices.Fixed-temperature quantum annealers.-In the adiabatic limit, closed-system quantum annealers are guaranteed to find a ground state of the target cost func-tion, or final Hamiltonian H, they are to solve. The adiabatic theorem of quantum mechanics ensures that the overlap of the final state of the system with the ground state manifold of H, approaches unity as the duration of the process increases [31, 32]. For physical quantum annealers that operate at positive temperatures (T > 0), there is no equivalent guarantee of reaching the ground state with high probability. 
For long runtimes, an ideal finite-temperature quantum annealer is expected to sample the Boltzmann distribution of the final Hamiltonian at the annealer temperature [33]. In what follows, we argue that even instantly-thermalizing quantum annealers [34] are severely limited as optimizers due to their finite temperature. For concreteness, we restrict to annealers for which i) the number of couplers scales linearly with the number of qubits N [35], ii) the coupling strengths are discretized and are bounded independently of problem size, and iii) the scaling of the free energy with problem size is not pathological, i.e., our system is not tuned to a critical point. Other than the above standard assumptions, our treatment is general (we discuss the performance of quantum annealers when some of these conditions are lifted later on). For clarity, we consider optimization problems written in terms of a Hamiltonian of the Ising type, $H(\{s_i\}) = \sum_{\langle ij\rangle} J_{ij} s_i s_j + \sum_i h_i s_i$, where $\{s_i = \pm 1\}$ are binary Ising spin variables that are to be optimized over, $\{J_{ij}, h_i\}$ are the coupling strengths between connected spins and external biases, respectively, and $\langle ij\rangle$ denotes the underlying connectivity graph of the model. The discussion that follows however is not restricted to any particular model. Under the above assumptions, the ground state energies, denoted $E_0$, of any given problem class scale linearly with increasing problem size (i.e., the energy is an extensive property, as is generically expected from physical systems) while the classical minimal gap $\Delta = E_1 - E_0$ remains fixed. It follows then [36] that the thermal expectation values of the intensive energy $e_\beta \equiv \langle H\rangle_\beta/N$ and of the specific heat $c_\beta \equiv d e_\beta/d\beta$ remain finite as N → ∞ for any fixed inverse-temperature β = 1/T. The intensive energy is discretized in steps of Δ/N, yet its statistical dispersion $\sigma_\beta(e) = \sqrt{-c_\beta/N}$ is much larger. Treating e as a stochastic variable, for large enough values of N it can be treated as a continuous variable, since the ratio of discretization versus dispersion, $\sqrt{-\Delta^{2}/(c_\beta N)}$, decays to zero for large N. From the Boltzmann distribution it follows that the probability density of e goes as $p_\beta(e) = Z_\beta^{-1} e^{N(s(e)-\beta e)}$, where $Z_\beta = \sum_n g_n e^{-\beta E_n}$ is the partition function, $g_n$ is the degeneracy of the n-th level, i.e., the number of microstates with $H(\{s_i\}) = E_n$, satisfying $2^N = \sum_{n\geq 0} g_n$, and s(e) is the entropy density [37]. The linear combination $\Psi_\beta(e) = s(e) - \beta e$ plays the role of a large-deviations functional for e. The most probable value of e, which we denote by $e^*$, is given by the maximum of $\Psi_\beta$; solving $d\Psi_\beta/de = 0$ gives $s'(e^*) = \beta$. Close to $e^*$, $\Psi_\beta$ can be Taylor-expanded as $\Psi_\beta(e) \approx \Psi_\beta(e^*) + \tfrac{1}{2}\Psi_\beta''(e^*)(e-e^*)^2$. The probability density is thus approximately Gaussian in the vicinity of $e^*$, although deviations from the Gaussian behavior are crucial [39]. Moreover, in the limit of large N, the probability of finding by Boltzmann sampling any energy $e < e^*$ (equivalently, $E < e^* N$) is exponentially suppressed in N, scaling in fact as $\exp[-N(\Psi_\beta(e^*) - \Psi_\beta(e))]$. We thus arrive at the conclusion that even ideal fixed-temperature quantum annealers that thermalize instantaneously to the Gibbs state of the classical Hamiltonian are exponentially unlikely to find the ground state, since $e^* > e_0 \equiv E_0/N$. We now corroborate the above derivation by runs on the commercial DW2X quantum annealer [26][27][28][29].
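Before turning to the hardware runs, the suppression just derived can be seen already on a toy example; the sketch below (illustrative only — the tiny random Ising chain, the value of β and the brute-force enumeration are assumptions made for the example, not the instances studied in the paper) enumerates all states of a small instance and evaluates the Boltzmann weight of the ground level.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# Toy illustration (not one of the paper's instances): a small random +/-1 Ising ring,
# brute-force enumerated so that the exact Boltzmann distribution of E is available.
N = 16
J = rng.choice([-1.0, 1.0], size=N)        # coupling J[i] acts on bond (i, i+1), periodic
beta = 0.5                                  # fixed inverse-temperature (arbitrary choice)

energies = []
for s in itertools.product([-1, 1], repeat=N):
    s = np.array(s, dtype=float)
    energies.append(np.sum(J * s * np.roll(s, -1)))
energies = np.array(energies)

weights = np.exp(-beta * (energies - energies.min()))
probs = weights / weights.sum()

E0 = energies.min()
E_mean = np.sum(probs * energies)
p_ground = probs[energies == E0].sum()
print(f"E0 = {E0:.0f}, <E>_beta = {E_mean:.2f}, P(E = E0) = {p_ground:.3e}")
# The ground-level weight decays exponentially with N, in line with the
# exp[-N (Psi_beta(e*) - Psi_beta(e0))] suppression discussed above.
```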
To do so, we first generate random instances of differently sized sub-graphs of the DW2X Chimera connectivity graph [40, 41] and run them multiple times on the annealer, recording the obtained energies [42]. Figure 1 depicts typical resultant residual energy $(E - E_0)$ distributions. As is evident, increasing the problem size N 'pushes' the energy distribution farther away from $E_0$, as well as broadening the distribution and making it more Gaussian-like. In the inset, we measure the departure of $\langle H\rangle_\beta$ from $E_0$ and the spread of the energies $\sigma_\beta(H)$ over 100 'planted-solution' [18] instances per sub-graph size as a function of problem size N [43]. For sufficiently large problem sizes, we find that the scaling of $\langle H - E_0\rangle_\beta$ is close to linear while $\sigma_\beta(H)$ scales slightly faster than $\sqrt{N}$. While the slight deviations from our analytical predictions suggest that the DW2X configurations have not fully reached asymptotic behavior [44], they exhibit a trend that closely matches our assumptions, with the agreement getting better with growing problem sizes.

FIG. 1. Distributions of residual energy, $E - E_0$, from DW2X runs. As problem sizes grow, the distributions become more Gaussian-like. Inset: Gaussians' mean (blue) and standard deviation (red) as a function of problem size, averaged over 100 instances per size. The solid lines correspond to power-law fits of the average mean with power 0.98 ± 0.14 and of the average standard deviation with power 0.63 ± 0.09, taking into account all sizes but the smallest (1.01 ± 0.62 and 0.57 ± 0.37, respectively, if the two smallest sizes are omitted).

Given the scaling of the mean and standard deviation, we conclude that fixed-temperature quantum annealers will generate energies e at a fixed distance from $e_0$, or, in terms of extensive energies, configurations obtained from fixed-temperature annealers will have energies concentrated around $E = (1-\epsilon)E_0$ for some $\epsilon > 0$ and $E_0 < 0$. One could now ask what the difficulty is for classical algorithms to generate energy values in the above range. This question has been recently answered by the discovery of a polynomial time approximation scheme (PTAS) for spin-glasses defined on a Chimera graph [45] (which can be easily generalized to any locally connected model), where reaching such energies can be done efficiently [46]. While the scaling of the PTAS with $\epsilon$ is not favorable, scaling as $c^{1/\epsilon}$ for some constant c, in practice there exist algorithms (e.g., parallel tempering, which we discuss later on) that are known to scale more favorably than the PTAS. Scaling law for quantum annealing temperatures.- In light of the above, it may seem that quantum annealers are doomed to fail as optimizers as problem sizes increase. We now argue that success may be regained if the temperature of the QA device is appropriately scaled with problem size. Specifically, we address the question of how the inverse-temperature β should scale with N such that there is a probability of at least q of finding the ground state.
Assuming a scaling of the form c T ∼ T α , or equivalently, −c β ∼ β −α−2 , givesFor a power-law specific heat, it thus follows that the sought scaling is β ∼ N 1/α . If on the other hand c β vanishes exponentially in β, the inverse-temperature scaling will be milder, of the form β ∼ log N . To illustrate the above, we next present an analysis of simulations of randomly generated instances on Chimera lattices (we study several problem classes and architectures, see the Supplemental Information). To study the energy distribution generated by a thermal sampler on these instances, we use parallel tempering (PT) [47, 48], a Monte Carlo method whereby multiple copies of the system at different temperatures are simulated [49]. InFig. 2, we show an example of how the energy distribution of a planted-solution instance changes with β. The qualitative behavior is similar to what we observe with increasing problem size, whereby decreasing β (increasing the temperature) pushes the energy distribution to larger energies and makes it more gaussian-like.The behavior of the specific heat c β as the inversetemperature β becomes large is shown inFig. 3. At large sizes, the scaling becomes c β ∝ exp(−∆β) as expected FIG. 2. Distributions of residual energy, E − E0, from PT simulations. For a planted-solution instance defined on an L = 12 Chimera graph, the distributions become more Gaussian-like as β decreases. For the case of β = 0.75, the mean residual energy and standard deviation are indicated. Inset: Scaling with problem size of the median mean energy and median standard deviation of the energy for β = 1.47 over 100 instances.(here, ∆ = 4 is the gap). Based on our predictions above, this should mean that if for a fixed q, the minimum β * such that p β * (E 0 ) ≥ q falls in this exponential regime, then we should observe a scaling β * ∝ log N . Indeed, the inset ofFig. 3, which shows simulation results of β * versus N , exhibits the expected log N behavior [50]. 0 0.5 1 1.5 2 2.5 3 4000 1 1.5 2 FIG. 3.Typical specific heat with inversetemperature. Behavior of the median specific heat (over 100 instances) for planted-solution instances with inversetemperature β for N = 3872. The behavior transitions from a polynomial scaling with β to an exponential scaling. Inset: Typical minimum inverse-temperature required for instances of size N such that the probability of the target energy ET = E0 + δ(N ) is at least q = 10 −1 . Also shown are fits to log N for all three cases and a power-law fit to cN α that finds α = 0.19 ± 0.05 for the δ = 0 case, which is almost indistinguishable from the logarithmic fit.4While for problem classes with a fixed minimum gap ∆, one may naively expect c β to vanish exponentially in general, implying that a logarithmic scaling of β will generally be sufficient as our simulations indeed indicate, it is important to note that two-dimensional spin glasses are known to exhibit a crossover between an exponential behavior to a power law [51][52][53][54]. This crossover is characterized by a constant θ ≈ 1/2, whereby the discreteness of the gap ∆ is evident only for sizes N θ/2 β. Beyond N θ/2 ∼ β, the 2d system behaves as if the coupling distribution is continuous[52,53]at which point the system can be treated as if with continuous couplings, for which the specific heat c T scales as T α with α c = 2ν [51], where ν = 3.53(7)[54]. Therefore, for an ideal quantum annealer operating beyond the crossover, a scaling of β ∼ N 1/(2ν)≈0.14 is required. 
We may thus expect the same crossover to appear for instances defined on the Chimera lattice, which is 2d-like. Interestingly, for the temperature scaling shown in the inset ofFig. 3, a power-law fit β ∼ N α with α = 0.19 ± 0.05 is almost indistinguishable from the logarithmic one, with a power that is consistent with the 2d prediction.Suboptimal metrics for optimization problems.-For many classically intractable optimization problems, when formulated as Ising models, it is crucial that solvers find a true minimizing bit assignment rather than low lying excited states. This is especially true for NP-complete/hard problems[55]where sub-optimal costs generally correspond to violated constraints that must be satisfied (otherwise the resultant configuration is nonsensical despite its low energy). Nonetheless, it is plausible to assume the existence of problems for which slightly sub-optimal configurations would still be of value[56]. We thus also study the necessary temperature scaling for cases where the target energies obey E T ≤ E 0 + δ(N ) with δ(N ) scaling sub-linearly with problem size. In the inset ofFig. 3, we plot the required scaling of β for δ(N ) = const and δ(N ) ∝ √ N . In both cases we find that a logarithmic scaling is still essential, albeit with smaller prefactors.Conclusions and discussion.-We have shown that fixed temperature quantum annealers can only sample 'easily reachable' energies in the large problem size limit, thereby posing fundamental limitation on their performance. We derived a temperature scaling law to ensure that quantum annealing optimizers find nontrivial energy values with sub-exponential probabilities. The scaling of the specific heat with temperature controls this scaling: if β lies in the regime where the specific heat scales exponentially with β, then the inverse-temperature of the annealer must scale as log N . However, further considerations are needed because of a possible crossover behavior in the specific heat with temperature and problem size. For Chimera graphs, because of their essentially two-dimensional structure, this may lead to a crossover to power law scaling. Little is known about this crossover in three dimensions or for different architectures, so this concern may not be mitigated by a more complex connectivity graph.Our results shed important light on benchmarking studies that have found no quantum speedups[17,18,[57][58][59], identifying temperature as a relevant culprit for their unfavorable performance. Our analysis is particularly relevant for both the utility as well as the design of future QA devices that have been argued to sample from thermal or close-to-thermal distributions [60], calling their role as optimization devices into question.One approach to scaling down the temperature with problem size is the (theoretically) equivalent scaling up of the overall energy scale of the Hamiltonian. However, the rescaling of the total Hamiltonian is also known to be challenging and may not represent a convenient approach for a scalable architecture. 
An alternative approach is to develop quantum error correction techniques to effectively increase the energy scale of the Hamiltonian by coupling multiple qubits to form a single logical qubit[61][62][63][64][65][66]in conjunction with classical postprocessing[67][68][69][70]or to effectively decouple the system from the environment [71-74].Our results reiterate the need for fault-tolerant error correction for scalable quantum annealing, however they do not preclude the utility of quantum annealing optimizers for large finite size problems, where engineering challenges may be overcome to allow the device to operate effectively at a sufficiently low temperature such that problems of interest of a finite size may be solved even in the absence of fault-tolerance. Our results only indicate that this 'window of opportunity' cannot be expected to continue as devices are scaled without further improvements in the device temperature or energy scale. While our arguments above indicate that fixedtemperature quantum annealers may not be scalable as optimizers, the current study does not pertain to the usage of quantum annealers as samplers[60,75,76], where the objective is to sample from the Boltzmann distribution. The latter objective is known to be very difficult task (it is #P-hard [77-79]) and little is known about when or if quantum annealers can provide an advantage in this regard[80].Acknowledgements.-TA and IH thank Daniel Lidar for useful comments on the manuscript. The computing resources were provided
|
10.1103/physrevlett.119.110502
|
[
"https://arxiv.org/pdf/1703.03871v2.pdf"
] | 31,027,756 |
1703.03871
|
808ee6d7c0dd90eccdbf09831c7113c282e2a8ed
|
Temperature scaling law for quantum annealing optimizers
2000
Tameem Albash
Information Sciences Institute
Marina del Rey
University of Southern California
90292, California, USA
Department of Physics and Astronomy and Center for Quantum Information Science & Technology
University of Southern California
90089, Los Angeles, California, USA
Victor Martin-Mayor
Departamento de Física Teórica I
Universidad Complutense
28040, Madrid, Spain
Instituto de Biocomputación y Física de Sistemas Complejos (BIFI)
Zaragoza, Spain
Itay Hen
Information Sciences Institute
Marina del Rey
University of Southern California
90292, California, USA
Department of Physics and Astronomy and Center for Quantum Information Science & Technology
University of Southern California
90089, Los Angeles, California, USA
Temperature scaling law for quantum annealing optimizers
2000
Physical implementations of quantum annealing unavoidably operate at finite temperatures. We point to a fundamental limitation of fixed finite-temperature quantum annealers that prevents them from functioning as competitive scalable optimizers and show that, to serve as optimizers, annealer temperatures must be appropriately scaled down with problem size. We derive a temperature scaling law dictating that temperature must drop at the very least in a logarithmic manner but also possibly as a power law with problem size. We corroborate our results by experiment and simulations and discuss the implications of these for practical annealers.
Introduction.-Quantum computing devices are becoming sufficiently large to undertake computational tasks that are infeasible using classical computing [1][2][3][4][5][6][7]. The theoretical underpinning for whether such tasks exist with physically realizable quantum annealers remains lacking, despite the excitement brought on by recent technological breakthroughs that have made programmable quantum annealing (QA) [8][9][10][11][12] optimizers consisting of thousands of quantum bits commercially available. Thus far, no examples of practical relevance have been found to indicate a superiority of QA optimization, i.e., to find bit assignments that minimize the energy, or cost, of discrete combinatorial optimization problems, faster than possible classically [13][14][15][16][17][18][19][20]. Major ongoing efforts continue to build larger, more densely connected QA devices, in the hope that the capability to embed larger optimization problems would eventually reveal the coveted quantum speedup [21][22][23][24][25].
Understanding the robustness of QA optimization to errors that reduce the final ground state probability is critical. In this work, we consider perhaps the most optimistic setting where the only source of error is due to nonzero temperature. We analyze the theoretical scaling performance of ideal fixed-temperature quantum annealers for optimization. We show that even in the case where annealers are assumed to thermalize instantly (rather than only in the infinite runtime limit), the energies, or costs, of their output configurations would be computationally trivial to achieve (in a sense that we explain). We further derive a scaling law for QA optimizers and provide corroboration of our analytical findings by experimental results obtained from the commercial D-Wave 2X QA processor [26][27][28][29][30] as well as numerical simulations (our results equally apply to ideal thermal annealing devices). We discuss the implications of our results for both past benchmarking studies and for the engineering requirements of future QA devices.
Fixed-temperature quantum annealers.-In the adiabatic limit, closed-system quantum annealers are guaranteed to find a ground state of the target cost function, or final Hamiltonian H, they are to solve. The adiabatic theorem of quantum mechanics ensures that the overlap of the final state of the system with the ground state manifold of H approaches unity as the duration of the process increases [31,32]. For physical quantum annealers that operate at positive temperatures (T > 0), there is no equivalent guarantee of reaching the ground state with high probability. For long runtimes, an ideal finite-temperature quantum annealer is expected to sample the Boltzmann distribution of the final Hamiltonian at the annealer temperature [33].
In what follows, we argue that even instantlythermalizing quantum annealers [34] are severely limited as optimizers due to their finite temperature. For concreteness, we restrict to annealers for which i) the number of couplers scales linearly with the number of qubits N [35], ii) the coupling strengths are discretized and are bounded independently of problem size, and iii) the scaling of the free energy with problem size is not pathological, i.e., that our system is not tuned to a critical point. Other than the above standard assumptions, our treatment is general (we discuss the performance of quantum annealers when some of these conditions are lifted later on). For clarity, we consider optimization problems written in terms of a Hamiltonian of the Ising-type
H = Σ_⟨ij⟩ J_ij s_i s_j + Σ_i h_i s_i ,    (1)
where {s_i = ±1} are binary Ising spin variables that are to be optimized over, {J_ij, h_i} are the coupling strengths between connected spins and external biases, respectively, and ⟨ij⟩ denotes the underlying connectivity graph of the model. The discussion that follows however is not restricted to any particular model. Under the above assumptions, the ground state energies, denoted E_0, of any given problem class, scale linearly with increasing problem size (i.e., the energy is an extensive property as is generically expected from physical systems) while the classical minimal gap ∆ = E_1 − E_0 remains fixed. It follows then [36] that the thermal expectation values of the intensive energy
e_β = ⟨H⟩_β /N ,    (2)
and specific heat
c_β = ∂e_β/∂β = −N (⟨e²⟩_β − ⟨e⟩_β²) ,    (3)
remain finite as N → ∞ for any fixed inverse-temperature β = 1/T. The intensive energy is discretized in steps of ∆/N, yet its statistical dispersion σ_β(e) = √(−c_β/N) is much larger. Treating e as a stochastic variable, for large enough values of N it can be treated as a continuous variable as the ratio of discretization versus dispersion is √(−∆²/(c_β N)), decaying to zero for large N. From the Boltzmann distribution it follows that the probability density of e goes as p_β(e) = Z_β^{−1} e^{N(s(e)−βe)}, where Z_β = Σ_n g_n e^{−βE_n} is the partition function, g_n is the degeneracy of the n-th level, i.e., the number of microstates with H({s_i}) = E_n, satisfying 2^N = Σ_{n≥0} g_n, and s(e) is the entropy density [37]. The linear combination Ψ_β(e) = s(e) − βe plays the role of a large-deviations functional for e. The most probable value of e, which we denote by e*, is given by the maximum of Ψ_β. Solving Ψ′_β(e*) = 0, we find [38]
β = ∂s/∂e |_{e=e*} .    (4)
Close to e*, Ψ_β can be Taylor-expanded as Ψ_β(e) ≈ Ψ_β(e*) − (|Ψ″_β(e*)|/2)(e − e*)², from which it follows that
p_β(e) ≈ (e^{N Ψ_β(e*)}/Z_β) exp[ −(N |Ψ″_β(e*)|/2) (e − e*)² ] .    (5)
The probability density is thus approximately Gaussian in the vicinity of e * , although deviations from the Gaussian behavior are crucial [39]. Moreover, in the limit of large N , we find
e_β = e*  and  c_β = −1/|Ψ″_β(e*)| .    (6)
Therefore, the probability of finding by Boltzmannsampling any energy e < e * (equivalently, E < e * N ) is exponentially suppressed in N , scaling in fact as exp[−N ( Ψ β (e * )−Ψ β (e) )]. We thus arrive at the conclusion that even ideal fixed temperature quantum annealers that thermalize instantaneously to the Gibbs state of the classical Hamiltonian are exponentially unlikely to find the ground state since e * > e 0 ≡ E 0 /N . We now corroborate the above derivation by runs on the commercial DW2X quantum annealer [26][27][28][29]. To do so, we first generate random instances of differently sized sub-graphs of the DW2X Chimera connectivity graph [40,41] and run them multiple times on the annealer, recording the obtained energies [42]. Figure 1 depicts typical resultant residual energy (E − E 0 ) distributions. As is evident, increasing the problem size N 'pushes' the energy distribution farther away from E 0 , as well as broadening the distribution and making it more gaussian-like. In the inset, we measure the departure of H β from E 0 and the spread of the energies σ β (H) over 100 'planted-solution' [18] instances per sub-graph size as a function of problem size N [43]. For sufficiently large problem sizes, we find that the scaling of H − E 0 β is close to linear while σ β (H) scales slightly faster than √ N . While the slight deviations from our analytical predictions suggest that the DW2X configurations have not fully reached asymptotic behavior[44], they exhibit a trend that closely matches our assumptions with the agreement getting better with growing problem sizes.
FIG. 1. Distributions of residual energy, E − E0, from DW2X runs. As problem sizes grow, the distributions become more Gaussian-like. Inset: Gaussians' mean (blue) and standard deviation (red) as a function of problem size, averaged over 100 instances per size. The solid lines correspond to power-law fits of the average mean with power 0.98 ± 0.14 and average standard deviation scaling with power 0.63 ± 0.09, taking into account all sizes but the smallest (1.01 ± 0.62 and 0.57 ± 0.37 respectively if the two smallest sizes are omitted).
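The exponential suppression argument above can be made concrete on instances small enough for exact enumeration. The following minimal Python sketch (an illustrative addition, not the code used for the hardware runs; the chain geometry and parameter values are arbitrary choices) enumerates all states of small random ±1 Ising chains and reports the Boltzmann weight of the ground state and the excess mean energy at fixed β:

import itertools, math, random

def boltzmann_stats(J, h, beta):
    # Exact enumeration of an open Ising chain: H = sum_i J[i] s_i s_{i+1} + sum_i h[i] s_i.
    N = len(h)
    energies = []
    for s in itertools.product((-1, 1), repeat=N):
        E = sum(J[i] * s[i] * s[i + 1] for i in range(N - 1)) + sum(h[i] * s[i] for i in range(N))
        energies.append(E)
    E0 = min(energies)
    w = [math.exp(-beta * (E - E0)) for E in energies]   # shifted weights, avoids overflow
    Z = sum(w)
    p0 = sum(wi for wi, E in zip(w, energies) if E == E0) / Z
    mean = sum(wi * E for wi, E in zip(w, energies)) / Z
    return p0, mean - E0

random.seed(0)
beta = 1.0
for N in (6, 10, 14, 16):
    J = [random.choice((-1, 1)) for _ in range(N - 1)]
    h = [0.0] * N
    p0, excess = boltzmann_stats(J, h, beta)
    print(f"N={N:2d}   p_beta(E0) = {p0:.3e}   <H>_beta - E0 = {excess:.2f}")

Already at these very small sizes the ground-state weight should drop steeply with N while the excess mean energy grows, in line with the argument above.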
Given the scaling of the mean and standard deviation, we conclude that fixed-temperature quantum annealers will generate energies e with a fixed distance from e_0, or in terms of extensive energies, configurations obtained from fixed-temperature annealers will have energies concentrated around E = (1 − ε)E_0 for some ε > 0 and E_0 < 0.
One could now ask what the difficulty is for classical algorithms to generate energy values in the above range. This question has been recently answered by the discovery of a polynomial time approximation scheme (PTAS) for spin-glasses defined on a Chimera graph [45] (and which can be easily generalized to any locally connected model), where reaching such energies can be done efficiently [46]. While the scaling of the PTAS with ε is not favorable, scaling as c^{1/ε} for some constant c, in practice there exist algorithms (e.g., parallel tempering that we discuss later on) that are known to scale more favorably than PTAS.
Scaling law for quantum annealing temperatures.-In light of the above, it may seem that quantum annealers are doomed to fail as optimizers as problem sizes increase. We now argue that success may be regained if the temperature of the QA device is appropriately scaled with problem size. Specifically, we address the question of how the inverse-temperature β should scale with N such that there is a probability of at least q of finding the ground state.
An estimate for the required scaling can be given as follows. From the above analysis, it should be clear that the probability of finding a ground state at inverse temperature β will not decay exponentially with system size only if the ground state falls within the variation of the mean energy, specifically if
σ_β(H) = N σ_β(e) = √(−N c_β) ,    (7)
is comparable to
⟨H⟩_β − E_0 = −N ∫_β^∞ dβ′ c_{β′} .    (8)
The third law of thermodynamics dictates that the specific heat c_T ≡ d⟨e⟩/dT goes to zero when T → 0. Assuming a scaling of the form c_T ∼ T^α, or equivalently, −c_β ∼ β^{−α−2}, gives
σ_β(H) ∼ √(N/β^{α+2})  and  ⟨H⟩_β − E_0 ∼ N/β^{α+1} .    (9)
For a power-law specific heat, it thus follows that the sought scaling is β ∼ N 1/α . If on the other hand c β vanishes exponentially in β, the inverse-temperature scaling will be milder, of the form β ∼ log N . To illustrate the above, we next present an analysis of simulations of randomly generated instances on Chimera lattices (we study several problem classes and architectures, see the Supplemental Information). To study the energy distribution generated by a thermal sampler on these instances, we use parallel tempering (PT) [47, 48], a Monte Carlo method whereby multiple copies of the system at different temperatures are simulated [49]. In Fig. 2, we show an example of how the energy distribution of a planted-solution instance changes with β. The qualitative behavior is similar to what we observe with increasing problem size, whereby decreasing β (increasing the temperature) pushes the energy distribution to larger energies and makes it more gaussian-like.
The behavior of the specific heat c_β as the inverse-temperature β becomes large is shown in Fig. 3. At large sizes, the scaling becomes c_β ∝ exp(−∆β) as expected (here, ∆ = 4 is the gap). Based on our predictions above, this should mean that if, for a fixed q, the minimum β* such that p_β*(E_0) ≥ q falls in this exponential regime, then we should observe a scaling β* ∝ log N. Indeed, the inset of Fig. 3, which shows simulation results of β* versus N, exhibits the expected log N behavior [50].
FIG. 3. Typical specific heat with inverse-temperature. Behavior of the median specific heat (over 100 instances) for planted-solution instances with inverse-temperature β for N = 3872. The behavior transitions from a polynomial scaling with β to an exponential scaling. Inset: Typical minimum inverse-temperature required for instances of size N such that the probability of the target energy E_T = E_0 + δ(N) is at least q = 10^{−1}. Also shown are fits to log N for all three cases and a power-law fit to cN^α that finds α = 0.19 ± 0.05 for the δ = 0 case, which is almost indistinguishable from the logarithmic fit.
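To make the two regimes of the scaling law explicit, the short sketch below (an added illustration with arbitrary parameter choices, not part of the original analysis) numerically locates the β at which the dispersion of Eq. (7) matches the excess energy of Eq. (8), once for a power-law and once for an exponentially vanishing specific heat:

import math

def beta_star(N, neg_c_beta, tail_integral, lo=1e-3, hi=1e3):
    # Bisection for the beta at which the excess energy -N * int_beta^inf c_beta' dbeta'
    # (Eq. (8)) equals the dispersion sqrt(-N c_beta) (Eq. (7)).  The bracket [lo, hi]
    # is assumed to contain the crossing for the parameters used below.
    def log_ratio(b):
        return math.log(N * tail_integral(b)) - 0.5 * math.log(N * neg_c_beta(b))
    a, b = lo, hi
    for _ in range(100):
        m = 0.5 * (a + b)
        if log_ratio(a) * log_ratio(m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

alpha, Delta = 2.0, 4.0
power = dict(neg_c_beta=lambda b: b ** (-alpha - 2),
             tail_integral=lambda b: b ** (-alpha - 1) / (alpha + 1))
expo = dict(neg_c_beta=lambda b: math.exp(-Delta * b),
            tail_integral=lambda b: math.exp(-Delta * b) / Delta)

for N in (10 ** 2, 10 ** 3, 10 ** 4, 10 ** 5):
    bp = beta_star(N, **power)
    be = beta_star(N, **expo, hi=50.0)     # smaller bracket avoids underflow of exp(-Delta*beta)
    print(f"N={N:7d}   power-law c_beta: beta* = {bp:7.2f} (cf. N^(1/alpha) = {N ** (1 / alpha):7.1f})"
          f"   exponential c_beta: beta* = {be:5.2f} (cf. ln N = {math.log(N):5.2f})")

The crossing moves as N^{1/α} in the first case and as log N in the second, matching the scaling law derived above.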
While for problem classes with a fixed minimum gap ∆, one may naively expect c_β to vanish exponentially in general, implying that a logarithmic scaling of β will generally be sufficient as our simulations indeed indicate, it is important to note that two-dimensional spin glasses are known to exhibit a crossover from an exponential behavior to a power law [51][52][53][54]. This crossover is characterized by a constant θ ≈ 1/2, whereby the discreteness of the gap ∆ is evident only for sizes N^{θ/2} ≪ β. Beyond N^{θ/2} ∼ β, the 2d system behaves as if the coupling distribution is continuous [52,53] at which point the system can be treated as if with continuous couplings, for which the specific heat c_T scales as T^α with α = 2ν [51], where ν = 3.53(7) [54]. Therefore, for an ideal quantum annealer operating beyond the crossover, a scaling of β ∼ N^{1/(2ν)} ≈ N^{0.14} is required. We may thus expect the same crossover to appear for instances defined on the Chimera lattice, which is 2d-like. Interestingly, for the temperature scaling shown in the inset of Fig. 3, a power-law fit β ∼ N^α with α = 0.19 ± 0.05 is almost indistinguishable from the logarithmic one, with a power that is consistent with the 2d prediction.
Suboptimal metrics for optimization problems.-For many classically intractable optimization problems, when formulated as Ising models, it is crucial that solvers find a true minimizing bit assignment rather than low lying excited states. This is especially true for NP-complete/hard problems [55] where sub-optimal costs generally correspond to violated constraints that must be satisfied (otherwise the resultant configuration is nonsensical despite its low energy). Nonetheless, it is plausible to assume the existence of problems for which slightly sub-optimal configurations would still be of value [56]. We thus also study the necessary temperature scaling for cases where the target energies obey E T ≤ E 0 + δ(N ) with δ(N ) scaling sub-linearly with problem size. In the inset of Fig. 3, we plot the required scaling of β for δ(N ) = const and δ(N ) ∝ √ N . In both cases we find that a logarithmic scaling is still essential, albeit with smaller prefactors.
Conclusions and discussion.-We have shown that fixed temperature quantum annealers can only sample 'easily reachable' energies in the large problem size limit, thereby posing fundamental limitation on their performance. We derived a temperature scaling law to ensure that quantum annealing optimizers find nontrivial energy values with sub-exponential probabilities. The scaling of the specific heat with temperature controls this scaling: if β lies in the regime where the specific heat scales exponentially with β, then the inverse-temperature of the annealer must scale as log N . However, further considerations are needed because of a possible crossover behavior in the specific heat with temperature and problem size. For Chimera graphs, because of their essentially two-dimensional structure, this may lead to a crossover to power law scaling. Little is known about this crossover in three dimensions or for different architectures, so this concern may not be mitigated by a more complex connectivity graph.
Our results shed important light on benchmarking studies that have found no quantum speedups [17,18,[57][58][59], identifying temperature as a relevant culprit for their unfavorable performance. Our analysis is particularly relevant for both the utility as well as the design of future QA devices that have been argued to sample from thermal or close-to-thermal distributions [60], calling their role as optimization devices into question.
One approach to scaling down the temperature with problem size is the (theoretically) equivalent scaling up of the overall energy scale of the Hamiltonian. However, the rescaling of the total Hamiltonian is also known to be challenging and may not represent a convenient approach for a scalable architecture. An alternative approach is to develop quantum error correction techniques to effectively increase the energy scale of the Hamiltonian by coupling multiple qubits to form a single logical qubit [61][62][63][64][65][66] in conjunction with classical postprocessing [67][68][69][70] or to effectively decouple the system from the environment [71][72][73][74].
Our results reiterate the need for fault-tolerant error correction for scalable quantum annealing; however, they do not preclude the utility of quantum annealing optimizers for large finite size problems, where engineering challenges may be overcome to allow the device to operate effectively at a sufficiently low temperature such that problems of interest of a finite size may be solved even in the absence of fault-tolerance. Our results only indicate that this 'window of opportunity' cannot be expected to continue as devices are scaled without further improvements in the device temperature or energy scale. While our arguments above indicate that fixed-temperature quantum annealers may not be scalable as optimizers, the current study does not pertain to the usage of quantum annealers as samplers [60,75,76], where the objective is to sample from the Boltzmann distribution. The latter objective is known to be a very difficult task (it is #P-hard [77][78][79]) and little is known about when or if quantum annealers can provide an advantage in this regard [80].
Supplemental Information for "Temperature scaling law for quantum annealing optimizers"
DEVIATIONS TO THE GAUSSIAN PROBABILITY DENSITY
In the main text, we indicated that deviations from a Gaussian distribution for the marginal of the classical Boltzmann probability (at inverse temperature β) for the energy density p_β are crucial. To see why this is the case, let us consider what happens when the probability density is exactly Gaussian:
p_β(e) = √(N/(2π|c_β|)) exp( −N [e − ⟨e⟩_β]² / (2|c_β|) ) .    (10)
The probability density at any other inverse temperature β + δβ can be obtained as [86,87]
p_{β+δβ}(e) = (1/Z) p_β(e) e^{−N δβ e} ,    Z = ∫_{−∞}^{∞} de  p_β(e) e^{−N δβ e} .    (11)
Eq. (11) is fully general, and we are not assuming δβ to be small. Now, let us plug the Gaussian probability (Eq. (10)) into Eq. (11). We find
p_{β+δβ}(e) = (1/Z) exp( −(N/(2|c_β|)) [e − ⟨e⟩_β − c_β δβ]² ) ,    (12)
with Z = √(2π|c_β|/N). Comparing Eqs. (10) and (12), we are led to the conclusion that the Gaussian probability implies that the energy density is a linear function of β and that the specific heat is constant:
Gaussian hypothesis :    ⟨e⟩_β = ⟨e⟩_{β=0} + β c_β ,    dc_β/dβ = 0 .    (13)
Of course, Eq. (13) is grossly in error, because in the limit of large β (i.e. zero temperature) ⟨e⟩_β should reach the ground-state energy density [rather than diverge as wrongly implied by Eq. (13)]. In fact, the specific heat is not constant. A straightforward application of the fluctuation-dissipation theorem tells us that
dc_β/dβ = N² [ ⟨e³⟩ − 3⟨e²⟩⟨e⟩ + 2⟨e⟩³ ] = N² ⟨[e − ⟨e⟩]³⟩ .    (14)
We can introduce η, the fluctuating part of the energy (regarded as a stochastic variable):
e = ⟨e⟩_β + √(−c_β/N) η .    (15)
In combination with the fluctuation-dissipation theorem,
c_β = −N ⟨[e − ⟨e⟩]²⟩, we have ⟨η⟩ = 0 , ⟨η²⟩ = 1 .
Furthermore, Eq. (14) implies
⟨η³⟩ = (1/√N) (1/[−c_β]^{3/2}) dc_β/dβ .    (16)
However, if η were a normal variable N(0, 1) as demanded by Eq. (10), we would have ⟨η³⟩ = 0 and not what we have in Eq. (16). Hence, convergence to the main traits of the Gauss distribution law (symmetry under η ↔ −η, for instance) happens at a rate proportional to 1/√N.
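Equations (14)-(16) can be verified directly by exact enumeration of a small system. The following sketch (added here as a consistency check; the ring geometry, coupling pattern and β are arbitrary choices) compares a finite-difference derivative of c_β with Eq. (14), and the directly computed ⟨η³⟩ with Eq. (16):

import itertools, math

def moments(N, beta, J):
    # Exact Boltzmann moments of the intensive energy e = H/N for a +-1 Ising ring.
    Es = []
    for s in itertools.product((-1, 1), repeat=N):
        Es.append(sum(J[i] * s[i] * s[(i + 1) % N] for i in range(N)))
    E0 = min(Es)
    w = [math.exp(-beta * (E - E0)) for E in Es]
    Z = sum(w)
    avg = lambda f: sum(wi * f(E) for wi, E in zip(w, Es)) / Z
    e1 = avg(lambda E: E / N)
    e2 = avg(lambda E: (E / N) ** 2)
    e3c = avg(lambda E: (E / N - e1) ** 3)          # third central moment of e
    c_beta = -N * (e2 - e1 ** 2)                    # Eq. (3)
    return c_beta, e3c

N, beta, de = 12, 0.8, 1e-4
J = [(-1) ** i for i in range(N)]                   # a fixed, unfrustrated coupling pattern
c, e3c = moments(N, beta, J)
c_plus, _ = moments(N, beta + de, J)
c_minus, _ = moments(N, beta - de, J)
dc_fd = (c_plus - c_minus) / (2 * de)               # numerical derivative of c_beta
dc_eq14 = N ** 2 * e3c                              # Eq. (14)
eta3_direct = e3c / (-c / N) ** 1.5                 # <eta^3> from the definition of eta, Eq. (15)
eta3_eq16 = dc_eq14 / (math.sqrt(N) * (-c) ** 1.5)  # Eq. (16)
print(f"dc/dbeta:  finite difference = {dc_fd:.6f}   Eq.(14) = {dc_eq14:.6f}")
print(f"<eta^3>:   direct            = {eta3_direct:.6f}   Eq.(16) = {eta3_eq16:.6f}")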
THE DW2X EXPERIMENTAL QUANTUM ANNEALING OPTIMIZER
Description of the processor
The experimental results shown in the main text were taken on a 3rd generation D-Wave processor, the DW2X 'Washington' processor, installed at the Information Sciences Institute -University of Southern California (ISI). The processor connectivity is given by a 12 × 12 grid of unit cells, where each unit cell is composed of 8 qubits with a K 4,4 bipartite connectivity, forming the 'Chimera' graph [40, 41] with a total of 1152 qubits. Due to miscalibration, there are only 1098 operational qubits on the ISI machine. This is illustrated in Fig. 4.
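For concreteness, a Chimera connectivity graph of this type can be generated in a few lines. The sketch below (an added illustration; the qubit labelling and the orientation convention for the inter-cell couplers are assumptions, since conventions differ between hardware generations) reproduces the 1152-qubit count of a full 12 × 12 graph:

import itertools

def chimera_edges(M, N, L=4):
    # M x N grid of unit cells, each a K_{L,L} bipartite cell of 2L qubits.
    # Qubit label: (row, col, side, k) with side 0/1 the two halves of a cell and k = 0..L-1.
    # Assumed convention: side-0 qubits couple to the same k in vertically adjacent cells,
    # side-1 qubits to horizontally adjacent cells.
    edges = []
    for r, c in itertools.product(range(M), range(N)):
        for k0, k1 in itertools.product(range(L), repeat=2):   # intra-cell K_{L,L} couplers
            edges.append(((r, c, 0, k0), (r, c, 1, k1)))
        for k in range(L):                                      # inter-cell couplers
            if r + 1 < M:
                edges.append(((r, c, 0, k), (r + 1, c, 0, k)))
            if c + 1 < N:
                edges.append(((r, c, 1, k), (r, c + 1, 1, k)))
    return edges

edges = chimera_edges(12, 12)
qubits = {q for e in edges for q in e}
print(len(qubits), "qubits,", len(edges), "couplers")   # 1152 qubits for a full 12 x 12 Chimera

Note that the number of couplers grows linearly with the number of qubits, consistent with assumption (i) of the main text.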
The device implements the quantum annealing protocol given by the time-dependent Hamiltonian:
H_QA(s) = A(s) H_D + B(s) H ,    (17)
where H_D = −Σ_i σ^x_i is the standard transverse field driver Hamiltonian, H is the Ising Hamiltonian [Eq. (1) of the main text], and A(s), B(s) are the annealing schedules satisfying A(0) ≫ B(0), A(1) ≪ B(1), and s ≡ t/t_f ∈ [0, 1] is the dimensionless time annealing parameter. The predicted functional form for these schedules is shown in Fig. 5.
Details of the experiment and additional results
The randomly generated instances tested on the D-Wave processor were run with 20 random gauges [19] with 5000 reads per gauge/cycle for a total of 100, 000 anneals per instance. The annealing time chosen for the runs was the default 20µ-sec. We further corroborated the analytical derivations discussed in the main text using experiments on the commercial DW2X processor on randomly generated bi-modal J ij = ±1 instances. As with the planted-solution instances, we first generate random instances of differently sized sub-graphs of the DW2X Chimera connectivity graph [40, 41] and run them multiple times on the annealer, recording the obtained energies. Figure 6 depicts the resultant residual energy (E − E 0 ) distributions of a typical instance. As is evident, increasing the problem size N 'pushes' the energy distribution farther and farther away from the ground state value, as well as broadening the distribution and making it more gaussian-like. In the inset we measure the departure of H β from E 0 and the spread of the energies σ β (H) over 100 random bi-modal instances per sub-graph size as a function of problem size N . For sufficiently large problem sizes, we find that the scaling of H − E 0 β is almost linear while σ β (H) scales slightly faster than √ N . The results are slightly worse than the analytical prediction but conform to the general trend.
FIG. 6. Distributions of residual energy, E − E0, from DW2X simulations on random ±1 instances. As problem sizes grow, the distributions become more Gaussian-like. Inset: Gaussians' mean (blue) and standard deviation (red) as a function of problem size, averaged over 100 instances per size. The solid lines correspond to best fits to the form ln(y) = a + b ln(x), with a = −5.00 ± 0.53, b = 1.13 ± 0.08 and a = −2.92 ± 0.37, b = 0.68 ± 0.06 respectively, taking into account all sizes but the smallest.
SIMULATION METHODS
Instance generation
For the generation of instances in this work we have chosen one problem class to be that of the 'planted solution' type, an idea borrowed from constraint satisfaction (SAT) problems. In this problem class, the planted solution represents a ground-state configuration of the Hamiltonian that minimizes the energy and is known in advance. The Hamiltonian of a planted-solution spin glass is a sum of terms, each of which consists of a small number of connected spins, namely, H = Σ_j H_j [18]. Each term H_j is chosen such that one of its ground-states is the planted solution. It follows then that the planted solution is also a ground-state of the total Hamiltonian, and its energy is the ground-state energy of the Hamiltonian. Knowing the ground-state energy in advance circumvents the need to verify the ground-state energy using exact (provable) solvers, which rapidly become too expensive computationally as the number of variables grows. The interested reader will find a more detailed discussion of planted Ising problems in Refs. [18,88].
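As an added illustration of the planting idea (in the spirit of Ref. [18], though not necessarily the exact recipe used to generate the benchmark instances), the sketch below builds frustrated-loop terms around a randomly drawn planted configuration on a small grid graph; by construction the planted configuration minimizes every loop term, so its energy is the ground-state energy:

import random

def plant_instance(edges, n_loops, rng):
    # Frustrated-loop style planting: draw a planted configuration s*, grow closed loops by
    # random walks on the coupling graph, and on each loop set J_ij = -s*_i s*_j on every edge
    # except one, which is reversed (frustrated).  s* then minimizes every loop term, so it is
    # a ground state of the total Hamiltonian with energy sum over loops of -(loop length - 2).
    nodes = sorted({v for e in edges for v in e})
    nbrs = {v: set() for v in nodes}
    for a, b in edges:
        nbrs[a].add(b)
        nbrs[b].add(a)
    s_star = {v: rng.choice((-1, 1)) for v in nodes}
    J, E0, built = {}, 0, 0
    while built < n_loops:
        walk = [rng.choice(nodes)]                    # random walk until it first revisits a node
        while walk[-1] not in walk[:-1]:
            walk.append(rng.choice(sorted(nbrs[walk[-1]])))
        loop = walk[walk.index(walk[-1]):-1]          # the closed cycle
        if len(loop) < 3:
            continue
        for i, (a, b) in enumerate(zip(loop, loop[1:] + loop[:1])):
            key = (a, b) if a <= b else (b, a)
            sign = +1 if i == 0 else -1               # frustrate exactly one edge of the loop
            J[key] = J.get(key, 0) + sign * s_star[a] * s_star[b]
        E0 += -(len(loop) - 2)
        built += 1
    return J, s_star, E0

def energy(J, s):
    return sum(Jab * s[a] * s[b] for (a, b), Jab in J.items())

rng = random.Random(1)
side = 8                                              # a small 2d grid stands in for the hardware graph
edges = [((x, y), (x + 1, y)) for x in range(side - 1) for y in range(side)]
edges += [((x, y), (x, y + 1)) for x in range(side) for y in range(side - 1)]
J, s_star, E0 = plant_instance(edges, n_loops=40, rng=rng)
print("planted ground-state energy:", E0, "  energy of planted configuration:", energy(J, s_star))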
For the random ±1 instances on Chimera, we randomly (with equal probability) assign a value ±1 to all the edges of the Chimera graph. While the ground state energy for these instances is not known with 100% certainty, we ran the Hamze-Freitas-Selby algorithm (HFS) [89,90] for a sufficiently long time such that we were confident of having found the ground state for these instances.
For the 3-regular 3-XORSAT instances, for each spin, we randomly pick three other spins to which to couple. All couplings are picked to be antiferromagnetic with strength 1. Because all terms in the Hamiltonian are of the form +σ^z_i σ^z_j σ^z_k, the ground state is simply the all-spins-down state.
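A simplified version of such a 3-body construction is sketched below (added for illustration; it draws one random term per spin and does not enforce the 3-regularity of the actual instance class):

import random

def three_body_instance(N, rng):
    # 3-body terms +s_i s_j s_k, one per spin i, with j and k drawn at random:
    # every term is minimized (value -1) by the all-spins-down state, so E0 = -N.
    terms = []
    for i in range(N):
        j, k = rng.sample([x for x in range(N) if x != i], 2)
        terms.append((i, j, k))
    return terms

def energy(terms, s):
    return sum(s[i] * s[j] * s[k] for i, j, k in terms)

rng = random.Random(3)
N = 24
terms = three_body_instance(N, rng)
print("all-down energy:", energy(terms, [-1] * N), " (ground-state energy -N =", -N, ")")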
Parallel tempering
For the planted-solution instances, we first 'warmed up' our parallel tempering simulation with 5 × 10^5 (for the smaller sizes) to 2 × 10^6 (for the larger sizes) swaps with 10 Monte Carlo sweeps per swap. The temperature distribution is picked as follows:
β_i = (β_63/β_0)^{i/63} β_0 ,    i = 0, 1, . . . , 63    (18)
with β 0 = 20 and β 63 = 0.1. After the warm-up, we sample the energy after every 50 swaps in order to minimize correlation between the energies. We use a total of 10 4 sample points, from which we extract the energies at different quantiles. In order to ensure that we have reached a thermal or near-thermal distribution, we performed the following check. The 10 4 sample points are divided into three blocks: (a) 5 × 10 3 samples from the last half of the samples; (b) 2.5 × 10 3 samples from the second quarter of the samples; (c) 1.25 × 10 3 samples from the second eighth of the samples. We then calculated the specific heat using the samples from each block separately; if the system has sufficiently thermalized and the samples are sufficiently uncorrelated, we expect to observe no change in the specific heat for the three sets of samples within the error bars. We show the results of this test in Fig. 7, where we indeed observe no significant difference.
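A stripped-down version of this procedure is sketched below (an added illustration on a tiny toy instance rather than a Chimera spin glass, with far fewer sweeps and samples than the production runs described above); it uses the geometric ladder of Eq. (18), standard replica-exchange moves, and the specific heat of Eq. (3) evaluated on the last half of the samples:

import math, random

def pt_sample(J, n_spins, betas, n_swaps, rng):
    # Minimal parallel-tempering sketch: one Metropolis replica per temperature, plus
    # neighbouring-temperature swap moves accepted with probability min(1, exp(dbeta * dE)).
    nbrs = [[] for _ in range(n_spins)]
    for (a, b), Jab in J.items():
        nbrs[a].append((b, Jab))
        nbrs[b].append((a, Jab))
    energy = lambda s: sum(Jab * s[a] * s[b] for (a, b), Jab in J.items())
    reps = [[rng.choice((-1, 1)) for _ in range(n_spins)] for _ in betas]
    Es = [energy(s) for s in reps]
    samples = [[] for _ in betas]
    for t in range(n_swaps):
        for r, beta in enumerate(betas):              # one Metropolis sweep per replica per swap
            s = reps[r]
            for _ in range(n_spins):
                i = rng.randrange(n_spins)
                dE = -2 * s[i] * sum(Jij * s[j] for j, Jij in nbrs[i])
                if dE <= 0 or rng.random() < math.exp(-beta * dE):
                    s[i] = -s[i]
                    Es[r] += dE
        for r in range(len(betas) - 1):               # replica-exchange moves between neighbours
            d = (betas[r] - betas[r + 1]) * (Es[r] - Es[r + 1])
            if d >= 0 or rng.random() < math.exp(d):
                reps[r], reps[r + 1] = reps[r + 1], reps[r]
                Es[r], Es[r + 1] = Es[r + 1], Es[r]
        if t % 10 == 0:                               # sample sparsely (every 50 swaps in the text)
            for r in range(len(betas)):
                samples[r].append(Es[r])
    return samples

def c_beta(energies, n_spins):
    # Specific heat per Eq. (3): c_beta = -N (<e^2> - <e>^2), with e = E/N.
    es = [E / n_spins for E in energies]
    m1 = sum(es) / len(es)
    m2 = sum(e * e for e in es) / len(es)
    return -n_spins * (m2 - m1 * m1)

rng = random.Random(7)
n = 32                                                # a tiny random +-1 ring, not a Chimera instance
J = {(i, (i + 1) % n): rng.choice((-1, 1)) for i in range(n)}
b0, b63 = 20.0, 0.1
betas = [b0 * (b63 / b0) ** (i / 63) for i in range(64)]   # the geometric ladder of Eq. (18)
samples = pt_sample(J, n, betas, n_swaps=500, rng=rng)
for r in (63, 40, 20, 0):                             # from the hottest to the coldest replica
    last_half = samples[r][len(samples[r]) // 2:]     # crude block check, as described above
    print(f"beta = {betas[r]:6.2f}   c_beta (last half of samples) = {c_beta(last_half, n):8.4f}")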
RESULTS FOR PLANTED-SOLUTION INSTANCES WITH A TARGET ENERGY
In Fig. 8, we supplement the results presented in the main text with the scaling of β when the target energy need not be the ground state, specifically E_T = E_0 + δ. We consider three cases: (i) a constant above the ground state, E_T = E_0 + 8, (ii) a square-root scaling above the ground state, E_T = E_0 + √(N/2), and (iii) a linear scaling above the ground state, E_T = E_0 + (4 + N/32). The specific values were picked so that the three cases would have the same target energy at the smallest size of N = 128. If we fit all curves with a logarithmic dependence on N, we observe a similar scaling for the cases of δ = constant, and the case of δ ∝ √N still exhibits a logarithmic scaling but with a milder coefficient. For the case of δ ∝ N, the required β approaches a constant for sufficiently large problem sizes.
RESULTS FOR THE 3-REG 3XORSAT AND RANDOM ±1 CHIMERA INSTANCES
Here we provide the equivalent plots to Fig. 3 of the main text but for the 3-regular 3-XORSAT (Fig. 10) and random ±1 instances (Fig. 11). The random ±1 instances were warmed up with up to 24 × 10^6 PT swaps depending on their size, while the XORSAT instances were warmed up with up to 200 × 10^6 swaps depending on their size. For both, as in the planted-solution case, 10^4 samples were taken with one sample after every 50 PT swaps. We perform the same thermalization test as for the planted-solution instances, and we observe no significant difference for the different blocks of samples (see Fig. 9).
We note that for both of these classes of instances, the β values required fall in the regime where the scaling of the specific heat with β is not yet exponential. The scaling behavior of β * is consistent with both a log N and a N 1/α behavior.
SCALING LAWS FOR TEMPERATURES: ANALYTICAL EXAMPLES
Let us consider the simple case of non-interacting spins in a global magnetic field. This case is particularly relevant if the initial state of the quantum annealer is prepared as the thermal state of the standard driver Hamiltonian − 1 2 N i=1 σ x i with no overall energy scaling. The
Note that each energy spectrum has a degeneracy that grows polynomially with N . The mean energy is given by:
µ/N = −(1/2) tanh(β/2)    (20)
and the standard deviation is:
σ/√N = (1/2) sech(β/2)    (21)
The ground state probability on a thermal state is then given by p_0 = e^{βN/2}/Z, which we can then invert to write
the inverse-temperature as:
β = −ln( p_0^{−1/N} − 1 )    (22)
If we pick p_0 to be some small but fixed (independent of system size) number and take the large N limit, we find that β = ln N − ln(−ln p_0) + (1/(2N)) ln p_0 + . . .    (23)
Therefore, we find that for this simple problem, in order to maintain a constant ground state probability while the system size grows, we must scale the inverse-temperature logarithmically with system size.
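The logarithmic requirement can be checked numerically from Eq. (22); the short sketch below (added for illustration) compares the exact inversion with the leading terms of Eq. (23):

import math

def beta_required(N, p0):
    # Invert Eq. (22) for the non-interacting-spin example: beta such that the
    # Boltzmann weight of the ground state equals p0.
    return -math.log(p0 ** (-1.0 / N) - 1.0)

p0 = 0.1
for N in (10 ** 2, 10 ** 3, 10 ** 4, 10 ** 5, 10 ** 6):
    exact = beta_required(N, p0)
    approx = math.log(N) - math.log(-math.log(p0))    # leading terms of Eq. (23)
    print(f"N={N:8d}   beta exact = {exact:7.3f}   ln N - ln(-ln p0) = {approx:7.3f}")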
A Grover search problem [91,92] on the other hand yields the worst case scaling. In this case, we take a single state to have energy −N , while the remaining states have energy −N + 1. The partition function is given by:
Z = e^{βN} [ 1 + (2^N − 1) e^{−β} ]    (24)
with mean energy:
µ = −N + 1 − e^β/(2^N − 1 + e^β)    (25)
and standard deviation σ = e^{β/2} √(2^N − 1) / (2^N − 1 + e^β)    (26)
Unlike our other local example, σ does not scale as √N. The ground state probability is given by p_0 = e^{βN}/Z.
Inverting this for β, we find:
β = ln(2^N − 1) − ln( p_0^{−1} − 1 )    (27)
Again, for a fixed and small p 0 , expanding for large N , we get:
β = N ln 2 − ln( p_0^{−1} − 1 ) − 2^{−N} + . . .    (28)
Therefore, in this case, β must grow linearly with N in order to maintain a constant p 0 . Note of course that the Grover Hamiltonian is highly non-local as it contains N -body terms.
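For contrast with the logarithmic case above, the corresponding inversion of Eq. (27) is sketched below (added for illustration), showing the linear growth of the required β:

import math

def beta_grover(N, p0):
    # Invert Eq. (27): beta needed so that the single ground state of the Grover-type
    # spectrum carries Boltzmann weight p0.
    return math.log(2 ** N - 1) - math.log(1.0 / p0 - 1.0)

p0 = 0.1
for N in (10, 20, 40, 80):
    print(f"N={N:3d}   beta required = {beta_grover(N, p0):8.2f}   N ln 2 = {N * math.log(2):8.2f}")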
FINAL GROUND STATE WEIGHT IN GIBBS DISTRIBUTIONS
We consider here the weight of the final ground state on the Gibbs distributions along a quantum annealing protocol. Let us consider a system of N decoupled qubits evolving under:
H(s) = −(1/2)(1 − s) Σ_{i=1}^N σ^x_i − (1/2) s Σ_{i=1}^N σ^z_i    (29)
The probability of the final ground state, the state |0⟩^{⊗N}, in the instantaneous Gibbs distribution is given by:
p(s) = [ ( |⟨0|λ_0(s)⟩|² e^{βλ(s)/2} + |⟨0|λ_1(s)⟩|² e^{−βλ(s)/2} ) / ( 2 cosh(βλ(s)/2) ) ]^N    (30)
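Equation (30) is easy to evaluate numerically; the sketch below (an added illustration; the values of β, N and s are arbitrary) diagonalizes the single-qubit Hamiltonian of Eq. (29) and evaluates the ground-state weight along the anneal:

import numpy as np

def p_ground(s, beta, N):
    # Eq. (30) for N decoupled qubits under H(s) of Eq. (29): per-qubit Hamiltonian
    # h(s) = -(1/2) [(1 - s) sigma_x + s sigma_z], Gibbs weight of |0> raised to the N-th power.
    sx = np.array([[0., 1.], [1., 0.]])
    sz = np.array([[1., 0.], [0., -1.]])
    h = -0.5 * ((1 - s) * sx + s * sz)
    evals, evecs = np.linalg.eigh(h)                  # evals[0] = -lambda/2, evals[1] = +lambda/2
    lam = evals[1] - evals[0]
    w0 = abs(evecs[0, 0]) ** 2                        # |<0|lambda_0>|^2
    w1 = abs(evecs[0, 1]) ** 2                        # |<0|lambda_1>|^2
    p_single = (w0 * np.exp(beta * lam / 2) + w1 * np.exp(-beta * lam / 2)) / (2 * np.cosh(beta * lam / 2))
    return p_single ** N

beta, N = 2.0, 16
for s in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"s = {s:4.2f}   ground-state weight p(s) = {p_ground(s, beta, N):.3e}")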
FIG. 2. Distributions of residual energy, E − E0, from PT simulations. For a planted-solution instance defined on an L = 12 Chimera graph, the distributions become more Gaussian-like as β decreases. For the case of β = 0.75, the mean residual energy and standard deviation are indicated. Inset: Scaling with problem size of the median mean energy and median standard deviation of the energy for β = 1.47 over 100 instances.
Acknowledgements.-TA and IH thank Daniel Lidar for useful comments on the manuscript. The computing resources were provided by the USC Center for High Performance Computing and Communications. TA was supported under ARO MURI Grant No. W911NF-11-1-0268, ARO MURI Grant No. W911NF-15-1-0582, and NSF Grant No. INSPIRE-1551064. V. M.-M. was partially supported by MINECO (Spain) through Grant No. FIS2015-65078-C2-1-P (this contract partially funded by FEDER).
the time-to-target metric," arXiv e-prints (2015), arXiv:1508.05087 [quant-ph].
[31] T. Kato, "On the adiabatic theorem of quantum mechanics," J. Phys. Soc. Jap. 5, 435 (1950).
[32] Sabine Jansen, Mary-Beth Ruskai, and Ruedi Seiler, "Bounds for the adiabatic approximation with applications to quantum computation," J. Math. Phys. 48, -
FIG. 5. The DW2X annealing schedules. Energy units are for ℏ = 1, and the operating temperature of 12 mK is shown as well.
FIG. 7. The behavior of the median specific heat for the planted-solution instances at L = 22 using different blocks of the samples. Of the total of 10^4 samples, different partitions (as indicated by the legend) are used to calculate the specific heat.
FIG. 8. The behavior of the median value (over 100 instances) of the minimum inverse-temperature required such that for q = 10^−3 the target energy is the ground state. Error bars correspond to the spacing between the β values of the PT simulations. Lines correspond to the fits β = a + b ln N with b = 0.2495 ± 0.0531, 0.2440 ± 0.0461, 0.1842 ± 0.0423, 0.1483 ± 0.1274 for δ = 0, 8, N/2, (4 + N/32) respectively, with the uncertainty representing the 95% confidence interval for the fit parameters.
FIG. 9. The behavior of the median specific heat for the (a) 3-regular 3-XORSAT instances (N = 100) and (b) the bimodal instances (L = 12) using different blocks of the samples. Of the total of 10^4 samples, different partitions (as indicated by the legend) are used to calculate the specific heat.
FIG. 10. Behavior of the median specific heat (over 100 instances) for the 3-regular 3-XORSAT instances with inverse-temperature β for N = 100. The behavior transitions from a polynomial scaling with β to an exponential scaling with β. Inset: Typical minimum inverse-temperature required for instances of size N such that the probability of the ground state is at least q = 10^−1. Also shown are fits to log N and to a power-law cN^α with α = 0.39 ± 0.18, which is almost indistinguishable from the logarithmic fit for large sizes.
FIG. 11. Behavior of the median specific heat (over 100 instances) for the random ±1 instances with inverse-temperature β for N = 1152. The behavior transitions from a polynomial scaling with β to an exponential scaling with β. Inset: Typical minimum inverse-temperature required for instances of size N such that the probability of the target energy ET = E0 + δ(N) is at least q = 10^−1. Also shown are fits to log N for all three cases and to a power-law cN^α with α = 0.30 ± 0.09, which is almost indistinguishable from the logarithmic fit for large sizes.
[33] Lorenzo Campos Venuti, Tameem Albash, Daniel A. Lidar, and Paolo Zanardi, "Adiabaticity in open quantum systems," arXiv:1508.05558 (2015).
[34] The existence of small minimum gaps prior to the end of the anneal suggests that it is extremely unlikely that the Gibbs state before crossing these gaps would have a larger overlap with the ground state manifold of the final Hamiltonian than after. Therefore, measuring the system midway through a quantum annealing process will generically yield lower success probabilities than measurements taking place at the end of it [16, 81, 82]. We give two analytical examples in the Supplemental Information.
[35] This is equivalent to having a bounded degree connectivity graph.
[36] The analysis is based on the equivalence between the Canonical and the Microcanonical Ensembles of Statistical Mechanics. This equivalence is reviewed in many places, see e.g., Ref. [83].
[37] Equivalently, N s(e) is the logarithm of the number of microstates with intensive energy e.
[38] A relation best known as the second law of thermodynamics de = T ds.
[39] An energy probability density that is precisely Gaussian implies that the energy density is a linear function of the inverse temperature β and hence the specific heat is a constant. We elaborate on this point in the Supplemental Information.
[40] Vicky Choi, "Minor-embedding in adiabatic quantum computation: I. The parameter setting problem," Quant. Inf. Proc. 7, 193-209 (2008).
[41] Vicky Choi, "Minor-embedding in adiabatic quantum computation: II. Minor-universal graph design," Quant. Inf. Proc. 10, 343-353 (2011).
[42] The reader is referred to the Supplemental Information for further details.
[43] Details of these instances as well as similar results obtained for other problem classes are given in the Supplemental Information.
[44] The D-Wave processors are known to suffer from additional sources of error such as problem specification errors [58, 59] and freeze-out before the end of the anneal [84] that prevent thermalization to the programmed problem Hamiltonian.
[45] R. Saket, "A PTAS for the Classical Ising Spin Glass Problem on the Chimera Graph Structure," ArXiv e-prints (2013), arXiv:1306.6943 [cs.DS].
[46] By no means however, is it meant that PTAS is able to thermally sample from a Boltzmann distribution of the input problem. In fact, it should be clear that PTAS does not.
[47] C. J. Geyer, "Parallel tempering," in Computing Science and Statistics Proceedings of the 23rd Symposium on the Interface, edited by E. M. Keramidas (American Statistical Association, New York, 1991) p. 156.
[48] Koji Hukushima and Koji Nemoto, "Exchange monte carlo method and application to spin glass simulations," Journal of the Physical Society of Japan 65, 1604-1608 (1996).
[49] Details of our PT implementation can be found in the Supplemental Information.
[50] Similar scaling behavior for other classes of Hamiltonians, specifically 3-regular 3-XORSAT instances and random ±1 instances, is also observed, and we give the results for these instances in the Supplemental Information.
[51] T. Jörg, J. Lukic, E. Marinari, and O. C. Martin, "Strong universality and algebraic scaling in two-dimensional ising spin glasses," Phys. Rev. Lett. 96, 237205 (2006).
FIG. 4. A visualization of the DW2X graph. Operational qubits are shown in green, and inoperational ones are shown in red. Programmable couplers are shown as black lines connecting the qubits.
where |λ_0(s)⟩ and |λ_1(s)⟩ are the instantaneous ground state and first excited state for the single qubit system with eigenvalues −λ(s) and λ(s) respectively. Let us define Λ(s) ≡ |⟨0|λ_0(s)⟩|^2 = 1 − |⟨0|λ_1(s)⟩|^2, so we can rewrite our expression as:

p(s) = Λ(s) tanh(βλ(s)/2) + 1/(1 + e^{βλ(s)})    (31)

We therefore have:

d/ds p(s) = [ Λ'(s)(e^{2βλ(s)} − 1) + βλ'(s)(2Λ(s) − 1) e^{βλ(s)} ] / (1 + e^{βλ(s)})^2    (32)

Note that Λ'(s) > 0, 2Λ − 1 > 0 for all s, and λ'(s) < 0 for s < 0.5. We can therefore ask, is it possible for d/ds p(s) = 0 for s < 0.5? Because of the exponential factors, for large β the first term will dominate the second term, and this will not occur. Let us therefore consider the small β case. If we expand the exponentials, we find that, even in the high temperature limit, d/ds p(s) remains positive. Numerically, we can confirm that d/ds p(s) remains positive. Therefore, we can conclude that p(s) is monotonically increasing and achieves its maximum value at s = 1.

For the Grover problem, we take the interpolating Hamiltonian of Ref. [92], where |φ⟩ is the uniform superposition state and |m⟩ denotes the 'marked' state which is the ground state at s = 1. The spectrum is such that only the instantaneous ground state and first excited state have non-zero weight on the marked state for s < 1. These two states can be written in terms of |φ⟩ and |m⟩, with eigenvalues (1/2)(1 − ∆(s)) and (1/2)(1 + ∆(s)) respectively, where ∆(s) denotes the instantaneous gap. The probability of the final ground state in the instantaneous Gibbs distribution is given by Eq. (37), and it is clear from Eq. (37b) that this expression is positive for all s. Numerically, we can confirm that d/ds p(s) remains positive for s ∈ [0, 1]. Therefore, we can conclude that p(s) is monotonically increasing and achieves its maximum value at s = 1.
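Both monotonicity statements can be probed directly by exact diagonalization. The short Python sketch below does this: the first part only uses Eqs. (29)-(30), for which everything reduces to a single qubit, while the second part assumes the standard Grover-search interpolation H(s) = (1 − s)(1 − |φ⟩⟨φ|) + s(1 − |m⟩⟨m|) of Ref. [92], since the explicit Hamiltonian is not written out above; all function names and parameter choices are illustrative only.

import numpy as np

# Part 1: decoupled qubits, Eqs. (29)-(30). The N-qubit Gibbs weight of |0...0>
# factorizes, so it suffices to diagonalize a single qubit.
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])

def p_decoupled(s, beta, N):
    h = -0.5 * (1 - s) * sx - 0.5 * s * sz
    evals, evecs = np.linalg.eigh(h)
    w = np.exp(-beta * (evals - evals.min()))
    w /= w.sum()                                   # single-qubit Gibbs weights
    overlaps = np.abs(evecs[0, :])**2              # |<0|eigenvector_k>|^2
    return float(overlaps @ w)**N

# Part 2 (assumed Hamiltonian): Grover-type interpolation on n qubits, marked state |0...0>.
def p_grover(s, beta, n=6):
    dim = 2**n
    phi = np.ones(dim) / np.sqrt(dim)
    m = np.zeros(dim); m[0] = 1.0
    H = (1 - s) * (np.eye(dim) - np.outer(phi, phi)) + s * (np.eye(dim) - np.outer(m, m))
    evals, evecs = np.linalg.eigh(H)
    w = np.exp(-beta * (evals - evals.min()))
    w /= w.sum()
    return float(np.abs(evecs.T @ m)**2 @ w)       # Gibbs weight of the marked state

s_grid = np.linspace(0.0, 1.0, 201)
print(all(np.diff([p_decoupled(s, 2.0, 8) for s in s_grid]) >= -1e-12))  # expect True
print(all(np.diff([p_grover(s, 5.0) for s in s_grid]) >= -1e-12))        # expect True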
Achieving quantum supremacy with sparse and noisy commuting quantum computations. Michael J Bremner, Ashley Montanaro, Dan J Shepherd, 10.22331/q-2017-04-25-81Michael J. Bremner, Ashley Montanaro, and Dan J. Shepherd, "Achieving quantum supremacy with sparse and noisy commuting quantum computations," Quantum 1, 8 (2017).
Quantum supremacy for simulating a translation-invariant ising spin model. Xun Gao, Sheng-Tao Wang, L.-M Duan, 10.1103/PhysRevLett.118.040502Phys. Rev. Lett. 11840502Xun Gao, Sheng-Tao Wang, and L.-M. Duan, "Quantum supremacy for simulating a translation-invariant ising spin model," Phys. Rev. Lett. 118, 040502 (2017).
Average-case complexity versus approximate simulation of commuting quantum computations. Michael J Bremner, Ashley Montanaro, Dan J Shepherd, 10.1103/PhysRevLett.117.080501Phys. Rev. Lett. 11780501Michael J. Bremner, Ashley Montanaro, and Dan J. Shepherd, "Average-case complexity versus approximate simulation of commuting quantum computations," Phys. Rev. Lett. 117, 080501 (2016).
Hardness of classically simulating the one-cleanqubit model. Tomoyuki Morimae, Keisuke Fujii, Joseph F Fitzsimons, 10.1103/PhysRevLett.112.130502Phys. Rev. Lett. 112130502Tomoyuki Morimae, Keisuke Fujii, and Joseph F. Fitzsi- mons, "Hardness of classically simulating the one-clean- qubit model," Phys. Rev. Lett. 112, 130502 (2014).
Photonic boson sampling in a tunable circuit. Matthew A Broome, Alessandro Fedrizzi, Justin Saleh Rahimi-Keshari, Scott Dove, Timothy C Aaronson, Andrew G Ralph, White, 10.1126/science.1231440Science. 339Matthew A. Broome, Alessandro Fedrizzi, Saleh Rahimi- Keshari, Justin Dove, Scott Aaronson, Timothy C. Ralph, and Andrew G. White, "Photonic boson sam- pling in a tunable circuit," Science 339, 794-798 (2013).
Boson sampling on a photonic chip. Justin B Spring, Benjamin J Metcalf, Peter C Humphreys, W Steven Kolthammer, Xian-Min Jin, Marco Barbieri, Animesh Datta, Nicholas Thomas-Peter, Nathan K Langford, Dmytro Kundys, James C Gates, Brian J Smith, G R Peter, Ian A Smith, Walmsley, 10.1126/science.1231692Science. 339Justin B. Spring, Benjamin J. Metcalf, Peter C. Humphreys, W. Steven Kolthammer, Xian-Min Jin, Marco Barbieri, Animesh Datta, Nicholas Thomas-Peter, Nathan K. Langford, Dmytro Kundys, James C. Gates, Brian J. Smith, Peter G. R. Smith, and Ian A. Walms- ley, "Boson sampling on a photonic chip," Science 339, 798-801 (2013).
S Boixo, S V Isakov, V N Smelyanskiy, R Babbush, N Ding, Z Jiang, M J Bremner, J M Martinis, H Neven, arXiv:1608.00263Characterizing Quantum Supremacy in Near-Term Devices. ArXiv e-prints. quant-phS. Boixo, S. V. Isakov, V. N. Smelyanskiy, R. Babbush, N. Ding, Z. Jiang, M. J. Bremner, J. M. Martinis, and H. Neven, "Characterizing Quantum Supremacy in Near- Term Devices," ArXiv e-prints (2016), arXiv:1608.00263 [quant-ph].
Quantum annealing: A new method for minimizing multidimensional functions. A B Finnila, M A Gomez, C Sebenik, C Stenson, J D Doll, 10.1016/0009-2614(94)00117-0Chemical Physics Letters. 219A. B. Finnila, M. A. Gomez, C. Sebenik, C. Stenson, and J. D. Doll, "Quantum annealing: A new method for min- imizing multidimensional functions," Chemical Physics Letters 219, 343-348 (1994).
Quantum annealing of a disordered magnet. J Brooke, D Bitko, T F Rosenbaum, G Aeppli, 10.1126/science.284.5415.779Science. 284J. Brooke, D. Bitko, T. F., Rosenbaum, and G. Aeppli, "Quantum annealing of a disordered magnet," Science 284, 779-781 (1999).
Quantum annealing in the transverse Ising model. Tadashi Kadowaki, Hidetoshi Nishimori, 10.1103/PhysRevE.58.5355Phys. Rev. E. 585355Tadashi Kadowaki and Hidetoshi Nishimori, "Quantum annealing in the transverse Ising model," Phys. Rev. E 58, 5355 (1998).
Edward Farhi, Jeffrey Goldstone, Sam Gutmann, Michael Sipser, arXiv:quant-ph/0001106Quantum Computation by Adiabatic Evolution. Edward Farhi, Jeffrey Goldstone, Sam Gutmann, and Michael Sipser, "Quantum Computation by Adiabatic Evolution," arXiv:quant-ph/0001106 (2000).
Theory of quantum annealing of an Ising spin glass. Giuseppe E Santoro, Roman Martoňák, Erio Tosatti, Roberto Car, 10.1126/science.1068774Science. 295Giuseppe E. Santoro, Roman Martoňák, Erio Tosatti, and Roberto Car, "Theory of quantum annealing of an Ising spin glass," Science 295, 2427-2430 (2002).
Size dependence of the minimum excitation gap in the quantum adiabatic algorithm. A P Young, S Knysh, V N Smelyanskiy, 10.1103/PhysRevLett.101.170503Phys. Rev. Lett. 101170503A. P. Young, S. Knysh, and V. N. Smelyanskiy, "Size dependence of the minimum excitation gap in the quan- tum adiabatic algorithm," Phys. Rev. Lett. 101, 170503 (2008).
Firstorder phase transition in the quantum adiabatic algorithm. A P Young, S Knysh, V N Smelyanskiy, 10.1103/PhysRevLett.104.020502Phys. Rev. Lett. 10420502A. P. Young, S. Knysh, and V. N. Smelyanskiy, "First- order phase transition in the quantum adiabatic algo- rithm," Phys. Rev. Lett. 104, 020502 (2010).
Exponential complexity of the quantum adiabatic algorithm for certain satisfiability problems. Itay Hen, A P Young, 10.1103/PhysRevE.84.061152Phys. Rev. E. 8461152Itay Hen and A. P. Young, "Exponential complexity of the quantum adiabatic algorithm for certain satisfiability problems," Phys. Rev. E 84, 061152 (2011).
Performance of the quantum adiabatic algorithm on random instances of two optimization problems on regular hypergraphs. E Farhi, D Gosset, I Hen, A W Sandvik, P Shor, A P Young, F Zamponi, arXiv:1208.3757Phys. Rev. A. 8652334E. Farhi, D. Gosset, I. Hen, A. W. Sandvik, P. Shor, A. P. Young, and F. Zamponi, "Performance of the quantum adiabatic algorithm on random instances of two optimiza- tion problems on regular hypergraphs," Phys. Rev. A 86, 052334 (2012), (arXiv:1208.3757 ).
Defining and detecting quantum speedup. F Troels, Zhihui Rønnow, Joshua Wang, Sergio Job, Sergei V Boixo, David Isakov, John M Wecker, Daniel A Martinis, Matthias Lidar, Troyer, 10.1126/science.1252319Science. 345Troels F. Rønnow, Zhihui Wang, Joshua Job, Sergio Boixo, Sergei V. Isakov, David Wecker, John M. Mar- tinis, Daniel A. Lidar, and Matthias Troyer, "Defining and detecting quantum speedup," Science 345, 420-424 (2014).
Probing for quantum speedup in spin-glass problems with planted solutions. Itay Hen, Joshua Job, Tameem Albash, F Troels, Matthias Rønnow, Daniel A Troyer, Lidar, http:/link.aps.org/doi/10.1103/PhysRevA.92.042325Phys. Rev. A. 9242325Itay Hen, Joshua Job, Tameem Albash, Troels F. Rønnow, Matthias Troyer, and Daniel A. Lidar, "Prob- ing for quantum speedup in spin-glass problems with planted solutions," Phys. Rev. A 92, 042325-(2015).
Experimental signature of programmable quantum annealing. Sergio Boixo, Tameem Albash, Federico M Spedalieri, Nicholas Chancellor, Daniel A Lidar, 10.1038/ncomms3067Nat. Commun. 42067Sergio Boixo, Tameem Albash, Federico M. Spedalieri, Nicholas Chancellor, and Daniel A. Lidar, "Experimen- tal signature of programmable quantum annealing," Nat. Commun. 4, 2067 (2013).
Consistency tests of classical and quantum models for a quantum annealer. Tameem Albash, Walter Vinci, Anurag Mishra, Paul A Warburton, Daniel A Lidar, http:/link.aps.org/doi/10.1103/PhysRevA.91.042314Phys. Rev. A. 9142314Tameem Albash, Walter Vinci, Anurag Mishra, Paul A. Warburton, and Daniel A. Lidar, "Consistency tests of classical and quantum models for a quantum annealer," Phys. Rev. A 91, 042314-(2015).
"Quantum enhanced optimization (QEO)," https://www.iarpa.gov/index.php/research-programs/qeo.
Fabrication process and properties of fully-planarized deep-submicron nb/al-alox/nb josephson junctions for vlsi circuits. S K Tolpygo, V Bolkhovsky, T J Weir, L M Johnson, M A Gouker, W D Oliver, 10.1109/TASC.2014.2374836IEEE Transactions on Applied Superconductivity. 25S. K. Tolpygo, V. Bolkhovsky, T. J. Weir, L. M. John- son, M. A. Gouker, and W. D. Oliver, "Fabrication pro- cess and properties of fully-planarized deep-submicron nb/al-alox/nb josephson junctions for vlsi circuits," IEEE Transactions on Applied Superconductivity 25, 1-12 (2015).
Inductance of circuit structures for mit ll superconductor electronics fabrication process with 8 niobium layers. S K Tolpygo, V Bolkhovsky, T J Weir, C J Galbraith, L M Johnson, M A Gouker, V K Semenov, 10.1109/TASC.2014.2369213IEEE Transactions on Applied Superconductivity. 25S. K. Tolpygo, V. Bolkhovsky, T. J. Weir, C. J. Galbraith, L. M. Johnson, M. A. Gouker, and V. K. Semenov, "In- ductance of circuit structures for mit ll superconductor electronics fabrication process with 8 niobium layers," IEEE Transactions on Applied Superconductivity 25, 1- 5 (2015).
Thermal and residual excited-state population in a 3d transmon qubit. X Y Jin, A Kamal, A P Sears, T Gudmundsen, D Hover, J Miloshi, R Slattery, F Yan, J Yoder, T P Orlando, S Gustavsson, W D Oliver, 10.1103/PhysRevLett.114.240501Phys. Rev. Lett. 114240501X. Y. Jin, A. Kamal, A. P. Sears, T. Gudmundsen, D. Hover, J. Miloshi, R. Slattery, F. Yan, J. Yoder, T. P. Orlando, S. Gustavsson, and W. D. Oliver, "Thermal and residual excited-state population in a 3d transmon qubit," Phys. Rev. Lett. 114, 240501 (2015).
Quantum annealing amid local ruggedness and global frustration. James King, Sheir Yarkoni, Jack Raymond, Isil Ozfidan, Andrew D King, Mayssam Mohammadi Nevisi, Jeremy P Hilton, Catherine C Mcgeoch, arXiv:1701.04579James King, Sheir Yarkoni, Jack Raymond, Isil Ozfi- dan, Andrew D. King, Mayssam Mohammadi Nevisi, Jeremy P. Hilton, and Catherine C. McGeoch, "Quan- tum annealing amid local ruggedness and global frustra- tion," arXiv:1701.04579 (2017).
A scalable control system for a superconducting adiabatic quantum optimization processor. P M W Johnson, Bunyk, Maibaum, A J Tolkacheva, E Berkley, M Chapple, Harris, Johansson, Lanting, Perminov, Ladizinsky, G Oh, Rose, Superconductor Science and Technology. 2365004M W Johnson, P Bunyk, F Maibaum, E Tolkacheva, A J Berkley, E M Chapple, R Harris, J Johansson, T Lant- ing, I Perminov, E Ladizinsky, T Oh, and G Rose, "A scalable control system for a superconducting adiabatic quantum optimization processor," Superconductor Sci- ence and Technology 23, 065004 (2010).
A scalable readout system for a superconducting adiabatic quantum optimization system. A J Berkley, M W Johnson, P Bunyk, Harris, Johansson, Lanting, Ladizinsky, M H S Tolkacheva, G Amin, Rose, Superconductor Science and Technology. 23105014A J Berkley, M W Johnson, P Bunyk, R Harris, J Johansson, T Lanting, E Ladizinsky, E Tolkacheva, M H S Amin, and G Rose, "A scalable readout sys- tem for a superconducting adiabatic quantum optimiza- tion system," Superconductor Science and Technology 23, 105014 (2010).
Experimental investigation of an eight-qubit unit cell in a superconducting optimization processor. R Harris, M W Johnson, T Lanting, A J Berkley, J Johansson, P Bunyk, E Tolkacheva, E Ladizinsky, N Ladizinsky, T Oh, F Cioata, I Perminov, P Spear, C Enderud, C Rich, S Uchaikin, M C Thom, E M Chapple, J Wang, B Wilson, M H S Amin, N Dickson, K Karimi, B Macready, C J S Truncik, G Rose, 10.1103/PhysRevB.82.024511Phys. Rev. B. 8224511R. Harris, M. W. Johnson, T. Lanting, A. J. Berkley, J. Johansson, P. Bunyk, E. Tolkacheva, E. Ladizinsky, N. Ladizinsky, T. Oh, F. Cioata, I. Perminov, P. Spear, C. Enderud, C. Rich, S. Uchaikin, M. C. Thom, E. M. Chapple, J. Wang, B. Wilson, M. H. S. Amin, N. Dickson, K. Karimi, B. Macready, C. J. S. Truncik, and G. Rose, "Experimental investigation of an eight-qubit unit cell in a superconducting optimization processor," Phys. Rev. B 82, 024511 (2010).
. P Bunyk, E M Hoskinson, M W Johnson, E Tolka, P. I Bunyk, E. M. Hoskinson, M. W. Johnson, E. Tolka-
Zero and low temperature behavior of the two-dimensional ±j ising spin glass. C K Thomas, D A Huse, A A Middleton, 10.1103/PhysRevLett.107.047203Phys. Rev. Lett. 10747203C. K. Thomas, D. A. Huse, and A. A. Middleton, "Zero and low temperature behavior of the two-dimensional ±j ising spin glass," Phys. Rev. Lett. 107, 047203 (2011).
Finite-size scaling in two-dimensional ising spinglass models. Francesco Parisen Toldin, Andrea Pelissetto, Ettore Vicari, 10.1103/PhysRevE.84.051116Phys. Rev. E. 8451116Francesco Parisen Toldin, Andrea Pelissetto, and Ettore Vicari, "Finite-size scaling in two-dimensional ising spin- glass models," Phys. Rev. E 84, 051116 (2011).
Universal critical behavior of the two-dimensional ising spin glass. L A Fernandez, E Marinari, V Martin-Mayor, G Parisi, J J Ruiz-Lorenzo, 10.1103/PhysRevB.94.024402Phys. Rev. B. 9424402L. A. Fernandez, E. Marinari, V. Martin-Mayor, G. Parisi, and J. J. Ruiz-Lorenzo, "Universal critical behavior of the two-dimensional ising spin glass," Phys. Rev. B 94, 024402 (2016).
Ising formulations of many NP problems. A Lucas, 10.3389/fphy.2014.00005Front. Phys. 25A. Lucas, "Ising formulations of many NP problems," Front. Phys. 2, 5 (2014).
Optimally stopped optimization. Walter Vinci, Daniel A Lidar, 10.1103/PhysRevApplied.6.054016Phys. Rev. Applied. 654016Walter Vinci and Daniel A. Lidar, "Optimally stopped optimization," Phys. Rev. Applied 6, 054016 (2016).
Evidence for quantum annealing with more than one hundred qubits. Sergio Boixo, F Troels, Sergei V Ronnow, Zhihui Isakov, David Wang, Daniel A Wecker, John M Lidar, Matthias Martinis, Troyer, 10.1038/nphys2900Nat. Phys. 10Sergio Boixo, Troels F. Ronnow, Sergei V. Isakov, Zhihui Wang, David Wecker, Daniel A. Lidar, John M. Martinis, and Matthias Troyer, "Evidence for quantum annealing with more than one hundred qubits," Nat. Phys. 10, 218- 224 (2014).
Unraveling quantum annealers using classical hardness. Victor Martin, -Mayor , Itay Hen, 10.1038/srep15324Scientific Reports. 5Victor Martin-Mayor and Itay Hen, "Unraveling quan- tum annealers using classical hardness," Scientific Re- ports 5, 15324 EP -(2015).
Best-case performance of quantum annealers on native spin-glass benchmarks: How chaos can affect success probabilities. Zheng Zhu, Andrew J Ochoa, Stefan Schnabel, Firas Hamze, Helmut G Katzgraber, 10.1103/PhysRevA.93.012317Phys. Rev. A. 9312317Zheng Zhu, Andrew J. Ochoa, Stefan Schnabel, Firas Hamze, and Helmut G. Katzgraber, "Best-case perfor- mance of quantum annealers on native spin-glass bench- marks: How chaos can affect success probabilities," Phys. Rev. A 93, 012317 (2016).
Searching for quantum speedup in quasistatic quantum annealers. H Mohammad, Amin, 10.1103/PhysRevA.92.052323Phys. Rev. A. 9252323Mohammad H. Amin, "Searching for quantum speedup in quasistatic quantum annealers," Phys. Rev. A 92, 052323 (2015).
Error-corrected quantum annealing with hundreds of qubits. Tameem Kristen L Pudenz, Albash, Lidar, 10.1038/ncomms4243Nat. Commun. 53243Kristen L Pudenz, Tameem Albash, and Daniel A Li- dar, "Error-corrected quantum annealing with hundreds of qubits," Nat. Commun. 5, 3243 (2014).
Quantum annealing correction for random Ising problems. Kristen L Pudenz, Tameem Albash, Daniel A Lidar, http:/link.aps.org/doi/10.1103/PhysRevA.91.042302Phys. Rev. A. 9142302Kristen L. Pudenz, Tameem Albash, and Daniel A. Li- dar, "Quantum annealing correction for random Ising problems," Phys. Rev. A 91, 042302 (2015).
Quantum annealing correction with minor embedding. Walter Vinci, Tameem Albash, Gerardo Paz-Silva, Itay Hen, Daniel A Lidar, http:/link.aps.org/doi/10.1103/PhysRevA.92.042310Phys. Rev. A. 9242310Walter Vinci, Tameem Albash, Gerardo Paz-Silva, Itay Hen, and Daniel A. Lidar, "Quantum annealing correc- tion with minor embedding," Phys. Rev. A 92, 042310- (2015).
Mean field analysis of quantum annealing correction. Shunji Matsuura, Hidetoshi Nishimori, Tameem Albash, Daniel A Lidar, arXiv:1510.07709Shunji Matsuura, Hidetoshi Nishimori, Tameem Albash, and Daniel A. Lidar, "Mean field analysis of quantum annealing correction," arXiv:1510.07709 (2015).
Nested quantum annealing correction. Walter Vinci, Tameem Albash, Lidar, 10.1038/npjqi.2016.17Npj Quantum Information. 216017Walter Vinci, Tameem Albash, and Daniel A Lidar, "Nested quantum annealing correction," Npj Quantum Information 2, 16017 EP -(2016).
Quantumannealing correction at finite temperature: Ferromagnetic p-spin models. Shunji Matsuura, Hidetoshi Nishimori, Walter Vinci, Tameem Albash, Daniel A Lidar, 10.1103/PhysRevA.95.022308Phys. Rev. A. 9522308Shunji Matsuura, Hidetoshi Nishimori, Walter Vinci, Tameem Albash, and Daniel A. Lidar, "Quantum- annealing correction at finite temperature: Ferromag- netic p-spin models," Phys. Rev. A 95, 022308 (2017).
Modernizing quantum annealing using local searches. Nicholas Chancellor, New Journal of Physics. 1923024Nicholas Chancellor, "Modernizing quantum annealing using local searches," New Journal of Physics 19, 023024 (2017).
Modernizing Quantum Annealing II: Genetic Algorithms and Inference. N Chancellor, arXiv:1609.05875ArXiv e-prints. quant-phN. Chancellor, "Modernizing Quantum Annealing II: Ge- netic Algorithms and Inference," ArXiv e-prints (2016), arXiv:1609.05875 [quant-ph].
Boosting quantum annealer performance via sample persistence. Hamed Karimi, Gili Rosenberg, 10.1007/s11128-017-1615-xQuantum Information Processing. 16166Hamed Karimi and Gili Rosenberg, "Boosting quantum annealer performance via sample persistence," Quantum Information Processing 16, 166 (2017).
Effective optimization using sample persistence: A case study on quantum annealers and various Monte Carlo optimization methods. H Karimi, G Rosenberg, H G Katzgraber, arXiv:1706.07826cs.DMH. Karimi, G. Rosenberg, and H. G. Katzgraber, "Effec- tive optimization using sample persistence: A case study on quantum annealers and various Monte Carlo optimiza- tion methods," ArXiv e-prints (2017), arXiv:1706.07826 [cs.DM].
Error-correcting codes for adiabatic quantum computation. S P Jordan, E Farhi, P W Shor, http:/link.aps.org/doi/10.1103/PhysRevA.74.052322Phys. Rev. A. 7452322S. P. Jordan, E. Farhi, and P. W. Shor, "Error-correcting codes for adiabatic quantum computation," Phys. Rev. A 74, 052322 (2006).
Error suppression in hamiltonian-based quantum computation using energy penalties. Adam D Bookatz, Edward Farhi, Leo Zhou, http:/link.aps.org/doi/10.1103/PhysRevA.92.022317Physical Review A. 9222317Adam D. Bookatz, Edward Farhi, and Leo Zhou, "Error suppression in hamiltonian-based quantum computation using energy penalties," Physical Review A 92, 022317- (2015).
Non-commuting two-local hamiltonians for quantum error suppression. Zhang Jiang, Eleanor G Rieffel, 10.1007/s11128-017-1527-9Quantum Information Processing. 1689Zhang Jiang and Eleanor G. Rieffel, "Non-commuting two-local hamiltonians for quantum error suppression," Quantum Information Processing 16, 89 (2017).
Error suppression for hamiltonian-based quantum computation using subsystem codes. Milad Marvian, Daniel A Lidar, 10.1103/PhysRevLett.118.030504Phys. Rev. Lett. 11830504Milad Marvian and Daniel A. Lidar, "Error suppression for hamiltonian-based quantum computation using sub- system codes," Phys. Rev. Lett. 118, 030504 (2017).
Application of quantum annealing to training of deep neural networks. H Steven, Maxwell P Adachi, Henderson, arXiv:1510.06356Steven H. Adachi and Maxwell P. Henderson, "Appli- cation of quantum annealing to training of deep neural networks," arXiv:1510.06356 (2015).
Quantum-assisted learning of graphical models with arbitrary pairwise connectivity. M Benedetti, J Realpe-Gómez, R Biswas, A Perdomo-Ortiz, arXiv:1609.02542ArXiv e-prints. quant-phM. Benedetti, J. Realpe-Gómez, R. Biswas, and A. Perdomo-Ortiz, "Quantum-assisted learning of graph- ical models with arbitrary pairwise connectivity," ArXiv e-prints (2016), arXiv:1609.02542 [quant-ph].
C H Papadimitriou, Computational Complexity. Reading, MassachusettsAddison Wesley LongmanC.H. Papadimitriou, Computational Complexity (Addi- son Wesley Longman, Reading, Massachusetts, 1995).
The complexity of enumeration and reliability problems. Leslie G Valiant, 10.1137/0208032SIAM Journal on Computing. 8Leslie G. Valiant, "The complexity of enumeration and reliability problems," SIAM Journal on Computing 8, 410-421 (1979).
Counting models for 2sat and 3sat formulae. Vilhelm Dahllöf, Peter Jonsson, Magnus Wahlström, 10.1016/j.tcs.2004.10.037Theoretical Computer Science. 332Vilhelm Dahllöf, Peter Jonsson, and Magnus Wahlström, "Counting models for 2sat and 3sat formulae," Theoret- ical Computer Science 332, 265 -291 (2005).
Relaxation vs. adiabatic quantum steady state preparation: which wins?. L Campos Venuti, T Albash, M Marvian, D Lidar, P Zanardi, arXiv:1612.07979ArXiv e-prints. quant-phL. Campos Venuti, T. Albash, M. Marvian, D. Li- dar, and P. Zanardi, "Relaxation vs. adiabatic quantum steady state preparation: which wins?" ArXiv e-prints (2016), arXiv:1612.07979 [quant-ph].
Anderson localization makes adiabatic quantum optimization fail. Boris Altshuler, Hari Krovi, Jrmie Roland, 10.1073/pnas.1002116107Proceedings of the National Academy of Sciences. 107Boris Altshuler, Hari Krovi, and Jrmie Roland, "Ander- son localization makes adiabatic quantum optimization fail," Proceedings of the National Academy of Sciences 107, 12446-12450 (2010).
Zero-temperature quantum annealing bottlenecks in the spin-glass phase. Sergey Knysh, 10.1038/ncomms12370Nature Communications. 712370Sergey Knysh, "Zero-temperature quantum annealing bottlenecks in the spin-glass phase," Nature Communi- cations 7, 12370 EP -(2016).
Microcanonical approach to the simulation of first-order phase transitions. V Martín-Mayor, 10.1103/PhysRevLett.98.137207Phys. Rev. Lett. 98137207V. Martín-Mayor, "Microcanonical approach to the sim- ulation of first-order phase transitions," Phys. Rev. Lett. 98, 137207 (2007).
Searching for quantum speedup in quasistatic quantum annealers. H Mohammad, Amin, 10.1103/PhysRevA.92.052323Phys. Rev. A. 9252323Mohammad H. Amin, "Searching for quantum speedup in quasistatic quantum annealers," Phys. Rev. A 92, 052323 (2015).
See Supplemental Material for additional details about the derivations and simulations, which includes Refs. 86-92See Supplemental Material for additional details about the derivations and simulations, which includes Refs. [86- 92].
Complex zeros in the partition function of the four-dimensional su(2) lattice gauge model. M Falcioni, E Marinari, M L Paciello, G Parisi, B Taglienti, 10.1016/0370-2693(82)91205-9Physics Letters B. 108M. Falcioni, E. Marinari, M.L. Paciello, G. Parisi, and B. Taglienti, "Complex zeros in the partition function of the four-dimensional su(2) lattice gauge model," Physics Letters B 108, 331 -332 (1982).
Optimized monte carlo data analysis. Alan M Ferrenberg, Robert H Swendsen, 10.1103/PhysRevLett.63.1195Phys. Rev. Lett. 63Alan M. Ferrenberg and Robert H. Swendsen, "Opti- mized monte carlo data analysis," Phys. Rev. Lett. 63, 1195-1198 (1989).
Performance of a quantum annealer on range-limited constraint satisfaction problems. Andrew D King, Trevor Lanting, Richard Harris, arXiv:1502.02098Andrew D. King, Trevor Lanting, and Richard Harris, "Performance of a quantum annealer on range-limited constraint satisfaction problems," arXiv:1502.02098 (2015).
From fields to trees. Firas Hamze, Nando De Freitas, UAI. David Maxwell Chickering and Joseph Y. HalpernArlington, VirginiaAUAI PressFiras Hamze and Nando de Freitas, "From fields to trees," in UAI , edited by David Maxwell Chickering and Joseph Y. Halpern (AUAI Press, Arlington, Virginia, 2004) pp. 243-250.
Efficient subgraph-based sampling of isingtype models with frustration. Alex Selby, arXiv:1409.3934Alex Selby, "Efficient subgraph-based sampling of ising- type models with frustration," arXiv:1409.3934 (2014).
Quantum mechanics helps in searching for a needle in a haystack. K Lov, Grover, http:/link.aps.org/doi/10.1103/PhysRevLett.79.325Phys. Rev. Lett. 79Lov K. Grover, "Quantum mechanics helps in searching for a needle in a haystack," Phys. Rev. Lett. 79, 325-328 (1997).
Quantum search by local adiabatic evolution. Jérémie Roland, Nicolas J Cerf, http:/link.aps.org/doi/10.1103/PhysRevA.65.042308Phys. Rev. A. 6542308Jérémie Roland and Nicolas J. Cerf, "Quantum search by local adiabatic evolution," Phys. Rev. A 65, 042308- (2002).
SINGULARITIES OF PLANE COMPLEX CURVES AND LIMITS OF KÄHLER METRICS WITH CONE SINGULARITIES. I: TANGENT CONES
Martin De Borbon
The goal of this article is to provide a construction and classification, in the case of two complex dimensions, of the possible tangent cones at points of limit spaces of non-collapsed sequences of Kähler-Einstein metrics with cone singularities. The proofs and constructions are completely elementary, nevertheless they have an intrinsic beauty. In a few words; tangent cones correspond to spherical metrics with cone singularities in the projective line by means of the Kähler quotient construction with respect to the S 1 -action generated by the Reeb vector field, except in the irregular case C β 1 × C β 2 with β 2
Introduction
Kähler-Einstein (KE) metrics, and more generally constant scalar curvature and extremal Kähler metrics, are canonical metrics on polarized projective varieties and serve as a bridge between differential and algebraic geometry. More recently, after fundamental work of Donaldson [13], much of the theory has been extended to the setting of KE metrics with cone singularities along a divisor -which were previously introduced by Tian in [38]-. A remarkable application is the proof of existence of KE metrics on K-stable Fano manifolds, through the deformation of the cone angle method (see [7]); but besides that, KE metrics with cone singularities (KEcs) have intrinsic interest -as canonical metrics on pairs of projective varieties together with divisors-.
A major achievement in Kähler geometry in the past few years is the proof of a conjecture of Tian -on uniform lower bounds on Bergman kernels-which endows Gromov-Hausdorff limits, of non-collapsed sequences of smooth KE metrics on projective varieties, with an induced algebraic structure -see [15]-. There is a strong interaction between the non-collapsed metric degenerations on the differential geometric side and the so-called log terminal singularities on the algebraic counterpart. The situation is better understood in two complex dimensions; Odaka-Spotti-Sun [32] have shown that the Gromov-Hausdorff compactifications of KE metrics on Del Pezzo surfaces agree with algebraic ones. One would expect then parallel results for KEcs. The new feature is that the curves along which the metrics have cone singularities might now degenerate; and we want to relate the metric degeneration with the theory of singularities of plane complex curves. This paper is a first step along this road and we concentrate in the study of tangent cones at points of limit spaces.
Our main results are Propositions 1, 2 and 3 that follow. Proposition 2 follows immediately from 1; while 3 has already been established in [34] and its proof is included here only for the sake of completeness. The main interest is therefore in 1.
We work on C 2 with standard complex coordinates z, w. Let d ≥ 2 and take L j = {l j (z, w) = 0} for j = 1, . . . , d to be d distinct complex lines through the origin with defining linear equations l j . Let β 1 , . . . , β d ∈ (0, 1) satisfy the Troyanov condition 1.1.

Proposition 1. There is a unique Kähler cone metric g_F on C^2 with apex at 0 such that (1) Its Reeb vector field generates the circle action e^{it}(z, w) = (e^{it/c} z, e^{it/c} w) for some constant c > 0. (2) It has cone angle 2πβ_j along L_j for j = 1, . . . , d.
(3) Its volume form is
Vol(g_F) = |l_1|^{2β_1−2} · · · |l_d|^{2β_d−2} dz dw dz̄ dw̄ / 4.
Item 3 implies that the metric g_F is Ricci-flat and, since it is a Riemannian cone of real dimension four, it must be flat. Item 1 on the Reeb vector field implies that the maps m_λ(z, w) = (λz, λw) for λ > 0 must act by scalings of the metric, so that m_λ^* g_F = λ^{2c} g_F. Condition 3 on the volume form implies that
(1.2) c = 1 − d/2 + (1/2) Σ_{j=1}^d β_j;
note that 0 < c < 1.
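As a small sanity check of (1.2) (the helper function below is ours): for the two coordinate axes with equal cone angles 2πβ the flat product metric on C_β × C_β rescales by λ^{2β} under m_λ, so its weight is c = β, which is exactly what the formula returns for d = 2.

# Minimal check of formula (1.2) on simple angle configurations.
def c_value(betas):
    d = len(betas)
    return 1 - d / 2 + sum(betas) / 2

print(c_value([0.7, 0.7]))       # 0.7 = beta: the product C_beta x C_beta scales with weight beta
print(c_value([0.5, 0.5, 0.5]))  # 0.25: three lines, each with cone angle pi (beta = 1/2)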
We move on to a slightly different situation. Take co-prime integers 1 ≤ p < q. Let d ≥ 2 and C j = {z q = a j w p }, a j ∈ C for j = 1, . . . , d − 2, be distinct complex curves; let β 1 , . . . , β d−2 ∈ (0, 1) and β d−1 , β d ∈ (0, 1] be such that β 1 , . . . , β d−2 , (1/q)β d−1 , (1/p)β d satisfy the Troyanov condition 1.1 if d ≥ 3 and β d−1 /q = β d /p if d = 2.
Proposition 2. There is a unique Kähler cone metric g̃_F on C^2 with apex at 0 such that (1) Its Reeb vector field generates the circle action e^{it}(z, w) = (e^{ipt/c̃} z, e^{iqt/c̃} w) for some constant c̃ > 0.
(2) It has cone angle 2πβ j along C j for j = 1, . . . , d − 2, 2πβ d−1 along {z = 0} and 2πβ d along {w = 0}.
(3) Its volume form is
Vol(g̃_F) = |z^q − a_1 w^p|^{2β_1−2} · · · |z^q − a_{d−2} w^p|^{2β_{d−2}−2} |z|^{2β_{d−1}−2} |w|^{2β_d−2} dz dw dz̄ dw̄ / 4.
Proposition 2 follows from 1, after pulling back by the map (z, w) → (z^q, w^p). Similar comments as those after Proposition 1 apply. For λ > 0, let m̃_λ(z, w) = (λ^p z, λ^q w). Then m̃_λ^* g̃_F = λ^{2c̃} g̃_F with c̃ = pq(1 − d/2 + Σ_{j=1}^{d−2} β_j/2 + (1/2q)β_{d−1} + (1/2p)β_d). It is straightforward to include the case of curves like C = {z^n = w^m} with m and n not necessarily co-prime; simply let m = dp and n = dq with p and q co-prime, so that C = ∪_{j=1}^d {z^q = e^{2πij/d} w^p}. The last result asserts that Propositions 1 and 2 provide a complete list, up to finite coverings, of the Ricci-flat Kähler cone metrics with cone singularities (RFKCcs) in two complex dimensions, except for one case.
Proposition 3. Let g C = dr 2 + r 2 g S be a RFKCcs and assume that its link is diffeomorphic to the 3-sphere; then there is an (essentially unique) holomorphic isometry of ((0, ∞) × S 3 , I, g C ) with one of the following (1) Regular case. A metric g F given by Proposition 1.
(2) Quasi-regular case. A metricg F given by Proposition 2.
(3) Irregular case. C β1 × C β2 for some 0 < β 1 < 1 and 0 < β 2 ≤ 1 with β 1 /β 2 / ∈ Q.
The g S are spherical metrics on the 3-sphere with cone singularities along Hopf circles and (p, q) torus knots. In Section 3 we construct the g S as lifts of spherical metrics with cone singularities on the projective line by means of the Hopf map in the regular case and a Seifert map in the quasi-regular case. Propositions 1, 2 and 3 are then proved in Section 4. Finally, in Section 5, we discuss relations to the theory of singularities of plane complex curves and algebraic geometry.
After writing a first version of this paper, the author read Panov's article Polyhedral Kähler Manifolds [34]. Our results overlap substantially with the content of Section 3 in [34] and we refer to this article for a beautiful geometric exposition. Nevertheless our approach to Proposition 1 is slightly different from Panov's; our proof goes along the lines of the well-known Calabi ansatz and suggests a higher dimensional generalization, replacing the spherical metrics with Kähler-Einstein metrics of positive Ricci curvature.
Acknowledgments. This article contains material from the author's PhD Thesis at Imperial College, funded by the European Research Council Grant 247331 and defended in December 2015. I wish to thank my supervisor, Simon Donaldson, for sharing his ideas with me. I also want to thank Song Sun and Cristiano Spotti for valuable conversations, and the Simons Center for Geometry and Physics for hosting me during their program on Kähler Geometry during October-November 2015.
Background
Most of this section reviews well known material. In Subsection 2.1 we recall the theory of spherical metrics on the projective line that we use. Subsection 2.2 is about Kähler-Einstein metrics with cone singularities along a divisor. Subsection 2.3 collects standard facts on Riemannian cones which are also Kähler. Finally, Subsection 2.4 introduces the concept of Ricci-flat Kähler cone metrics with cone singularities.
2.1. Spherical metrics with cone singularities on CP 1 . Fix 0 < β < 1; on R 2 \ {0} with polar coordinates (ρ, θ) let
(2.1) g β = dρ 2 + β 2 ρ 2 dθ 2 ,
this is the metric of a cone of total angle 2πβ. The apex of the cone is located at 0 and g β is singular at this point. The metric induces a complex structure on the punctured plane, given by an anti-clockwise rotation of angle π/2 with respect to g β ; a basic fact is that we can change coordinates so that this complex structure extends smoothly over the origin. Indeed, setting
(2.2) z = ρ 1/β e iθ we get (2.3) g β = β 2 |z| 2β−2 |dz| 2 .
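This change of coordinates is quick to verify symbolically. The following short sympy sketch (variable names are ours) expresses β^2|z|^{2β−2}|dz|^2 in the (ρ, θ) coordinates of (2.2) and recovers dρ^2 + β^2 ρ^2 dθ^2.

import sympy as sp

# Verification of (2.2)-(2.3): in the coordinate z = rho^(1/beta) e^(i theta) the
# flat cone metric g_beta = drho^2 + beta^2 rho^2 dtheta^2 takes the conformal
# form beta^2 |z|^(2 beta - 2) |dz|^2 (here |z| = rho^(1/beta)).
rho, beta = sp.symbols('rho beta', positive=True)
theta = sp.symbols('theta', real=True)
z = rho**(1/beta) * sp.exp(sp.I*theta)
dz_drho, dz_dtheta = sp.diff(z, rho), sp.diff(z, theta)
factor = beta**2 * (rho**(1/beta))**(2*beta - 2)            # beta^2 |z|^(2 beta - 2)
g_rr = sp.simplify(factor * dz_drho * sp.conjugate(dz_drho))
g_tt = sp.simplify(factor * dz_dtheta * sp.conjugate(dz_dtheta))
g_rt = sp.re(sp.simplify(factor * dz_drho * sp.conjugate(dz_dtheta)))
print(g_rr, g_tt, g_rt)                                     # expect: 1, beta**2*rho**2, 0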
We denote by C β the complex plane endowed with the singular metric 2.3. Consider a Riemann surface Σ, a point p ∈ Σ and a compatible metric g on Σ \ {p}.
Definition 1. ( [40]) We say that g has cone angle 2πβ at p if for any holomorphic coordinate z centered at p we have that g = e 2u |z| 2β−2 |dz| 2 ; with u a smooth function in a punctured neighborhood of the origin which extends continuously over 0.
There is an obvious extension of Definition 1 to the case of finitely many conical points. We are interested in the situation where Σ = CP 1 and g has constant Gaussian curvature 1 outside the singularities. In our state of affairs we can proceed more directly, giving a local model for the metric around the conical points.
From now on we set S n = {(x 1 , . . . , x n+1 ) ∈ R n+1 : n+1 i=1 x 2 i = 1} the n-sphere, thought as a manifold; we write S n (1) for the n-sphere with its inherited round metric of constant sectional curvature 1. Let W be a wedge in S 2 (1) defined by two geodesics that intersect with angle πβ. A local model for a spherical metric with a cone singularity is given by identifying two copies of W isometrically along their boundary. The expression of this metric in geodesic polar coordinates (ρ, θ) centered at the singular point is
(2.4) dρ 2 + β 2 sin 2 (ρ)dθ 2 .
If we set η = (tan(ρ/2)) 1/β e iθ , our model metric writes as
(2.5) 4β^2 |η|^{2β−2} / (1 + |η|^{2β})^2 |dη|^2.
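One can also confirm symbolically that (2.5) has Gaussian curvature 1 away from the conical point, using the formula K = −e^{−2φ}∆φ for a conformal metric e^{2φ}|dη|^2, recalled later in this subsection; the sympy sketch below (our notation) writes φ as a radial function of r = |η|.

import sympy as sp

# Check that the model metric (2.5) is spherical away from the conical point.
r, beta = sp.symbols('r beta', positive=True)
phi = sp.log(2*beta) + (beta - 1)*sp.log(r) - sp.log(1 + r**(2*beta))
lap_phi = sp.diff(phi, r, 2) + sp.diff(phi, r)/r            # flat Laplacian on radial functions
K = -sp.exp(-2*phi)*lap_phi
print(sp.simplify(K))                                        # expect 1
print(sp.N(K.subs({beta: sp.Rational(3, 4), r: 2})))         # numeric spot check: 1.0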
Let p 1 , . . . , p d be d distinct points in S 2 and let β j ∈ (0, 1) for j = 1, . . . , d. We say that g is a spherical metric on S 2 with cone singularities of angle 2πβ j at the points p j if g is locally isometric to S 2 (1) in the complement of the d points and around each point p j we can find polar coordinates in which g agrees with 2.4 with β = β j . It follows from what we have said that any such a metric g endows S 2 with the complex structure of the projective line with d marked points which record the cone singularities. The correspondence which associates to a spherical metric on S 2 a configuration of points in the projective line is the key to the classification of the former, as Theorem 1 below shows. Starting from the complex point of view we have the following:

Definition 2. Let L 1 , . . . , L d ∈ CP 1 be d distinct points and β j ∈ (0, 1) for j = 1, . . . , d. We say that g is a compatible spherical metric on CP 1 with cone singularities of angle 2πβ j at the points L j if g is a compatible metric on CP 1 \ {L 1 , . . . , L d } of constant Gaussian curvature equal to 1 and around each singular point L j we can find a complex coordinate η centered at the point in which g is given by 2.5 with β = β j .

Remark 1. It is equivalent to say that g has cone angle 2πβ j at the points L j , in the sense of Definition 1, and constant Gaussian curvature 1 on CP 1 \ {L 1 , . . . , L d }. This equivalence is a consequence of the following local regularity statement: If g is a compatible metric on a punctured disc D \ {0} ⊂ C of constant Gaussian curvature 1 and cone angle 2πβ at 0; then there is a holomorphic change of coordinates around the origin in which g agrees with 2.5.

Example 1. The simplest example is when d = 2, by means of a Möbius map we can assume that the cone singularities are located at 0 and ∞. The expression 2.5 globally defines a spherical metric with cone angle 2πβ at the given points, this space is also known as the 'rugby ball'. It was shown by Troyanov [39] that 2.5 is the only compatible spherical metric with cone angle 2πβ at 0 and ∞; a consequence of his work is that there are no such metrics with two cone singularities and different cone angle, in particular there can't be a single conical point.
Example 2. We can construct a spherical metric g with three cone singularities of angles 2πβ 1 , 2πβ 2 and 2πβ 3 by doubling a spherical triangle with interior angles πβ 1 , πβ 2 and πβ 3 . It follows from elementary spherical trigonometry that such a triangle T exists and is unique up to isometry if and only if the following two conditions hold:
• (2.6) Σ_{j=1}^3 β_j > 1. Indeed, the area of T is equal to π(Σ_{j=1}^3 β_j − 1).
• (2.7) 1 − β_i < Σ_{j≠i} (1 − β_j) for i = 1, 2, 3.
This is the triangle inequality applied to the polar of T. In complex coordinates the metric g writes as g = e^{2u}|dz|^2 where u is a real function of the complex variable z and, by means of a Möbius map, we can assume that the cone singularities are located at 0, 1 and ∞. The metric g has an obvious symmetry given by switching the two copies of T, which means that u is invariant under the map z → z̄ and is determined by its restriction to the upper half plane. By means of stereographic projection we can think of the triangle T as lying on the complex plane. Let w = Φ(z) be a Riemann mapping from the upper half plane to T, it is then clear that g is the pullback of the standard round metric 4/(1 + |w|^2)^2 |dw|^2 by Φ.
(2.8) z(1 − z)w'' + (c − (a + b + 1)z)w' − abw = 0 with β_1 = 1 − c, β_2 = a − b and β_3 = c − a − b.
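In practice conditions (2.6) and (2.7) are immediate to test; the following minimal Python check (the function name is ours) does so for a given triple of angles.

# A small check of the existence conditions (2.6)-(2.7) for a spherical triangle
# with interior angles pi*beta_1, pi*beta_2, pi*beta_3.
def spherical_triangle_exists(b1, b2, b3):
    betas = (b1, b2, b3)
    area_positive = sum(betas) > 1                                        # condition (2.6)
    polar_triangle_ineq = all(
        1 - betas[i] < sum(1 - betas[j] for j in range(3) if j != i)      # condition (2.7)
        for i in range(3)
    )
    return area_positive and polar_triangle_ineq

print(spherical_triangle_exists(0.9, 0.9, 0.9))   # True
print(spherical_triangle_exists(0.6, 0.8, 0.8))   # False: angles (2*beta - 1, beta, beta) violate (2.7)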
Example 3. More generally we can consider a spherical convex polygon P with d edges and interior angles πβ 1 , . . . , πβ d ; double it to obtain a spherical metric on CP 1 with cone singularities at some points L 1 , . . . , L d . These points are fixed by the symmetry that switches the two copies of P and this implies that, up to a Möbius map, we can assume that the points L 1 , . . . , L d lie on the real axis. Same as before, the metric in complex coordinates is given by the pullback of the spherical metric by a Riemann mapping from the upper half plane to the polygon. When d ≥ 4 most spherical metrics are not doublings of spherical polygons.
It is a fact that every spherical metric with cone singularities on S 2 is isometric to the boundary of a convex polytope inside S 3 (1), uniquely determined up to isometries of the ambient space; this includes the doubles of spherical polygons as degenerate cases where all the vertices of the polytope lie on a totally geodesic 2-sphere. Assume that d ≥ 3; by means of a triangulation and formulas 2.6 and 2.7 it is straightforward to show that a necessary condition for the existence of such a metric is that the Troyanov condition 1.1 holds.
Recall that c denotes the number given by 1.2, so that 2c = 2 − d + Σ_{j=1}^d β_j. By means of a triangulation and the formula for the area of a spherical triangle, it is easy to show that the total area of a spherical metric is given by 4πc. In algebro-geometric terms, the Troyanov condition is equivalent to saying that the pair (CP 1 , Σ_{j=1}^d (1 − β_j)L_j) is log-K-polystable (see [26]) and the number 2c is the degree of the R-divisor −(K_{CP 1} + Σ_{j=1}^d (1 − β_j)L_j).
The main result we want to recall is the following:

Theorem 1. (Troyanov [40], Luo-Tian [29]) Assume that d ≥ 3, let L 1 , . . . , L d be d distinct points in CP 1 and let β j ∈ (0, 1) for j = 1, . . . , d. If the Troyanov condition 1.1 holds, then there is a unique compatible spherical metric g on CP 1 with cone singularities of angle 2πβ j at the points L j for 1 ≤ j ≤ d.

Remark 2. It is an easy consequence of the uniqueness part, that the set of orientation preserving isometries of g agrees with the set of Möbius maps F, which preserve the set {L 1 , . . . , L d } and such that F(L i ) = L j only if β i = β j .
Assume that L j = {z 1 = a j z 2 } with a j ∈ C for j = 1, . . . , d − 1 and L d = {z 2 = 0}. Set ξ = z 1 /z 2 , then g = e^{2φ}|dξ|^2 with φ a function of ξ. Recall that the Gaussian curvature of g is given by K_g = −e^{−2φ} ∆φ, where ∆ = 4∂^2/∂ξ∂ξ̄. Then Theorem 1 is equivalent to the following statement: Let a 1 , . . . , a d−1 ∈ C and β 1 , . . . , β d ∈ (0, 1) satisfy the Troyanov condition 1.1; then there exists a unique function φ such that
• Solves the Liouville equation ∆φ = −e^{2φ} in C \ {a 1 , . . . , a d−1 }.
• u = φ − Σ_{j=1}^{d−1} (β_j − 1) log |ξ − a_j| is a continuous function in C, and φ + (β_d + 1) log |ξ| is continuous at ∞.

Let us fix β 1 , . . . , β d ∈ (0, 1) satisfying the Troyanov condition 1.1. Set P̃ d = P̃ d (β 1 , . . . , β d ) to be the space of all boundaries of labeled d-vertex convex polytopes in the round 3-sphere with total angle of 2πβ j at d distinct vertices, modulo the ambient isometries; 1.1 ensures that P̃ d is not empty. The space P̃ d is endowed with the Hausdorff topology. Let M̃ d be the space of d distinct ordered points in CP 1 modulo the action of Möbius transformations, this is a complex manifold of dimension d − 3. Each element of P̃ d represents a spherical metric on CP 1 with cone angle 2πβ j at d distinct points. There is a natural map Π : P̃ d → M̃ d obtained by recording the complex structure given by the metric. It is shown in [29] that Π is a homeomorphism.
Consider the case when β 1 = β 2 = . . . = β d = β. Denote by P d and M d the quotients of P̃ d and M̃ d by the permutation group on d elements, which corresponds to forgetting the labels. We have an induced homeomorphism Π : P d → M d . The Hausdorff topology gives a natural compactification of P d , similarly the space M d has a natural GIT compactification; it is then natural to ask whether Π extends as a homeomorphism between these. A useful fact, established in [29], is that the Hausdorff limit of a sequence in P d is the boundary of a spherical convex polytope with at most d vertices. We look at the simplest non-trivial case when d = 4; the Troyanov condition 1.1 is then equivalent to 1/2 < β < 1. The space M 4 of four unordered points on the Riemann sphere is isomorphic to C and the GIT compactification is isomorphic to the projective line, the extra point added represents the configuration of two points counted with multiplicity two -this is the unique polystable point-. On the other hand if two cone singularities of angle 2πβ collide, what remains is a single cone singularity of angle 2πγ with γ = 2β − 1, see Figure 1 and [31]. Note that there is no spherical triangle with angles πγ, πβ, πβ, since 2.7 would then imply that
1 − γ = 2 − 2β < (1 − β) + (1 − β). We conclude that if two of the vertices collide the other two must collide too; the Hausdorff compactification is obtained by adding a single point, represented by the 'rugby ball' with two cone singularities of angle 2πγ, which corresponds to the polystable configuration of two points in the projective line with multiplicity two.
2.2. Kähler-Einstein metrics with cone singularities along a divisor (KEcs). We are concerned with metrics which are modeled, in transverse directions to a smooth divisor, by g β . To begin with we take the product C β × C n−1 ; if (z 1 , . . . , z n ) are standard complex coordinates on C n what we get is the model metric
(2.9) g_(β) = β^2 |z_1|^{2β−2} |dz_1|^2 + Σ_{j=2}^n |dz_j|^2,
with a singularity along D = {z 1 = 0}. Set {v 1 , . . . , v n } to be the vectors
(2.10) v_1 = |z_1|^{1−β} ∂/∂z_1,   v_j = ∂/∂z_j for j = 2, . . . , n.
Note that, with respect to g (β) , these vectors are orthogonal and their length is constant. We move on and consider the situation of a complex manifold X of complex dimension n and a smooth divisor D ⊂ X. Let g be a smooth Kähler metric on X \ D and let p ∈ D. Take (z 1 , . . . , z n ) to be complex coordinates centered at p such that D = {z 1 = 0}. In the complement of D we have smooth functions g ij given by g ij = g(v i , v j ). Following Donaldson [13] we give a definition of a Kähler metric with cone singularities which is well suited for the development of a Fredholm theory linearizing the KE equation:
Definition 3. We say that g has cone angle 2πβ along D if, for every p ∈ D and holomorphic coordinates as above, the functions g ij admit a Hölder continuous extension to D. We also require the matrix (g ij (p)) to be positive definite and that g 1j = 0 when j ≥ 2 and z 1 = 0.
It is straightforward to check that this definition is independent of the holomorphic chart z 1 , . . . , z n . There is a Kähler potential φ ∈ C 2,α,β (see [13]) for g around points of D. It can be shown that the vanishing condition, g 1j = 0 for j ≥ 2 at z 1 = 0, is a consequence of the other conditions; this is related to the behavior of the Green's function for the Laplacian of g (β) -see [13]-and, more geometrically, to the fact that g (β) has non-trivial holonomy along simple loops that go around {z 1 = 0}. The tangent cone of g at points of D is C β × C n−1 and its Kähler form defines a co-homology class in X.
There are two types of coordinates we can consider around D: The first is given by holomorphic coordinates z 1 , . . . , z n in which D = {z 1 = 0} as before, in the second one we replace the coordinate z 1 with ρe iθ , with ρ = |z 1 | β and e iθ = arg(z 1 ), and leave z 2 , . . . , z n unchanged; we refer to the later as cone coordinates. In other words, there are two relevant differential structures on X in our situation: One is given by the complex manifold structure we started with, the other is given by declaring the cone coordinates to be smooth. The two structures are clearly equivalent, by a map modeled on (ρe iθ , z 2 , . . . , z n ) → (ρ 1/β e iθ , z 2 , . . . , z n ) in a neighborhood of D. Note that the notion of a function being Hölder continuous (without specifying the exponent) is independent of the coordinates that we use.
It is easy to come up with examples of metrics which satisfy Definition 3. Indeed, let F be a smooth positive function and let η be a smooth Kähler form, both defined on a domain in C n which contains the origin. Consider the (1, 1) form
(2.11) ω = η + i∂∂(F |z 1 | 2β ).
Straightforward calculation shows that, in a small neighborhood of 0, ω defines a Kähler metric with cone angle 2πβ along D = {z 1 = 0}. More globally; if η is a Kähler form on a compact complex manifold X, D ⊂ X is a smooth divisor with a defining section s ∈ H 0 ([D]), ε > 0 is sufficiently small and h is a Hermitian metric on [D]. Then
ω = η + iε ∂∂|s|^{2β}_h
defines a Kähler metric on X with cone angle 2πβ along D in the same co-homology class as η.
We are mainly interested in Kähler-Einstein metrics with cone angle 2πβ along D (KEcs). These are metrics with cone singularities, as in Definition 3, such that the Ricci tensor is a constant multiple of the metric,
(2.12) Ric(g KE ) = λg KE ,
in the complement of D. From now on we assume that X is compact; among the many results in this area we want to recall the following ones:
• Existence Theory ( [3], [21], [18] [41], [10]). The result we want to refer to says that KEcs are 'polyhomogeneous'. Let p ∈ D and (z 1 , . . . , z n ) holomorphic coordinates centered at p in which D = {z 1 = 0}. We write z 1 = ρ 1/β e iθ and denote by y = (z 2 , . . . , z n ) the other coordinate functions. Let g KE be a Kähler-Einstein metric on X with cone angle 2πβ along D and β ∈ (1/2, 1), write ω KE for the associated Kähler form. The regularity theorem says that for every p ∈ D we can find holomorphic coordinates (z 1 , . . . , z n ) as above such that ω KE = i∂∂φ, with (2.13) φ = a 0 (y) + (a 01 (y) cos(θ) + a 10 (y) sin(θ))ρ 1/β + a 2 (y)ρ 2 + O(ρ 2+ ).
Where a 0 , a 01 , a 10 , a 2 are smooth functions of y and = (β) > 0. When β ∈ (0, 1/2] the same statement holds if we replace 1/β with 2 in the expansion 2.13.
In a different direction, there are results -see [10]-which guarantee that weak KEcs are indeed metrics with cone singularities in a Hölder sense, as in Definition 3.
• Chern-Weil formulae ( [35], [2], [28]). As shown in the paper of Song-Wang [35], the polyhomogeneous expansion implies that the norm of the Riemann curvature tensor of a KE metric with cone angle 2πβ is bounded by ρ 1/β−2 . The energy of such a metric g is defined to be
E(g) = 1 8π 2 X |Rm(g)| 2 = 1 8π 2 lim →0 X\U |Rm(g)| 2 ,
where U is a tubular neighborhood of D of radius , Rm(g) denotes the Riemann curvature tensor of g and we integrate using the volume form defined by g. It follows that E(g) is finite by comparison with the integral
1 0 ρ 2/β−3 dρ < ∞.
There is a topological formula for the energy which can be compared with the Chern-Weil formulae in [24] for connections with cone singularities. This formula expresses the energy of a KE metric of cone angle 2πβ along D in terms of c 1 (X), c 2 (X), β, c 1 ([D]) and the cohomology class of the Kähler form. When the complex dimension of X is equal to two, the formula reduces to
(2.14) E(g KE ) = χ(X) + (β − 1)χ(D).
• Compactness Theorem ( [9]). Let X i be a sequence of smooth Fano manifolds with a fixed Hilbert polynomial and
D i ⊂ X i smooth divisors with D i ∈ |λK −1 Xi | for some fixed rational number λ ≥ 1 . Fix 1 − λ −1 < β < 1. Assume that there exist KE metrics g i on X i with cone angle 2πβ along D i , we normalize so that Ric(g i ) = µg i , with µ = 1 − (1 − β)λ > 0.
(This normalization condition on the metrics g i allow us to think of their respective Kähler forms as the curvatures of correponding (singular) Hermitian metrics on K −1 X ). Approximating the metrics g i by smooth metrics with a uniform lower bound on the Ricci curvature and a uniform upper bound on the diameter (see [8]) and appealing to the standard Gromov's compactness theorem; shows that there is, taking a subsequence if necessary, a Gromov-Hausdorff limit of the sequence g i . The main result is then:
Theorem 2. (Chen-Donaldson-Sun [9]) There is a Q-Fano variety W and a Weil divisor ∆ ⊂ W such that: -The pair (W, (1 − β)∆) is KLT (Kawamata log terminal).
-There is a weak conical KE metric for the triple (W, ∆, β) which induces a distance d on W ; and such that (W, d) is isometric to the Gromov-Hausdorff limit of (X i , g i ). -There is m ∈ N with the property that, up to a subsequence, we have embeddings T i : X i → CP N and T : W → CP N defined by the complete linear systems H 0 (−mK Xi ) and H 0 (−mK W ) such that T i (X i ) converges to T (W ) as algebraic varieties and T i (D i ) → T (∆) as algebraic cycles.
We won't spell the algebraic geometry words necessary to explain what a KLT pair is, we limit ourselves to say a couple of things in the case of two complex dimensions: (1) The surface W has only finitely many singularities of orbifold type; ∆ is union of irreducible curves counted with multiplicity. (2) Let p be a point in the smooth locus of W which is a singular point in a component of multiplicity 1 of the curve ∆, in coordinates centered at p write ∆ = {f = 0} for a defining function f with an isolated singularity at 0; then |f | 2β−2 is locally integrable. Similarly, we don't need to write in detail the definition of a weak conical Kähler-Einstein metric -see [16]-but we just say that in the complement of ∆ it is a smooth orbifold Kähler-Einstein metric. At points which belong to the smooth locus of multiplicity 1 components of ∆ the metric cone singularities, in the sense of Definition 3, of cone angle 2πβ (Theorem 2 in [9]). On the other hand, at smooth points of ∆ of multiplicity k the metric has cone angle
(2.15) γ = kβ + 1 − k,
in the sense that the tangent cone at the point is C γ × C (Proposition 13 in [9]). It is perhaps better to write 2.15 in the form 1 − γ = k(1 − β); the situation is modeled, in a transverse direction to ∆, by k cone singularities colliding -see Figure 1 for the case k = 2-. The goal of this article is to construct and classify the possible tangent cones at singular points of ∆. We concentrate at smooth points of W , the general case follows by taking finite coverings. For our purposes we can restrict Propositions 1, 2 and 3 to the situation when β i = k i β + 1 − k i for some integers k i ; but it is unnatural to add this hypothesis to the statements of our results. [36]. Let (S, g S ) be a compact Riemannian manifold of real dimension 2n − 1. A Riemannian cone with link (S, g S ) consists of the space C = (0, ∞) × S endowed with the metric g C = dr 2 + r 2 g S , r is the coordinate in the (0, ∞) factor and is then characterized as measuring the intrinsic distance to the apex of the cone; more generally, there is the notion of a metric cone -see [5]-. We are particularly interested when Ric(g C ) ≡ 0, which is equivalent to Ric(g S ) = 2(n−1)g S . These Ricci-flat cones arise naturally -by means of Bishop-Gromov volume monotonicity theoremas tangent cones at isolated singularities of limit spaces of non-collapsed sequences of Riemannian manifolds with a lower bound on the Ricci curvature -see [6]-.
Kähler cone metrics. A basic reference for this topic is Sparks' survey
A Kähler cone is a Riemannian cone for which there is a parallel complex structure I, which makes C into an n-dimensional complex manifold. The function r 2 is a Kähler potential for g C , in the sense that its Kähler form is
ω C = i 2 ∂∂r 2 .
The Reeb vector field is defined as ξ = I r ∂ ∂r and its flow acts on the cone by holomorphic isometries.
We restrict our attention to Ricci-flat Kähler cones (RFKC), that is Kähler cones with Ric(g C ) = 0. There is a division RFKC into three types:
(1) Regular. The flow of ξ generates a free S 1 -action for which the function r 2 is a moment map. The Kähler quotient of (C, g C ) by this S 1 -action is an (n−1)-dimensional KE Fano manifold, this process can be reverted by means of the so-called Calabi ansatz. (2) Quasi-regular. The flow of ξ generates a locally-free -but not free-S 1 -action. Same as above, the Kähler quotient is a KE Fano orbifold. (3) Irregular. There is at least one non-closed orbit. The closure of the one parameter group generated by ξ is a k-dimensional torus, with k ≥ 2, which acts on the cone by holomorphic isometries. Let (Z, g KE ) be a normal complex variety with a weak KE metric and p ∈ Z an isolated singular point. Under suitable circumstances -for example when (Z, g KE ) is the Gromov-Hausdorff limit of a non-collapsed sequence of KE metrics on smooth projective varieties, see [14]-there is a unique tangent cone of g KE at p, this is a Ricci-flat Kähler metric cone (C, g C ). The space C is an affine algebraic variety, it is the Spec of the ring of holomorphic functions on C of polynomial growth with respect to g C . Alternatively, C can also be described in terms of a filtration on (Z, O p ) -the local ring of regular functions of Z at p-induced by g KE . It asked in [14] whether it is possible to determine (C, g C ) only in terms of (Z, O p ), and to relate this to a stability condition for the singularity. There has been recent progress in the case when a neighborhood of p is biholomorphic to a neighborhood of the apex of a regular RFKC, see [19].
2.4.
Ricci-flat Kähler cone metrics with cone singularities (RFKCcs). The notion of a flat Kähler metric with cone singularities is defined by means of the local model g (β) Definition 4. Let D ⊂ X be a smooth divisor in a complex manifold X. We say that g is a flat Kähler metric on X with cone angle 2πβ along D; if for every point in the complement of D we can find holomorphic complex coordinates in which g agrees with the stantard euclidean metric and for every p ∈ D there are holomorphic coordinates (z 1 , . . . , z n ) centered at p in which D = {z 1 = 0} and g agrees with g (β) .
Example 4. If Φ is any biholomorphism of C 2 then Φ * g (β) is clearly a flat Kähler metric with cone angle 2πβ along Φ −1 ({0} × C). So, if Φ(z, w) = (z − w 2 , w) then Φ * g (β) has cone angle 2πβ along the parabola z = w 2 .
It should be possible to show, using the polyhomogeneus expansion mentioned in Subsection 2.2, that if g is a Kähler metric on X with cone angle 2πβ along D -as in Definition 3-which is flat in the complement of D; then it is a flat Kähler metric according to Definition 4. Nevertheless, we don't need to use this result.
It is straightforward to combine the notions of Ricci-flat Kähler cone (RFKC) and Kähler metric with cone singularities to get the following Definition 5. Let D ⊂ X be a smooth divisor in a complex manifold X. We say that g is a RFKCcs on X if it is a RFKC on the complement of D and it has cone singularities, as in Definition 3, along D.
When the complex dimension is 2, a RFKCcs g is necessarily flat in the complement of D and it induces a metric of positive constant Gaussian curvature on its transversely Kähler foliation -see [36]-; Remark 1 implies that this is a spherical metric with cone singularities and therefore g is flat as in Definition 4.
We are mainly interested in the case when X = C 2 \ {0} and D is a bunch of complex lines which go through the origin and curves of the form {z m = aw n }; we allow diferent cone angles at the different components of D. The apex of the cone is at 0 and we say that g is a RFKCcs on C 2 -rather than on
C 2 \ {0}-.
Example 5. The product C β1 × C β2 provides an example of a RFKCcs C 2 with cone angle 2πβ 1 along {z 1 = 0} and 2πβ 2 along {z 2 = 0}. This includes g (β) as a particular case when β 2 = 1.
According to Panov [34], a polyhedral Kähler (PK) manifold is a polyhedral manifold whose holonomy is conjugate to a subgroup of U (n) and every co-dimension 2 face with cone angle 2πk, k ≥ 2, has a holomorphic direction. If the cone angle at every co-dimension 2 face is less than 2π, then the PK metric is said to be non-negatively curved ; we restrict to this case. When the PK manifold is also a metric cone, then it is called a PK cone. It is shown in [34] that the complex structure defined on the complement of the co-dimension 2 faces of a PK cone extends; and it defines a RFKCcs on C 2 . Conversely, any RFKCcs whose link is diffeomorphic to the 3-sphere is a PK cone.
Spherical metrics with cone singularities on the 3-sphere
Let us first describe a local model for a spherical metric in three real dimensions with cone singularities along a codimension two submanifold. Write R 4 = R 2 × R 2 and take polar coordinates (r 1 , θ 1 ), (r 2 , θ 2 ) on each factor. Consider the product of a standard cone of total angle 2πβ with an Euclidean plane
(3.1) g (β) = dr 2 1 + β 2 r 2 1 dθ 2 1 + dr 2 2 + r 2 2 dθ 2 2 .
We want to write g (β) as a Riemannian cone; it is a general fact that the product of two metric cones is a metric cone. In our case this amounts to check that, if we define r ∈ (0, ∞) and ρ ∈ (0, π/2) by r 1 = r sin ρ, r 2 = r cos ρ;
then g (β) = dr 2 + r 2 g (β) , where (3.2) g (β) = dρ 2 + β 2 sin 2 (ρ)dθ 2 1 + cos 2 (ρ)dθ 2 2 .
We think of g (β) as a metric on the 3-sphere with a cone singularity of angle 2πβ transverse to the circle given by the intersection of {0} × R 2 with S 3 . It is now straightforward to state the following Definition 6. Let S be a closed 3-manifold and let L ⊂ S be a smooth closed submanifold of codimension two, so that L = L 1 ∪ . . . ∪ L d is a disjoint union of embedded circles L j . Take β j ∈ (0, 1) for j = 1, . . . , d. We say that g is a spherical metric on S with cone singularities of angle 2πβ j along the L j if g is locally isometric to the round sphere of radius 1 in the complement of L and around each point of L j there is a neighborhood in which g agrees with g (βj )
It shouldn't be hard to argue that if S admits such a metric, then S must be diffeomorphic to a spherical space form.
Example 6. As above, we consider R 4 = R 2 × R 2 with polar coordinates in each factor. The product of two cones of total angles 2πβ 1 and 2πβ 2 is given by
dr 2 1 + β 2 1 r 2 1 dθ 2 1 + dr 2 2 + β 2 2 r 2 2 dθ 2 2 .
Let r ∈ (0, ∞) and ρ ∈ (0, π/2) be given by by r 1 = r sin ρ and r 2 = r cos ρ; so that the product of the cones writes dr 2 + r 2 g, where
(3.3) g = dρ 2 + β 2 1 sin 2 (ρ)dθ 2 1 + β 2 2 cos 2 (ρ)dθ 2 2 .
It is easy to check that g defines a spherical metric on the 3-sphere with cone singularities of angles 2πβ 1 and 2πβ 2 along the Hopf link L 1 ∪L 2 given by the intersection of the unit sphere in R 4 with the real planes {0}×R 2 and R 2 × {0}. Nevertheless -unless β 2 = 1-there is no neighborhood of L 1 isometric to a neighborhood of the singular circle in the model g (β1) , neither for L 2 .
3.1. Hopf bundle. Let S 3 = {|z 1 | 2 + |z 2 | 2 = 1} ⊂ C 2 and consider the Hopf map H :
S 3 → CP 1 , H(z 1 , z 2 ) = [z 1 : z 2 ]
. This is an S 1 -bundle with respect to the circle action e it (z 1 , z 2 ) = (e it z 1 , e it z 2 ). The contraction of the euclidean metric with the derivative of the S 1 -action gives a 1-form on S 3 , referred as the Hopf connection α H . Denote by g F S the Fubini-Study metric on the projective line. By means of stereographic projection (CP 1 , g F S ) is canonically identified with the round sphere of radius 1/2 and the Hopf map with
H(z 1 , z 2 ) = (z 1 z 2 , |z 2 | 2 − |z 1 | 2 2 ) ∈ S 2 (1/2) ⊂ R 3 .
It is straightforward to check that the round metric on the 3-sphere is given by
(3.4) g S 3 (1) = H * g F S + α 2 H . Moreover dα = H * ( 1 2 K F S dV F S ); where K F S ≡ 4
is the Gaussian curvature of the Fubiny-Study metric and dV F S is its area form.
Let d ≥ 2, L = L 1 ∪ . . . ∪ L d be d distinct complex lines going through the origin in C 2 and let β 1 , . . . , β d ∈ (0, 1) satisfy the Troyanov condition 1.1 (0 < β 1 = β 2 < 1 if d = 2). Denote by g the unique compatible metric on CP 1 of constant Gaussian curvature 4 and cone angle 2πβ j at the points L j , note that this is 1/4 times the spherical metric we considered in Subsection 2.1. We shall lift the metric g to a spherical metric on the 3-sphere by means of a suitable connection on the Hopf bundle, in a way analogous to 3.4. We write K g for the Gaussian curvature of g -which is identically 4-and dV g for its area form. The total area of g is πc and we write this as a Gauss-Bonnet integral
(3.5) 1 2π CP 1 K g dV g = 2c.
Claim 1. There is a connection α, unique up to gauge equivalence, such that:
(1) It has curvature dα = (1/2c)H * (K g dV g ).
(2) If p ∈ CP 1 is a point in L and γ is a loop that shrinks to p as → 0, then the holonomy of α along γ gets trivial as → 0.
We think of α as a 1-form on S 3 singular along L. Given a smooth map f : S 2 → S 1 , it defines a gauge transformationf : S 3 → S 3 ,f (p) = f (H(p)) · p; this provides an identification of the group of gauge transformations of the Hopf bundle with the set of maps from the 2-sphere to S 1 . Two connections which differ by the pull-back of an exact 1-form on the base are gauge equivalent. The uniqueness statement in the claim follows from the fact that the first de Rham co-homology group of the punctured 2-sphere is generated by simple loops which go around the points L j , j = 1, . . . , d. We prove Claim 1 by writing α explicitly in terms of g; before doing this we recall the standard trivializations
(3.6) S 3 \ {z 2 = 0} ∼ = C × S 1 , given by (z 1 , z 2 ) → ξ = z 1 z 2 , e it = arg(z 2 ) ; and (3.7) S 3 \ {z 1 = 0} ∼ = C × S 1 , given by (z 1 , z 2 ) → η = z 2 z 1 , e is = arg(z 1 ) .
These are related via
(3.8) η = 1/ξ, e is = arg(ξ)e it .
It is easy to write their inverses as
(ξ, e it ) → z 1 = ξ 1 + |ξ| 2 e it , z 2 = 1 1 + |ξ| 2 e it , (η, e is ) → z 1 = 1 1 + |η| 2 e is , z 2 = η 1 + |η| 2 e is .
We are ready to prove the claim
Proof. W.l.o.g. we assume that L j = {z 1 = a j z 2 } with a j ∈ C for j = 1, . . . , d − 1 and L d = {z 2 = 0}. Set ξ = z 1 /z 2 , then g = e 2φ |dξ| 2 with φ a function of ξ. Set u = φ − d−1 j=1 (β j − 1) log |ξ − a j |,
this is a continuous function on C. Moreover
(3.9) lim ξ→aj |ξ − a j | ∂u ∂ξ = 0 for j = 1, . . . , d − 1. Indeed, if η is a complex coordinate centered at a j in which g = β 2 |η| 2β−2 (1 + |η| 2β ) 2 |dη| 2 . Then φ = log β + (β − 1) log |η| − log(1 + |η| 2β ) and lim η→0 |η| ∂ ∂η log(1 + |η| 2β ) = 0. On C \ {a 1 , . . . , a d−1 } define the real 1-form (3.10) α 0 = i 2c (∂u − ∂u).
It follows from 3.9 that, for j = 1, . . . , d − 1,
(3.11) lim →0 C (aj ) α 0 = 0, where C (a j ) = {|ξ − a j | = }.
On the other hand
(3.12) dα 0 = − i c ∂∂u = 1 2c K g dV g ,
so 3.5 gives us that (3.13) 1 2π C dα 0 = 1.
On the trivial S 1 -bundle C\{a 1 , . . . , a d−1 }×S 1 with coordinates (ξ, e it ) consider the connection α = dt+α 0 . By means of the trivialization map 3.6 we think of α as a connection on the Hopf bundle. It follows from 3.11 and 3.12, that we only need to verify the holonomy condition -second item of the claim-at ξ = ∞, which corresponds to the point L d . We use the coordinates 3.7, where η = 1/ξ. The change of coordinates 3.8 implies that α = dt
+ α 0 = ds + β 0 with β 0 = d(arg η) + α 0 . Now lim →0 |η|= α 0 = − lim N →∞ |ξ|=N α 0 .
It follows from 3.11, 3.13 and Stokes' theorem that lim N →∞ |ξ|=N α 0 = 2π. As a result lim →0 |η|= β 0 = 0.
We proceed with the construction of the spherical metric on the 3-sphere with cone singularities at the Hopf circles L. There is, at least locally, a description for the model metric g (β) analogous to 3.4. Take polar coordinates (ρ, θ) on a disc D centered at the origin in R 2 and consider the metric on D × S 1 given by (3.14) dρ 2 + β 2 sin 2 (2ρ) 4 dθ 2 + (dt + β sin 2 (ρ)dθ) 2 .
We claim that 3.14 is locally isometric to the model g (β) at the points {0} × S 1 . Indeed if we set t = θ 2 and θ = θ 1 − θ 2 ; then 3.14 writes as dρ 2 + β 2 sin 2 (ρ)dθ 2 1 + β 2 cos 2 (ρ)dθ 2 2 and this agrees with the metric given in Example 6 with β = β 1 = β 2 . We let α = dt + β sin 2 (ρ)dθ and think of it as a connection on the trivial bundle D × S 1 ; it is then easy to check that dα = (1/2)K g dV g where g = dρ 2 + β 2 sin 2 (2ρ) 4 dθ 2 .
Lemma 1. There is a -unique up to a bundle isometry-spherical metric g on S 3 with cone angle 2πβ j along L j for j = 1, . . . , d such that
• g is invariant under the S 1 action e it (z 1 , z 2 ) = (e it z 1 , e it z 2 ).
• H : (S 3 \ L, g) → (CP 1 \ L, g) is a Riemannian submersion with geodesic fibers of constant length.
Proof. Set
(3.15) g = g + c 2 α 2 .
The S 1 invariance and Riemannian submersion properties of g are evident from its definition. Let us check that g is a spherical metric according to Definition 6. Let p ∈ S 3 ; we use the coordinates 3.6 and 3.7, w.l.o.g. we assume that p belongs to the domain of definition of the coordinates 3.6 so that p = (ξ 0 , e it0 ). There are polar coordinates (ρ, θ) around ξ 0 in which
g = dρ 2 + β 2 sin 2 (2ρ) 4 dθ 2 ;
where β = 1 if p / ∈ L and β = β j if p ∈ L j . Write the connection α = dt + α 0 ; in these coordinates dα 0 = (1/2c)K g dV g = (1/c)β sin(2ρ)dρdθ. It follows from the holonomy condition on α that, up to a gauge transformation, we can assume α 0 = (1/c)β sin 2 (ρ)dθ. Then cα = cdt + β sin 2 (ρ)dθ. Finally we take a point distinct from p and on the same fiber, remove it and scale the circle coordinate to obtain the desired expression. More precisely; if we assume t 0 ∈ (−π, π), say, and define t = ct we have that
g = dρ 2 + β 2 sin 2 (2ρ) 4 dθ 2 + (dt + β sin 2 (ρ)dθ) 2 .
Which agrees with 3.14.
Finally, we prove uniqueness. Let g be a metric satisfying the conditions of the Lemma. The lengths l of the Hopf circles is constant, write l = 2πc for somec > 0. We obtain a 1-formα by contracting g with the derivative of the circle action, thenα =cα with α a connection and g = H * g +c 2 α 2 . The fact that at the singular fibers g is locally isometric to the models g (βj ) implies that α must satisfy the holonomy condition; Stokes' Theorem then implies that (1/2π) CP 1 dα = 1. The Riemannian submersion property gives us that dα = (1/2c)K g dV g , thereforec = c. The uniqueness then follows from 1.
Remark 3. The proof above gives us that the fibers of H have constant length 2πc. Since Vol(g) = πc we have Vol(g) = 2π 2 c 2 . The volume of the round 3-sphere of radius 1 is 2π 2 , so we get that Vol(g)/Vol(S 3 (1)) = c 2 . This ratio is a relevant quantity in Riemannian convergence theory: If p is a point in a limit space with a tangent cone with link g; then this volume ratio measures how singular the limit space is at p. Roughly speaking the smaller the volume ratio the worse the singularity.
Lemma 1 is also established in [34]; for the sake of completeness we repeat the arguments in [34], these make clear why the fibers of g must have length 2πc: If Ω ⊂ S 2 (1/2) is a contractible domain; then the universal cover of H −1 (Ω) ⊂ S 3 (1) is diffeomorphic to Ω × R, and its inherited constant curvature 1 metric is invariant under translations on the R factor. The planes orthogonal to the fibers define a horizontal distribution, hence a connection ∇ on Ω × R. The holonomy of ∇ along a closed curve γ ⊂ Ω is equal to the parallel translation by twice the algebraic area bounded by γ. On the other hand; for any l > 0 we can take the quotient of Ω × R by lZ to obtain a metric g of contant curvature 1 on Ω × S 1 such that all the fibers are geodesics of length l. Given the metric g on S 2 with cone singularities and Gaussian curvature 4; we can cut S 2 by geodesic segments with vertices at all the conical points and obtain a contractible polygon P which can be immersed -by its enveloping map-in S 2 (1/2). Consider the metric g on P × S 1 with l = 2Area(P ). It follows that the holonomy of the fibration along the border of P is trivial -as it makes one full rotation-; and the gluing of P which gives g can be lifted to a gluing of P × S 1 to obtain the metric g of Lemma 1.
Seifert bundles and branched coverings.
Let p and q be positive co-prime integers, w.l.o.g. we can assume that 1 ≤ p < q. Consider the S 1 -action on S 3 = {|z 1 | 2 + |z 2 | 2 = 1} ⊂ C 2 given by (3.16) e it (z 1 , z 2 ) = (e ipt z 1 , e iqt z 2 ) together with the Seifert map S (p,q) : S 3 → CP 1 given by S (p,q) (z 1 , z 2 ) = [z q 1 , z p 2 ]. The map S (p,q) is invariant under the S 1 -action 3.16 and restricts to an S 1 -bundle over CP 1 \ {[1, 0], [0, 1]}. The fiber of S (p,q) over a point in the projective line distinct from the poles is a torus knot of type (p, q). Around the pole [1,0] there is a disc U and an S 1 -equivariant diffeomorphism from S −1 (p,q) (U ) to the solid torus D × S 1 with the S 1 -action e it (z, e iθ ) = (e ipt z, e iqt e iθ ), similarly there is a disc around the pole [0, 1] and an S 1 -equivariant diffeomorphism from the preimage of the disc to the solid torus with S 1 -action e it (z, e iθ ) = (e iqt z, e ipt e iθ ). In this section we lift the spherical metrics on the projective line to the 3-sphere by means of S (p,q) , we do this by means of a branched covering map Ψ (p,q) : S 3 → S 3 given by
(3.17) Ψ (p,q) (z 1 , z 2 ) = z q 1 |z 1 | 2q + |z 2 | 2p , z p 2 |z 1 | 2q + |z 2 | 2p .
Note that S (p,q) = H •Ψ (p,q) . The map Ψ (p,q) is a branched pq-fold cover, branched along the two exceptional fibers of S (p,q) . It is equivariant with respect to the circle actions (e ipt z 1 , e iqt z 2 ) and (e ipqt z 1 , e ipqt z 2 ).
(e ipθ / √ 2, e iqθ / √ 2), θ ∈ [0, 2π]}.
The following Lemma is an immediate consequence of Lemma 1, pulling-back the metric g with the map Ψ (p,q) . Let β 1 , . . . , β d ∈ (0, 1) satisfy the Troyanov condition; g be the spherical metric on CP 1 with cone angle 2πβ j at L j and g be the lifted metric on S 3 by means of the Hopf map. We set (4.1) g F = dr 2 + r 2 g, to be the Riemannian cone with (S 3 , g) as a link, this is a metric on (0, ∞) × S 3 with cone singularities along the products of the singular Hopf circles with the radial coordinate. We shall prove that there is a natural complex structure I with respect to which g F is Kähler; that there is a natural identification of this complex manifold with C 2 , with repect to which the Reeb vector field generates the circle action e it (z, w) = (e it/c z, e it/c w) and the singularities of g F are along the original set of lines L j we started with. We use the same coordinates as in the proof of Lemma 1, where the Hopf bundle is trivialized; so that points in R >0 × (S 3 \ L) ∼ = (0, ∞) × C \ {a 1 , . . . , a d−1 } × S 1 have coordinates (r, ξ, e it ). Write ξ = x + iy. Consider the almost-complex structure given by
I∂ ∂x =∂ ∂y , I ∂ ∂r = 1 cr ∂ ∂t where∂ ∂x = ∂ ∂x − α ∂ ∂x ∂ ∂t ,∂ ∂y = ∂ ∂y − α ∂ ∂y ∂
∂t are the horizontal lifts of ∂/∂x and ∂/∂y. Finally set ω F = g F (I., .). Claim 2. (0, ∞) × C \ {a 1 , . . . , a d−1 } × S 1 , g F , I is a Kähler manifold. I.e. dω F = 0 and I is integrable. Moreover,
(4.2) ω F = i 2 ∂∂r 2 .
Proof. We compute in the coframe {dx, dy, dr, α} where ω F = r 2 e 2φ dx ∧ dy + crdr ∧ α, so that dω F = 2re 2φ drdxdy − cr(2/c)e 2φ drdxdy = 0. The integrability of I amounts to check that
∂ ∂x + i∂ ∂y , ∂ ∂r + i 1 cr ∂ ∂t = 0.
Finally dId(r 2 ) = d(2rIdr) = −2cd(r 2 α) = −4crdr ∧ α − 4r 2 e 2φ dx ∧ dy. Using that 2i∂∂ = −dId we deduce 4.2
ω 2 F = |l 1 | 2β1−2 . . . |l d | 2β d −2 Ω ∧ Ω. Proof.
It is easy to see that the pair (z, w) defines a diffeomorphism between the corresponding spaces. The Cauchy-Riemann equations for a function h to be holomorphic with respect to I are given by
∂h ∂r + i 1 cr ∂h ∂t = 0, ∂h ∂x + i ∂h ∂y = α ∂ ∂x + i ∂ ∂y ∂h ∂t .
If we ask h to have weight 1 with respect to the circle action the equations become ∂h ∂r
= 1 cr h, ∂h ∂ξ = iα ∂ ∂ξ h = 1 2c ∂u ∂ξ h.
It is now easy to check that z and w are holomorphic. Now we compute the volume form of g F in the complex coordinates z, w. First define a basis {τ 1 , τ 2 } of the (1, 0) forms (4.5)
τ 1 = dr + icrα, τ 2 = e φ rdξ.
Up to a factor of √ 2 this is an orthonormal basis for the (1, 0) forms in C 2 \ L, i.e.
ω F = (i/2)τ 1 τ 1 + (i/2)τ 2 τ 2 . Define a two by two matrix (a ij ) by means of dz = a 11 τ 1 + a 12 τ 2 , dw = a 21 τ 1 + a 22 τ 2 .
From here we get Ω ∧ Ω = | det(a ij )| 2 ω 2 F . Since z = ξw we have that a 11 = ξa 21 and a 12 = ξa 22 + we −φ r −1 . It follows that det(a ij ) = −we −φ r −1 a 21 . We can easily compute, from the formula given for w, that a 21 = (1/cr)w. We put things together to get
ω 2 F = c 2 |w| −4 r 4 e 2φ Ω ∧ Ω. Now we use that r 4 = (1/c 2 )|w| 2c e −2u , φ − u = d−1 j=1 (β j − 1) log |(z/w) − a j | and 4c − 4 = d j=1 (2β j − 2) to conclude that ω 2 F = |z − a 1 w| 2β1−2 . . . |z − a d−1 w| 2β d−1 −2 |w| 2β d −2 Ω ∧ Ω. This is formula 4.4.
Note that we have two natural systems of coordinates: the complex coordinates (z, w) and the spherical coordinates (r, θ), where θ denotes a point in the 3-sphere. For λ > 0 define D λ (r, θ) = (λr, θ) and m λ (z, w) = (λz, λw). Equation 4.3 gives that D λ = m λ 1/c and Equation 4.2 implies that m * λ g F = λ 2c g F . The proof of the existence part proof of Proposition 1 is now complete.
We have obtained a a recipe which allows us to go from the flat metric g F on C 2 in Proposition 1 to the corresponding spherical metric g on CP 1 and vice versa. From 4.3 we get (4.6)
r 2 = 1 c |w| 2c e −u .
We recall that
(4.7) u = φ − d−1 j=1 (β j − 1) log |ξ − a j |, g = e 2φ |dξ| 2 .
Where φ a function of ξ = z/w. We are writing the lines as L j = {z = a j w} with a j ∈ C for j = 1, . . . , d − 1 and L d = {w = 0}. 4.6 together with 4.7 allow us to write g F explicitly in terms of g and vice-versa. As a check, let us recall the rugby ball metric
(4.8) g = β 2 |ξ| 2β (1 + |ξ| 2β ) 2 |dξ| 2 ,
We use our formula 4.6 to get r 2 = β −2 (|z| 2β + |w| 2β ), so that g F = |z| 2β−2 |dz| 2 + |w| 2β−2 |dw| 2 . Up to a constant normalizing factor this is the space C β × C β . Remark 4. Since the lenght of any Hopf circle with respect to g is 2πc; we conclude that the restriction of g F to any complex line which goes through the origin, is the metric of a 2-cone with total angle 2πc.
The uniqueness statement in Proposition 1 is a consequence of the uniqueness of spherical metrics -Theorem 1-; since given the metric g F , we can use equations 4.6 and 4.7 to get the corresponding spherical metric on the projective line.
4.2.
Proof of Proposition 2. Let d ≥ 2, 1 ≤ p < q co-prime and C j = {z q = a j w p }, a j ∈ C for j = 1, . . . , d − 2 be distinct complex curves through the origin in C 2 . Let β 1 , . . . , β d−2 ∈ (0, 1) and 0 < β d−1 , β d ≤ 1 be such that β 1 , . . . , β d−2 , (1/q)β d−1 , (1/p)β d satisfy the Troyanov condition 1.1 if d ≥ 3 and β d−1 /q = β d /p if d = 2. In C 2 with complex coordinates (u, v) consider the metric g F given by Proposition 1 with cone angles β 1 , . . . , β d−2 , (1/q)β d−1 , (1/p)β d along the lines L j = {u = a j v} for j = 1, . . . , d − 2, {u = 0} and {v = 0}. Let S : C 2 → C 2 be given by (4.9) (u, v) = S(z, w) = (z q , w p ).
Proposition 2 follows by settingg F = S * g F ; it is also clear thatg F is isometric to the Riemannian cone with link the metricg given by Lemma 2.
As an example, we let 1 − 1/m − 1/n < β < 1 − 1/m + 1/n and set g F to be the flat metric in C 2 with cone angles 2π(1/n) along {u = 0}, 2π(1/m) along {v = 0} and β along {u = v}, theng F has cone angle along the curve {z n = w m }. In particular we have that for any 1/6 < β < 5/6 there is a flat Kähler cone metric in C 2 with cone angle 2πβ along the cuspidal cubic {w 2 = z 3 }.
4.3.
Proof of Proposition 3. The proof of Proposition 3 is included only for the sake of completeness. We follow the arguments given in Lemma 3.9 and Proposition 3.10 of [34]; and refer to [34] for a more detailed exposition.
Let g C = dr 2 + r 2 g S be a flat Kähler metric with cone singularities, with link S diffeomorphic to the 3sphere. The fact that g C is flat implies that g S is spherical. There is a orthogonal parallel complex structure I on (0, ∞) × S; the Reeb vector field is ξ = I r ∂ ∂r .
We think of S as lying inside the cone by means of the isometric embedding which takes p ∈ S to (p, 1) ∈ S ×(0, ∞). The restriction of ξ to S is a unit length Killing vector field and its orbits define a one-dimensional foliation of S. There are two cases to consider:
• All the orbits are periodic. The flow of ξ defines a locally free S 1 -action on S by isometries which provides S with the structure of a Seifert bundle. The classification of Seifert bundles whose total space is the 3-sphere -see [33]-, implies that -up to a conjugation by a diffeomorphism-the S 1 -action is given by e it (z 1 , z 2 ) = (e it z 1 , e it z 2 ) if it is free and e it (z 1 , z 2 ) = (e imt z 1 , e int z 2 ) with 1 ≤ m < n for some co-prime numbers m and n if not. We can push g S to the quotient to obtain a metric on the 2-sphere with cone singularities and constant curvature 4. The uniqueness statements in Lemma 1 and Lemma 2, imply that g S must be isometric to one of the metrics g of Lemma 1 in the free case or to one of the metricsg of Lemma 2 in the locally free but not free case. It follows that g C must agree with one of the metric of Propositions 1 or 2. • If there is a non-closed orbit of ξ then there is a 2-dimensional torus T 2 which acts by holomorphic isometries on C. Write L for the singular locus of g C and let E be the enveloping map, which goes from the universal cover of C \ L to C 2 and sends the apex of the cone to 0. There is an induced action of R 2 on the euclidean C 2 which fixes 0 and makes E equivariant. This action factors through T 2 and we can assume that it is given by rotations on each of the factors C × C. The branching locus of E is the union of lines through 0 invariant by T 2 , so it must be the set {z 1 z 2 = 0}. It follows that
E : E −1 (C 2 \ {z 1 z 2 = 0}) → C 2 \ {z 1 z 2 = 0}
is a covering map; and therefore g C is a product of two 2-cones.
4.4.
Hermitian metrics on line bundles: A different approach. We mention another approach to Proposition 1 which gives the metric in C 2 directly in terms of the metric in the projective line, avoiding to go through the 3-sphere. We take the point of view of a Kähler metric as the curvature form of a Hermitian metric on a complex line bundle. We discuss the Hopf bundle case, for the Seifert bundle case there is a parallel discussion in which one replaces the projective line with the weighthed P(m, n).
We think of C 2 as the total space of O CP 1 (−1) with the zero section collapsed at 0. The bundle projection is given by Π :
C 2 \ {0} → CP 1 , Π(z, w) = [z : w].
We can then identify (smooth) Hermitian metrics on O CP 1 (−1) with (smooth) functions h : C 2 → R ≥0 such that h(λp) = |λ| 2 h(p) for all λ ∈ C, p ∈ C 2 and h(p) = 0 only when p = 0. The first basic fact we need is that an area form ω in CP 1 induces a Hermitian metric h ω . We use coordinates ξ = z/w, η = w/z on CP 1 . Write ω = e 2φ (i/2)dξdξ with φ = φ(ξ) on U = Π({w = 0}) and ω = e 2ψ (i/2)dηdη with ψ = ψ(η) on V = Π({z = 0}). Then h ω is given by (4.10) h
ω = |w| 2 e −φ , if w = 0; h ω = |z| 2 e −ψ , if z = 0.
The second basic fact is that a Hermitian metric h gives a 2-form ω h on CP 1 by means of (4.11) ω h = i∂∂ log h(ξ, 1) on U, and ω h = i∂∂ log h(1, η) on V.
We also mention that h induces Hermitian metrics on the other complex line bundles over CP 1 . A linear function l(z, w) = z − aw on C 2 can be regarded as a section of O CP 1 (1), then we have |l| 2 h = h(ξ, 1) −1 |ξ − a| 2 on U and a corresponding expression on V .
One can then rephrase the existence of the spherical metric with cone singularities g on CP 1 by saying that there is a Hermitian metric h, continuous on C 2 and smooth outside L such that
(4.12) h = |l 1 | β1−1 h . . . |l d | β d −1 h h ω h
Where by |l| h we mean |l| h •Π. Here we could be more precise and instead of saying that h is merely continuous we could give a local model for h around the singular points. From 4.12 one gets that ω h has constant Gaussian curvature equal to 2c = 2 − d + d j=1 β j outside L and one can argue that (2π) −1 CP 1 ω h = 1. The potential for ω F is then given by r 2 = ah c for some constant a > 0 determined by the volume normalization.
Quotients and Unitary Reflection Groups.
We begin by recalling the well-known Du Val singularities. Let Γ ⊂ SU (2) be a finite subgroup, up to conjugation, we can assume that it is one of following list: C m -cyclic of order m-for some m ≥ 2; D 2m -binary dihedral of order 4m-for some m ≥ 2; T -binary tetrahedral-; O -binary octahedral-; I -binary icosahedral-. Basic work of Klein shows that there are three homogeneous polynomials z, w, t ∈ C[x 1 , x 2 ], and p ∈ C[z, w, t], which define a complex isomorphism between the orbit space C 2 /Γ and the complex surface S = {p(z, w, t) = 0} ⊂ C 3 . This surface has an isolated singular point at 0, referred as a Du Val -or simple-singularity; the list of these is w) is a double cover, branched along the curve C ⊂ C 2 composed by the points (z, w) such that (z, w, 0) ∈ S. This curve has an isolated singularity at the origin, these are the so-called simple plane curve singularities (4.13) A m :
• A m , m ≥ 1: S = {t 2 + w 2 = z m+1 }, Γ = C m+1 . • D m , m ≥ 4: S = {t 2 + zw 2 = z m−1 }, Γ = D 2(m−2) . • E 6 : S = {t 2 + w 3 = z 4 }, Γ = T . • E 7 : S = {t 2 + w 3 = wz 3 }, Γ = O. • E 8 : S = {t 2 + w 3 = z 5 }, Γ = I. The map S → C 2 given by (z, w, t) → (z,w 2 = z m+1 , D m : zw 2 = z m−1 , E 6 : w 3 = z 4 , E 7 : w 3 = wz 3 , E 8 : w 3 = z 5 .
The group Γ acts freely on S 3 , it preserves the round metric g S 3 (1) so we get a constant curvature metric g S 3 /Γ(1) on S 3 /Γ. The push-forward of the euclidean metric on C 2 by Γ is a flat Kähler cone metric on S, isometric to dr 2 + r 2 g S 3 /Γ(1) . In the search of flat metrics with cone singularities on C 2 , it is natural to ask whether it is possible to extend Γ to a finite group G ⊂ U (2) so that Γ ⊂ G is normal and the quotient H = G/Γ acts on S in a way that S/H ∼ = C 2 -note that S/H = C 2 /G-. For example, we can look for G such that H ∼ = Z 2 acts on S as (z, w, t) → (z, w, −t) so that we can push-forward the euclidean metric to get a metric with cone angle π along the plane curve C; as we shall explain this is always possible. Fortunately, finite groups of unitary matrices with the property that C 2 /G ∼ = C 2 are well understood; these are called unitary reflection groups, we refer to [25] for the results regarding their classification.
A unitary linear map A of C n is called a reflection if A fixes a hyperplane and A m is the identity for some m ≥ 2; equivalently there is an orthonormal basis of C n with respect to which A is represented as a diagonal matrix diag( , 1, . . . , 1) with m = 1. The smallest m is called the order of A. A finite group of unitary matrices G ⊂ U (n) is called a unitary reflection group if it is generated by reflections. A classical theorem of Shephard-Todd-Chevalley characterizes unitary reflection groups as the only finite groups G of unitary matrices with the property that the orbit space C n /G is isomorphic to C n ; or equivalently the algebra of invariant polynomials C[x 1 , . . . , x n ] G is isomorphic to C[X 1 , . . . , X n ]. Shephard-Todd classified these groups.
Given a unitary reflection group G ⊂ U (2) let X 1 , X 2 ∈ C[x 1 , x 2 ] be homogeneous polynomials of smallest degree invariant under the action of G and such that the map Φ : C 2 → C 2 defined as Φ = (X 1 , X 2 ), factors through the quotient to give an isomorphism C 2 /G ∼ = C 2 . Let F = ∪ r i=1 F i be the union of all complex lines F i through the origin which are fixed by some reflection in G, this set F coincides with the set of critical points of Φ. We can then push-forward the euclidean metric with Φ to get a flat Kähler metric Φ * g euc in C 2 with cone singularities along Φ(F ). This metric has cone angle 2πβ i along Φ(F i ) where β i = 1/m i , with m i being the least common multiple of the orders of the reflections which fix F i . Since the group G preserves the distance of the points to the origin, it is clear that Φ * g euc is a Riemannian cone with its apex at the origin, its link is a spherical metric on the three-sphere with cone angle 2πβ i along the intersection of Φ(F i ) with the unit sphere. Since the S 1 -action e it (x 1 , x 2 ) = (e it x 1 , e it x 2 ) commutes with the action of G there is an induced S 1 action on C 2 /G under which Φ * g euc is invariant. Indeed this S 1 -action can be identified with the action generated by the Reeb vector field of Φ * g euc and it follows that Φ * g euc must be given by either Proposition 1 or Proposition 2 and therefore it must correspond to a spherical metric in the projective line.
Before diving into the classification of reflection groups, we analyze the case of Γ = C m . Write ω m = e 2πi/m , so that
C m = ω m 0 0 ω −1 m ⊂ SU (2). The invariant polynomials w = (1/2)(x m 1 + x m 2 ), t = (1/2i)(x m 1 − x m 2 ) and z = x 1 x 2 give us the complex isomorphism C 2 /C m ∼ = {(z, w, t) ∈ C 3 : w 2 + t 2 = z m }.
Consider the transposition T (x 1 , x 2 ) = (x 2 , x 1 ) and let G(m, m, 2) -notation to be explained later-be the group generated by C m and T , so that C m ⊂ G(m, m, 2) is a normal subgroup of index two. The action of T on (z, w, t) ∈ C 2 /C m sends (z, w, t) → (z, w, −t). We conclude that Φ(x 1 , x 2 ) = (z, w) is invariant under the action of G(m, m, 2) and gives us a complex isomorphism C 2 /G(m, m, 2) ∼ = C 2 the metric Φ * g euc has cone angle π along the curve {w 2 = z m }.
We go further and consider the group G(2m, 2, 2) given by
G(m, m, 2) = ω m 0 0 ω −1 m , 0 1 1 0 ⊂ G(2m, 2, 2) = G(m, m, 2), ω 2m 0 0 ω 2m .
So that G(m, m, 2) ⊂ G(2m, 2, 2) is a normal subgroup of index 2m. The quotient G(2m, 2, 2)/G(m, m, 2) is cyclic and its generator acts on (z, w) by sending it to (ω m z, −w). We conclude that u = z m and v = w 2 are invariant under the action of G(2m, 2, 2) and Ψ(x 1 , x 2 ) = (u, v) gives a complex isomorphism between the orbit space and C 2 . Note that Ψ = S (2,m) • Φ, where S (2,m) (z, w) = (z m , w 2 ). The metric Ψ * g euc has cone angle π along the complex lines {v = 0} and {u = v} and cone angle 2π(1/m) along {u = 0}. We will now see that this correspond under Proposition 1 to a spherical metric g on CP 1 with cone angle π at 1 and ∞ and cone angle 2π(1/m) at 0. Indeed the components of the map Ψ(x 1 ,
x 2 ) = (x m 1 x m 2 , (1/4)(x m 1 + x m 2 ) 2 )
are homogeneous polynomials of degree 2m and therefore induce a map of the projective line to itself of degree 2m; in the complex coordinate η = x 1 /x 2 this map writes
Ψ(η) = 4η m (1 + η m ) 2 .
We have that Ψ(1) = 1, Ψ(ω 2m ) = ∞ and Ψ(0) = 0; 1 and ω 2m are critical points of Ψ of order 1 and 0 is a critical point of order m − 1. Let T be the spherical triangle delimited by the arc of the unit circle between 1 and ω 2m and the two segments of length 1 connecting ω 2m and 1 to 0. We recognize Ψ as a Riemann mapping of T and the spherical metric g on CP 1 as the doubling of T . The potential for the euclidean metric is |x 1 | 2 + |x 2 | 2 , expressing this in terms of u and v gives the potential for the metric g F = Ψ * g euc (r 2 in Proposition 1 ), up to a constant factor it is
h + (h 2 − |u| 2 ) 1/2 1/m + h − (h 2 − |u| 2 ) 1/2 1/m where h = |v| + |u − v|.
We can use equations 4.6 and 4.7 to obtain the corresponding expression for the spherical metric g and check that indeed the pull-back of g by Ψ agrees with the standard round metric. When m = 2 the expressions simplify to give
r 2 = a (|u| + |v| + |u − v|) 1/2
where a = 8 √ 2 is determined by the volume normalization condition; and (using 4.6, 4.7)
g = 1 8 1 |ξ||ξ − 1| + |ξ| 2 |ξ − 1| + |ξ||ξ − 1| 2 |dξ| 2 .
If we write ξ = Ψ(η) = (1 + η 2 ) 2 /(4η 2 ), then Ψ * g = (1 + |η| 2 ) −2 |dη| 2 the standard round metric of curvature 4.
The unitary reflection groups which act irreducibly on C 2 divide into two types: primitive and imprimitive. The group G is called imprimitive if we can find a direct sum decomposition C 2 = Cv 1 ⊕ Cv 2 such that the action of G permutes the subspaces Cv 1 and Cv 2 , otherwise it is called primitive. The subspaces Cv 1 , Cv 2 are said to be a system of imprimitivity for G.
• Let m > 1 be a natural number and set ω m = e 2πi/m . Write C m for the cyclic group of m-roots of unity generated by ω m . Let p ≥ 1 be a natural number that divides m and set H to be the subgroup of the direct product C m × C m consisting of all pairs (ω i m , ω j m ) such that (ω i m ω j m ) m/p = 1; note that if p = 1 then H = C m × C m . We embed H in U (2) by means of the diagonal action (ω i m , ω j m )(x 1 , x 2 ) = (ω i m x 1 , ω j m x 2 ) and define G(m, p, 2) to be the subgroup of U (2) generated by H and the transposition T (x 1 , x 2 ) = (x 2 , x 1 ). The lines Ce 1 and Ce 2 form a system of imprimitivity for this group. The notation G(m, p, 2) is due to Shephard-Todd, the 2 at the end simply means that we are working in two complex dimensions. The fact is that, for (m, p) = (2, 2), the group G(m, p, 2) is a unitary imprimitive reflection group which acts irreducibly in C 2 and that any such a group is conjugate to some G(m, p, 2) for some values of m and p. G(m, p, 2) is a normal subgroup of G(m, 1, 2) of index p, the order of G(m, p, 2) is 2m 2 /p. There is a natural inclusion G(m, 1, 2) ⊂ G(2m, 2, 2) as an index two subgroup induced from C m ⊂ C 2m .
We have already discussed in detail the case of G(2m, 2, 2), the corresponding quotient metric g F has cone singularities along three complex lines of angles π, π and 2π(1/m). The general case of the group G(m, p, 2) follows by pulling-back g F with (z, w) → (z 2 , w p ) = (u, v) thus fitting in with Proposition 2. Let G be a primitive unitary reflection group of U (2) for g ∈ G take λ g ∈ C such that λ 2 g = det(g). Definê G = {±λ −1 g g : g ∈ G} ⊂ SU (2). Since G is primitive it follows thatĜ must be a binary tetrahedral T , octahedral O or icosahedral I group, this splits the primitive subgroups into three types. Shephard-Todd classified these primitive groups by looking at the algebra of invariant polynomials and using the work of Klein on invariant theory for finite subgroups of SU (2)
H(t) = (t 2 + 2i √ 3t + 1) 3 t(t 2 − 1) 2 .
The map Φ has degree 12 and looking at its critical points it can be seen to be a Riemann mapping for a spherical triangle with angles π/2, π/3 and π/3. The quotient of the euclidean metric by the group T is then identified with the lift g F of the spherical metric with cone angles π, 2π/3 and 2π/3 by means of the Hopf bundle given by Proposition 1. The quotient of the euclidean metric by the remaining 3 tetrahedral groups are obtained as pull-backs of g F by means of suitable branched covers as in Proposition 2.
H(t) = (t 2 + 14t + 1) 3 (t 3 − 33t 2 − 33t + 1) 2 .
The map Φ has degree 24 and looking at its critical points it can be seen to be a Riemann mapping for a spherical triangle with angles π/2, π/3 and π/4. The quotient of the euclidean metric by the group O is then identified with the lift g F of the spherical metric with cone angles π, 2π/3 and 2π/3 by means of the Hopf bundle given by Proposition 1. The quotient of the euclidean metric by the remaining 7 octahedral groups are obtained as pull-backs of g F by means of suitable branched covers as in Proposition 2.
• There are 7 groups of icosahedral type, all of them are subgroups of I. The order of I is 60 2 = 3600.
The invariant polynomials can be taken to be homogeneous polynomials of degree 60 which define a map Φ in the projective line Φ(η) = H(t), where η = x 1 /x 2 , t = η 5 and
H(t) = (t 4 − 228t 3 + 494t 2 + 228t + 1) 3 (t 6 + 522t 5 − 10005t 4 − 10005t 2 − 522t + 1) 2 .
The map Φ has degree 60 and looking at its critical points it can be seen to be a Riemann mapping for a spherical triangle with angles π/2, π/3 and π/5. The quotient of the euclidean metric by the group T is then identified with the lift g F of the spherical metric with cone angles π, 2π/3 and 2π/5 by means of the Hopf bundle given by Proposition 1. The quotient of the euclidean metric by the remaining 6 icosahedral groups are obtained as pull-backs of g F by means of suitable branched covers as in Proposition 2.
Recall that a Riemann mapping from the upper half-plane to a triangle T whose sides are circle arcs is obtained as the quotient of two linearly independent solutions of the hypergeometric equation, with a suitable value of the parameters a, b, c given in terms of the angles of T . For some particular rational values of the parameters a, b, c the hypergeometric equation has finite monodromy and its solutions are rational functions, these special values are tabulated into the so-called Schwarz's list. In this context; the rational biholomorphisms we have given from the spherical triangles with angles (π/2, π/2, π/m), (π/2, π/3, π/3), (π/2, π/3, π/4) and (π/2, π/3, π/5), correspond precisely with the cases in Schwarz's list in which the associated triangle is spherical and its angles are integer quotients of π.
5.
Limits of Kähler-Einsten metrics with cone singularities 5.1. Singular points of plane complex curves. Let C = {f = 0} ⊂ C 2 be a complex curve with an isolated singularuty at 0 and let B be a small ball around the origin. Fix 0 < β < 1; we want to discuss possible notions of a Kähler metric g on B with cone angle 2πβ along C. Outside the origin there are standard definitions so that the key point is to say what is the behavior of g at 0. First thing to say is that we want the volume form of g to be locally integrable, moreover we also require that
Vol(g) = G|f | 2β−2 Ω ∧ Ω
where G is a continuos function and Ω = dzdw is the standard holomorphic volume form. This leads to the so-called complex singularity exponent of the curve c 0 (f ). The number c 0 (f ) is defined as the supremum of all c > 0 such that |f | −2c Ω ∧ Ω is locally integrable. This is always a rational number and can be computed in algebro-geometric terms by means of successive blow-ups of the singularity. It is clear that 0 < c 0 ≤ 1 and indeed c 0 (f ) = 1 only when the curve is smooth or has a simple double point at 0. Write f = P d + (h.o.t.) with P d a homogeneous polynomial of degree d and (h.o.t.) meaning higher order terms; according to [23]
(5.1) c 0 (f ) = 1 d + 1 e ,
where e/d is the first Puiseux exponent of f (see [4]). In terms of cone angles we must take
(5.2) β > 1 − c 0 .
Indeed, if 5.2 holds then there is the notion of a weak Kähler metric (see [16]). Pluri-potential theory provides, for any 1 − c 0 < β < 1, a weak Kähler metric g with Vol(g) = |f | 2β−2 Ω ∧ Ω. A draw-back of this approach is that little can be said on the geometry of the metric, in particular there is no guarantee of the existence of a tangent cone at 0. On the other hand if the metric g arises as the Gromov-Hausdorff limit of a sequence of Kähler metrics with cone singularities along smooth curves then, under a suitable assumptions, it will have a tangent cone at 0. We discuss plausible, stronger notions according to the type of singularity of C.
• Ordinary multiple points. In this case the zero set of P d consists of d distinct complex lines and c 0 (f ) = 2/d, so that 1 − c 0 = (d − 2)/d. On the other hand the Troyanov condition 1.1 is equivalent to d − 2 d < β < 1 when all the angles β i are equal to β. Therefore, for β in this range we have the flat metric g F given by Proposition 1. Let us assume first that there are suitable holomorphic coordinates around 0 in which C = {P d = 0}; then we can require the condition that g − g F g F = O(r ) for some > 0 as r → 0. Indeed if this condition holds in a little bit stronger Hölder sense, then it is straightforward to show that g has a tangent cone at 0 which agrees with g F . In the general case such a holomorphic change of coordinates doesn't exists, but we can use a diffeomorphism Φ of the ball, sufficiently close to the identity, which takes C to the zero set of P d . Indeed, in a small ball around the origin, the curve C consists of d branches, each of which is the graph of a holomophic function over one of the lines of {P d = 0}. It is then not hard to construct Φ by means of suitable cut-off functions, moreover Φ can be taken to be holomorphic in a suitable neighborhood of the curve. • Non-ordinary multiple points. Let us consider first, as a model example, the case of the cusp
C = {w 2 = z 3 }.
In this case the complex singularity exponent is equal to c 0 = 5/6, so that 1 − c 0 = 1/6. We have seen that for any 1/6 < β < 5/6 we have the flat cone metricg F given by Proposition 2 with Vol(g F ) = |w 2 − z 3 | 2β−2 Ω ∧ Ω. The same as before we can consider metrics g which satisfy the condition g −g F g F = O(r ) for some > 0 as r → 0; and if this condition holds in a Hölder sense, then g has a unique tangent cone at 0 which agrees withg F . The question is what to do when 5/6 ≤ β < 1. We look back at the picture, Figure 1, of two cone angles of total angle 2πβ that collide and produce a cone angle 2πγ with γ = 2β − 1. We expect that, in transverse directions to C, we should see two cone singularities coming together. As a simple model consider the metric
(5.3) g = |dz| 2 + |w 2 − z 3 | 2β−2 |dw| 2 .
This is not a Kähler metric, but it has the right volume form. If we fix z 0 and embed C into C 2 by means of τ z0 (w) = (z 0 , w); then
τ * z0 g = |w − a| 2β−2 |w + a| 2β−2 |dw| 2 , where a 2 = z 3
0 . This is a flat metric in C with two cone singularities of angle 2πβ at a and −a. If we let z 0 → 0 then a → 0 and τ * 0 g = |w| 2γ−2 |dw| 2 with γ = 2β −1. We shall see now that if β > 5/6 then the tangent cone at 0 of g is the metric g (γ) = |dz| 2 + |dw| 2γ−2 |dw| 2 . Let D λ (z, w) = (λz, λ 1/γ w), so that D λ g (γ) = λ 2 g (γ) . It requires a simple computation to check that λ −2 D * λ g = |dz| 2 + |w 2 − λ 3−2/γ z 3 | 2β−2 |dw| 2 . We see that λ −2 D * λ g converges to g (γ) as λ → 0 provided that 3 − 2/γ > 0, which is the same to say β > 5/6.
The same discussion applies to the more general case of the curve C = {w m = z n } with 2 ≤ m < n. So that c 0 = 1/m + 1/n and we have seen that for 1 − 1/m − 1/n < β < 1 − 1/m + 1/n there is a flat cone metricg F with volume form |w m − z n | 2β−2 Ω ∧ Ω. This time the tangent cone at 0 should be given, when 1 − 1/m + 1/n < β, by g (γ) with 1 − γ = m(1 − β).
We have little to say in the general case of an arbitrary singularity. If m is the order and n/m is the first non-zero Puiseux exponent; then the intersection of the curve with small spheres around the origin is an iterated torus knot over the (m, n)-torus knot -see [30]-and there is no homeomorphism which takes the curve to the singular set of a flat cone metric. Naively, we might expect that the tangent cone is the metricg F given by Proposition 2 with cone angle β along {w m = z n } when 1 − 1/m − 1/n < β < 1 − 1/m + 1/n; and it is g (γ) with 1 − γ = m(1 − β) when 1 − 1/m + 1/n < β.
5.2.
Blow-up analysis. As we said in the Introduction, the L 2 -norm of the Riemannian curvature tensor of a Kähler-Einstein metric g on a complex surface X with cone angle 2πβ along a smooth curve D is given in terms of topological data by means of the formula
(5.4) E(g) = 1 8π 2 |Riem| 2 = χ(X) + (β − 1)χ(D),
where χ denotes the Euler characteristic. The number E(g) is the so-called 'energy' of g. In four real dimensions the energy is a scale-invariant quantity, i.e. E(g) = E(λg) for any λ > 0. This fits the theory into a blow-up analysis framework which parallels the case of smooth Einstein metrics on four manifolds [1]:
The only possible way in which a non-collapsing sequence of solutions g i can degenerate is when the energy distributions |Riem(g i )| 2 develop Dirac deltas, this can happen only at finitely many points. Let p be such a point. Re-scaling the metrics g i at p in order to keep the Riemannian curvature bounded one gets in the limit a Ricci-flat metric with finite energy on a non-compact space, so-called 'ALE gravitational instanton' or, more generically, a 'bubble'. There might be many different (but finite) blow-up limits at p; and these can be arranged into a 'bubble tree' associated to p, the tangent cone at infinity of the 'deepest bubble' in the tree agrees with the tangent cone of the (non-scaled) limiting solution at p. If we add the energy of the singular limit space with the energy of the blow-up limits we recover the energy of the original sequence g i . We are interested in the case of a non-collapsed sequence of Kähler-Einstein metrics g i on a complex surfaces X i with cone angle 2πβ along smooth curves D i within some fixed numerical data. There is a weak Kähler-Einstein metric on the Gromov-Hausdorff limit W with cone singularities along a Weil divisor ∆. There is a decomposition ∆ = ∪ M k=1 ∆ k where ∆ k is the component of ∆ of multiplicity k. For the sake of definiteness we consider the case when p is a singular ordinary multiple-point of ∆ 1 which lies on the smooth part of W . The tangent cone of W at p must be given by Proposition 1 with β j = β for j = 1, . . . d.
The blow-up of the metrics g i at p results in a Ricci-flat metric on C 2 with cone angle 2πβ along a complex curve of degree d with d distinct asymptotic lines and g F as its tangent cone at infinity. These metric were shown to exist in [12]. Yau's work on the Calabi conjecture has been extended to the setting of ALE and asymptotically conical manifolds - [22], [11]-, and to the context of metrics with cone singularities - [3], [21]-; the proof of the existence theorem in [12] is a mix of these articles.
Remark 5. These blow-up limits of Kähler-Einstein metrics with cone singularities arose first in the context of the 'deformation of the cone angle method' used to establish the existence of Kähler-Einstein metrics on K-stable Fano manifolds, see [13]. Let X be a Fano manifold and D ⊂ X a smooth anti-canonical divisor, it is known that for small values of β there is a KE metric on X with cone angle 2πβ along D. The question is to understand the behavior of these metrics as β increases in manifolds which are not K-stable. We refer to [13] and [37] for a discussion when X is the complex projective plane blown-up at one and two points.
There is a well-known formula for the energy of an ALE gravitational instanton. If Γ is a finite subgroup of SU (2) acting freely on S 3 and g is a Ricci-flat metric on M asymptotic to the cone over S 3 /Γ, then
(5.5) E(g) = χ(M ) − 1 |Γ| ,
where |Γ| is the order of Γ. Note that 1/|Γ| is the volume ratio Vol(S 3 /Γ)/Vol(S 3 ). Let now g RF be a Ricciflat Kähler metric on C 2 with cone angle 2πβ along the smooth complex curve C ⊂ C 2 . Let the tangent cone at infinity be the cone over the spherical metric on the 3-sphere g with cone singularities. Under suitable assumptions on the regularity and asymptotic behavior of g RF it is reasonable to expect that the energy is given by a formula which mixes 5.4 and 5.5 (see [12])
(5.6) E(g RF ) = 1 + (β − 1)χ(C) − Vol(g) 2π 2 .
The number 1 is the Euler characteristic of a ball. As we mentioned before, the volume ratio ν = Vol(g)/2π 2 measures how bad the singularity of the limit W is at p. The energy of the bubbles at p are bigger as ν is smaller.
On the other hand; a straight-forward application of the Bishop-Gromov volume monotonicity formula gives us a lower bound on the volume ratio, see [32]. If we assume that the curves D i all lie in the linear system H 0 (L), then
(5.7) ν ≥ 1 9 (c 1 (X) − (1 − β)c 1 (L)) 2 .
This inequality can be used to rule out, for example, the degeneration of KE metrics in CP 2 with cone angle 2πβ along smooth cubics to a cubic with a cuspidal point {w 2 = z 3 }. Indeed at such a singular point the tangent cone should be C γ × C with γ = 2β − 1 when β > 5/6, so that ν = 2β − 1. Replacing this into 5.7 we get 2β − 1 ≥ β 2 , which holds only when β = 1. Line Arrangements. Consider a collection of lines L 1 , . . . , L k in CP 2 . An r-tuple point is a point where r lines of the arrangement meet, we denote by t r the number of r-tuple points. Since any two lines meet at exactly one point we get the identity
k(k − 1) 2 = r≥2 t r r(r − 1) 2 .
The arrangement is said to have the Hirzebruch property if k = 3n, n ≥ 2 and each line intersects the others at exactly n + 1 points. Such arrangements where considered by Hirzebruch [20], in a construction of compact quotients of the unit ball as ramified covers of the projective plane. It was shown by Panov in [34] that there is a polyhedral Kähler metric with cone angle β = n−1 n along any of these arrangements, which is unique up to scale. The only known examples of arrangements which satisfy the Hirzebruch property so far are associated with unitary reflection groups of U (3), there are two infinite families and five exceptional cases. The simplest example of these arrangements, A 0 (2), consists of the extended sides of a triangle together with its three bisectrices.
Let P 0 be a homogeneous polynomial of degree k = 3n such that {P 0 = 0} ⊂ CP 2 is a line arrangement L 1 , . . . , L k with the Hirzebruch property and let g 0 be the polyhedral Kähler metric with cone angle β = n−1 n along the arrangement. Let C = {P = 0} ⊂ CP 2 for > 0 be a family of smooth curves of degree k which converge to the arrangement as → 0 and let g be the Ricci-flat metric in the projective plane with cone angle 2πβ along the curve C . One might expect that, under suitable hypothesis, the metrics g converge to g 0 in the Gromov-Hausdorff sense as → 0. Write E for the energy of the metrics g , which can be computed from 5.4 and the degree-genus formula. Let E r be the energy of a Ricci-flat metric on C 2 with cone angle 2πβ along a smooth curve of degree r with r different asymptotic lines given by equation 5.6. Since the metric g 0 is flat, and therefore it has 0 energy, one would expect the identity
(5.8) E = r t r E r
in the absence of bubble tree phenomena. It follows from straightforward computations that equation 5.8 holds for all the listed arrangements:
Moduli Spaces.
In general lines we can say that if we fix β ∈ (0, 1) and consider pairs (X, D) of a complex manifold together with a smooth divisor which satisfy some fixed numerical data, then the Gromov-Hausdorff compactification M GH β of the moduli space of KE metrics on X with cone angle 2πβ along D is expected to agree with a suitable algebraic compactification. Of course these compactifications depend on the parameter β but, since they agree on the open subset of smooth divisors, all of them are birrationaly equivalent. More precisely, there should be a discrete sequence of cone angles 0 < . . . < β 2 < β 1 < β 0 = 1 such that the spaces M It is shown in [32] that the GIT compactication of cubic surfaces in CP 3 corresponds to the Gromov-Hausdorff compactification of the space of KE metrics on del Pezzo surfaces of degree three. On the other hand; Garcia-Gallardo [17] used GIT to construct, for each t ∈ [0, 1] ∩ Q, a compactification of the moduli space of cubic surfaces together with anticanonical divisors. It is expected that these agree with the Gromov-Hausdorff compactifications of the space of corresponding KEcs metrics with cone angle β = 1 − t, see [17].
A somewhat different case is that of smooth curves of degree n ≥ 3 in CP 2 . It is known that for (n − 3)/n < β < 1 there is a KEcs, unique up to scale, on the projective plane with cone angle 2πβ along a given smooth curve of degree n. The set A of all these curves in the projective plane modulo projective transformations has the structure of an affine algebraic variety and a natural GIT compactification A GIT = P(Sym n C 3 )//SL(3, C).
It is expected that for β sufficiently close to 1 it holds that M GH β ∼ = A GIT . We illustrate these ideas in the particular cases of n = 3 and n = 4. n = 3: Elliptic curves in CP 2 . A ∼ = C and there is only one algebraic compactification obtained by adding a single point. In the GIT compactification, A GIT = P(Sym 3 C 3 )//SL(3, C) ∼ = CP 1 , this extra point is represented by the polystable curve C 0 = {x 0 x 1 x 2 = 0} ⊂ CP 2 . On the other hand, if we fix 0 < β < 1, there is an explicit KE (indeed constant holomorphic sectional curvature) metric g 0 with cone angle 2πβ along C 0 obtained as a Kähler quotient of (C * β ) 3 by the S 1 action e iθ (x 1 , x 2 , x 3 ) = (e iθ x 1 , e iθ x 2 , e iθ x 3 ); if β = 1/k then g 0 is the push forward of the Fubini-Study metric under the map [x 0 , x 1 , x 2 ] → [x k 0 , x k 1 , x k 2 ]. The curve C 0 has three ordinary double point singularities and the tangent cone of g 0 at any of these points is C β × C β . We expect that M GH β = M β ∪ {g 0 }. We relate this picture to the blow-up phenomena discussed in the previous section. Let C = {x 0 x 1 x 2 − (x 3 0 + x 3 1 + x 3 2 ) = 0}. These are smooth pair-wise non-isomorphic elliptic curves for small > 0. Set g to be the corresponding metrics in M β . We expect that g → g 0 in the Gromov-Hausdorff sense as → 0. Take coordinates centered at p = [0, 0, 1] given by (u, v) → [u, v, 1], so that C = {uv = (u 3 + v 3 + 1)}. Write u = √ z and v = √ w so that C = {zw = 3/2 z 3 + 3/2 w 3 + 1}. In the coordinates (z, w) the curves C converge to C = {zw = 1} as → 0. By means of the Gibbons-Hawking ansatz, Donaldson [13] constructed a Ricci-flat Kähler metric g RF on C 2 invariant under the S 1 -action e iθ (z, w) = (e iθ z, e −iθ w) with cone angle 2πβ along C = {zw = 1} and tangent cone at infinity C β × C β . We expect that (CP 2 , λ g , p) → (C 2 , g RF , 0) in the pointed Gromov-Hausdorff sense, where λ is a fixed constant multiple of |Riem(g )|(p). The same discussion applies if p is replaced [1, 0, 0] or [0, 1, 0]. Donaldson computed the Riemann curvature tensor of g RF and it is not hard from here to compute the energy of the metric to obtain E(g RF ) = 1 − β 2 (which agrees with 5.6 since χ(C) = 0 and Vol(g) = 2π 2 β 2 ). On the other hand, by 5.4, E(g ) = χ(CP 2 ) + (β − 1)χ(C ) = 3. It is easy to show that E(g 0 ) = 3β 2 . Our speculations are then compatible with the fact that E(g ) − E(g 0 ) = 3 − 3β 2 = 3(1 − β 2 ) = 3E(g RF ).
The limiting metric g 0 has less energy than the metrics in the family g and the energy lost is due to the formation of three bubbles g RF at the double points of C 0 . n = 4: Genus 3 curves in CP 2 . The affine variety parameterizing smooth quartic curves modulo projective transformations has complex dimension six, dim C A = 6. The geometric invariant theory for quartic curves is well-understood: The stable points of A GIT parametrize quartic curves with at worst singularities of type A 1 or A 2 ; the polystable points form a 1-dimensional family which parametrizes either the double conic Y 1 = {Q 2 = 0} ⊂ CP 2 , where Q = x 2 2 + x 0 x 1 , or a union of two reduced conics that are tangential at two points and at least one of them is smooth Y λ = {P λ = 0} where P λ = (λ 1 x 2 2 + x 0 x 1 )(λ 2 x 2 2 + x 0 x 1 ) with λ = [λ 1 , λ 2 ] ∈ CP 1 \ { [1,1]}. Note that [λ 1 , λ 2 ] parametrizes the same curve as [λ 2 , λ 1 ]; when λ is 0 = [0, 1] or ∞ = [1, 0] the curve Y 0 = Y ∞ is referred as the ox, otherwise it is called a cateye (the names are suggested by the graphs of the curves). A cateye has two A 3 singularities (also known as tacnodes); the ox has two tacnodes and one A 1 singularity.
Set A 1/2 to be the blow-up of A GIT at the double conic. If E ⊂ A 1/2 denotes the exceptional divisor, then E is identified with the space of hyperelliptic Riemann surfaces of genus 3 or equivalently the GIT quotient of the space of eight unordered points in the Riemann sphere by the action of Möbius transformations E ∼ = P(Sym 8 C 2 )//SL(2, C). There is a classical dichotomy for Riemann surfaces of genus 3: Either the bi-canonical map defines an embedding of the curve in the projective plane or the canonical map is a degree two map to the projective line branched at eight points (the hyperelliptic case). Therefore A 1/2 is a compactification of the space of genus 3 Riemann surfaces.
On the other hand, the anti-canonical map of any degree two Del Pezzo surface X defines a 2-sheeted covering of the projective plane branched along a smooth quartic curve and vice-versa; the deck transformation σ is known as the Geiser involution. There is a KE metric on X -unique up to scale-which is necessarily invariant under σ; the push-forward of this metric to the projective plane is a KE metric with cone angle π (β = 1/2) along the quartic. It is shown in [32] that M GH 1/2 ∼ = A 1/2 , in the sense that the natural map which sends a Kähler metric to its parallel complex structure defines a homeomorphism. The curve singularities which appear at limit spaces are either of type A 1 , A 2 or A 3 , and we have discussed in detail their tangent cones -for β = 1/2-in Subsection 4.5.
Fix now β sufficiently close to 1 and let 1 − γ = 2(1 − β). For γ > 1/4 there is a unique KE metric g 0 in 2πc 1 (CP 2 ) with positive Ricci curvature and cone angle 2πγ along the conic C 0 = {Q = 0} (see [27]). Let F be a generic polynomial of degree 4 and let C = {Q 2 − F } = 0. Write Z = {F = 0}, so that for a typical F the intersection Z ∩ C 0 consists of 8 distinct points p 1 , . . . , p 8 . For small and non-zero the curve C is smooth; orthogonal projection to C 0 is an 'approximately' holomorphic double cover from C to C 0 , branched over the points {p 1 , . . . p 8 }. Let g be the metric on M β corresponding to C ; we expect that g → g 0 in the Gromov-Hausdorff sense as → 0. The divisor E in M We say a few words on the blow-up limits that might arise in this situation. Let C = {w = z 2 } ⊂ C 2 ; there should be a Ricci-flat metric g RF with cone angle 2πβ along C asymptotic to the cone C γ × C. The energy of g RF should be given by 5.6 : E(g RF ) = 1 + (β − 1) − γ = 1 − β.
We expect that if we re-scale g around small balls centered at any of the points p 1 , . . . p 8 we get the metric g RF in the limit as → 0. We know what is the energy of the metrics g and g 0 : E(g ) = 3 + (β − 1)χ(C ) = 3 + (β − 1)(−4) = 7 − 4β E(g 0 ) = 3 + (γ − 1)χ(C 0 ) = 3 + (2β − 2)2 = 4β − 1.
Our speculation is in agreement with the fact that E(g ) − E(g 0 ) = 8(1 − β) = 8E(g RF ).
When Z ∩ C 0 consists of less than eight points, we might expect to see a bubble tree phenomena at the multiple points of the intersection.
d ≥ 3 and β 1 = β 2 if d = 2.Proposition 1. There is a unique Kähler cone metric g F on C 2 with apex at 0 such that (1) Its Reeb vector field generates the circle action e it (z, w) = (e it c z, e it c w) for some constant c > 0.
Figure 1 .
11 − γ = 2(1 − β). When two cone singularities collide the complements of the angles add.
. The existence results for KEcs parallel the well-known theorems regarding the Calabi conjecture in the case of smooth metrics: i) If c 1 (X) − (1 − β)c 1 ([D]) < 0, then there exists a unique KEcs with λ = −1. ii) If c 1 (X) − (1 − β)c 1 ([D]) = 0; then, in any Kähler class on X, there exists a unique KEcs with λ = 0. iii) If c 1 (X) − (1 − β)c 1 ([D]) > 0 and the twisted K-energy is proper, then there exists a unique (up to biholomorphisms which preserve D) KEcs with λ = 1. • Regularity Theory ([21],
Example 7 .
7Let g be the spherical metric on CP 1 with cone angles 2π(1/q) at [0, 1], 2π(1/p) at [1, 0] and 2πβ at[1,1]. The triple (1/q, 1/p, β) satisfies the Troyanov condition 1.1 if and only if 1 − 1/p − 1/q < β < 1 − 1/p + 1/q. Write g for the lift of (1/4)g to S 3 by means of the Hopf map. Setg = Ψ * g. It is then clear thatg is a spherical metric on S 3 , invariant under the S 1 -action 3.16 and has a cone singularity of angle 2πβ along the (p, q)-torus knot {
Lemma 2 .
2Set d ≥ 3. Let β 1 , . . . , β d−2 ∈ (0, 1) and β d−1 , β d ∈ (0, 1]. Assume that the numbers β 1 , . . . , β d−2 , (1/q)β d−1 , (1/p)β d satisfy the Troyanov condition 1.1. Let g be the spherical metric on CP 1 with cone angles 2π(1/q)β d−1 at [0, 1], 2π(1/p)β d at [1, 0] and 2πβ j at s j for 1 ≤ j ≤ d − 2. Then there is a spherical metricg on S 3 with cone angles 2πβ d−1 and 2πβ d at the Hopf circles which fiber over [0, 1] and [1, 0] respectively; and cone angles 2πβ j at the (p, q)-torus knots which fiber over the points s j , 1 ≤ j ≤ d − 2. The metricg is invariant under the S 1 -action 3.16 and S (p,q) : (S 3 ,g) → (CP 1 , (1/4)g) is a Riemannian submersion with geodesic fibers of constant length 2πc at the non-exceptional orbits. Furthermore, the metricg is unique up to a S 1 -equivariant isometry which induces the identity on CP 1 4. Flat Kähler cone metrics on C 2 4.1. Proof of Propostion 1. Let L j = {l j (z, w) = 0} for j = 1, . . . , d be d distinct complex lines through the origin in C 2 with defining linear equations l j .
Claim 3 .
3The functions(4.3) z = ξw, w = c 1/2c r 1/c e u/2c e it give a biholomorphism between (0, ∞) × C \ {a 1 , . . . , a d−1 } × S 1 with the complex structure I and C 2 \ L. If we write Ω = ( √ 2) −1 dzdw, then
, in total there are 19 primitive groups in U (2) up to conjugation. Denote by C m the cyclic group of order m of scalar matrices. Define T = C 12 • T , O = C 24 • O and I = C 60 • I; the circle in the notation means the subgroup generated in U (2) by the respective cyclic and binary groups. It turns out that these are primitive unitary groups and thatĜ ⊂ T (O or I) if and only if G ⊂ T (O or I). • There are 4 groups of tetrahedral type, all of them are subgroups of T . The order of T is 12 2 = 144. The invariant polynomials can be taken to be f 3 and t 2 where f = 2 − x 1 x 5 2 . The map Φ = (f 3 , t 2 ) induces a map in the projective line Φ(η) = H(t), where η = x 1 /x 2 , t = η 2 and
•+ x 12 2 .
2There are 8 groups of octahedral type, all of them are subgroups of O. The order of O is 24 2 = 576. The invariant polynomials can be taken to be h 3 and t 2 where h = The map Φ = (h 3 , t 2 ) induces a map in the projective line Φ(η) = H(t), where η = x 1 /x 2 , t = η 4 and
isomorphic if β ∈ (β i , β i−1 ) and ifβ ∈ (β i+1 , β i ) then M of a suitable blow-up. The situation can be compared with that of variations of GIT quotients.
A
point p ∈ E is represented by a homogeneous degree 8 polynomial in two variables p = [f 8 ] and {f 8 = 0} ⊂ CP 1 is a configuration of (at most eight) points with multiplicities, p is stable if each point has multiplicity at most three and there is only one polystable point which corresponds to the configuration of two points with multiplicity four. The point p parametrizes the degree 8 curve Y p = {x 2 2 = f 8 (x 0 , x 1 )} ⊂ P(1, 1, 4). The curve Y p does not go through the 1 4 (1, 1)-orbifold point [0, 0, 1] ∈ P(1,1, 4); if p is stable the curve Y p has at worst singularities of type A 1 and A 2 (which correspond to points in the configuration of multiplicity 2 and 3 respectively), the polystable point parametrizes the curve {x 2
)
Ricci curvature bounds and Einstein metrics on compact manifolds. Michael T Anderson, J. Amer. Math. Soc. 23Michael T. Anderson. Ricci curvature bounds and Einstein metrics on compact manifolds. J. Amer. Math. Soc., 2(3):455- 490, 1989.
Curvature, cones and characteristic numbers. Michael Atiyah, Claude Lebrun, Math. Proc. Cambridge Philos. Soc. 1551Michael Atiyah and Claude Lebrun. Curvature, cones and characteristic numbers. Math. Proc. Cambridge Philos. Soc., 155(1):13-37, 2013.
Ricci flat Kähler metrics with edge singularities. Simon Brendle, Int. Math. Res. Not. IMRN. 24Simon Brendle. Ricci flat Kähler metrics with edge singularities. Int. Math. Res. Not. IMRN, (24):5727-5766, 2013.
Plane algebraic curves. Egbert Brieskorn, Horst Knörrer, Modern Birkhäuser Classics. Birkhäuser/Springer Basel AG. John Stillwell2012] reprint of the 1986 editionEgbert Brieskorn and Horst Knörrer. Plane algebraic curves. Modern Birkhäuser Classics. Birkhäuser/Springer Basel AG, Basel, 1986. Translated from the German original by John Stillwell, [2012] reprint of the 1986 edition.
A course in metric geometry. Dmitri Burago, Yuri Burago, Sergei Ivanov, Graduate Studies in Mathematics. 33American Mathematical SocietyDmitri Burago, Yuri Burago, and Sergei Ivanov. A course in metric geometry, volume 33 of Graduate Studies in Mathe- matics. American Mathematical Society, Providence, RI, 2001.
Degeneration of Einstein metrics and metrics with special holonomy. Jeff Cheeger, Surveys in differential geometry. Boston, MA; VIII; Somerville, MAInt. PressVIIIJeff Cheeger. Degeneration of Einstein metrics and metrics with special holonomy. In Surveys in differential geometry, Vol. VIII (Boston, MA, 2002), Surv. Differ. Geom., VIII, pages 29-73. Int. Press, Somerville, MA, 2003.
Kähler-Einstein metrics and stability. Xiuxiong Chen, Simon Donaldson, Song Sun, Int. Math. Res. Not. IMRN. 8Xiuxiong Chen, Simon Donaldson, and Song Sun. Kähler-Einstein metrics and stability. Int. Math. Res. Not. IMRN, (8):2119-2125, 2014.
Kähler-Einstein metrics on Fano manifolds. I: Approximation of metrics with cone singularities. Xiuxiong Chen, Simon Donaldson, Song Sun, J. Amer. Math. Soc. 281Xiuxiong Chen, Simon Donaldson, and Song Sun. Kähler-Einstein metrics on Fano manifolds. I: Approximation of metrics with cone singularities. J. Amer. Math. Soc., 28(1):183-197, 2015.
Kähler-Einstein metrics on Fano manifolds. II: Limits with cone angle less than 2π. Xiuxiong Chen, Simon Donaldson, Song Sun, J. Amer. Math. Soc. 281Xiuxiong Chen, Simon Donaldson, and Song Sun. Kähler-Einstein metrics on Fano manifolds. II: Limits with cone angle less than 2π. J. Amer. Math. Soc., 28(1):199-234, 2015.
On the regularity problem of complex Monge-Ampere equations with conical singularities. Xiuxiong Chen, Yuanqi Wang, arXiv:1405.1021arXiv preprintXiuxiong Chen and Yuanqi Wang. On the regularity problem of complex Monge-Ampere equations with conical singulari- ties. arXiv preprint arXiv:1405.1021, 2014.
. J Ronan, Hans-Joachim Conlon, Hein, Asymptotically conical Calabi-Yau manifolds, i. Duke Mathematical Journal. 16215Ronan J Conlon, Hans-Joachim Hein, et al. Asymptotically conical Calabi-Yau manifolds, i. Duke Mathematical Journal, 162(15):2855-2902, 2013.
Asymptotically conical Ricci-flat Kähler metrics with cone singularities. Borbon Martin De, Imperial College, London, UKPhD thesisMartin de Borbon. Asymptotically conical Ricci-flat Kähler metrics with cone singularities. PhD thesis, Imperial College, London, UK, 2015.
Kähler metrics with cone singularities along a divisor. S K Donaldson, Essays in mathematics and its applications. HeidelbergSpringerS. K. Donaldson. Kähler metrics with cone singularities along a divisor. In Essays in mathematics and its applications, pages 49-79. Springer, Heidelberg, 2012.
Gromov-Hausdorff limits of Kähler manifolds and algebraic geometry, ii. Simon Donaldson, Song Sun, to appear in J. Differential GeomSimon Donaldson and Song Sun. Gromov-Hausdorff limits of Kähler manifolds and algebraic geometry, ii. to appear in J. Differential Geom.
Gromov-Hausdorff limits of Kähler manifolds and algebraic geometry. Simon Donaldson, Song Sun, Acta Math. 2131Simon Donaldson and Song Sun. Gromov-Hausdorff limits of Kähler manifolds and algebraic geometry. Acta Math., 213(1):63-106, 2014.
Singular Kähler-Einstein metrics. Philippe Eyssidieux, Vincent Guedj, Ahmed Zeriahi, J. Amer. Math. Soc. 223Philippe Eyssidieux, Vincent Guedj, and Ahmed Zeriahi. Singular Kähler-Einstein metrics. J. Amer. Math. Soc., 22(3):607- 639, 2009.
Patricio Gallardo, Jesus Martinez-Garcia, arXiv:1607.03697Moduli of cubic surfaces and their anticanonical divisors. arXiv preprintPatricio Gallardo and Jesus Martinez-Garcia. Moduli of cubic surfaces and their anticanonical divisors. arXiv preprint arXiv:1607.03697, 2016.
Conic singularities metrics with prescribed Ricci curvature: general cone angles along normal crossing divisors. Henri Guenancia, Mihai Pȃun, J. Differential Geom. 1031Henri Guenancia and Mihai Pȃun. Conic singularities metrics with prescribed Ricci curvature: general cone angles along normal crossing divisors. J. Differential Geom., 103(1):15-57, 2016.
Hans-Joachim Hein, Song Sun, arXiv:1607.02940Calabi-Yau manifolds with isolated conical singularities. arXiv preprintHans-Joachim Hein and Song Sun. Calabi-Yau manifolds with isolated conical singularities. arXiv preprint arXiv:1607.02940, 2016.
Algebraic surfaces with extreme Chern numbers (report on the thesis of Th. Friedrich Hirzebruch, Russian Mathematical Surveys. 404Friedrich Hirzebruch. Algebraic surfaces with extreme Chern numbers (report on the thesis of Th. Höfer, Bonn 1984). Russian Mathematical Surveys, 40(4):135-145, 1985.
Kähler-Einstein metrics with edge singularities. Thalia Jeffres, Rafe Mazzeo, Yanir A Rubinstein, Ann. of Math. 1832Thalia Jeffres, Rafe Mazzeo, and Yanir A. Rubinstein. Kähler-Einstein metrics with edge singularities. Ann. of Math. (2), 183(1):95-176, 2016.
Compact manifolds with special holonomy. D Dominic, Joyce, Oxford University PressDominic D Joyce. Compact manifolds with special holonomy. Oxford University Press, 2000.
Singularities of pairs. János Kollár, Algebraic geometry-Santa Cruz. Providence, RIAmer. Math. Soc62János Kollár. Singularities of pairs. In Algebraic geometry-Santa Cruz 1995, volume 62 of Proc. Sympos. Pure Math., pages 221-287. Amer. Math. Soc., Providence, RI, 1997.
Gauge theory for embedded surfaces. P B Kronheimer, T S Mrowka, I. Topology. 324P. B. Kronheimer and T. S. Mrowka. Gauge theory for embedded surfaces. I. Topology, 32(4):773-826, 1993.
Unitary reflection groups. Gustav I Lehrer, Donald E Taylor, Australian Mathematical Society Lecture Series. 20Cambridge University PressGustav I. Lehrer and Donald E. Taylor. Unitary reflection groups, volume 20 of Australian Mathematical Society Lecture Series. Cambridge University Press, Cambridge, 2009.
Remarks on logarithmic K-stability. Chi Li, Commun. Contemp. Math. 17217Chi Li. Remarks on logarithmic K-stability. Commun. Contemp. Math., 17(2):1450020, 17, 2015.
Conical Kähler-Einstein metrics revisited. Chi Li, Song Sun, Comm. Math. Phys. 3313Chi Li and Song Sun. Conical Kähler-Einstein metrics revisited. Comm. Math. Phys., 331(3):927-973, 2014.
Riemannian geometry of conical singular sets. Dong Zhong, Zhongmin Liu, Shen, Ann. Global Anal. Geom. 161Zhong-Dong Liu and Zhongmin Shen. Riemannian geometry of conical singular sets. Ann. Global Anal. Geom., 16(1):29-62, 1998.
Liouville equation and spherical convex polytopes. Feng Luo, Gang Tian, Proc. Amer. Math. Soc. 1164Feng Luo and Gang Tian. Liouville equation and spherical convex polytopes. Proc. Amer. Math. Soc., 116(4):1119-1129, 1992.
Singular points of complex hypersurfaces. John Milnor, Annals of Mathematics Studies. 61University of Tokyo PressJohn Milnor. Singular points of complex hypersurfaces. Annals of Mathematics Studies, No. 61. Princeton University Press, Princeton, N.J.; University of Tokyo Press, Tokyo, 1968.
Spherical Metrics with Conical Singularities on a 2-Sphere: Angle Constraints. Gabriele Mondello, Dmitri Panov, Int. Math. Res. Not. IMRN. 16Gabriele Mondello and Dmitri Panov. Spherical Metrics with Conical Singularities on a 2-Sphere: Angle Constraints. Int. Math. Res. Not. IMRN, (16):4937-4995, 2016.
Compact moduli spaces of del Pezzo surfaces and Kähler-Einstein metrics. Yuji Odaka, Cristiano Spotti, Song Sun, J. Differential Geom. 1021Yuji Odaka, Cristiano Spotti, and Song Sun. Compact moduli spaces of del Pezzo surfaces and Kähler-Einstein metrics. J. Differential Geom., 102(1):127-172, 2016.
Seifert manifolds. Peter Orlik, Lecture Notes in Mathematics. 291Springer-VerlagPeter Orlik. Seifert manifolds. Lecture Notes in Mathematics, Vol. 291. Springer-Verlag, Berlin-New York, 1972.
. Dmitri Panov, Polyhedral Kähler manifolds. Geom. Topol. 134Dmitri Panov. Polyhedral Kähler manifolds. Geom. Topol., 13(4):2205-2252, 2009.
The greatest Ricci lower bound, conical Einstein metrics and Chern number inequality. Jian Song, Xiaowei Wang, Geom. Topol. 201Jian Song and Xiaowei Wang. The greatest Ricci lower bound, conical Einstein metrics and Chern number inequality. Geom. Topol., 20(1):49-102, 2016.
Sasaki-Einstein manifolds. James Sparks, Surveys in differential geometry. Volume XVI. Geometry of special holonomy and related topics. Somerville, MAInt. Press16James Sparks. Sasaki-Einstein manifolds. In Surveys in differential geometry. Volume XVI. Geometry of special holonomy and related topics, volume 16 of Surv. Differ. Geom., pages 265-324. Int. Press, Somerville, MA, 2011.
. Gábor Székelyhidi, A remark on conical Kähler-Einstein metrics. Math. Res. Lett. 203Gábor Székelyhidi. A remark on conical Kähler-Einstein metrics. Math. Res. Lett., 20(3):581-590, 2013.
Kähler-Einstein metrics on algebraic manifolds. Gang Tian, Transcendental methods in algebraic geometry. BerlinSpringer1646Gang Tian. Kähler-Einstein metrics on algebraic manifolds. In Transcendental methods in algebraic geometry (Cetraro, 1994), volume 1646 of Lecture Notes in Math., pages 143-185. Springer, Berlin, 1996.
Metrics of constant curvature on a sphere with two conical singularities. Marc Troyanov, Differential geometry. Peñíscola; BerlinSpringer1410Marc Troyanov. Metrics of constant curvature on a sphere with two conical singularities. In Differential geometry (Peñíscola, 1988), volume 1410 of Lecture Notes in Math., pages 296-306. Springer, Berlin, 1989.
Prescribing curvature on compact surfaces with conical singularities. Marc Troyanov, Trans. Amer. Math. Soc. 3242Marc Troyanov. Prescribing curvature on compact surfaces with conical singularities. Trans. Amer. Math. Soc., 324(2):793- 821, 1991.
Expansion formula for complex Monge-Ampere equation along cone singularities. Hao Yin, Kai Zheng, arXiv:1609.03111arXiv preprintHao Yin and Kai Zheng. Expansion formula for complex Monge-Ampere equation along cone singularities. arXiv preprint arXiv:1609.03111, 2016.
|
[] |
[
"Visualization and orbital-free parametrization of the large-Z scaling of the kinetic energy density of atoms",
"Visualization and orbital-free parametrization of the large-Z scaling of the kinetic energy density of atoms"
] |
[
"Antonio C Cancio \nDepartment of Physics and Astronomy\nBall State University\n47306MuncieIndiana\n",
"Jeremy J Redd \nDepartment of Physics\nUtah Valley University\n84058 *OremUtah\n"
] |
[
"Department of Physics and Astronomy\nBall State University\n47306MuncieIndiana",
"Department of Physics\nUtah Valley University\n84058 *OremUtah"
] |
[] |
The scaling of neutral atoms to large Z, combining periodicity with a gradual trend to homogeneity, is a fundamental probe of density functional theory, one that has driven recent advances in understanding both the kinetic and exchange-correlation energies. Although research focus is normally upon the scaling of integrated energies, insights can also be gained from energy densities. We visualize the scaling of the positive-definite kinetic energy density (KED) in closed-shell atoms, in comparison to invariant quantities based upon the gradient and Laplacian of the density. We notice a striking fit of the KED within the core of any atom to a gradient expansion using both the gradient and the Laplacian, appearing as an asymptotic limit around which the KED oscillates. The gradient expansion is qualitatively different from that derived from first principles for a slowly-varying electron gas and is correlated with a nonzero Pauli contribution to the KED near the nucleus. We propose and explore orbital-free meta-GGA models for the kinetic energy to describe these features, with some success, but the effects of quantum oscillations in the inner shells of atoms makes a complete parametrization difficult. We discuss implications for improved orbital-free description of molecular properties.
|
10.1080/00268976.2016.1246757
|
[
"https://arxiv.org/pdf/1605.07751v1.pdf"
] | 118,599,434 |
1605.07751
|
c149c4bc4fd5a31114a86011594e11cd109207b8
|
Visualization and orbital-free parametrization of the large-Z scaling of the kinetic energy density of atoms
Antonio C Cancio
Department of Physics and Astronomy
Ball State University
47306MuncieIndiana
Jeremy J Redd
Department of Physics
Utah Valley University
84058 *OremUtah
Visualization and orbital-free parametrization of the large-Z scaling of the kinetic energy density of atoms
Density Functional TheoryKinetic Energy Densityorbital-free DFTmeta-GGAThomas-Fermi Theory
The scaling of neutral atoms to large Z, combining periodicity with a gradual trend to homogeneity, is a fundamental probe of density functional theory, one that has driven recent advances in understanding both the kinetic and exchange-correlation energies. Although research focus is normally upon the scaling of integrated energies, insights can also be gained from energy densities. We visualize the scaling of the positive-definite kinetic energy density (KED) in closed-shell atoms, in comparison to invariant quantities based upon the gradient and Laplacian of the density. We notice a striking fit of the KED within the core of any atom to a gradient expansion using both the gradient and the Laplacian, appearing as an asymptotic limit around which the KED oscillates. The gradient expansion is qualitatively different from that derived from first principles for a slowly-varying electron gas and is correlated with a nonzero Pauli contribution to the KED near the nucleus. We propose and explore orbital-free meta-GGA models for the kinetic energy to describe these features, with some success, but the effects of quantum oscillations in the inner shells of atoms makes a complete parametrization difficult. We discuss implications for improved orbital-free description of molecular properties.
I. INTRODUCTION
The basic insight of density functional theory (DFT) [1] is that the ground state energy and related quantities are functionals of the particle density alone. Historically, however, functionals have nearly always been implemented in the Kohn-Sham approach which uses auxiliary orbitals derived from the solution to an equivalent effective noninteracting problem. Orbitals prove very important to describe features in the kinetic energy such the effect of the quantum oscillations of the shell structure of atoms. However, the project of developing a true orbital-free DFT, using the density only to obtain energies and electronic structure remains a challenge. This challenge has taken on new impetus with the demand for applications in which the use of orbitals is prohibitive [2]. Such situations include the simulation of mesoscale systems [3] and of warm dense matter [4,5] matter at high density, at temperatures roughly of the fermi temperature, where a macroscopic portion of electrons are thermally excited. Given robust orbital-free models of exchange and correlation in the form of generalized gradient approximations (GGA's) [6][7][8], there remains an ongoing need for developing improved orbitalfree models of the Kohn-Sham kinetic energy (KE).
Much work in this area [9][10][11][12][13][14] has centered on development of GGA's for the KE -corrections to the Thomas Fermi approximation [15,16] constructed from the local density and its gradient. These include nonempirical or semi-nonempirical models based on the satisfaction of exact constraints [9,11]. A common but not always ac- * [email protected] curate [17] design principle is that of conjointness with exchange [18] -the development of forms that can be adapted to describe both exchange and kinetic energies. A second area of research is the construction of nonlocal or two-point functionals, which incorporate quantum oscillations such as Friedel oscillations and shell structure at the cost of a nonlocal dependence upon density [19][20][21][22][23] These have had success for very large solid-state applications [24,25], but rely upon material-dependent functionals.
The goal of this paper is to bring together two disparate themes in density functional theory and bring them to bear upon the problem of orbital-free functionals.
The first is as old as density functional theory itselfthe large-Z limit of the neutral atom. As one proceeds down the periodic table, increasing both nuclear charge Z and electron number N to maintain charge neutrality, and allowing both to increase indefinitely, one gradually turns off the effects of inhomogeneity on the quantum many-body system in a quantifiable way. The infinite-Z limit for both density and energy is given exactly [26,27] by the Thomas-Fermi model of the atom [15,16,28],a semiclassical solution that is essentially a completely orbital-free local density approximation. The general trend of corrections to this picture as Z is brought down to realistic values has also long been known [29][30][31], leading to a series expansion in 1/Z 1/3 . These corrections include gradient corrections to the kinetic energy [32,33] as well as the introduction of exchange and correlation corrections [34][35][36] as both vanish relative to the KE as Z → ∞. At even lower values of Z, atoms like third-row transition metals provide open challenges to traditional DFT approaches like GGA's and meta-GGA's (function-als that use a third variable, either the Laplacian of the density or the KED in addition to the local density and its gradient [36,37].) This scaling thus serves as a natural, disciplined way to study the gradual introduction of inhomogeneity into density functionals. Essentially, to ascend to large-Z is tantamount to descending the Jacob's Ladder of functionals from complicated orbitaldependent ones to the local-density approximation.
However, it is only fairly recently that the implications of this scaling behavior have made their way explicitly into density functional development. Work has been done in improving the understanding of the connection between the large-Z scaling of atomic energies and density functional theory [35][36][37][38][39][40] and along the way, developing new functionals for the kinetic energy [11,39,41,42], exchange [11,38,42], and most recently, correlation [36,37].
The other theme in DFT development that we will explore exploits the modeling of the Kohn-Sham kinetic energy density -the contribution to the KS KE on a point-by-point basis. The KED is an important measure of electronic structure first of all in a qualitative sense -as the basis for the electron localization factor or ELF [43,44] that identifies regions of electron localization such as atomic shells and covalent bonds from regions with localized electrons. It is also the key ingredient in meta-GGA's [45,46] -where the ELF's ability to diagnose different types of bonds can be used to construct functionals that work well for a large variety of systems. Recent work on the orbital-free modeling of the KED, and thus implicitly the ELF [47][48][49][50][51][52][53] demonstrates that the gradient and Laplacian of the density taken together can be used to construct effective meta-GGA functionals of the KE density. This approach has the promise of bringing the insights into electronic structure gained from the ELF to the context of OFDFT development.
This paper is an attempt to combine these two complementary approaches. Although the KE density of atoms has been the subject of numerous studies [40,48,54,55], little has been done to visualize and analyze their scaling properties as Z → ∞. An issue of interest is how different regions of the atom scale with Z. There should be a contrast between the interior of the atom where the shell structure that characterizes finite atoms tends to the smooth Thomas-Fermi limit and the near-nuclear core and classically forbidden tunnelling region far from the nucleus, both of which never converge to the Thomas-Fermi limit. Particularly, the universal limiting behavior of the KED in these regions could offer important guidance for functional development as they provide important boundary conditions that those functionals should try to meet. A related question is why the gradient expansion works as well as it does [56] for these systems despite the significant departures from homogeneity in the valence shell and at the nucleus.
In this paper we discuss preliminary results of the visualization of scaling behavior of the gradient and Laplacian of the density as a function of Z, and of the Kohn-Sham KED as a function of these quantities. We show that there are at least two types of scaling behavior as Z tends to ∞, a highly nonanalytic behavior describing the near-nuclear region, and the other describable by an empirical gradient expansion in the rest of the atom. Notably, the empirical gradient expansion is different from that canonically derivable from the slowly-varying electron gas, and thus from that used in most GGA and meta-GGA functionals. This difference may have significant impact on the ability of these functionals to predict binding in molecules. The rest of this paper is organized as follows: Sec. II describes the theoretical background of the paper -the density functional theory of the kinetic energy density, and in particular in the context of the atomic problem. Sec. III covers the basic methodology used for calculations. Sec. IV details the chief results of visualization, and their implications for the total energy of atoms and Sec. V presents a discussion of these results and our conclusions.
II. THEORY
The kinetic energy density in Kohn-Sham theory is given by
τ KS = 1 2 occup i f i |∇φ i | 2 ,(1)
where φ i are Kohn-Sham orbitals from which the electron density is constructed:
n = occup i f i |φ i | 2 ,(2)
and f i is the occupation number of each orbital. Integration over all space gives the kinetic energy
T KS [n] = τ KS (r)d 3 r.(3)
A generalization in terms of the spin density and spindecomposed KED's may be constructed by restricting the sums in the equations above to a specific spin species. An alternative KED, completely equivalent to Eq. (1), is
τ KS = − 1 2 occup i f i φ * i ∇ 2 φ i = τ KS − 1 4 ∇ 2 n.(4)
Note that the difference is the divergence of a vector function, whose integral is zero, leaving the integrated KE unchanged. Eq. (1) however is conveniently positivedefinite.
A key principle is that τ KS , like any other property of an electronic system, is a functional of the ground state electron density n. At the same time, this functional relationship can only be approximated. A "semilocal" approximation to T KS [n] defines τ KS at some position r in terms of the local density, density gradient and possibly its Laplacian:
T approx
KS
[n] = τ approx [n(r), ∇n(r), ∇ 2 n(r)]d 3 r (5) [9,10,[12][13][14]53]. Another approach, not considered here, involves nonlocal functionals with integrals over two spatial variables [19][20][21][22]56].
The lowest level of semilocal functional -the equivalent to the LDA in XC functionals -is the Thomas-Fermi model,
τ T F = 3 10 k 2 F n ∼ n 5/3 ,(6)
with k F = (3π 2 n) 1/3 the fermi wavevector of the homogeneous electron gas. At a next level of approximation is the gradient expansion (GEA): [57,58] τ GEA = τ T F + 1 72
|∇n| 2 /n + 1 6 ∇ 2 n + O(∇ 4 ).(7)
Terms up to fourth [59] and sixth order [60] in this expansion are known.
As is the case with exchange, it is natural to recast the derivatives of the density into scale-invariant quantities, here defined as
p = |∇n| 2 4k 2 F n ,(8)q = ∇ 2 n 4k 2 F n .(9)
Then the GEA becomes
τ GEA = 1 + 5 27 p + 20 9 q τ T F ,(10)
and any generalization of it that preserves the proper scaling of T KS under the uniform scaling of the charge density is constructable from an enhancement factor F S (p, q) such that
τ semilocal = F S (p, q)τ T F .(11)
Note however even higher order derivatives than ∇ 2 n may be considered [61], but may prove impractical in applications. The enhancement factor F S for the kinetic energy plays a role equivalent to that for exchange, F X , with E X ∼ F X e LDA X being the equivalent construction. The similarities are strong enough to posit a "conjointness conjecture" [18], that the two enhancement factors F S and F X are nearly identical.
For the KED, the most crucial issue for large inhomogeneity p, q 1 is the limit of the one-or two-particle spin-singlet system. In this case the Kohn-Sham KED reduces to the the von Weizsäcker [62] functional:
τ vW = 1 8 |∇n| 2 n ,(12)
the exact result for a system of N particles obeying Bose statistics and having the density n(r). The KED needed to create n(r) with fermions, the energetic cost of Pauli exclusion, is given by the difference between the Kohn-Sham and von Weizsäcker KED's τ P auli = τ KS − τ vW (13) from which one can define a Pauli enhancement factor:
F P auli = τ − τ vW τ T F ,(14)
which must hold true for both τ KS or τ approx . The Pauli enhancement factor is positive definite:
F P auli ≥ 0(15)
because of the positive cost of Pauli exclusion [63]. Moreover, the response of the fermionic system with respect to changes in density must be larger than that of the Bose system: the Pauli potential δT P auli (r)/δn(r) ≥ 0 [64].
Notably, this von Weizsäcker lower bound [Eq. (15)] is not respected by the GEA. The enhancement factor for τ vW is F vW S = 5p/3 which gives it a coefficient to p that is nine times larger than that of the GEA. The resulting Pauli enhancement factor is
F GEA P auli = 1 + 20 9 q − 40 27 p(16)
For q = 0, (or alternately, dropping the term proportional to q as is done in GGA's) τ GEA < τ vW for the relatively modest value of p = 27/40. We note here that the gradient expansion correction that is linear in q integrates identically to zero. Thus q will only affect energy expectations to fourth order in the gradient expansion. The simplest semilocal functionals normally then are constructed as generalized functions of the remaining variable p -generalized-gradient approximations or GGA's. These can draw upon a long experience in developing GGA's for exchange and are easy to implement. The problem is that, even more so than with exchange, functionals at this level are not flexible enought to be competitive with orbital-dependent models. Two recent GGA take complementary approaches to address this situation. The APBE [11], based on the conjointness conjecture [18] takes nearly identical forms for the exchange and kinetic energy enhancement factor, and fit both to the large-Z expansion of atoms to a high degree of accuracy. This takes advantage of a powerful tool -the scaling of atoms to high-Z is an instance of Lieb-Simon scaling [26] in which the effects of inhomogeneity in a finite system are turned off in a controlled fashion. Quite possibly this is an ideal way to construct a GGA [11,35,36]. The cost of conjointness however, is to break the von Weizsäcker bound for any finite-Z atom. The VT84F [9] imposes both the slowly-varying gas limit for small p and is limited at large p to F P auli (p) > 0 which guarantees the von Weizsäcker bound. Possibly more importantly, it guarantees the positive-definite bound on the Pauli potential. It has generally however a poor prediction of total KE's [47].
A natural way around the problem of conflicting constraints is to put the extra degree of freedom q back into the functional, that is, to create a meta-GGA. An instructive attempt is the Perdew-Constantin mGGA [53], which was developed explicitly to model the kinetic energy density, as a replacement for the KED in meta-GGA-level XC functionals. It starts from a conventional meta-GGA exact up to fourth order in the gradient expansion (the GE4-M) to describe the slowly varying limit. In order to impose the von Weizsäcker bound in the limit of strong electron localization, it interpolates between this functional and the von Weizsäcker form using a nonanalytic but smooth function of the difference between the enhancement factors z = F GE4−M −F vW S . Despite an attractive design philosophy, the mGGA has deficiencies as a practical tool for OFDFT [10,41,47]. However, it is of value as a an approach for thinking about OFDFT -building from the basis of the kinetic energy density which is an important tool for visualization and quantitative modeling of electronic structure.
Along these lines, perhaps the most physically significant role played by the KED in a meta-GGA is as a measure of electron localization [45,46,65]. This is done by taking the ratio of the Pauli contribution to the Kohn-Sham KED to that of the Thomas-Fermi model,
α = τ KS − τ vW τ T F .(17)
In regions where the KE density is determined predominantly by a single molecular orbital, τ KS approaches τ vW and α → 0. This limit describes single covalent bonds and lone pairs, and generally situations in which the selfinteraction errors in the GGA and LDA are most acute. The homogeneous electron gas, and presumably systems formed by metallic bonds, corresponds to τ KS = τ T F , τ vW ∼ 0 and α ∼ 1. Between atomic shells and at low density one finds α 1, tending to ∞ for an exponentially decaying density if τ P auli vanishes more slowly than n 5/3 . This limit can be used to detect weak bonds such as van-der-Waals interactions and define interstitial regions in semiconductor systems. The information on the local environment can then be used to customize gradient approximations for specific subsystems [46]. The electron localization factor or ELF [43,44] is often used in visualization as it converts α into a function with a range between zero and one:
ELF = 1 1 + α 2 .(18)
Note that the different contexts developing meta-GGA's and OFDFT's hides an important fact: F P auli = α for the true Kohn-Sham enhancement factor. Thus developing an OFDFT is essentially the same problem for both kinetic and exchange-correlation energies -that of modeling an orbital-free ELF. In recent work [47] we proposed to revise the mGGA following two simple points: imposing the von Weizsäcker lower bound τ KS > τ vW and relying on the second-order gradient expansion otherwise. This satisfies the constraints for the two main limiting cases of the KEDthat of delocalized electrons with slowly-varying density and that of strong electron localization, and otherwise keeps physically reasonable behavior for classically forbidden regions with high inhomogeneity. We defined a measure of electron localization z as
z = F GEA S − F vW S − 1 = 20 9 q − 40 27 p,(19)
which in a sense can be thought of as an orbital-free expression for α. A suitable nonanalytic transition between F GEA S and F vW S may then be used to impose the von Weizsäcker bound, which is otherwise broken by the GEA at z ≤ −1. Adapting a form recently used to construct a ∇ 2 n-based exchange function [66] results in the enhancement factor
F mGGArev S = F vW S + 1 + zI(z),(20)
where
I(z) = {1 − exp −(1/|z| α ) [1 − H(z)]} 1/α(21)
and H is the Heaviside step function. The interpolation function I(z) is one for z > 0 and tends monotonically to 1/|z| as z → −∞, thus enforcing F mGGArev S → F vW S in this limit. Otherwise the functional mimics the GEA, which returns the slowly varying electron gas for z ∼ 0, and has the correct scaling behavior for z → +∞ for a density exponentially decaying to zero. The differences between this approach and the mGGA are firstly the simplification of the functional used in the slowlyvarying limit, a gradient expansion rather than a meta-GGA. Secondly the form of interpolator between slowlyvarying and von Weizsäcker limits obeys a constraint that τ is greater than both τ GEA and τ vW while the mGGA interpolates in between the two limits. This difference proves to be helpful for modeling the KED of covalent bonds [47]. The factor α is used to control the rate at which the interpolating function switches between GEA and vW, with the leading correction to F vW
S being lim z→−∞ F mGGArev S − F vW S ∼ 1 z α .(22)
A factor of α = 1 was considered in the original formulation; however this changes the value of the cusp in the kinetic energy density (dτ KS (r)/dr) r=0 in the vicinity of a nucleus. For hydrogen, this is can be shown to be exactly −2Z/a 0 , but because the definition involves taking two derivatives of the particle density, this value is not universal. For small atoms it is identical to the cusp condition of the von Weizsäcker potential, but as discussed in the next section, it is altered for larger atoms by the occupation of p-orbitals which have a non-zero contribution to the KED at the nucleus. A safe choice may be α = 4 which does not contribute to the cusp of the KED and produces a Pauli potential that is zero at the nucleus. This is presumably the optimal choice for small atoms, like H where the Pauli KED should be small relative to the von Weizsäcker KED, but possibly not for larger atoms, as the Pauli contribution has to eventually become the dominant piece of the puzzle. Finally we note that this approach is not completely new -earlier work of Yang et al. [55] suggested a functional τ = max(τ vW , τ GEA ), essentially the α → ∞ limit of the current model.
A. The Kohn-Sham kinetic energy density for atoms
The radial Kohn-Sham equation for an atom is
E nl u nl (r) = 1 2 − d 2 dr 2 + l(l + 1) r 2 − Z r u nl (r),(23)
where u nl (r) = rR nl (r), n is the principle quantum number, l is the angular momentum quantum number, and R nl (r) is the radial wave function and r the radial distance from the nucleus. The KS density for a closed-shell, spherical atom is given by
n(r) = L l=0 N n=1 f nl |R nl (r)| 2 ,(24)
where f nl is the occupation number for the n, l subshell. This is strictly correct only for atoms with filled subshells, and we shall focus on two cases, the noble gases and alkali earths. The kinetic energy density for a spherical atom is
τ KS = 1 2 L l=0 N n=0 f n,l dR n,l (r) dr 2 + l(l + 1)R n,l (r) 2 r 2 ,(25)
and the total kinetic energy is
T ks = ∞ 0 τ ks (r)d 3 r.(26)
1. Scaling to large Z An elegant and systematic way of measuring the quality of approximate density functional theories is test their behavior for neutral atoms as the nuclear charge increases. In the case of hydrogen and helium, representing a limit of extreme electron localization, the KS functional reduces to the von Weizsäcker result. But as the nuclear charge increases, the core electrons of the atom behave more and more like a homogeneous electron gas. Thus, for an orbital-free density functional model to predict the kinetic energies of any atom, it must be able to predict accurately the transition between the homogeneity of extended systems to the extreme inhomogeneity of small atoms and molecules. This would then make it a good candidate to replace the KS model for a variety of systems.
In the limit of large Z, the electronic structure of atoms tends exactly [26] to the Thomas-Fermi limit with total energy given by E = −0.768745Z 7/3 . The density tends nearly everywhere to a universal smooth form, with quantum oscillations due to shell structure decreasing with amplitude as the number of shells increases [39]. The peak radial probability density occurs for r = a T F /Z 1/3 with a close to a B ; with this definition of atomic radius, the atomic radius scales as Z −1/3 . The Thomas-Fermi limit describes most accurately the core of the atom where the density is constructed from many interlacing orbitals and approaches a degenerate fermi gas. It must break down for the innermost shells since the Thomas-Fermi density unphysically diverges to infinity at r = 0; it also breaks down at large r because the semiclassical approximation used to derive the Thomas-Fermi result cannot not describe classically forbidden regions. (In the latter case, the large-r limit of the density decays as 1/r 6 rather than exponentially.)
The Thomas-Fermi energy is but the leading term in a general asymptotic expansion in Z [39]. For the kinetic energy this expansion is known for at least three terms: Finally we note that this asymptotic trend is an example of Lieb-Simon scaling [26,27] where the potential is scaled by an arbitrary strength ζ, distance is scaled by 1/ζ 1/3 , and the number of particles in the system is also scaled as ζ so that a charge-neutral system stays chargeneutral. This scaling procedure is defined as a generalization of the scaling which occurs as one goes down a column of the periodic table. As it defines the scaling of this perhaps most fundamental of all constructs in chemistry, it should be much more revealing than that of the normal uniform scaling to high density at fixed particle number.
T [Z] = AZ 7/3 + BZ 2 + CZ 5/3 + · · · .(27)
For the purpose of this paper, we look for three regimes of density, the large-r asymptotic region r > Z 7/6 , the core of the atom r ∼ Z −1/3 and the near-nuclear region r < Z −2/3 . We should expect a convergence to the Thomas-Fermi limit, and perhaps the gradient expansion for intermediate distances, but not for the other two regimes.
Limits
A number of facts are known about the KED in the limit of small and large r, and have recently been characterized in some detail [40]. As this region defines the leading error in the Thomas-Fermi picture, getting it right will be important to obtaining good kinetic energies. Although the density and thus KE density in the core of the atom tends to a finite value for r ∼ a 0 /Z or less, the TF charge density diverges to infinity and the real charge density can never be treated by this approach. However, given the vanishingly small role of exchange and correlation in this limit, one may gain insight by modeling the density with orbitals taken from the hydrogen atom.
The charge density in this limit is given strictly by the contribution of l = 0 orbitals. It has the cusp form [67] for small r:
lim r→0 n(r) → n(0)(1 − 2Zr/a 0 )(28)
with n(0) ∼ Z 3 /a 3 0 . This fixes the r = 0 value of the von Weizsäcker KED:
lim r→0 τ vW (r) → 1 2 Z 2 a 2 0 n(0).(29)
Taking the atomic KS KED defined above, we decompose into components from orbitals of specific angular momentum l and sum over all shells. For closed-shell atoms, we obtain
τ KS = l n τ nl .(30)
At the nucleus, r = 0, only the two lowest angular momentum components contribute: l = 0 and l = 1. The l = 0 component of the KED is given by
τ 0 = n f n0 |dR n0 /dr| 2 = τ vW .(31)
The density at the nucleus n(0) is constructed solely from the s orbitals and the probability density of each of these is of the form n ns (0)(1 − 2Zr/a 0 ). In other words, each orbital separately has the limiting cusp condition for the density defined above. This is enough to show that τ 0 is identical to the von Weizsäcker model result τ vW . The l = 1 term comes from both non-zero centrifugal energy contribution to the KED and the square of the derivative of the radial orbital R n1 . It contributes a nonzero Pauli contribution to the KED at the nucleus for any atom with at least one occupied p orbital [54]. The resulting formula is
τ 1 = n f n1 3 |R n1 /r| 2 = τ P auli .(32)
As a result, we should expect to find that the r = 0 limit of the KED and more specifically, the Pauli KED, to have a nontrivial dependence on the l = 1 occupation number and implicitly perhaps upon Z. It is worth noting that it has often been the assumption [38] that τ KS → τ vW in this limit. However the true non-zero value of the Pauli KED has long been known for atoms, and was part of the rationale behind the construction of functionals using the electron number N about an atom as an explicit functional variable [54]. The feature has recently been formally characterized and generalized to all central-potential problems [40], but it has yet to become part of an effective density functional. Finally, the large-r limit of τ KS follows from taking the contribution of the HOMO shell to the KED as r → ∞. For a spherically symmetric atom (a closed shell atom or an open shell atom with uniform fractional occupancy), the result is [40]
$\lim_{r \to \infty} \tau_{KS}(r) = \tau_{vW}(r) + \frac{l_H (l_H + 1)}{2 r^{2}}\, n(r)$   (33)
where l_H is the angular momentum quantum number of the HOMO shell, and the particle density n(r) tends to that of the HOMO shell, n_{n_H, l_H}(r). It is notable that neither |∇n|^2 nor ∇^2 n preserves knowledge of the centrifugal contribution to the KED. A radially symmetric density n(r) is constructible without any reference to the angular components of the Kohn-Sham orbitals, so there is no way to generate terms that depend upon l. Thus we do not expect a good OFDFT model of the Pauli contribution to τ_KS in this limit.
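For later reference, Eq. (33) is straightforward to transcribe; the sketch below assumes the standard von Weizsäcker form τ_vW = |∇n|^2/(8n) for the first term:

```python
import numpy as np

def tau_KS_large_r(r, n, dn, l_H):
    """Large-r limit of the KS KED, Eq. (33): the von Weizsacker term plus the
    centrifugal term of the HOMO shell with angular momentum quantum number l_H.
    n and dn are the density and its radial derivative on the grid r."""
    return dn**2 / (8.0 * n) + l_H * (l_H + 1) / (2.0 * r**2) * n
```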
III. METHODOLOGY
It is difficult to compare OFDFT models by solving them self-consistently. We instead solve the Kohn-Sham equation for a given system and use the resulting density as input to each model. To this end, we use the FHI98PP code [68] to generate Kohn-Sham particle and kinetic energy densities. FHI98PP is an atomic code that computes Kohn-Sham orbitals on a logarithmic grid of potentially arbitrary accuracy at all radii. The formula for generating the grid is r_{i+1} = γ r_i + r_0, with γ = 0.0247. Because the well-known large-Z expansion is nonrelativistic, our calculations are nonrelativistic as well, to allow direct comparison. For simplicity, the local density approximation was used for calculating the exchange-correlation energy. This does not directly enter into the calculation of the kinetic energy or kinetic energy density, but might have some effect on the coefficients of the asymptotic expansion in Z.
To calculate the derivatives needed for calculating the KED and the Laplacian and gradient of the density on the logarithmic grid, we use a Lagrange-interpolation scheme which constructs approximate n-th order polynomials to be differentiated using n + 1 grid points. A subgrid of thirteen points was found to be optimal, after dropping the first and last six points. Simpson's method was used for integrals.
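A minimal sketch of this differentiation strategy (not the FHI98PP implementation; the function name, the use of NumPy's polynomial fit, and the handling of the grid ends are our own choices):

```python
import numpy as np

def local_poly_derivs(r, f, npts=13):
    """First and second derivatives of f(r) on a non-uniform (e.g. logarithmic)
    radial grid, by fitting a degree-(npts-1) polynomial to the npts nearest
    points around each grid point and differentiating it analytically there."""
    r, f = np.asarray(r, float), np.asarray(f, float)
    df, d2f = np.empty_like(f), np.empty_like(f)
    half = npts // 2
    for i in range(len(r)):
        lo = min(max(i - half, 0), len(r) - npts)
        sl = slice(lo, lo + npts)
        # Shift the abscissa to the evaluation point to improve conditioning.
        c = np.polyfit(r[sl] - r[i], f[sl], npts - 1)
        df[i] = np.polyval(np.polyder(c, 1), 0.0)
        d2f[i] = np.polyval(np.polyder(c, 2), 0.0)
    return df, d2f
```

Radial integrals over the same grid can then be evaluated with Simpson's rule, e.g. with scipy.integrate.simpson.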
Numerical and analytical tests to determine the accuracy of the differentiation and integration algorithms are described in Ref. [69]. The error made by using the LDA density in place of the exact density may be assessed by comparing LDA kinetic energies for noble gases with those obtained using the optimized effective potential (OEP) method. These are shown in Table I. Notably, the percent error of the LDA diminishes rapidly for Z > 10, as it becomes asymptotically exact for infinite Z.

IV. RESULTS

A. Visualizing a parameter space

Fig. 1 shows the main players for characterizing the kinetic energy density of a typical atom, Argon. Fig. 1(a) plots the scaled radial density versus scaled radius Z^{1/3} r. The peak of the Thomas-Fermi density, the Z → ∞ limit, occurs at roughly Z^{1/3} r = 0.3 [39], in between the n = 1 and n = 2 shells; the shells oscillate above the TF peak value of ∼ 0.38. Fig. 1(b) shows suitably scaled values of p and q versus scaled radius. As noticed by Bader in the development of the QTAIM [70,71], the Laplacian of the density, proportional to q, is negative (or more reliably, at a local minimum) at the centre of each shell, and is a local maximum in between shells. It tends to −∞ at the nucleus because of the cusp in the electron density and to +∞ far from the atom. The gradient variable p is finite at r = 0 but otherwise shows a similar behavior to q, with q lagging slightly behind it in a way reminiscent of sine and cosine functions.
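The dimensionless variables p and q are defined earlier in the paper and are not reproduced in this section; the sketch below assumes the standard conventions p = |∇n|^2/[4(3π^2)^{2/3} n^{8/3}] and q = ∇^2 n/[4(3π^2)^{2/3} n^{5/3}], which are consistent with the coefficients 0.185 ≈ 5/27 and 2.222 ≈ 20/9 appearing later in Eq. (36):

```python
import numpy as np

def p_and_q(r, n, dn, d2n):
    """Reduced gradient p and reduced Laplacian q for a spherically symmetric
    density n(r), given its radial derivatives dn and d2n (assumed standard
    definitions; see the lead-in text)."""
    k = 4.0 * (3.0 * np.pi**2) ** (2.0 / 3.0)
    lap_n = d2n + 2.0 * dn / r            # Laplacian of a radial function
    p = dn**2 / (k * n ** (8.0 / 3.0))
    q = lap_n / (k * n ** (5.0 / 3.0))
    return p, q
```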
We may gain more insight by plotting q(r) versus p(r), an analog of the phase-space plot dθ(t)/dt versus θ(t) encountered in the study of oscillator dynamics. The results for the first row of the periodic table, from Li through Ne, are shown in Fig. 2 and for the noble gases in Fig. 3. Comparing to Fig. 1(b), we can identify the three pertinent regions of the atom as three distinct features in "phase-space." The classically-forbidden asymptotic region far from the nucleus shows up as a linear tail that extends to positive infinity in both p and q. The region near the nucleus characterized by the cusp in the electron density is the other end of each phase-space "trajectory", where q → −∞ and p is finite and varies little with Z. A system with only one shell, such as He in Fig. 3, transitions from the one region to the other seamlessly. Otherwise there is exactly one loop in p and q for every shell transition. The n = 2 to n = 1 (L to K) shell transition is observable in Fig. 2; close observation of Fig. 3 reveals one loop for Ne, two for Ar, three for Kr, and so on. The largest p and q values occur in the transition between shells, and the smallest at valence-shell peaks. Thus in the mid-region between the two extremes of cusp and asymptote, there is a tendency towards weak relative gradient corrections, p, q ≪ 1; that is, towards the slowly-varying electron gas.
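A minimal matplotlib sketch of one such phase-space trajectory (axis ranges are arbitrary choices; the p and q arrays are those of the previous sketch):

```python
import matplotlib.pyplot as plt

def phase_space_plot(p, q, label=None):
    """Parametric 'phase-space' trajectory q(r) versus p(r) for one atom."""
    fig, ax = plt.subplots()
    ax.plot(p, q, lw=1, label=label)
    ax.set_xlabel('p')
    ax.set_ylabel('q')
    ax.set_xlim(0.0, 2.0)    # the cusp and asymptotic tails run off to large p, q
    ax.set_ylim(-2.0, 2.0)
    if label is not None:
        ax.legend()
    return ax
```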
The trend to infinite Z in this picture is also revealing. The behavior of p and q in the cusp and asymptotic regions is essentially unchanging; there is only a modest shift from the He atom to the largest-Z atom. This may reflect the fact that neither of these two regimes can be adequately described in Thomas-Fermi theory: the charge density is singular at the nucleus and decays as 1/r^6 as r → ∞. One sees, in some sense, a renormalization of the trend described by the Helium atom, that is, of the atomic features of the system furthest from the TF limit. It is in the core shells of the atom, which should eventually trend to the TF limit, that a dependence upon Z is most clearly seen. The trend down the first row, shown in Fig. 2, is of the shell-structure loop transitioning from an exceptionally large range of p and q for the smallest-Z atom slowly towards the p = q = 0 limit. By Neon, the majority of the atom is within the range p, q < 1.
As further shells are added onto the system (Fig. 3), the space taken by any particular transition (L to K, M to L, N to M) consistently shrinks. Interestingly, the second innermost loop, caused by the transition from the M to L shells, rapidly shrinks to the perturbative regime p, |q| ≪ 1; one rapidly reaches the slowly-varying limit for inner shells, as predicted by TF theory. However, the last transition, between K and L, causes a large swing out to higher p just before the trajectory transitions to the nuclear cusp. This may be indicative of the argument behind the Scott correction to the KE (the second term in Eq. 27): that it involves not only the 1s shell, but contributions from the other innermost shells as well [32]. Focusing on the HOMO shell, the trend is less predictable but approaches the slowly-varying limit only very gradually.
B. Parametric visualization of the kinetic energy density
Up to now only the visualization of the space defined by p(r) and q(r) has been discussed. We now include the Pauli enhancement factor of the Kohn-Sham KED, given by Eq. (14), in the third dimension. The result for the noble gases is shown as a scatter plot over the numerical logarithmic grid in Fig. 4(a). This results in a three-dimensional parametric plot similar to the two-dimensional plot in Fig. 3. The view is rotated 30° about the z axis in Fig. 4(a) and 120° in (b). Note that He, shown as violet circles, has zero Pauli KED and thus lies entirely in the F_Pauli = 0 plane. All parametric curves start with a nearly universal behavior, with F_Pauli ∼ 0 near the nuclear cusp, shown as the tail for p ∼ 0 and q < 0. The noble gases show approximately the same behavior for very large r, forming a second nearly universal curve. This, however, shows distinct signs of fanning out and is significantly different from He, or from other atoms, like Be, with no p frontier orbitals.
Most remarkably, it can be seen, especially from (b), that the frontier and core regions of every atom are nearly coplanar. There is a perspective, not too far from that shown in (b), which looks at that plane edge-on, in which the whole parametrized enhancement factor over all noble gases reduces to a simple hockey-stick form. This has several implications. For the observable range of values of p and q, F_Pauli for the noble atoms reduces to nearly a single-valued function of the two variables p and q. While either separately might lack sufficient information to characterize this set of systems, the combination does, and thus an unambiguous orbital-free functional may be constructed. But more than this: over much of its range, F_Pauli reduces to a simple linear function of the two. In terms of density functional theory, the Pauli enhancement factor is in large part that of a second-order gradient expansion. Finally, the region of the parameter space where the F_Pauli data do not fall into a plane is that of the cusp in the density near the nucleus, where a different universal behavior holds. The net result is that both regions can be described by a single parameter, a linear combination of p and q. The determination of this parameter and its use in modifying density functionals is described in the next sections.
C. Gradient Expansion Fits
We now assume that F_Pauli can be projected onto a function defining a plane in (p, q) space. This amounts to a fit to a GEA form:
$F^{GEAloc}_{Pauli} = 1 + z_{loc}$   (34)
with

$z_{loc} = (a \cos\theta)\, p + (a \sin\theta)\, q$   (35)
being an empirical version of the z variable introduced in Eq. (19). This defines a GE valid locally for the KE density, rather than the normal GE, which is derived for the KE. The parameters a and θ can then be determined by a least-squares fit over a suitable range in p and q. Ideally, given that the GEA should be most applicable in the limit Z → ∞, we should take an extrapolation to the largest-Z atom numerically feasible. Such calculations of thousands of electrons are chemically unrealizable but mathematically important for accurately determining limiting cases [11,36]. Secondly, we should limit the range of the fit to values of p, |q| ≪ 1, the range of validity of the gradient expansion.
A preliminary calculation shows that this may not be too important for our purposes. We perform a least-squares fit of F_Pauli to Eqs. (34) and (35) for a given atom over all numerical grid points r_i for which p(r_i) < 0.6 and −0.125 < q(r_i) < 0.6. The results are shown for the alkali earths and noble gases in Fig. 5(a) for a and 5(b) for θ. The results converge very nearly to a constant for both columns after about Z = 50. Taking the last five atoms shown and averaging, we get a = 3.459(13) and θ = 2.1652(13). Taking the data for Uuo (Z = 118) only, and restricting the fit further to p, q < 0.5, we get a = 3.486(26) and θ = 2.1615(28), a near match.
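A sketch of this fit: ordinary linear least squares for the coefficients of p and q over the stated window, converted to the amplitude a and mixing angle θ of Eq. (35). The function name and interface are our own:

```python
import numpy as np

def fit_local_gea(p, q, F_pauli, p_max=0.6, q_min=-0.125, q_max=0.6):
    """Least-squares fit of F_Pauli ~ 1 + (a cos(theta)) p + (a sin(theta)) q
    over grid points restricted to the window used in the text."""
    p, q, F = map(np.asarray, (p, q, F_pauli))
    mask = (p < p_max) & (q > q_min) & (q < q_max)
    A = np.column_stack([p[mask], q[mask]])
    (cp, cq), *_ = np.linalg.lstsq(A, F[mask] - 1.0, rcond=None)
    a = np.hypot(cp, cq)          # amplitude of the correction
    theta = np.arctan2(cq, cp)    # mixing angle between p and q
    return a, theta
```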
One point of interest here is that the values found empirically do not match those of the canonical [57] gradient expansion. The corresponding values of a and θ obtainable from Eq. (16), a = 2.671 and θ = 2.159, are shown as straight lines in Fig. 5. Apparently θ, which measures the relative mixture of p and q in the gradient-expansion correction to the KED, is unchanged to within statistical error. However, the magnitude a of the GE correction converges quickly with Z to a value 30% larger than the predicted correction.
That is to say, the actual gradient expansion of the KE density, within the core region of the atom where this expansion is locally valid, is not the gradient expansion of the integrated KE.
The implications of this difference are quite dramatic. Converting these parameters back to the expression Eq. (11) for the KED, and then to an expression for the total KE, we get the following results for the canonical GEA and for the empirical local GEA fit to Uuo:
$T_{GEA} = \int d^{3}r\, (1 + 0.185\,p + 2.222\,q)\, \tau_{TF}$   (36)

$T_{GEAloc} = \int d^{3}r\, (1 - 0.275\,p + 2.895\,q)\, \tau_{TF}$   (37)
Given that, for a pure GEA functional, the GE term linear in q integrates to zero, the net GE contribution to the kinetic energy from the local GEA fit is of the opposite sign from that of the canonical GE. As we shall see further on, it is actually the wrong sign, giving a GE expression for the energy that is worse than that of the Thomas-Fermi model. It is also interesting that this is not the first evidence of such a qualitative discrepancy between the gradient expansions of the KE and KED. The recent analytic gradient expansion of the KED of the Airy gas [52], a system that asymptotically approaches an electron gas with a constant density gradient, also produces a negative coefficient for p. In this case, the kernel for the KE integral is F_S = 1 − 0.185 p + 3.333 q, which shows a similar change from the standard gradient expansion as that of the atom. However, quantitatively, these numbers are far outside the error bars of our statistical fits for the atom: the asymptotic limit of the KED of the neutral atom clearly tends to a different gradient expansion than that of the Airy gas. Nevertheless, it is reasonable to say that the gradient expansion about the local density approximation limit of a sloped system, either atom or Airy gas, is fundamentally different from that about the homogeneous electron gas.
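The statement that the q term of a pure GEA integrates to zero follows because τ_TF ∝ n^{5/3} while q carries a factor n^{-5/3}, so τ_TF q is proportional to ∇^2 n alone (under the standard definitions assumed above); the volume integral of a Laplacian vanishes for a localized density. A quick numerical illustration on a radial grid, using a simple exponential as a stand-in density:

```python
import numpy as np

r = np.linspace(1e-4, 40.0, 20001)     # radial grid (uniform, for simplicity)
n = np.exp(-r)                         # a localized model density

dn = np.gradient(n, r)
d2n = np.gradient(dn, r)
lap_n = d2n + 2.0 * dn / r             # Laplacian of a spherically symmetric n(r)

dr = r[1] - r[0]
# tau_TF * q is proportional to lap_n; its volume integral is ~0 ...
print(np.sum(4.0 * np.pi * r**2 * lap_n) * dr)
# ... even though the density itself integrates to a finite particle number (8*pi here).
print(np.sum(4.0 * np.pi * r**2 * n) * dr)
```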
D. Single-variable projection of the KED
We have seen that the behavior of F_Pauli for atoms projected upon the parameter space defined by p(r) and q(r) is capable of a great deal of simplification. Given the hypothesis that we might have a successful two-parameter parametrization F_Pauli[p(r), q(r)], we find through Fig. 4 that we essentially have only a one-parameter space, F_Pauli[z_loc(r)], with z_loc given by Eq. (35). The result is shown in Figs. 6 and 7.
These plots provide a wealth of detail that illuminates several key features of the Kohn-Sham KED of atoms. Most important of all is the visualization of how the KED scales to high Z. A single-shell system such as He has zero Pauli KED and is in this sense infinitely far from the asymptotic limit. But any two-shell system already captures much of the sense of what happens at large Z, albeit with obvious shell structure; for example, F_Pauli for Ne (blue crosses) loops around but does not land on the GEA line. Here Be and Li, not shown, are worst cases, as one might expect, while Ne is already fairly close to the limit. As more and more shells are added, F_Pauli continues to loop around the large-Z asymptote defined by the GE line, but in ever tighter loops that rapidly approach the asymptote. There is a hint of curvature for Uuo that might imply a fourth-order gradient correction, but a very small one, as is the case for the standard gradient expansion.
Second, we see that the two regions that cannot be captured by Thomas-Fermi theory each demonstrate difficulties for the asymptotic model. First of all, the region r → ∞ correlates with the loss of a well-defined single-valued function F_Pauli(z). That is, for any point in the core region of an atom corresponding to some z_loc and some value of F_Pauli, there will be a point in the asymptotic region with the same value of z_loc but requiring a value of F_Pauli up to 50% smaller. Moreover, every individual atom seems to require a unique form of F_Pauli(z) in the asymptotic region. Though the tails seem to converge to some finite value as Z increases, this convergence is also very slow.
This behavior may be an indication of the problem facing OFDFT in the asymptotic region discussed in Sec. II A 2. In this regime, the Pauli KED has a contribution from the HOMO shell [Eq. (33)] that depends upon the angular momentum quantum number of the shell. It therefore cannot be predicted from the total particle density alone. At the same time, it should be noted that the worst behavior occurs only for very large r. As seen in Fig. 1(a), the Pauli enhancement factor of the HOMO shell tends to be depressed relative to p and q and hovers around its minimum value for a fair distance. This is also seen as the clumping of a large number of grid points in Fig. 6 at the very last local minimum in F_Pauli before it trends off to ∞. The impressive near-universal form seen in Fig. 4 is a reflection of the gradual onset of non-universal behavior.
A second difficulty occurs for the smallest radii, within the innermost shell of each atom, as shown in Fig. 7. Here the Pauli contribution to the KED is non-zero and measures the contribution of p orbitals to the KED. Systems like He, Li and Be with no p orbitals have exactly zero Pauli KED in this limit, as seen for He in this plot. For atoms with p orbitals, the result depends sensitively on how many shells are occupied, with the smallest F_Pauli for Neon and the largest for Uuo. There is a definite limiting case for infinite Z [40], which is approached rather slowly. The functional form of F_Pauli for these systems is linear in r at the nucleus: the enhancement factor has a finite cusp. This translates to a Pauli correction of the form F_0 (1 + A_0/z_loc), where F_0 and A_0 necessarily depend upon the number of electrons. Although this seems to be a very small effect, with F_0 on the order of 0.02 for the largest physical atoms, it occurs in a limit with extremely high density and has a measurable impact upon integrated kinetic energies, as we shall see in the next section.
E. Modified functionals for the KED
We find two insights for developing OFDFT from the perspective of the local kinetic energy density. First of all, rather than the canonical gradient expansion, which is derived from an expression for the integrated kinetic energy of the slowly varying gas, we should start from the observed gradient expansion for the local kinetic energy density. In our mGGArev model, this is achieved by simply replacing the argument z in Eq. (21) with z_loc of Eq. (35). This produces a new family of possible functionals (mGGAlocα) with different values of the parameter α that controls the rate at which the transition between gradient expansion and von Weizsäcker model occurs for strong electron localization. Analogous corrections can be made for the mGGA.
The second insight stems from the deviation of the KED from the gradient expansion near the nucleus. The nuclear region is a particular point of interest for models of the local KED such as the mGGA and the related meta-GGAs we have constructed. The transition to large negative values of the gradient-expansion correction that occurs in this region breaks the basic constraint on the KED that F_Pauli > 0; in fact, here F^{GEA}_Pauli → −∞. This region is thus necessarily a probe of the transition from the slowly-varying electron gas characterized by the GE to the localized-electron limit dominated by the von Weizsäcker KED. Exactly how the Kohn-Sham KED responds in this situation is a clue as to how to model this transition.
The impacts of the varying strategies for doing this are shown in Fig. 8. This plots enhancement factors F_S for the special case of zero density gradient versus the Laplacian-based variable q. This limit is a fair approximation of the nuclear region, where p is small (< 0.2) and nearly constant while q tends to −∞, as shown in Fig. 3. In this case, τ_vW = 0, so that the lower bound it imposes is easy to visualize: F_S = F_Pauli > 0.
The canonical gradient expansion is shown to fourth order (dots), very nearly a straight line in q passing through the Thomas-Fermi limit F_S = 1 at q = 0. It very quickly goes below zero for negative q. The empirical local GEA (wider-spaced dots) exaggerates this behavior, given its steeper slope in q, evident in Eq. (37). The mGGA imposes F_Pauli > 0 by a sharp cutoff that interpolates between GEA and von Weizsäcker functionals in such a way as to be identically zero for negative q beyond the GEA crossover point. The mGGArev [Eq. (21), with α = 1] is shown as a long-dashed line. This enforces F_Pauli > F^{GEA}_Pauli, which is beneficial for molecular bonding [47]. The short-dashed and dot-dashed lines show the mGGAloc with α = 1 and α = 4, which adhere to the local GEA outside the transition region.
Two points may be learned from this comparison. First of all, the functional form of the mGGArev is closer to reality than that of the mGGA. As seen in Fig. 7, the KS KED tapers off like the blade of a hockey stick as q, and thus z, → −∞, and certainly lacks the mGGA's abrupt transition to zero. In that sense, the hypothesis upon which the mGGArev is based [47], that F_KS > F_GEA as q → −∞, does hold here, as long as one uses the empirical local GEA and not the canonical GEA.
However, as we shall see next, the mGGA is highly accurate for the total kinetic energy of atoms, while the mGGArev and its relative the mGGAloc1 give large overestimates. While having the correct qualitative shape, they both overestimate the contribution to the integrated KE from this region. Only the mGGAloc4 approaches the quality of the mGGA. The mGGA's success thus seems to come from a clever weaving from the wrong gradient-expansion limit to the wrong approach to the von Weizsäcker limit, in such a way as to cancel out the errors from each region. Getting a better local KED does not guarantee a better kinetic energy, thus meriting serious attention to the integrated quantity.

F. Integrated Kinetic Energy

Figures 9(a) and (b) show the integrated kinetic energy of the noble-gas atoms for many of the OFDFT models discussed in this paper, scaled by the Thomas-Fermi scaling factor Z^{7/3} and plotted as a function of Z^{-1/3}. As discussed in Sec. II, the kinetic energy can be expressed as an expansion in powers of Z^{-1/3}, with the infinite-Z limit of 0.768745 Z^{7/3} predicted by Thomas-Fermi theory. Also shown is a fit of the trend with Z for each functional to the asymptotic form [Eq. (27)]. The Thomas-Fermi limit is assumed for each case, and the next two coefficients B and C are determined by linear regression over the noble gases excluding He. The fit coefficients and errors are shown in Table II.
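A sketch of this regression (the function and its defaults are ours). As example input we use the LDA kinetic energies of Table I; the published fits in Table II also include atoms heavier than Rn, so the numbers will differ slightly:

```python
import numpy as np

def fit_Z_expansion(Z, T, A=0.768745):
    """Fit T(Z) = A Z^(7/3) + B Z^2 + C Z^(5/3), Eq. (27), with the
    Thomas-Fermi coefficient A held fixed. Returns (B, C)."""
    Z, T = np.asarray(Z, float), np.asarray(T, float)
    x = Z ** (-1.0 / 3.0)
    y = T / Z ** (7.0 / 3.0) - A          # residual relative to the TF limit
    M = np.column_stack([x, x**2])        # linear regression in Z^(-1/3), Z^(-2/3)
    (B, C), *_ = np.linalg.lstsq(M, y, rcond=None)
    return B, C

# Noble gases excluding He, with the T_LDA values of Table I.
Z_vals = [10, 18, 36, 54, 86]
T_vals = [127.737, 524.967, 2747.81, 7225.09, 21854.7]
print(fit_Z_expansion(Z_vals, T_vals))    # close to the KS/LDA row of Table II
```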
The slight disagreement between the theoretical and calculated asymptotic coefficients for the KS/LDA kinetic energy in Table II is within two standard deviations of the fit and thus seems reasonable. The errors due to the use of the LDA rather than the exact KS density are probably much smaller. Beyond this, it is possible to distinguish two classes of functionals. The canonical GEA obtained from the slowly-varying electron gas is already exceptionally close to the KS value, and more sophisticated models like the mGGA struggle to improve upon, or even do as well as, it over all Z. Nevertheless, both it and the APBEK [11] are constructed in part through a fit to the large-Z limit. As a result both have excellent estimates of the asymptotic coefficients B and C and are nearly flawless for larger Z.
On the other hand, the mGGArev4 [Eq. (21) with α = 4, labelled rev4 on the plot] is a serious regression, and the VT84F, whose asymptotic coefficients are shown in Table II, is worse. These have been constructed with constraint choices that emphasize the von Weizsäcker lower bound on the KED. In the mGGArev4 and in the VT84F, this is done by imposing the implicit constraint that τ > max(τ_vW, τ_GEA), the former by choice and the latter by necessity, given the restricted flexibility of the GGA form. This leads to an overestimate of the total energy, because the GEA is significantly less than the von Weizsäcker KED, especially near the nucleus. Removing this unphysical behavior must cause a net increase in the total kinetic energy, whereas the GEA is already almost perfectly accurate. In contrast, the mGGA interpolates between the slowly-varying and von Weizsäcker limits with a function that incorrectly obeys τ_vW < τ < τ_GEA, thus taking advantage of a natural cancellation of errors. Both of these effects are clearly seen in Fig. 10, which shows the radial KE density of the 1s shell of Neon. The GEA (dotted line) has a large negative error at the cusp, but an equally large error at the peak of the shell. In transitioning from the GEA to the vW, the mGGA preserves this error cancellation. The mGGArev4 (dot-dashed line) fixes the error near the cusp, but its constraint choice prevents it from fixing the error at the shell peak. The final key to the story is the impact of the empirical local GEA we find for the KED. The impact of its deviation from the standard GEA is to lower the local KED with respect to it everywhere in the system. This produces a total KE that is much less even than the TF energy, as seen in Fig. 9. At the same time, this lowering of the KED works naturally with the raising of energy caused by the imposition of the constraint τ > τ_vW near the nucleus and the further constraint τ > τ_GEAloc that we have observed throughout the 1s shell. The effect of combining this constraint with the local gradient expansion is shown in Fig. 9(b). While using the canonical gradient expansion with these constraints leads to the serious overestimate of the mGGArev4, the right form of the local gradient expansion combined with this constraint (labelled loc4) almost cancels this error.
Unfortunately, the overall quality of the asymptotic trend of the mGGAloc4 with Z is poor, as shown especially in Table II. This is the downside of the good cancellation of errors seen in the GEA: removal of one error-causing effect leads to poorer results unless the companion effect causing the cancellation is treated equally well. The problem here is the failure to account for the Pauli contribution from p orbitals in the near-nuclear region, which has a measurable effect on the quality of the answer. Thus a model for this effect is necessary, if only to understand the physics of the atom.
G. Empirical model of near-nucleus region
In the previous section we have taken as a reference model the revised mGGA of Eq. (21) with a transition parameter of α = 4. This is a reasonable choice: it ensures that both the Pauli contribution to the KED and its potential δτ_Pauli(r)/δn(r) are zero near the nucleus. This ensures that for systems like He, for which there is no Pauli KED, or for small Z in general, the near-nuclear region at least is handled reasonably. (It is improbable that a functional based upon the slowly-varying electron gas can produce zero τ_Pauli everywhere.) However, this choice of interpolating factor does not account for the non-zero contribution of p orbitals to the Pauli KED at the nucleus. Unfortunately, we have seen (Table II and Fig. 9(b)) that our best empirical fit for the core and asymptotic regions gives a poor estimate of the integrated KEs of atoms. This indicates that the error made in ignoring the Pauli contribution to the KED near the nucleus is a measurable effect. Though the Pauli enhancement factor in this region is small (Fig. 7), it results in a significant contribution to the KE, given the enormous densities of large-Z atoms. And unfortunately, we need a correction that is different for every row of the periodic table, each of which adds a new p orbital to the system and an additional contribution to the Pauli KED. Thus a correction to the von Weizsäcker KED is required for this region that is somehow dependent upon the electron number N.
As a first step in this direction, we build upon the N-dependent model developed by Acharya et al. [54]. Their work noted that an excellent model of the KED for atoms could be obtained by first taking a slowly-varying model of the KED, such as the TF or GEA model, for all shells but the innermost K shell. Then, for the K shell, the model is replaced by the von Weizsäcker KED:
$\tau[n] = \tau_{0}[n] - \tau_{0}[n_{K}] + \tau_{vW}[n_{K}]$   (38)
with τ_0 the KED of the initial slowly-varying model and n_K the density of the K shell. Note that at the nucleus this model essentially restricts τ_0 to the description of the small Pauli contribution to the KED due to p orbitals, and assumes that τ_vW contributes negligibly elsewhere.
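A minimal sketch of Eq. (38), assuming the standard Hartree-atomic-unit forms τ_TF = (3/10)(3π^2)^{2/3} n^{5/3} and τ_vW = |∇n|^2/(8n); the choice of slowly-varying model τ_0 is left open (Thomas-Fermi by default here), and the K-shell density n_K is supplied by the caller:

```python
import numpy as np

C_TF = 0.3 * (3.0 * np.pi**2) ** (2.0 / 3.0)   # Thomas-Fermi constant, Hartree a.u.

def tau_TF(n, dn=None):
    """Thomas-Fermi KED, c_TF n^(5/3); dn is accepted but unused."""
    return C_TF * n ** (5.0 / 3.0)

def tau_vW(n, dn):
    """von Weizsacker KED, |grad n|^2 / (8 n)."""
    return dn**2 / (8.0 * n)

def acharya_ked(n, dn, n_K, dn_K, tau0=tau_TF):
    """Eq. (38): replace the K-shell part of a slowly-varying model KED
    (Thomas-Fermi by default) with the von Weizsacker KED of the K shell."""
    return tau0(n, dn) - tau0(n_K, dn_K) + tau_vW(n_K, dn_K)
```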
With reasonable assumptions about the nature of the K-shell density n_K, one gets an N-dependent model for the KE:
$T[n] = T_{vW}[n] + \frac{T_{0}[n]}{1 + c/N^{1/3}}$   (39)
A very similar approach has recently been proposed [42], which uses the KED of the K shell as a basic variable for building an OFDFT and extends the analysis to treat the exchange contribution from this shell. It provides excellent predictions of exchange and kinetic energy densities near the nucleus, suggesting that careful treatment of the K-shell density is the key to modeling the KED in this region. We will take another tack on this issue, by determining an N-dependent correction to the mGGArev functional that reproduces the important features of the Acharya KED in the near-nuclear regime and recovers the asymptotic scaling of the KE of atoms to large Z. We do so by modifying the mGGArev interpolation function I(z), using z = z_loc, to
$I_{NN}(z_{loc}, N) = \left\{ 1 - \exp\left[ -\beta^{\alpha}(N)/|z_{loc}|^{\alpha} \right] H(-z_{loc}) \right\}^{1/\alpha}$   (40)

where

$\beta(N) = A_{NN} + B_{NN}/N^{1/3}.$   (41)
Expanding about the near-nuclear limit z_loc → −∞, we find

$\lim_{z \to -\infty} F^{mGGAnn}_{S} = F^{vW}_{S} + 1 - \beta.$   (42)
Essentially, the correction contributes a non-zero component to the Pauli KED in the near-nuclear region with the same scaling in N as the empirical Acharya correction. By adjusting the constants A_NN and B_NN, our functional can be empirically fit to the Z-scaling behavior of the KS KE for large-Z atoms. Our original model is recovered with A_NN = 1, B_NN = 0. Values of A_NN ≈ 0.77 and B_NN ≈ 0.50 give a nearly ideal fit to the Kohn-Sham kinetic energy, as seen in Fig. 9(b). These are remarkably close to the large-Z expansion parameters of Eq. (27), although we have no evidence that this is more than a coincidence.
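A sketch of Eqs. (40) and (41) as reconstructed above (the full enhancement factor also requires the mGGArev form of Eq. (21), which is not reproduced in this section). The defaults are the fitted A_NN, B_NN quoted in the text and the α = 4 transition parameter used there:

```python
import numpy as np

def beta(N, A_NN=0.77, B_NN=0.50):
    """Eq. (41): N-dependent strength of the near-nucleus correction."""
    return A_NN + B_NN / N ** (1.0 / 3.0)

def I_NN(z_loc, N, alpha=4.0):
    """Eq. (40): modified interpolation function. The Heaviside factor H(-z_loc)
    means the correction acts only on the near-nuclear side z_loc < 0;
    for z_loc >= 0 the function is identically 1."""
    z = np.atleast_1d(np.asarray(z_loc, dtype=float))
    out = np.ones_like(z)
    neg = z < 0.0
    damp = np.exp(-beta(N) ** alpha / np.abs(z[neg]) ** alpha)
    out[neg] = (1.0 - damp) ** (1.0 / alpha)
    return out
```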
Nevertheless, these values are poor predictors of the actual KED at the nucleus: while the actual value at the nucleus is F_Pauli(r = 0) ∼ 0.022 for Rn, our correction predicts a value six times larger. This shows up as an error introduced into the KED as r → 0, as seen for Argon in Fig. 11(a) and Uuo in Fig. 11(b). The excellent KEs are caused by a successful cancellation of errors between those of the near-nuclear regime and those accumulated across the rest of the atom. Interestingly, the fit in the near-nuclear region needs to be made worse compared to the non-N-dependent mGGAloc model. By comparing Fig. 11(a) to Fig. 1(a), we find that the second largest source of error for the mGGAloc4 (fit-4 in the plot) comes in the transition between shells, where F_Pauli has a local maximum. This error is already outside the nuclear cusp region and in that of the oscillatory behavior of F_Pauli about the gradient-expansion asymptote seen in Fig. 6. As Z increases, the magnitude of the error increases, and more shells seem to be involved, but its contribution to the total KE decreases, as the region of error moves farther from that of peak radial charge density at Z^{1/3} r ∼ 1. Fig. 11 tells roughly the same story for the mGGA, and the cancellation of error in that model, but with generally larger-amplitude oscillations.
V. DISCUSSION AND CONCLUSIONS
We have analyzed scaling trends in the positive-definite Kohn-Sham kinetic energy density over the periodic table of atoms. We have concentrated our attention on the transition to the large-Z limit, in order to characterize the diminishing size of the corrections to the Thomas-Fermi limit as Z increases. Second-order density derivatives ∇^2 n and |∇n|^2, expressed in scale-invariant form, provide an intuitively useful and nearly complete visual description of the atom and, particularly, of the trends with Z of different local regions of the atom: nucleus, core, and valence shell. The pair thus should be a useful basis for constructing orbital-free maps of local quantities such as the kinetic energy density or the energy densities associated with the exchange and correlation holes.
In fact, we find that over much of the atom, corresponding roughly to the regime of validity of the TF model in the infinite-Z limit, the Kohn-Sham KED is exceptionally well fit by a simple second-order gradient expansion. For low Z, deviations from this asymptotic trend, caused by shell structure, naturally oscillate about it and gradually diminish as Z increases. At large Z, the local GE model becomes nearly exact, and independent of column. This suggests that the gradient expansion is the fundamental semilocal density functional correction to the TF limit, but with the significant caveat that the local gradient expansion for the KED is not the global one for the KE. In fact it is qualitatively different: the correction to the integrated Thomas-Fermi KE obtained from the local gradient expansion is of the opposite sign from that of the normal case. Thus, we cannot say that if T_KS = ∫ τ_model(r) d^3r then τ_KS(r) = τ_model(r), or vice versa. Note that this is not simply an issue of the choice of "gauge", where one might compare two KEDs defined in alternate ways that integrate to the same value. In this paper, only the unique positive-definite gauge is used.
Rather, the problem is fundamental: the relative success of the Kirzhnits GE is not because of the accuracy of the underlying local functional τ[n(r)], because this breaks the lower bound τ > τ_vW. It rather captures a cancellation of errors in the integral of τ, the breaking of the von Weizsäcker lower bound near the nucleus being compensated by an overestimate of the local gradient correction elsewhere. This points to the much greater difficulty in modeling the local versus the global quantity, as the former requires modeling from point to point and is thus much less amenable to beneficial error cancellation. At the same time, these results confirm, qualitatively if not quantitatively, the gradient expansion analysis of Ref. [52] for the Airy gas, a model designed asymptotically to represent a system that is all surface. Together these two asymptotic limits strongly suggest that the Kirzhnits gradient expansion should not be used in an application (presumably including bonding) that would depend sensitively on the local kinetic energy density.
It is not surprising to find that the greatest difficulties in removing this point-to-point error using the second-order gradient quantities p and q occur in the two limits in which Thomas-Fermi theory fails. The asymptotic limit far from the atom is problematic because the Pauli kinetic energy deviates from being a single-valued function of these variables. This seems to be related to the dependence of τ_KS on the angular momentum quantum number l_HOMO of the HOMO orbital [40], something that is not predictable with only ∇^2 n or |∇n|^2. The use of higher-order derivatives might help in this case [61]. The near-nuclear region dominated by the cusp in the density is also difficult because of the sensitive dependence of the Pauli KED on the number of electrons occupying p orbitals in the system. This might be crudely approximated with the reduced density gradient p, which also shows a weak dependence on N, but the recent nonlocal approach of Ref. [42] should be more robust.
In all then, it is not surprising that a simple fix to our OFDFT meta-GGA models, replacing the global gradient expansion with the empirical local one we find here, fails to produce good total kinetic energies for the atoms. While they can hit the ballpark of KS energies, they do not compare favorably even to the lowest-level conventional gradient correction. Rather, our findings should help to develop OFDFT models that model the KED in the bulk of the atom much more accurately than prior models. In this, the fact that we can limit the functional to a gradient expansion and not a GGA helps a lot: a gradient expansion has a well-behaved Pauli potential that neither breaks known constraints nor generates unphysical oscillatory behavior.
At the same time, we can reproduce the integrated KE of atoms with excellent accuracy given a fit to a simple N-dependent modification of our orbital-free model. A density-functional theory that depends upon the number of electrons N may be less than satisfactory from an a priori standpoint. More to the point, perhaps, is that this close fit is achieved by introducing, not reducing, error in the KED at the nucleus in order to cancel out the net error from the inner shells. A connection to semiclassical theory may explain this. In a paper deriving the Scott correction B = −1/2 to the Thomas-Fermi KE [32], Schwinger noted that the correction came not just from the cusp region where the Thomas-Fermi density diverges, but also from quantum oscillations in the inner shells, those with peaks at radii r_peak ≲ Z^{-1/3} a_B. In our situation, for even the largest system, Uuo, we see not only large errors at the nucleus, but also in the quantum oscillations about the gradient expansion that damp out only gradually. We believe that for Z → ∞ these oscillations will remain large for any atom, but will extend only over a fraction of the inner shells, becoming negligible relative to the total energy. The point is that a successful model of the KED for atoms will have to account both for the unusual Pauli energy density at the nucleus and for large quantum oscillations in the nearby shells. The progress made in handling the former in Ref. [42] will need to be matched by improvement in the latter; this might come from a fourth-order gradient correction.
A final issue is whether the use of a negative gradient correction in the gradient expansion helps to improve binding-energy predictions for molecules, or perhaps makes them worse. This issue is currently being explored. Preliminary data for the AE6 test set show that the use of a mGGAloc based on the atomic local GEA, rather than a mGGArev based on the conventional GEA, does improve binding energies consistently. At the same time, the indication is that this improvement is nowhere near enough to make OFDFT competitive with Kohn-Sham methods. However, it would be interesting to explore the effect of a negative gradient-expansion coefficient in a GGA. If the best performer on the test set, the VT84F, showed an improvement in binding energy similar to what we see for our meta-GGAs, it should come within the ballpark of the LDA in performance. Our findings thus should make a contribution, if not a decisive one, towards solving the challenge of the orbital-free prediction of covalent bonding.
FIG. 1. (a) Scaled radial number density n(r) and Pauli enhancement factor F_Pauli as a function of scaled radius Z^{1/3} r for Argon. Scaling factors reflect scaling of the atomic peak radius by Z^{-1/3} and of the particle density by Z^2 for the Thomas-Fermi atom. (b) p (dotted line) and q (dashed) scaled by Z^{2/3} and plotted as a function of scaled radius.

FIG. 2. Parametric plot of p(r) vs q(r) for row two of the periodic table.

FIG. 3. Parametric plot of p(r) vs q(r) for all atoms in column VIII of the periodic table.

FIG. 4. (a) F_Pauli plotted versus p(r) and q(r) for noble gas atoms. Perspective is rotated 30 degrees about the z-axis with respect to the p-axis. (b) Same, for a 120 degree rotation.

FIG. 5. (a) Fit parameter a and (b) fit parameter θ versus Z, as determined by fitting Eqs. (34) and (35) to F^{KS}_Pauli(p, q) for individual atoms. These are compared to the values of a and θ from the conventional gradient expansion (dotted line).

FIG. 6. F^{KS}_Pauli(r) plotted parametrically versus z_loc(r) [Eq. (35)] for the noble gas atoms, including Helium and Ununoctium. The dashed line gives the GEA fit F^{GEAloc}_Pauli = 1 + z_loc. The values of a and θ used to define z_loc are those obtained by optimizing the fit for Uuo.

FIG. 7. The same as Fig. 6, but focusing on the near-nuclear regime where q → −∞.

FIG. 8. The Pauli enhancement factor for the fourth-order GEA (dotted), meta-GGAs based upon it (mGGA, mGGArev), and for the empirically fit second-order GEA (widely-spaced dotted) and variants of the mGGArev built upon it, using α = 4 and 1 in Eq. (21). Shown versus q for p = 0, approximating the conditions near the atomic nucleus. The grey area shows the region forbidden by the von Weizsäcker bound.

FIG. 9. (a) T/Z^{7/3} versus Z^{-1/3} for standard kinetic energy models discussed in the paper. GEA2-fit is the second-order GEA using the empirical parameters of Eqs. (34) and (35). (b) The same, demonstrating the effect of using the empirical local GE in constructing OFDFT models.

FIG. 10. (colour online) Kinetic energy radial densities in the 1s shell of the Ne atom.

FIG. 11. (a) Error in the scaled radial KED of Argon, 4πr^2 [τ_model(r) − τ_KS(r)]/Z^2, versus scaled radius for several KED models. (b) The same, for ununoctium (Z = 118).
TABLE I. Errors in KS kinetic energies using the LDA density versus the OEP, from Ref. [39].

Atom   Z    T_S        T_LDA      % Error
He     2    2.86168    2.76739    3.295
Ne     10   128.545    127.737    0.629
Ar     18   526.812    524.967    0.350
Kr     36   2752.04    2747.81    0.154
Xe     54   7232.12    7225.09    0.097
Rn     86   21866.7    21854.7    0.055
TABLE II. Least-squares fit parameters for the Z expansion [Eq. (27)] of the noble gases for various OFDFT models of the kinetic energy. The Thomas-Fermi limit A = 0.7687 is assumed.

Model       B             C
Accepted    −1/2          0.2699
KS/LDA      -0.4943(43)   0.252(11)
TF          -0.649(7)     0.351(19)
GEA         -0.522(8)     0.292(20)
APBEK       -0.489(8)     0.241(21)
VT84F       0.116(20)     0.72(8)
mGGA        -0.493(9)     0.270(23)
mGGArev4    -0.429(7)     0.320(20)
GEAloc      -0.834(6)     0.437(16)
mGGAloc4    -0.618(5)     0.546(13)
fit4-NN     -0.4933(31)   0.273(5)
ACKNOWLEDGMENTS

A.C.C. would like to thank Kieron Burke and Sam Trickey for useful discussions.
. P Hohenberg, W Kohn, Phys. Rev. 136864P. Hohenberg and W. Kohn, Phys. Rev. 136, B864 (1964).
V Karasiev, D Chakraborty, S Trickey, Many-Electron Approaches in Physics, Chemistry, and Mathematics. L. D. Site and V. BachBerlinSpringer VerlagV. Karasiev, D.Chakraborty, and S. Trickey, in Many- Electron Approaches in Physics, Chemistry, and Mathe- matics, edited by L. D. Site and V. Bach (Springer Verlag, Berlin, 2013).
. A V Akimov, O V Prezhdo, 10.1021/cr500524cpMID: 25851499Chemical Reviews. 115A. V. Akimov and O. V. Prezhdo, Chemical Reviews 115, 5797 (2015), pMID: 25851499.
F Graziani, Frontiers and Challenges in Warm Dense Matter. BerlinSpringer VerlagF. Graziani, ed., Frontiers and Challenges in Warm Dense Matter (Springer Verlag, Berlin, 2014).
Basic Research Needs for High Energy Density Laboratory Physics: Report on the Workshop on High Energy Density Laboratory Physics Research Needs. Dept. of EnergyTech. RepBasic Research Needs for High Energy Density Labora- tory Physics: Report on the Workshop on High Energy Density Laboratory Physics Research Needs, Nov. 1518, 2009, Tech. Rep. (Dept. of Energy).
. A D Becke, Phys. Rev. A. 383098A. D. Becke, Phys. Rev. A 38, 3098 (1988).
. C Lee, W Yang, R G Parr, 10.1103/PhysRevB.37.785Phys. Rev. B. 37785C. Lee, W. Yang, and R. G. Parr, Phys. Rev. B 37, 785 (1988).
. J P Perdew, K Burke, M Ernzerhof, Phys. Rev. Lett. 77E1396J. P. Perdew, K. Burke, and M. Ernzerhof, , Phys. Rev. Lett. 77, 3865 (1996); 78, 1396(E) (1997).
. V V Karasiev, D Chakraborty, O A Shukruto, S B Trickey, 10.1103/PhysRevB.88.161108Phys. Rev. B. 88161108V. V. Karasiev, D. Chakraborty, O. A. Shukruto, and S. B. Trickey, Phys. Rev. B 88, 161108 (2013).
. V V Karasiev, R S Jones, S B Trickey, F E Harris, 10.1103/PhysRevB.80.245120Phys. Rev. B. 80245120V. V. Karasiev, R. S. Jones, S. B. Trickey, and F. E. Harris, Phys. Rev. B 80, 245120 (2009).
. L A Constantin, E Fabiano, S Laricchia, F Della Sala, 10.1103/PhysRevLett.106.186406Phys. Rev. Lett. 106186406L. A. Constantin, E. Fabiano, S. Laricchia, and F. Della Sala, Phys. Rev. Lett. 106, 186406 (2011).
. F Tran, T A Wesolowski, 10.1002/qua.10306Int. J. Quantum Chem. 89441F. Tran and T. A. Wesolowski, Int. J. Quantum Chem. 89, 441 (2002).
. D J Lacks, R G Gordon, 10.1063/1.466274J. Chem. Phys. 1004446D. J. Lacks and R. G. Gordon, J. Chem. Phys. 100, 4446 (1994).
. A Thakkar, 10.1103/PhysRevA.46.6920Phys. Rev. A. 466920A. Thakkar, Phys. Rev. A 46, 6920 (1992).
. L H Thomas, 10.1017/S0305004100011683Math. Proc. Camb. Phil. Soc. 23542L. H. Thomas, Math. Proc. Camb. Phil. Soc. 23, 542 (1927).
. E Fermi, 10.1007/BF01351576Zeitschrift für Physik A Hadrons and Nuclei. 4873E. Fermi, Zeitschrift für Physik A Hadrons and Nuclei 48, 73 (1928).
. V V Karasiev, S B Trickey, F E Harris, 10.1007/s10820-006-9019-8Journal of Computer-Aided Materials Design. 13111V. V. Karasiev, S. B. Trickey, and F. E. Harris, Journal of Computer-Aided Materials Design 13, 111 (2006).
. H Lee, C Lee, R Parr, 10.1103/PhysRevA.44.768Phys. Rev. A. 44768H. Lee, C. Lee, and R. Parr, Phys. Rev. A 44, 768 (1991).
. L.-W Wang, M Teter, 10.1103/PhysRevB.45.13196Phys. Rev. B. 4513196L.-W. Wang and M. Teter, Phys. Rev. B 45, 13196 (1992).
. Y Wang, N Govind, E Carter, 10.1103/PhysRevB.60.16350Phys. Rev. B. 6016350Y. Wang, N. Govind, and E. Carter, Phys. Rev. B 60, 16350 (1999).
. Y Wang, N Govind, E Carter, 10.1103/PhysRevB.64.089903Phys. Rev. B. 6489903Y. Wang, N. Govind, and E. Carter, Phys. Rev. B 64, 089903 (2001).
. C Huang, E Carter, 10.1103/PhysRevB.81.045206Phys. Rev. B. 8145206C. Huang and E. Carter, Phys. Rev. B 81, 045206 (2010).
. Y Ke, F Libisch, J Xia, E A Carter, 10.1103/PhysRevB.89.155112Phys. Rev. B. 89155112Y. Ke, F. Libisch, J. Xia, and E. A. Carter, Phys. Rev. B 89, 155112 (2014).
. L Hung, E A Carter, J. Phys. Chem. C. 1156269L. Hung and E. A. Carter, J. Phys. Chem. C 115, 6269 (2011).
. I Shin, E A Carter, 10.1103/PhysRevB.88.064106Phys. Rev. B. 8864106I. Shin and E. A. Carter, Phys. Rev. B 88, 064106 (2013).
. E H Lieb, B Simon, Adv. in Math. 2322E. H. Lieb and B. Simon, Adv. in Math. 23, 22 (1977).
. E Lieb, B Simon, Phys. Rev. Lett. 31681E. Lieb and B. Simon, Phys. Rev. Lett. 31, 681 (1973).
. L Spruch, Reviews of Modern Physics. 63L. Spruch, Reviews of Modern Physics 63 (1991).
. J M C Scott, Philos. Mag. 43859J. M. C. Scott, Philos. Mag. 43, 859 (1952).
. N H March, J S Plaskett, Proceedings of the Royal Society A. 235419N. H. March and J. S. Plaskett, Proceedings of the Royal Society A 235, 419 (1956).
N H March, R G Parr, Proc. Natl. Acad. Sci. USA. Natl. Acad. Sci. USA776285N. H. March and R. G. Parr, Proc. Natl. Acad. Sci. USA 77, 6285 (1980).
. J Schwinger, Phys. Rev. A. 221827J. Schwinger, Phys. Rev. A 22, 1827 (1980).
. B.-G Englert, J Schwinger, Phys. Rev. A. 3226B.-G. Englert and J. Schwinger, Phys. Rev. A 32, 26 (1985).
. B.-G Englert, J Schwinger, 10.1103/PhysRevA.26.2322Phys. Rev. A. 262322B.-G. Englert and J. Schwinger, Phys. Rev. A 26, 2322 (1982).
. P Elliott, K Burke, 10.1139/V09-095Canadian Journal of Chemistry. 871485P. Elliott and K. Burke, Canadian Journal of Chemistry 87, 1485 (2009).
. K Burke, A Cancio, T Gould, S Pittalis, arXiv:1602.08546K. Burke, A. Cancio, T. Gould, and S. Pittalis, (2016), arXiv:1602.08546.
. K Burke, A Cancio, T Gould, S Pittalis, arXiv:1409.4834K. Burke, A. Cancio, T. Gould, and S. Pittalis, (2014), arXiv:1409.4834.
. J P Perdew, A Ruzsinszky, G I Csonka, O A Vydrov, G E Scuseria, L A Constantin, X Zhou, K Burke, 10.1103/PhysRevLett.100.136406Phys. Rev. Lett. 100136406J. P. Perdew, A. Ruzsinszky, G. I. Csonka, O. A. Vydrov, G. E. Scuseria, L. A. Constantin, X. Zhou, and K. Burke, Phys. Rev. Lett. 100, 136406 (2008).
. D Lee, L A Constantin, J P Perdew, K Burke, The Journal of chemical physics. 13034107D. Lee, L. A. Constantin, J. P. Perdew, and K. Burke, The Journal of chemical physics 130, 034107 (2009).
. F Della Sala, E Fabiano, L A Constantin, 10.1103/PhysRevB.91.035126Phys. Rev. B. 9135126F. Della Sala, E. Fabiano, and L. A. Constantin, Phys. Rev. B 91, 035126 (2015).
. S Laricchia, L A Constantin, E Fabiano, F D Sala, 10.1021/ct400836sJournal of Chemical Theory and Computation. 10164S. Laricchia, L. A. Constantin, E. Fabiano, and F. D. Sala, Journal of Chemical Theory and Computation 10, 164 (2014).
. L A Constantin, E Fabiano, F Della Sala, 10.3390/computation4020019Computation. 4L. A. Constantin, E. Fabiano, and F. Della Sala, Com- putation 4 (2016), 10.3390/computation4020019.
. A D Becke, K E Edgecombe, J. Chem. Phys. 925397A. D. Becke and K. E. Edgecombe, J. Chem. Phys. 92, 5397 (1990).
. B Silvi, A Savin, Nature. 371683B. Silvi and A. Savin, Nature (London) 371, 683 (1994).
. A D Becke, J. Chem. Phys. 1092092A. D. Becke, J. Chem. Phys. 109, 2092 (1998).
. J Sun, B Xiao, Y Fang, R Haunschild, P Hao, A Ruzsinszky, G I Csonka, G E Scuseria, J P Perdew, 10.1103/PhysRevLett.111.106401Phys. Rev. Lett. 111106401J. Sun, B. Xiao, Y. Fang, R. Haunschild, P. Hao, A. Ruzsinszky, G. I. Csonka, G. E. Scuseria, and J. P. Perdew, Phys. Rev. Lett. 111, 106401 (2013).
. A Cancio, D Stewart, A Kuna, Journal of Chemical Physics. 14484107A. Cancio, D. Stewart, and A. Kuna, Journal of Chem- ical Physics 144, 084107 (2016).
. K Finzel, Theoretical Chemistry Accounts. 134106K. Finzel, Theoretical Chemistry Accounts 134, 106 (2015).
. J Xia, E A Carter, 10.1103/PhysRevB.91.045124Phys. Rev. B. 9145124J. Xia and E. A. Carter, Phys. Rev. B 91, 045124 (2015).
. S B Trickey, V V Karasiev, D Chakraborty, 10.1103/PhysRevB.92.117101Phys. Rev. B. 92117101S. B. Trickey, V. V. Karasiev, and D. Chakraborty, Phys. Rev. B 92, 117101 (2015).
. J Xia, E A Carter, 10.1103/PhysRevB.92.117102Phys. Rev. B. 92117102J. Xia and E. A. Carter, Phys. Rev. B 92, 117102 (2015).
. A Lindmaa, A E Mattsson, R Armiento, 10.1103/PhysRevB.90.075139Phys. Rev. B. 9075139A. Lindmaa, A. E. Mattsson, and R. Armiento, Phys. Rev. B 90, 075139 (2014).
. J P Perdew, L A Constantin, Phys. Rev. B. 75J. P. Perdew and L. A. Constantin, Phys. Rev. B 75, 155109/1 (2007).
P K Acharya, L J Bartolotti, S B Sears, R G Parr, Proc. Nati. Acad. Sci. USA. Nati. Acad. Sci. USA776978P. K. Acharya, L. J. Bartolotti, S. B. Sears, , and R. G. Parr, Proc. Nati. Acad. Sci. USA 77, 6978 (1980).
. W Yang, R G Parr, C Lee, 10.1103/PhysRevA.34.4586Phys. Rev. A. 344586W. Yang, R. G. Parr, and C. Lee, Phys. Rev. A 34, 4586 (1986).
. R O Jones, O Gunnarsson, Rev. Mod. Phys. 61689R. O. Jones and O. Gunnarsson, Rev. Mod. Phys. 61, 689 (1989).
. D Kirzhnits, Sov. Phys. JETP. 564D. Kirzhnits, Sov. Phys. JETP 5, 64 (1957).
. M Brack, B K Jennings, Y H Chu, Phys. Lett. 65B. 651M. Brack, B. K. Jennings, and Y. H. Chu, Phys. Lett. 65B 65, 1 (1976).
. C H Hodges, Can. J. Phys. 511428C. H. Hodges, Can. J. Phys. 51, 1428 (1973).
. D Murphy, Phys. Rev. A. 241682D. Murphy, Phys. Rev. A 24, 1682 (1981).
. P Silva, C Corminboeuf, 10.1063/1.4931628J. Chem. Phys. 143P. de Silva and C. Corminboeuf, J. Chem. Phys. 143 (2015), http://dx.doi.org/10.1063/1.4931628.
. C Weizsäcker, 10.1007/BF01337700Zeitschrift für Physik. 96431C. Weizsäcker, Zeitschrift für Physik 96, 431 (1935).
. C Herring, 10.1103/PhysRevA.34.2614Phys. Rev. A. 342614C. Herring, Phys. Rev. A 34, 2614 (1986).
. M Levy, H Ou-Yang, 10.1103/PhysRevA.38.625Phys. Rev. A. 38625M. Levy and H. Ou-Yang, Phys. Rev. A 38, 625 (1988).
. F Hao, R Armiento, A E Mattsson, 10.1063/1.4871738J. Chem. Phys. 140F. Hao, R. Armiento, and A. E. Mattsson, J. Chem. Phys. 140, 18A536 (2014).
. A C Cancio, C E Wagner, arXiv:1308.3744physics.chem-phA. C. Cancio and C. E. Wagner, (2013), arXiv:1308.3744 [physics.chem-ph].
. T Kato, Commun. Pure Appl. Math. 10151T. Kato, Commun. Pure Appl. Math. 10, 151 (1957).
. M Fuchs, M Scheffler, Computer Physics Communications. 11967M. Fuchs and M. Scheffler, Computer Physics Commu- nications 119, 67 (1999).
J J Redd, Master's thesis. J. J. Redd, Master's thesis (2015).
. R F W Bader, 10.1021/cr00005a013Chemical Reviews. 91893R. F. W. Bader, Chemical Reviews 91, 893 (1991).
. R F W Bader, H Essén, 10.1063/1.446956J. Chem. Phys. 801943R. F. W. Bader and H. Essén, J. Chem. Phys. 80, 1943 (1984).
Using Machine Learning to Identify Extragalactic Globular Cluster Candidates from Ground-Based Photometric Surveys of M87
2021
Emilia Barbisan
Department of Physics
McGill University
3600 University StreetH3A 2T8MontréalQCCanada
McGill Space Institute
McGill University
3550 University StreetH3A 2A7MontréalQCCanada
Jeff Huang
Department of Physics
McGill University
3600 University StreetH3A 2T8MontréalQCCanada
McGill Space Institute
McGill University
3550 University StreetH3A 2A7MontréalQCCanada
Kristen C Dage
Department of Physics
McGill University
3600 University StreetH3A 2T8MontréalQCCanada
McGill Space Institute
McGill University
3550 University StreetH3A 2A7MontréalQCCanada
Daryl Haggard
Department of Physics
McGill University
3600 University StreetH3A 2T8MontréalQCCanada
McGill Space Institute
McGill University
3550 University StreetH3A 2A7MontréalQCCanada
Robin Arnason
Interface Fluidics, Ltd
11421 Saskatchewan Dr NWT6G 2M9EdmontonABCanada
Arash Bahramian
International Centre for Radio Astronomy Research − Curtin University
GPO Box U19876845PerthWAAustralia
William I Clarkson
Department of Natural Sciences
University of Michigan-Dearborn
4901 Evergreen Rd. Dearborn48128MI
Arunav Kundu
Eureka Scientific, Inc
2452 Delmer Street, Suite 100 Oakland94602CAUSA
Stephen E Zepf
Department of Physics and Astronomy
Michigan State University
48824East LansingMI
Using Machine Learning to Identify Extragalactic Globular Cluster Candidates from Ground-Based Photometric Surveys of M87
MNRAS
MNRAS 000, 2021. Accepted XXX. Received YYY; in original form ZZZ. Preprint 20 May 2022. Compiled using MNRAS LaTeX style file v3.0. Keywords: M87 - globular clusters: general - surveys - methods: statistical
Globular clusters (GCs) have been at the heart of many longstanding questions in many sub-fields of astronomy and, as such, systematic identification of GCs in external galaxies has immense impacts. In this study, we take advantage of M87's well-studied GC system to implement supervised machine learning (ML) classification algorithms -specifically random forest and neural networks -to identify GCs from foreground stars and background galaxies using ground-based photometry from the Canada-France-Hawai'i Telescope (CFHT). We compare these two ML classification methods to studies of "human-selected" GCs and find that the best performing random forest model can reselect 61.2% ± 8.0% of GCs selected from HST data (ACSVCS) and the best performing neural network model reselects 95.0% ± 3.4%. When compared to human-classified GCs and contaminants selected from CFHT data -independent of our training data -the best performing random forest model can correctly classify 91.0% ± 1.2% and the best performing neural network model can correctly classify 57.3% ± 1.1%. ML methods in astronomy have been receiving much interest as Vera C. Rubin Observatory prepares for first light. The observables in this study are selected to be directly comparable to early Rubin Observatory data and the prospects for running ML algorithms on the upcoming dataset yields promising results.
INTRODUCTION
Globular clusters (GCs) are home to hundreds of thousands of gravitationally bound stars and have garnered much interest for the intriguing dynamical occurrences that are sourced within them. GCs are widely found to have a bimodal colour distribution, forming red and blue GC populations (Kundu & Whitmore 2001; Harris et al. 2006; Peng et al. 2006; Brodie et al. 2012), as well as a common bi/multimodal metallicity distribution, typically a spatially extended metal-poor population and a spatially concentrated metal-rich population (Ashman & Zepf 1992; Brodie et al. 2012). Both GC formation and GC properties are, however, still a constantly evolving subject of research, both observationally (e.g., Forbes et al. 2017; Lee et al. 2019; Usher et al. 2019; Fahrion et al. 2020) and through simulations and modelling (e.g., Bastian et al. 2020; El-Badry et al. 2019; Reina-Campos et al. 2019, 2022).
A number of black holes (BHs) and BH candidates have been uncovered in globular clusters, mostly within the Milky Way (e.g., Maccarone et al. 2007; Strader et al. 2012; Miller-Jones et al. 2015; Giesers et al. 2018, 2019). Recent simulations by Weatherford et al. (2020) show that many GCs are home to stellar-mass BHs and that these BHs may also be instrumental in the GCs' evolution and morphology. Theoretical work such as Rodriguez et al. (2016) points towards globular clusters as the birthplace of BH-BH binary mergers such as those detected by the Laser Interferometer Gravitational-Wave Observatory (LIGO). GCs are also known to host a variety of X-ray sources and radio transients. These include ultra-luminous X-ray sources (ULXs), which undergo some of the most extreme mass transfer rates, and may also be indicators of BH candidates in extragalactic globular clusters (Dage et al. 2021, and references therein). X-ray sources have also been traced back to ultracompact dwarf (UCD) hosts (e.g., Seth et al. 2014; Pandya et al. 2016), of which there are many theories regarding their definition, make-up, and origin - one common theory being that UCDs are primarily the nuclei of tidally stripped dwarf galaxies (Zhang et al. 2015). The first fast radio burst (FRB) localized to a GC was recently reported in the M81 galactic system, though the origin of this luminous flash is still unclear (Bhardwaj et al. 2021; Kirsten et al. 2021; see also discussion in Bhandari et al. 2021). Finally, many studies suggest that globular clusters may be a potential hiding spot for the elusive intermediate mass black hole, and that the key to identifying these is studying large numbers of GCs (Wrobel et al. 2018, and many references therein). Thus, wide-scale, systematic identification of globular clusters can provide important benefits to many fields of astronomy, including high energy astrophysics and beyond (e.g., Reina-Campos et al. 2021, among others).
While galactic GCs appear as a cluster of distinct point sources, extragalactic GCs can often be observed as a single point source. As such, the main issue with extragalactic GC identification is that GCs can be difficult to classify because they are easily mistaken for (1) foreground stars between the galaxy of interest and the observer or (2) distant background galaxies that appear to be associated with the observed galaxy. Extragalactic globular clusters have been identified either through ground-based photometric campaigns, spectroscopic studies, or by combining photometry with the excellent spatial resolution of the Hubble Space Telescope (HST) (e.g., Kundu & Whitmore 2001;Rhode et al. 2007;Harris 2009;Jordán et al. 2009;Usher et al. 2019).
Ground-based photometric surveys are an excellent means to create substantial GC candidate lists, but require spectroscopic follow-up to confirm the cluster nature of the candidate. HST's spatial resolution enables an estimate of the GC half light radius, which can be folded in with the photometric properties for an increased accuracy in these GC candidate lists. However, HST observations are often pointed at the centres of galaxies and many systems have not been targeted, so combing this archive yields an incomplete sample (e.g., Thygesen et al. in prep). Spectroscopy is the best means to confirm or rule out the star cluster nature of extragalactic point sources by measuring velocity dispersions (e.g., the methods in Illingworth 1976;Ho & Filippenko 1996;Dubath & Grillmair 1997). This tactic, however, can often be very telescope time intensive, as well as less sensitive to GCs close to the galaxy centres, which may be obscured by their host galaxy.
In the new era of large surveys, these methods become increasingly difficult to implement. A recent strategy for astrophysical source identification is using machine learning (ML) as a classification tool. While ML algorithms are nowhere near as accurate as spectroscopy, once trained they are quick and easy to use and, if trained properly, will still generate robust candidate lists. Such strategies have been implemented for classification of galaxies (e.g., De La Calleja & Fuentes 2004, who used a combination of neural networks and image analysis to classify galaxies morphologically and Ball et al. 2006 who used Sloan Digital Sky Survey data and decision trees to classify sources as galaxies, stars, or neither) and X-ray point sources (e.g., Arnason et al. 2020;Pattnaik et al. 2021;Zhang et al. 2021;Tranin et al. 2021;Mountrichas et al. 2017). These methods have also been used to successfully identify star clusters: Pérez et al. (2021) and Thilker et al. (2022) used image data and image processing techniques alongside ML algorithms (in both cases convolutional neural networks) and Saifollahi et al. (2021) processed imaging data to run a K-nearest neighbours (KNN) model on only photometric colour combinations.
In this paper we employ optical, ground-based photometry to train and run supervised ML classification models to identify GC sources without spectroscopy. We train our models on sources from within M87, as it is one of the most thoroughly studied galaxies besides our own, which provides us with large catalogues of pre-classified data. Anticipating the completion of Vera C. Rubin Observatory and the resulting deep, wide data observations from the Legacy Survey of Space and Time (LSST), our aim is to create ML data classification tools that will run on these data products.
In Section 2 we discuss the data and data preparation of both the training data (Section 2.1) and the data used for external verification with human-selected GCs (Section 2.2), as well as the model type and architecture used (Section 2.3). Section 3 gives a detailed account of the results of each of our different models (Sections 3.1 and 3.2) and analyses the complications with separating UCDs and GCs (Section 3.3). In Section 4 we discuss our findings, the limitations and advantages of our approach, and our recommendations for future use. Finally, we conclude our findings in Section 5.
DATA AND MODELS
To robustly train an accurate ML model that will identify extragalactic GC candidates, we require a large, uniform photometric sample. We select the photometric survey data from the Next Generation Virgo Cluster Survey (NGVS) (Ferrarese et al. 2012) from observations taken by the MegaPrime instrument on the Canada France Hawaii Telescope (CFHT) using the Canadian Astronomy Data Centre (CADC) MegaPipe pipeline (Gwyn 2008). This catalogue contains over 45 observations in five magnitude bands (u, g, r, i, z), for a total of 225 merged observations (Table A1 in Appendix A). For each source the final combined catalogue consists of five magnitude values and five flux radius values, one for each band, as well as RA and Dec coordinates of each source's barycentre. The flux radius is defined in the NGVS catalogue documentation as the fraction-of-light radius, or half-light radius, in pixels and is included in our chosen parameters since we expect some sources (namely background galaxies) to be less resolved than others.
We train two types of ML models on classified sources for use on photometric datasets (i.e., catalogues of photometric measurements) and follow this by running independent verifications with two separate sets of human-selected GC candidates. The first human-selected catalogue is built from HST observations near the centre of M87 and consists of GC candidates with varying likelihood of being a GC, as calculated using model-based clustering methods and catalogues of expected contaminants, from Jordán et al. (2009). The second is, like our training data, built from NGVS observations and consists of both GC candidates and other interloping sources, also with varying likelihoods of classifications (in this case calculated using colour cuts) from Oldham & Auger (2016).
Training Data
To generate the training data, our formatted and unfiltered catalogue of NGVS data which contained 719,949 sources was cross-matched within 1.0 arcsec with the coordinates of each of our four classes of sources (i.e., UCDs, GCs, background galaxies, and foreground stars). Before each class of data was cross-matched with the full NGVS catalogue, they were cross-matched within 1.0 arcsec of each other to avoid overlap. Duplicate sources were left in only one of the class catalogues, with a preference in the order of UCDs, GCs, background galaxies, then foreground stars. This hierarchy was chosen due to the very limited number of our most important sources and the abundance of our less important sources. The two instances where this hierarchy was employed include: (1) sources that were listed as GCs and stars were kept in the GC dataset since there are fewer GC sources than star sources and (2) sources that were listed as UCDs and GCs were kept in the UCD dataset since the number of known UCD sources is extremely low compared to the three other class datasets.
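The 1.0 arcsec cross-matching and de-duplication described above is a standard positional match. A minimal sketch is shown below; it assumes astropy is used for the matching (the paper does not name a specific tool) and that each catalogue is a pandas DataFrame with illustrative 'ra' and 'dec' columns in degrees.

```python
from astropy.coordinates import SkyCoord
import astropy.units as u

def crossmatch(base, reference, radius_arcsec=1.0):
    """Keep only rows of `base` that lie within `radius_arcsec` of a `reference` source.

    Both inputs are DataFrames with 'ra' and 'dec' columns in degrees
    (illustrative column names, not necessarily those of the NGVS catalogue).
    """
    c_base = SkyCoord(ra=base['ra'].values * u.deg, dec=base['dec'].values * u.deg)
    c_ref = SkyCoord(ra=reference['ra'].values * u.deg, dec=reference['dec'].values * u.deg)
    idx, sep2d, _ = c_base.match_to_catalog_sky(c_ref)
    return base[sep2d < radius_arcsec * u.arcsec].copy()

# Hypothetical usage reflecting the class hierarchy described above:
# gc_clean = crossmatch(gc_coords, ngvs)          # GCs with an NGVS counterpart
# gc_clean = gc_clean[~gc_clean.index.isin(       # drop GCs already claimed as UCDs
#     crossmatch(gc_clean, ucd_coords).index)]
```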
The coordinates of UCDs used were obtained by merging catalogues from Zhang et al. (2015) and Pandya et al. (2016) for a resulting 402 sources (not all within the area of M87). The Zhang et al. (2015) catalogue is one of the largest spectroscopic datasets of UCDs, but since there are so few confirmed UCD sources in general, we chose to also use the Pandya et al. (2016) catalogue to supplement it. After cross-matching these coordinates with the NGVS data, we had a resulting dataset of 83 UCDs. The coordinates of the GCs were obtained from Strader et al. (in prep.) and offered a total of 1428 sources before cross-matching and 1188 sources after. The Strader et al. (in prep.) catalogue was chosen because it is one of the most extended spectroscopic datasets of GCs in the M87 region. Note that there is some observational selection within this GC sample against sources at fainter magnitudes for two main reasons: (1) spectroscopy is typically run on only the brightest sources (which are bright either because they are close, or because they are intrinsically luminous) and (2) NGVS is only sensitive to magnitudes below 25.9 mag in the band (Ferrarese et al. 2012), making it difficult to probe the lower end of the M87 GC population.
The coordinates of background galaxies and foreground stars were both obtained by querying from various databases on NOIRLab's Astro Data Lab 1. The background galaxy sources were selected from the 2MASS extended source catalogue (twomass.xsc), which is a robust dataset consisting of non-stellar sources. These sources were queried from within the approximate coordinate area of our GC+UCD sources (to ensure that the area of sources our models are being run on has appropriate representation of all source classes) and with the flag vc = 1, indicating that a source was visually classified as a galaxy (Skrutskie et al. 2006). This resulted in a total of 234 background galaxy sources, which was reduced to 93 sources by cross-matching their coordinates with NGVS data. The majority of the unmatched background sources are found within the M87 region but outside of the area covered by NGVS observations (i.e., the top-left region of Fig. 1). The foreground star sources were downloaded from a join of two tables, the Legacy Survey's DR8 photometry catalogue of the southern region (ls_dr8.tractor_s; Dey et al. 2019) and the Gaia DR2 catalogue (gaia_dr2.gaia_source; Gaia Collaboration et al. 2016, 2018), again from within our GC+UCD areas. The Gaia DR2 catalogue is large and contains a flag indicating whether a given source is a point source and values indicating the proper motion of a given source. Both of these parameters allow for reliable querying of stars. Additional constraints include a signal-to-noise ratio in , , and of greater than 50; a gaia_pointsource value of 1, indicating the source is indeed a point source; a magnitude of greater than 10; and proper motion values in RA and Dec with a 4 sigma threshold. This resulted in an original sample of 7666 foreground star sources and was culled to 4433 through cross-matching with NGVS data. These two source classes provided motivation for using flux radii as a feature (Fig. 2). While both UCDs and GCs typically appear small and more resolved, some of our star sources appear less resolved due to their close proximity and the majority of our background galaxy sources are much less resolved due to their wider dispersions.
Figure 1. The spatial distribution of known sources in each of the four classes (UCDs in red, GCs in blue, background galaxies in black, and foreground stars in yellow) that make up the training dataset. The distributions of the two datasets of human-selected GC candidates are also included, as shown by the cyan and brown plus signs. Note that the ACSVCS GCs are found in small clustered groups, giving the impression of fewer sources than there are. This data was cross-matched to within 1.0 arcsec of NGVS data which covers M87 as shown by the grey boxes (Table A1 in Appendix A).

Once loaded into the ML model programs, columns were added to each dataset indicating four colours (u-g, g-r, g-i, and g-z) and all magnitudes were converted to absolute magnitudes assuming a distance of 16.5 Mpc, the distance to M87 (Strader et al. 2011), for all sources (Fig. 3). The use of absolute magnitudes makes it easier to run the models on other galaxies, as they would not need to be recalibrated for each new galaxy, but the models' use should still be limited to galaxies of similar distances since the relationship between the features of each source will change with distance. To ensure the use of only the most reliable data and eliminate any sources listed with unphysical magnitudes, which may indicate missing or erroneous data, the datasets were then trimmed to ensure magnitudes in all bands fall within the nominal limiting magnitude of each observation and band. One additional reduction was applied to the stars dataset to ensure that only the stars with the highest proper motion (i.e., the sources that are the brightest and fastest and therefore the most likely to be real stars) were used. This was done by using a generous trim of any data with proper motion in RA and/or Dec between -10 mas/year and 10 mas/year. An additional reduction was also applied to the GC dataset regarding radial velocity. M87 has a heliocentric radial velocity of 1284 km/s (Cappellari et al. 2011), so only GCs with velocities in the range of 200-2400 km/s were used to avoid any potential confusion between low velocity GCs and foreground stars. The multiple cross-matches resulted in many columns in each dataset, but only magnitude (in each of the 5 bands), colour (each of the four aforementioned combinations), and flux radius values (in each of the 5 bands) were used as input features for the models to run on (Table 2). After removing outliers and filtering the GCs and stars, our remaining datasets contained 83 UCDs, 1160 GCs, 90 background galaxies, and 2346 foreground stars (Fig. 1, Table 1). These populations have distinct source properties, but overlap in colour-magnitude space as seen in Fig. 4.
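As a rough sketch of the bookkeeping described above (the colour columns, the conversion to absolute magnitudes at 16.5 Mpc, and the proper-motion and radial-velocity cuts), the snippet below assumes pandas DataFrames with hypothetical column names (u, g, r, i, z, pmra, pmdec, v_helio); the proper-motion trim follows one reading of the "RA and/or Dec" wording.

```python
import numpy as np

D_M87_MPC = 16.5                                   # adopted distance to M87
DIST_MOD = 5 * np.log10(D_M87_MPC * 1e6 / 10.0)    # distance modulus m - M (~31.1 mag)

def add_colours_and_absolute_mags(df):
    """Add the four colour columns and absolute magnitudes (hypothetical column names)."""
    df = df.copy()
    for band in ['u', 'g', 'r', 'i', 'z']:
        df[f'abs_{band}'] = df[band] - DIST_MOD
    for b1, b2 in [('u', 'g'), ('g', 'r'), ('g', 'i'), ('g', 'z')]:
        df[f'{b1}-{b2}'] = df[b1] - df[b2]
    return df

def trim_low_pm_stars(stars):
    """Drop stars whose proper motion lies between -10 and 10 mas/yr in RA and/or Dec."""
    low_pm = stars['pmra'].between(-10, 10) | stars['pmdec'].between(-10, 10)
    return stars[~low_pm]

def trim_gc_velocities(gcs):
    """Keep GCs with heliocentric radial velocities between 200 and 2400 km/s."""
    return gcs[gcs['v_helio'].between(200, 2400)]
```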
Human-selected GCs for external verification
The first catalogue used to verify and test our models against human-selected GC candidates was the ACS Virgo Cluster Survey (ACSVCS) catalogue of globular cluster candidates which was created based on data taken by HST (Jordán et al. 2009). We do have an inherent bias when running our models on this dataset, as the NGVS data has a limiting magnitude of 25.9 mag in the band (Ferrarese et al. 2012), whereas the ACSVCS catalogue has some magnitude values fainter than 26 mag in the band (since available HST data often probes fainter magnitudes than current data available from ground-based telescopes like CFHT). The ACSVCS catalogue does not include magnitude values in the u, i, and bands or flux radius values, so it is necessary to cross-match with CFHT data, preventing us from running our model on the faintest sources in the ACSVCS catalogue. As such, much like the training data, the RA and Dec coordinates of this dataset were cross-matched within 1.0 arcsec of our main NGVS catalogue (Section 2.1). The ACSVCS dataset was also cross-matched within 1.0 arcsec of the training datasets to ensure there was no overlap. This resulted in a total of 646 sources from the original 12,763 being input into our models (Fig. 1, Table 1).
Unlike the previous datasets used to construct our training data, the ACSVCS sources are listed with values indicating the probability of each being a globular cluster (pGC), as calculated by Jordán et al. (2009) using model-based clustering methods and catalogues of expected contaminants. Once incorporated, the sources were again trimmed to be within the nominal limiting magnitude of each observation and band to eliminate missing or erroneous data, leaving a total of 645 sources, and the apparent magnitudes were converted to absolute magnitudes. Each model (detailed in Section 2.3) except those focused on only UCDs vs GCs was then run on this dataset and we based our models' success on how well they reselected this list of GCs. Since this dataset was created using a space telescope rather than a ground-based telescope, it includes partially resolved GCs and serves as a robust and well selected list of GC candidates. Reselecting a large number of these sources is essential for any well-performing model. The only drawback to this diagnostic is that the catalogue does not include any background or foreground contaminants, meaning that running models on this data gives no information on how well a given model can recognize background and foreground sources.

The second catalogue used to verify and test our models was the Oldham & Auger (2016) catalogue of 17,620 GC candidates, which were chosen by colour cuts and, like our training dataset, consisted of NGVS photometric data taken by the CFHT MegaPrime instrument. These sources were cross-matched within 1.0 arcsec of our prepared NGVS catalogue and the same trimming and conversion of magnitudes were done. This dataset, which we refer to as the NGVS test set, was then also cross-matched within 1.0 arcsec of the training datasets to ensure there was no overlap. This resulted in 11,862 sources after cross-matching and 11,860 sources after trimming (Fig. 1, Table 1). A key difference between this catalogue and the last is that this dataset includes interloping contaminant sources and has more specific classification probability values. The latter were obtained using functions of galactocentric radius, magnitude, and colour, and include the probability of each source belonging to the red GC population (labelled as "pRed" in their catalogue), the blue GC population (pBlue), the Milky Way/Sagittarius stellar interloper population (pMW), or the uniform-colour interloper population (pInt as labelled in their paper or pStar as labelled in their corresponding catalogue). The latter interloper population may consist of other contaminants such as background galaxies or sources within the Virgo overdensity. Before running our models on this catalogue, we summed the blue and red GC probabilities to get one value indicating the probability of being a star cluster (pC) and summed the interloper probabilities to get a second value indicating the probability of being a contaminant (pNC). These two combinations were summations rather than averages so that the sum of all possible probabilities for a given source (i.e., pC and pNC) would equal 1. This pNC parameter allowed for an additional verification of not only how well our models can reselect GCs+UCDs, but also how well they can reselect background and foreground sources.
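The pC/pNC bookkeeping above is a one-line combination per source; a minimal sketch, assuming the Oldham & Auger probabilities are loaded into a DataFrame whose columns follow the catalogue labels quoted in the text (pRed, pBlue, pMW, pStar):

```python
def add_combined_probabilities(test_set):
    """Combine the per-population probabilities into star-cluster vs contaminant values.

    pC  = pRed + pBlue   (probability of being a star cluster)
    pNC = pMW + pStar    (probability of being a contaminant)
    The two sum to 1 for each source by construction.
    """
    test_set = test_set.copy()
    test_set['pC'] = test_set['pRed'] + test_set['pBlue']
    test_set['pNC'] = test_set['pMW'] + test_set['pStar']
    return test_set
```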
Models
Machine learning can be implemented in many different ways with a variety of different types of training algorithms. We aim to compare the performance of two popular approaches ideal for supervised classification tasks: random forest (RF; Section 2.3.1) and deep neural networks (NN; Section 2.3.2). The main motivation behind the choice of these two ML algorithms is that we aim to make our models accessible for future users. NN and RF are two of the more common algorithms for supervised classification based tasks and have a minimal barrier to entry for ML beginners. For both approaches, three separate models were trained and tested: (i) a binary classification model of GCs+UCDs and contaminant sources (i.e., foreground stars and background galaxies), (ii) a multi-class classification model of GCs+UCDs, foreground stars, and background galaxies, and (iii) an auxiliary binary classification model of only GCs and UCDs. The first two types of models were trained and tested on our full training and testing datasets, but the third model was only run on UCD and GC sources, since these are the only types of objects it can classify. There is some disagreement on what precisely constitutes a UCD and how precisely it differs from a GC, but we nonetheless aimed to see how well this auxiliary model performed based on the UCD catalogue we had access to and under the assumption that all sources in it are in fact UCDs.
Random forest approach
RF is a classification algorithm which utilises many decision trees to classify objects. Decision trees are tree-like structures similar to flowcharts with decision nodes determining which branches to send the object down, then branching off further to other nodes. At the end of the trees are leaves which represent the possible classification outcomes (class labels) and serve as the output. Individual decision trees are susceptible to overfitting and are not flexible to changes in data. The random forest classification helps to correct these disadvantages by creating many decision trees which each use random subsets of the input features and then averaging over them. The use of random subsets and many trees allows the trees to be uncorrelated and reduces overfitting. Our RF algorithm was implemented using RandomForestClassifier from sklearn 2 (Pedregosa et al. 2011). Three separate classifiers were created for different classification purposes. The first model (RF1) is a binary classification model distinguishing between GCs+UCDs and contaminant sources (i.e., foreground stars and background galaxies). The second model (RF2) is a 3-class classification model distinguishing between GCs, background galaxies and foreground stars. Lastly, the third model (RF3) is another binary model which was created to distinguish between GCs and UCDs and, as such, was only trained on pre-classified UCD and GC sources rather than the full training dataset. All three models utilised the same training data divided with an 80-20% data split where all the sources of stars, background galaxies, and UCDs were used, but a trimmed subset of only 600 GCs were used to minimize class imbalance. All three models also utilized all 14 of the features available (Table 2) and had hyperparameters optimized using sklearn's GridSearchCV (Pedregosa et al. 2011) which are listed in Table 3 and include values indicating the number of decision trees to be built, the metric formula used to make decisions at each tree node, and the minimum number of sources that should end up at each leaf. Aside from the hyperparameters listed in Table 3, all other hyperparameters were left at their default values with the exception of class_weight which was set to "balanced_subsample" to alleviate the imbalance in class sizes for all models.
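The random forest setup described above maps directly onto scikit-learn. The sketch below shows the general pattern for RF1 (GCs+UCDs vs contaminants) with an 80-20 split, a GridSearchCV sweep over the hyperparameters named above, and class_weight='balanced_subsample'; the grid values, feature column names, and scoring choice are illustrative rather than the exact settings of the paper (those are listed in its Table 3).

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# 14 features as in Table 2; the column names themselves are illustrative.
FEATURES = (['abs_' + b for b in 'ugriz'] +
            ['u-g', 'g-r', 'g-i', 'g-z'] +
            ['flux_radius_' + b for b in 'ugriz'])

def train_rf1(df, label_col='is_cluster'):
    """df holds the feature columns plus a binary label (1 = GC+UCD, 0 = contaminant)."""
    X_train, X_test, y_train, y_test = train_test_split(
        df[FEATURES], df[label_col], test_size=0.2, stratify=df[label_col], random_state=0)

    grid = {'n_estimators': [100, 300, 500],          # number of decision trees
            'criterion': ['gini', 'entropy'],         # node-splitting metric
            'min_samples_leaf': [1, 2, 4]}            # minimum sources per leaf

    search = GridSearchCV(
        RandomForestClassifier(class_weight='balanced_subsample', random_state=0),
        param_grid=grid, cv=5, scoring='precision')
    search.fit(X_train, y_train)

    best_rf = search.best_estimator_
    print('held-out accuracy:', best_rf.score(X_test, y_test))
    return best_rf, X_test, y_test
```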
Neural network approach
Another common implementation of machine learning is the use of NN. NN models are a collection of nodes (also referred to as neurons) organized into one or more layers. Through supervised machine learning, a model is trained by passing preprocessed data into the algorithm, which continually updates its nodes and the weights between nodes on different layers as it is given more data. When training a model it is necessary to split the training data into what is typically termed a training set and a test set to assess metrics before releasing the model on new, unclassified data. A typical data split is 80% in the training set and 20% in the test set, which is the split used in all models in this paper. A second split of the data is made within the training set and is termed the validation set (in this case a 70%-30% split was used). This set is used throughout training to test how each update of nodes and weights fared and how much it should update itself on the next run.
The training data consists of a variety of features, or parameters, along with either a class or numerical label, all of which is passed into the algorithm. Additionally, a set of initial parameters, the hyperparameters, are hardcoded into the training algorithm by the user and can be tweaked to improve model performance. These hyperparameters include values indicating how drastically the model should update itself, how much data it should be run on before updating, and how many epochs this process should last. The hyperparameters used for our NN models were chosen for best performance by trial and error and are detailed in Table 3. Once a model has been trained on these feature-label pairings, it can then predict either a classification or a regression (depending on the type of NN model) for new data that is passed in with the same features as the training data. Considering our goal is source classification, it is most appropriate to have our models predict a class.
Three NN models were created for our dataset, each with a different number of nodes and layers. The first model (NN1) is a binary classification model that was trained on GCs+UCDs and contaminant sources (i.e., foreground stars and background galaxies) with two layers of 10 nodes each. The second model (NN2) is a 3-class classification model and was trained on GCs+UCDs, foreground stars, and background galaxies with two layers of 20 and 10 nodes. The third and final model (NN3) is another binary classification model which was trained only on GCs and UCDs with two layers of 50 and 30 nodes. For this model it was necessary to trim the number of GCs it was trained on from 1160 sources to 250 due to the small number of UCD sources available. Had this class not been trimmed there would be a significant class imbalance and the model would not function properly. Typically with class imbalance data, models have a tendency to assume all sources belong to the class with the larger dataset, since that would technically make the model more accurate, despite not learning anything about the other, smaller classes.
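The paper does not state which NN framework was used, so the sketch below uses Keras purely as one possible implementation of the NN1-style architecture (two hidden layers of 10 nodes); the activation, optimizer, loss, learning rate, batch size, and epoch count are illustrative stand-ins for the hyperparameters listed in Table 3, and the 80-20 and 70-30 splits follow the description above.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow import keras

def build_nn1(n_features):
    """Binary classifier with two hidden layers of 10 nodes each (NN1-like architecture)."""
    model = keras.Sequential([
        keras.layers.Input(shape=(n_features,)),
        keras.layers.Dense(10, activation='relu'),
        keras.layers.Dense(10, activation='relu'),
        keras.layers.Dense(1, activation='sigmoid'),
    ])
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3),
                  loss='binary_crossentropy', metrics=['accuracy'])
    return model

def train_nn1(X, y, epochs=100, batch_size=32):
    X = np.asarray(X, dtype='float32')
    y = np.asarray(y, dtype='float32')
    # 80-20 train/test split, then a 70-30 train/validation split within the training set.
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    X_tr, X_val, y_tr, y_val = train_test_split(X_train, y_train, test_size=0.3, random_state=0)
    model = build_nn1(X.shape[1])
    model.fit(X_tr, y_tr, validation_data=(X_val, y_val),
              epochs=epochs, batch_size=batch_size, verbose=0)
    test_loss, test_acc = model.evaluate(X_test, y_test, verbose=0)
    return model, test_acc
```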
The first two models NN1 and NN2 were both trained on 14 features (Table 2), including absolute magnitude (u, g, r, i, and z), colour (u-g, g-r, g-i, and g-z), and flux radius values for each of the five magnitude bands, whereas the third model NN3 was trained only on colour and flux radius values. When tested by trial and error, NN3 performed best when only run on these 9 features, whereas NN1 and NN2 performed best on all available features. One possible reason for this could be that NN3 was trained on significantly fewer sources than the other two NN models. A larger number of features without a proportionately large number of sources may have resulted in too much noise for the model to infer a clear idea of what constitutes each class. Note that each model should only be used on data that it is expected to classify. For instance, the second binary model NN3 should not be used on an unfiltered catalogue as it will not know how to deal with anything other than GCs or UCDs, but will perform as expected on a catalogue of sources believed to contain only those types of sources. Additionally, each model must only be run on data which has all of the features that it was trained on, as it will not execute on data that is missing features.

Table 3. Hyperparameters used in each random forest and neural network model. The RF hyperparameters indicate the number of decision trees to be built, the metric formula used to make decisions at each tree node, and the minimum number of sources that should end up at each leaf. The NN hyperparameters indicate how strongly the model should correct itself, how much data it should be run on before updating, and how many epochs the training process should last.
RESULTS
The three RF and three NN models were each trained on 80% of the training data prepared as detailed in Section 2.1 before then being tested on the remaining 20% of the training data. Multiple metrics were used to get a clear view of how the models performed on this portion of the training data, especially in cases where there was a class imbalance in the data (e.g., in RF2/NN2 and RF3/NN3 due to a low number of confirmed sources of background galaxies and UCDs, respectively). These metrics include overall accuracy across all classes (i.e., the fraction of all classifications that are correct), precision of each individual class (i.e., the fraction of a given predictive class' identifications that are correct, which gives a measure of false positives), recall of each individual class (i.e., the fraction of a given class' actual sources that were correctly classified, which gives a measure of false negatives), and comparisons with human-selected GCs from the ACSVCS and NGVS test catalogues. Considering each of these metrics rather than just one or some provides a clearer idea of model performance, however, when training, more importance was placed on the minimization of false positives (i.e., maximizing precision) than the minimization of false negatives (i.e., maximizing recall), since a more reliable, but incomplete candidate list is preferable to a more complete, but unreliable list in this case. The aim of this paper is not simply to find all likely candidates for targeted follow-up, but to do so with a focus on data reduction and accuracy.
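These per-class quantities all derive from the confusion matrix of a single trained model; a minimal sketch with scikit-learn (the averaging over the 20 retrainings is omitted):

```python
from sklearn.metrics import accuracy_score, confusion_matrix, precision_recall_fscore_support

def summarise(y_true, y_pred, class_names):
    """Print overall accuracy plus per-class precision and recall; return the confusion matrix."""
    cm = confusion_matrix(y_true, y_pred)
    precision, recall, _, _ = precision_recall_fscore_support(y_true, y_pred, zero_division=0)
    print(f'overall accuracy: {accuracy_score(y_true, y_pred):.3f}')
    for name, p, r in zip(class_names, precision, recall):
        print(f'{name}: precision = {p:.3f}, recall = {r:.3f}')
    return cm
```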
Since every time a model is trained it performs slightly differently, each model was trained and tested 20 times to see how each performs on average. A summary of the averages of all metrics for each RF and NN model can be found in Table 4. All metric values aside from the comparisons with human-selected GCs were calculated from the confusion matrix - the usual method for visualizing results of ML models - of each model, which details how individual sources are identified. These are structured with the predicted class labels on the x-axis and the known class labels on the y-axis. Confusion matrices for each of the 6 models can be found in Fig. B1 in Appendix B. We view our results through an additional visualization tool termed the receiver operating characteristic curve (ROC curve), which plots the rate of false positives against the rate of true positives in characterizing the test data, considering different classification thresholds (i.e., decision boundaries). An ideal ROC curve would have a maximal area under the curve whereas a random classifier would result in a perfect diagonal line. ROC curves are typically used on binary classifiers and, as such, the ROC curves for RF1, RF3, NN1, and NN3 are shown in Fig. 5. An easy way to better understand ROC plots is by calculating a model's area under the ROC curve (AUC). An AUC score ranges from 0 to 1, meaning an AUC close to 1.0 would imply a model's predictions are entirely correct, a score close to 0.0 would imply the predictions are entirely wrong, and a score close to 0.5 would imply an uninformative model. For the ROCs shown in Fig. 5, our binary RF models have AUCs of 0.99 and 0.94, respectively, and our binary NN models have AUCs of 0.99 and 0.82, respectively. While these are very high scores, an important caveat is that ROCs (and AUCs as a result) tend to be overly optimistic for class imbalance data such as ours, especially when the minority class is the main focus of the model (He & Ma 2013). This means that for NN3 and RF3, where UCDs are the main focus but the source count is minimal, the ROC curves are expected to be especially optimistic.
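For the binary models, the ROC curves and AUC values quoted here follow from the predicted class probabilities on the test split; a sketch (plotting choices are illustrative, and the classifier only needs a predict_proba method, as the RF models have):

```python
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, roc_auc_score

def plot_roc(model, X_test, y_test, label):
    scores = model.predict_proba(X_test)[:, 1]        # probability of the positive class
    fpr, tpr, _ = roc_curve(y_test, scores)
    auc = roc_auc_score(y_test, scores)
    plt.plot(fpr, tpr, label=f'{label} (AUC = {auc:.2f})')
    plt.plot([0, 1], [0, 1], linestyle='--', color='grey')   # random-classifier diagonal
    plt.xlabel('false positive rate')
    plt.ylabel('true positive rate')
    plt.legend()
    return auc
```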
Effectiveness of Random Forest Approach
RF1 had very high performance with an overall accuracy of 99.4% ± 0.2%, GC+UCD precision of 98.9% ± 0.6%, and GC+UCD recall of 99.2% ± 0.4%. RF2 also had high performance, showing nearly identical metrics within error to RF1, with an overall accuracy of 99.5% ± 0.3%, GC+UCD precision of 99.3% ± 0.5%, and GC+UCD recall of 99.4% ± 0.5%. RF1 and RF2 were then used to classify the ACSVCS GC candidates, where the results were compared with the listed pGC values for each source. RF2 proved to be the better model at selecting GC candidates, with the model reselecting 61.2% ± 8.0% of the GCs and RF1 reselecting 59.4% ± 3.7% of the GCs. The NGVS test set sources were used similarly to the ACSVCS GC candidates to determine how well our models reselect GCs. NGVS test set sources with pRed and/or pBlue values greater than 0.5 were treated as GCs, sources with pC (i.e., pRed + pBlue) values greater than 0.8 were also deemed as GCs, and all other sources were considered to be contaminants. Here RF1 had higher performance at reselecting GCs from the catalogue, with RF1 reselecting 91.0% ± 1.2% and RF2 reselecting 86.0% ± 3.8% of the NGVS test sources.

RF3 has the worst performance out of the three RF models, with significantly lower performance compared to RF1 and RF2 over all metrics. This is somewhat expected as, not only are there minimal UCD sources to train on and much fewer distinguishing factors between UCDs and GCs than between other classes, there is also much less agreement on what precisely constitutes a UCD. These issues are discussed further in the following Section 3.3. RF3 has an overall accuracy of 95.4% ± 0.8%, UCD precision of 71.0% ± 9.9%, and UCD recall of 47.8% ± 11.0%. This model, unlike the previous two, was not run on either the ACSVCS or the NGVS testing catalogues, as neither would be an appropriate diagnostic tool. This model can only classify sources as UCDs or GCs and, since the ACSVCS GC catalogue was created to intentionally exclude UCDs, running the model on this dataset would not give us any information regarding how well it can separate UCDs from GCs. Similarly, the sources in the NGVS test set include sources that are not strictly GCs or UCDs, and, as such, this model cannot be run on this catalogue since it cannot possibly classify these contaminant sources as anything other than star clusters. A comparison of the ROC curves of RF1 and RF3 is found in Fig. 5. A large area under the curve is ideal and, as such, while both perform well, RF1 is evidently the higher performer.
Effectiveness of Neural Network Approach
NN1 and NN2 had very similar performance, with many metrics that are equal within errors. This is expected since both models classify GCs+UCDs in the same way, with the only difference being how they classify contaminants. The two models respectively had an overall accuracy of 98.4% ± 0.4% and 98.0% ± 0.7%, GC+UCD precision of 98.2% ± 1.0% and 97.8% ± 1.5%, and GC+UCD recall of 97.0% ± 1.0% and 97.2% ± 1.0%. While NN1 metrics are slightly higher, this can be attributed to there being one less class for the model to choose from and is not a significant enough difference to claim one model as performing better than the other. Once created, these models were run on the ACSVCS GC candidates and the values predicted by the model were compared to the listed pGC values for each source. Both models, again, had very similar performance and reselected 95.0% ± 3.4% and 94.8% ± 1.5% of the GCs respectively. These models were then run on the NGVS testing catalogue of sources after combining pRed and pBlue values into one pC (probability of star cluster) value and pMW and pInt/pStar values into one pNC (probability of contaminant) value. We considered sources with either pRed or pBlue values above 0.5 or pC values above 0.8 to be GCs and all other sources to be contaminants. The two models then correctly reclassified 57.3% ± 1.1% and 57.2% ± 0.9% of the NGVS test sources. Note that while the NN models had much higher performance on the ACSVCS catalogue, the RF models had much higher performance on the NGVS test catalogue.
Compared to the other two models, NN3 has much lower performance across all metrics. This is again somewhat expected due to minimal UCD sources to train on, fewer distinguishing factors between UCDs and GCs than between other classes, and the lack of agreement on what precisely constitutes a UCD. This model has an overall accuracy of 81.1% ± 3.0% and has 71.2% ± 7.3% precision and 47.9% ± 22.4% recall for UCDs. This indicates that while the model may miss nearly half of the UCDs present, when it does classify a source as a UCD it is correct nearly 71% of the time. Using this model on other GC catalogues may therefore result in an incomplete but still interesting list of UCD candidates. This model, like RF3, was run on neither the ACSVCS catalogue nor the NGVS test catalogue for the reasons described in Section 3.1. A comparison of the ROC curves of NN1 and NN3 is found in Fig. 5. A large area under the curve is ideal and, as such, while both perform well, NN1 is evidently the higher performer.
Ultra-compact Dwarfs vs. Globular Clusters
There are multiple theories regarding the definition, make-up, and origin of UCDs, but Zhang et al. (2015) found that their results support the theory that UCDs are primarily the nuclei of tidally stripped dwarf galaxies. Results from Saifollahi et al. (2021) suggest that UCDs can be distinguished from GCs, and that ML algorithms can be used (in addition to follow-up inspection) to achieve this task. However, they also recognize that this tidally stripped scenario regarding UCDs may be biased. They suggest that, as catalogues of studied UCDs are typically taken from surveys focused on high density environments, since that is where most spectroscopic surveys are done, there is very little information on UCDs or UCD-like objects on the outskirts of galaxy clusters (Saifollahi et al. 2021). More research on UCDs farther out may or may not change the generally accepted theories, but it would likely give a more thorough view of the UCD population and allow for more advanced classification algorithms. Models RF3 and NN3 were therefore run on datasets that rely heavily on a few assumptions: that UCDs are observationally distinct enough from GCs to effectively sort and separate them; that they differ from GCs as postulated in Zhang et al. (2015) as having a larger size, a possible size-luminosity relation, and a mass greater than about 2 × 10^6 M⊙; and that all sources listed in the Zhang et al. (2015) catalogue are in fact UCDs. Fig. 6 shows the relative importance of each feature used in training RF3 as reported by the model itself. Unlike NN packages, the RandomForestClassifier class includes an attribute, feature_importances_, which indicates impurity-based feature importances (Pedregosa et al. 2011) and can be used to compare the relative importance of each feature for a given model during training. Also known as the Gini importance, the importance of each feature is calculated based on how much it reduces the criterion (i.e., the function that measures the quality of each new branch in RF trees). This shows that while the model does use all available features to distinguish between sources, it places the most importance on magnitude and flux radius (i.e., size) features. This is in agreement with our above assumptions regarding UCDs. NN3, unlike RF3, performed best when only run on the colour and flux radius features. This can likely be attributed to this combination of features being the fewest possible features that still provide the model with a substantial amount of information. NN3 was trained on significantly fewer sources than the other two NN models and a larger number of features without a proportionately large number of sources may have resulted in too much noise for the model to infer a clear idea of what constitutes each class. This may not have been an issue for RF3, but RF models and NN models are structured very differently.
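Reading these importances off a fitted model is a one-liner; for example (rf3 and FEATURES here stand for a fitted RandomForestClassifier and its training columns, as in the earlier sketches):

```python
import numpy as np

def ranked_importances(rf, feature_names):
    """Return (feature, Gini importance) pairs sorted from most to least important."""
    importances = rf.feature_importances_      # impurity-based, already summing to 1
    order = np.argsort(importances)[::-1]
    return [(feature_names[i], float(importances[i])) for i in order]

# for name, value in ranked_importances(rf3, FEATURES):
#     print(f'{name:16s} {value:.3f}')
```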
With our small dataset of only 83 UCD sources, we were able to build two models with mediocre performance that only looked at UCDs vs GCs and still required a very significant trim of sources to avoid a strong class imbalance in the data. This trim implies a loss of information from the unused GC sources which may affect model performance. To improve this would require either choosing only the most representative GCs to use in the training data rather than randomly selecting them or to simply have a larger UCD dataset, potentially finding and incorporating outskirt UCDs. A larger dataset would also allow for a model involving other contaminant sources as well. Models that classify UCDs, GCs, and contaminant sources are impractical with our current datasets since our small UCD dataset either becomes invisible when combined with multiple classes or forces a large trim on all classes, which greatly reduces performance.
DISCUSSION
When run on the human-classified sources, the NN models had higher performance on the ACSVCS catalogue, whereas the RF models had higher performance on the NGVS testing catalogue. Of the NGVS test sources that were not classified correctly, approximately 95% were contaminant sources classified as GCs+UCDs (false positives) while the other 5% were GCs classified as contaminants (false negatives). This may suggest that either the level of confidence we expected for GCs classified by Oldham & Auger (2016) was too high or that many of the sources they classify as interlopers are in fact GCs or UCDs.
Both model architectures, however, perform better on the ACSVCS dataset, which was taken by HST, than the NGVS test dataset, which comes from the same NGVS dataset as our training data (despite there being no overlap of sources). This is likely due to the robustness of selection criteria of the two human-selected datasets. Jordán et al. (2009) employ a more rigorous approach involving model-based clustering methods and catalogues of expected contaminants (as opposed to the colour cuts used by Oldham & Auger (2016)), which results in a more reliably classified list of sources. The higher spatial resolution of HST data also contributes to the reliability of human classification. When comparing the two human-selected datasets, there are 253 sources in common and, while the majority of their GC classifications agree within 5% of each other, there is some disagreement which may also increase the discrepancy in our models' performance (Fig. 7).
Limitations
The main limitation in training our models is the limited number of confirmed sources, especially of UCD and background galaxy sources, so the models we created would benefit from further follow-up analysis. GC populations have similar characteristics galaxy to galaxy (e.g., colour, size, and distribution across GC mass and luminosity), but in terms of apparent magnitude these features only appear similar for galaxies of similar distances. Therefore, while our use of absolute magnitudes allows our models to be run easily on other galaxies, it is necessary to limit their use to galaxies at similar distances to that of our training sample. Using our models on other well studied and well documented galaxies (such as the other giant ellipticals in Virgo, M49 and M60, which are well documented and at similar distances to M87) would provide an idea of how well they perform on sources outside of M87. It would be especially helpful to run NN3 and RF3 on larger samples since, as evidenced by their much poorer performance compared to the other models, we did not have access to enough UCD sources to create a successful UCD vs GC model. This is due to UCDs being particularly difficult to source and confirm (Section 3.3). Creating more robust versions of these two models would also greatly aid further research into UCDs by efficiently creating more reliable candidate lists. A major concern when creating these models was whether they would be able to accurately sort through sources with fainter magnitudes, especially considering that Thilker et al. (2022) were able to classify sources ∼1 mag fainter by using ML than by using human classification alone. Our models were only trained on sources whose classifications were spectroscopically confirmed and, as spectroscopy is typically only run on the brightest sources, our training data is much brighter than the rest of the data in the full NGVS dataset. For this reason we ran our four main models (RF1, RF2, NN1, and NN2) on the full, unfiltered dataset of 719,600 NGVS sources to see how our models classified fainter sources. When viewed in colour-magnitude space (Fig. 8), it's clear that they do still classify fainter sources and, despite some overlap, the different populations that each model extracts are distinct. The four models classify each population differently (especially RF2 and NN2), but given that they all had high performance - despite the RF models performing much better on NGVS test data and the NN models performing much better on the ACSVCS data - further investigation and larger tests are required before concluding which is the better model algorithm.
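Running a trained model over the full, unfiltered catalogue then amounts to a single prediction call; a minimal sketch, where full_ngvs is assumed to be a DataFrame carrying the same feature columns used in training and the 0.5 threshold is illustrative:

```python
def classify_catalogue(model, full_ngvs, features, threshold=0.5):
    """Return the catalogue rows the model considers GC+UCD candidates."""
    proba = model.predict_proba(full_ngvs[features])[:, 1]
    candidates = full_ngvs.loc[proba >= threshold].copy()
    candidates['p_gc_ucd'] = proba[proba >= threshold]
    return candidates
```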
Advantages
Our approach of using solely photometric data from a ground-based telescope is important for two main reasons. The first is that incorporating space-based HST data would involve using non-uniform observations since the telescope uses a mix of different filters and exposure lengths for different galaxies. Since its camera is often pointed directly at galactic centres, this means we would be neglecting GCs in the outer halo in our training data. The second reason is that Rubin Observatory's upcoming LSST survey will offer a uniform dataset and our results show it is possible to train a model using only photometric data in the same formats and filters as will be available. LSST is also expected to be a large survey, with a resulting very large amount of data and, hence, our goal is to create tools that can help sift through all these new observations efficiently. Our models are, however, intended to be only that: tools that generate robust, yet unconfirmed candidate lists that still require targeted follow-up inspection and analysis, much like Saifollahi et al. (2021) did with their models generating UCD candidates. The two model types chosen, RF and NN, are two of the more common ML algorithms with minimal barrier to entry for ML beginners that are suited for our supervised classification task. This makes our models and methods more accessible to those who wish to perform follow-up analysis or start a similar project. Given the advantages of using ML on large surveys, this proves to be a promising method of GC and UCD candidate generation, provided the appropriate care is taken in selecting training sets and input features.
Recommendations for Future Use
Aside from training models on larger datasets, another important consideration for follow-up research is that our model features are not linearly independent, in that some features (in this case, the 4 colour features) are linear combinations of other features (i.e., magnitudes). Generally, ML algorithms may perform differently depending on whether or not features are linearly independent, as this can sometimes emphasize the importance of dependent features or make them redundant. When creating our models, we found by trial and error that most of our models (with the exception of NN3) performed better with both colour and magnitude features, but with future improvement of these models it will be important to verify or reevaluate this choice. Unlike NN packages, the RandomForestClassifier class has a built-in attribute, feature_importances_, which indicates impurity-based feature importances, also known as the Gini importance (Pedregosa et al. 2011). During follow-up analysis we accessed this attribute, in which the importance of each feature is calculated based on how much it reduces the criterion (i.e., the function that measures the quality of each new branch in RF trees). We found that the magnitude and flux radius features were most important, while the colour features were comparatively less important. Note that this does not imply that colour features are not important, simply that they may not be as important as magnitude features, since colours show the relationships between magnitude bands and models will assess feature relationships during training. This, therefore, may suggest that while the colour features do seem to improve model performance, they are likely not essential for a successful random forest model due to linear dependence on magnitude features. This also suggests that the flux radius features do help the models distinguish background sources from other sources, as these are expected to be less spatially resolved.
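One direct way to revisit the colour/magnitude redundancy question raised above is to retrain on reduced feature sets and compare cross-validated scores; a sketch under the same illustrative column names used earlier:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def compare_feature_sets(df, label_col, feature_sets):
    """feature_sets maps a descriptive name to a list of feature columns."""
    results = {}
    for name, cols in feature_sets.items():
        rf = RandomForestClassifier(class_weight='balanced_subsample', random_state=0)
        scores = cross_val_score(rf, df[cols], df[label_col], cv=5, scoring='precision')
        results[name] = (scores.mean(), scores.std())
    return results

# Hypothetical usage:
# compare_feature_sets(train_df, 'is_cluster', {
#     'all 14 features': FEATURES,
#     'no colour features': [f for f in FEATURES if '-' not in f],
# })
```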
CONCLUSION
With the large volume of data from upcoming sky surveys, we aimed to create ML classification tools that will create lists of GC+UCD candidates by running on datasets of photometric measurements alone. We employ two supervised ML algorithms: random forest and neural networks, and three architectures, two of which focus on selecting GCs+UCDs from unfiltered catalogues and one auxiliary one which focuses on selecting UCDs from catalogues of only star cluster sources. The two main architectures have promising performance, but the auxiliary models will likely require more confirmed UCD sources before reaching optimal performance levels. When compared to the ACSVCS dataset of human-selected GCs, the best performing random forest model is able to reselect 61.2% ± 8.0% of GCs and the best performing neural network model reselects 95.0% ± 3.4% of them. When compared to GCs and interlopers of the NGVS test set, the random forest models can correctly classify 91.0% ± 1.2% of them and the neural network models can correctly classify 57.3% ± 1.1%. Note that there are inherently more systematic uncertainties in the human-classification method, which may contribute to poorer performance when testing the NN models. Additionally, the strength of our NN method is indicated by achieving 62% as good classification on seeing-limited data compared to HST data.
We show in this paper that existing and accessible ML techniques can be used to successfully classify objects in large-scale photometric surveys. Others have attempted this problem with galaxies at different distances with different algorithms and features and have achieved similar success, which supports the robustness of this technique (Mohammadi et al. 2022). Anticipating the first light of Rubin Observatory, we have created tools that will properly run on upcoming data products. Future development of ML algorithms like ours only serves to improve the accuracy and ease with which it will be possible to search for important and scientifically illuminating sources, such as GCs and UCDs.

Figure 8. Colour-magnitude space of training data (scattered dots) compared to that of classified data (shaded contours) resulting from the RF models (upper left: RF1; lower left: RF2) and NN models (upper right: NN1; lower right: NN2) being run on the full, unfiltered NGVS catalogue of 719,600 sources (bottom centre). The contours show the 70th and 90th percentiles of non-zero values. The training data only consists of sources whose classifications were spectroscopically confirmed and, as spectroscopy is typically only run on the brightest sources, the training data is much brighter than the rest of the data in the full NGVS dataset. Evidently, the models do still classify fainter sources and, despite some overlap, the different populations that each model extracts are distinct.
Figure 2. The flux radius distribution of our training data, with UCDs, GCs, background galaxies, and foreground stars respectively shown in red, blue, black, and yellow in each of the five filter bands. The black vertical line in each plot indicates a change in scaling.

Figure 3. The absolute magnitude distribution of our training data, with UCDs, GCs, background galaxies, and foreground stars shown in red, blue, black, and yellow in each of the five filter bands. The distance to M87, 16.5 Mpc (Strader et al. 2011), was assumed for all sources in calculating absolute magnitudes.

Figure 4. The g-z/ colour-magnitude diagram of each source class comprising the training dataset (symbols as in Fig. 1). The UCD dataset is sourced from the Zhang et al. (2015) and Pandya et al. (2016) catalogues, the GC dataset from the Strader et al. (in prep.) catalogue, the background galaxy dataset from the 2MASS extended source catalogue, and the foreground star dataset from the Legacy Survey's DR8 photometry catalogue of the southern region and the Gaia DR2 catalogue (Section 2.1).

Figure 5. Receiver operating characteristic curves (ROC curves) of both RF binary classification models (left) and both NN binary classification models (right), which show the rate of false positives against that of true positives. All four ROC curves have a large area under the curve (AUC), indicating high performance, but RF1 and NN1 are the better performers given their greater AUC. RF1 and RF3 have AUCs of 0.99 and 0.94, and NN1 and NN3 have AUCs of 0.99 and 0.82.

Figure 6. Impurity-based feature importances of model RF3, normalized to 1. While this model does use all available features, magnitude features and flux radius features are shown to be similarly important and both more important than colour features when classifying UCD and GC sources.

Figure 7. Absolute difference between pGC values from the ACSVCS catalogue and pRed + pBlue (pC) values from the NGVS test catalogue for sources found in both datasets. The classification of the majority of sources is agreed upon within 5%, but there are also sources about which the two catalogues completely disagree (i.e., a 100% difference in probability values).

Figure B1. Confusion matrices for all six models developed in this work, positioned as follows: top-left: RF1; top-right: NN1; middle-left: RF2; middle-right: NN2; lower-left: RF3; lower-right: NN3. Overall accuracy is listed at the top of each figure and counts of classified sources are listed on the matrices, with precision and recall values for each class denoted as "p" and "r" respectively.
Table 1 .
1Final count of sources on which our models were run of each class in the training dataset and in both human-selected catalogues.catalogue
UCD
GC
background
galaxy
foreground
star
NGVS training set
83
1160
90
2346
ACSVCS
645
0
0
NGVS test set
6951
4909
Table 2 .
2All models utilized all 14 of the below features, except for NN3 which utilized only 9, the colour features and the flux radius features.feature
magnitude bands feature count
magnitude
u, g, r, i, z
5
colour
u-g, g-r, g-i, g-z
4
flux radius
u, g, r, i, z
5
Table 4 .
4Metrics for each of the 6 models, with precision and recall values listed for each class. The ACSVCS column indicates how well the models can reselect GCs classified using HST data and the NGVS test column indicates how well the models can reselect both GCs and contaminants classified using NGVS data. The ACSVCS catalogue was cross-matched with NGVS data and both human-selected catalogues were cross matched with our training data to avoid overlap. A detailed definition of each metric is found in Section 3.model accuracy (%)
precision (%)
recall (%)
ACSVCS
NGVS test
RF1
99.4 ± 0.2
GCs+UCDs: 98.9 ± 0.6
contaminants: 99.6 ± 0.2
GCs+UCDs: 99.2 ± 0.4
contaminants: 99.4 ± 0.3
59.4 ± 3.7
91.0 ± 1.2
RF2
99.5 ± 0.3
GCs+UCDs: 99.3 ± 0.5
background galaxies: 98.4 ± 3.4
foreground stars: 99.7 ± 0.2
GCs+UCDs: 99.4 ± 0.5
background galaxies: 97.9 ± 4.3
foreground stars: 99.6 ± 0.3
61.2 ± 8.0
86.0 ± 3.8
RF3
95.4 ± 0.8
UCDs: 71.0 ± 9.9
GCs: 96.6 ± 0.7
UCDs: 47.8 ± 11.0
GCs: 98.6 ± 0.6
-
-
NN1
98.4 ± 0.4
GCs+UCDs: 98.2 ± 1.0
contaminants: 98.5 ± 0.5
GCs+UCDs: 97.0 ± 1.0
contaminants: 99.1 ± 0.5
95.0 ± 3.4
57.3 ± 1.1
NN2
98.0 ± 0.7
GCs+UCDs: 97.8 ± 1.5
background galaxies: 77.6 ± 5.0
foreground stars: 99.0 ± 0.5
GCs+UCDs: 97.2 ± 1.0
background galaxies: 94.4 ± 0.0
foreground stars: 98.5 ± 0.9
94.8 ± 1.5
57.2 ± 0.9
NN3
81.1 ± 3.0
UCDs: 71.2 ± 7.3
GCs: 84.5 ± 5.0
UCDs: 47.9 ± 22.4
GCs: 92.4 ± 6.0
-
-
NGVS test set sources with pRed and/or pBlue values greater than
0.5 were treated as GCs, sources with pC (i.e., pRed + pBlue) values
greater than 0.8 were also deemed as GCs, and all other sources were
considered to be contaminants. Here RF1 had higher performance at
reselecting GCs from the catalogue with RF1 reselecting 91.0% +/-
1.2% and RF2 reselecting 86.0% +/-3.8% of the NGVS test sources.
E.Barbisan et al.
https://datalab.noirlab.edu
MNRAS 000, 1-13 (2021) 4 E.Barbisan et al.
MNRAS 000, 1-13(2021)
https://scikit-learn.org
ACKNOWLEDGEMENTSEB, KCD, and DH acknowledge funding from the Natural Sciences and Engineering Research Council of Canada (NSERC), the Canada Research Chairs (CRC) program, and the McGill Bob Wares Science Innovation Prospectors Fund. KCD acknowledges fellowship funding from the McGill Space Institute. AK acknowledges support from NASA through grant number GO-14738 from STSci. We thank Jay Strader for sharing a preliminary version of the M87 spectroscopic catalog. We also thank the referee for their helpful comments which greatly improved this work.This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation.This publication is based on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council (NRC) of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique of France, and the University of Hawaii.The Legacy Surveys consist of three individual and complementary projects: the Dark Energy Camera Legacy Survey (DECaLS; Proposal ID #2014B-0404; PIs: David Schlegel and Arjun Dey), the Beijing-Arizona Sky Survey (BASS; NOAO Prop. ID #2015A-0801; PIs: Zhou Xu and Xiaohui Fan), and the Mayall z-band Legacy Survey (MzLS; Prop. ID #2016A-0453; PI: Arjun Dey). DECaLS, BASS and MzLS together include data obtained, respectively, at the Blanco telescope, Cerro Tololo Inter-American Observatory, NSF's NOIR-Lab; the Bok telescope, Steward Observatory, University of Arizona; and the Mayall telescope, Kitt Peak National Observatory, NOIRLab. The Legacy Surveys project is honored to be permitted to conduct astronomical research on Iolkam Du'ag (Kitt Peak), a mountain with particular significance to the Tohono O'odham Nation.NOIRLab is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation.This project used data obtained with the Dark Energy Camera (DE-Cam), which was constructed by the Dark Energy Survey (DES) collaboration. Funding for the DES Projects has been provided by many sources detailed here: https://noirlab.edu/science/about/ scientific-acknowledgments#decals.BASS is a key project of the Telescope Access Program (TAP), which has been funded by the National Astronomical Observatories of China, the Chinese Academy of Sciences (the Strategic Priority Research Program "The Emergence of Cosmological Structures" Grant # XDB09000000), and the Special Fund for Astronomy from the Ministry of Finance. The BASS is also supported by the Exter-nal Cooperation Program of Chinese Academy of Sciences (Grant # 114A11KYSB20160057), and Chinese National Natural Science Foundation (Grant # 11433005).The Legacy Survey team makes use of data products from the Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE), which is a project of the Jet Propulsion Laboratory/California Institute of Technology. 
NEOWISE is funded by the National Aeronautics and Space Administration.The This work has made use of data from the European Space Agency (ESA) mission Gaia (https://www.cosmos.esa.int/ gaia), processed by the Gaia Data Processing and Analysis Consortium (DPAC, https://www.cosmos.esa.int/web/gaia/dpac/ consortium). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement.DATA AVAILABILITYThe CFHT data used in this study is publicly available through the CADC (https://www.cadc-ccda.hia-iha.nrc-cnrc.gc.ca/ en/). We also source data from the following catalogues:APPENDIX A: CFHT OBSERVATIONS OF M87M87 is in the field of many MegaPipe observations. These are itemised inTable A1.APPENDIX B: CONFUSION MATRICESThis paper has been typeset from a T E X/L A T E X file prepared by the author.
. R M Arnason, P Barmby, N Vulic, 10.1093/MNRAS/STAA207Monthly Notices of the Royal Astronomical Society. 4925075Arnason R. M., Barmby P., Vulic N., 2020, Monthly Notices of the Royal Astronomical Society, 492, 5075
. K M Ashman, S E Zepf, 10.1086/170850ApJ. 38450Ashman K. M., Zepf S. E., 1992, ApJ, 384, 50
. N M Ball, R J Brunner, A D Myers, D Tcheng, 10.1086/507440ApJ. 650497Ball N. M., Brunner R. J., Myers A. D., Tcheng D., 2006, ApJ, 650, 497
. N Bastian, J Pfeffer, J M D Kruijssen, R A Crain, S Trujillo-Gomez, M Reina-Campos, 10.1093/mnras/staa2453MNRAS. 4981050Bastian N., Pfeffer J., Kruijssen J. M. D., Crain R. A., Trujillo-Gomez S., Reina-Campos M., 2020, MNRAS, 498, 1050
. S Bhandari, arXiv:2108.01282arXiv e-printsBhandari S., et al., 2021, arXiv e-prints, p. arXiv:2108.01282
. M Bhardwaj, 10.3847/2041-8213/abeaa6ApJ. 91018Bhardwaj M., et al., 2021, ApJ, 910, L18
. J P Brodie, C Usher, C Conroy, J Strader, J A Arnold, D A Forbes, A J Romanowsky, 10.1088/2041-8205/759/2/L33ApJ. 75933Brodie J. P., Usher C., Conroy C., Strader J., Arnold J. A., Forbes D. A., Romanowsky A. J., 2012, ApJ, 759, L33
. M Cappellari, 10.1111/j.1365-2966.2010.18174.xMNRAS. 413813Cappellari M., et al., 2011, MNRAS, 413, 813
. K C Dage, 10.1093/mnras/stab943MNRAS. 5041545Dage K. C., et al., 2021, MNRAS, 504, 1545
. J De La Calleja, O Fuentes, 10.1111/j.1365-2966.2004.07442.xMNRAS. 34987De La Calleja J., Fuentes O., 2004, MNRAS, 349, 87
. A Dey, 10.3847/1538-3881/ab089dAJ. 157168Dey A., et al., 2019, AJ, 157, 168
. P Dubath, C J Grillmair, A&A. 321379Dubath P., Grillmair C. J., 1997, A&A, 321, 379
. K El-Badry, E Quataert, D R Weisz, N Choksi, M Boylan-Kolchin, 10.1093/mnras/sty3007MNRAS. 4824528El-Badry K., Quataert E., Weisz D. R., Choksi N., Boylan-Kolchin M., 2019, MNRAS, 482, 4528
. K Fahrion, 10.1051/0004-6361/202037685A&A. 63726Fahrion K., et al., 2020, A&A, 637, A26
. L Ferrarese, 10.1088/0067-0049/200/1/4Astrophysical Journal, Supplement Series. Ferrarese L., et al., 2012, Astrophysical Journal, Supplement Series, 200
. D A Forbes, 10.3847/1538-3881/153/3/114AJ. 153114Forbes D. A., et al., 2017, AJ, 153, 114
. Gaia Collaboration, 10.1051/0004-6361/201629272A&A. 5951Gaia Collaboration et al., 2016, A&A, 595, A1
. Gaia Collaboration, 10.1051/0004-6361/201833051A&A. 6161Gaia Collaboration et al., 2018, A&A, 616, A1
. B Giesers, 10.1093/mnrasl/slx203MNRAS. 47515Giesers B., et al., 2018, MNRAS, 475, L15
. B Giesers, 10.1051/0004-6361/201936203A&A. 6323Giesers B., et al., 2019, A&A, 632, A3
. S D J Gwyn, 10.1086/526794PASP. 120212Gwyn S. D. J., 2008, PASP, 120, 212
. W E Harris, 10.1088/0004-637X/703/1/939ApJ. 703939Harris W. E., 2009, ApJ, 703, 939
. W E Harris, B C Whitmore, D Karakla, W Okoń, W A Baum, D A Hanes, J J Kavelaars, 10.1086/498058ApJ. 63690Harris W. E., Whitmore B. C., Karakla D., Okoń W., Baum W. A., Hanes D. A., Kavelaars J. J., 2006, ApJ, 636, 90
H He, Y Ma, 10.1002/9781118646106Imbalanced Learning: Foundations, Algorithms, and Applications. WileyHe H., Ma Y., 2013, Imbalanced Learning: Foundations, Algorithms, and Applications. Wiley, doi:10.1002/9781118646106
. L C Ho, A V Filippenko, 10.1086/178091ApJ. 472600Ho L. C., Filippenko A. V., 1996, ApJ, 472, 600
. G Illingworth, 10.1086/154152ApJ. 20473Illingworth G., 1976, ApJ, 204, 73
. A Jordán, 10.1088/0067-0049/180/1/54Astrophysical Journal, Supplement Series. 18054Jordán A., et al., 2009, Astrophysical Journal, Supplement Series, 180, 54
Kirsten F , arXiv:2105.11445A repeating fast radio burst source in a globular cluster. Kirsten F., et al., 2021, A repeating fast radio burst source in a globular cluster (arXiv:2105.11445)
. A Kundu, B C Whitmore, 10.1086/321073AJ. 1212950Kundu A., Whitmore B. C., 2001, AJ, 121, 2950
. S.-Y Lee, C Chung, S.-J Yoon, 10.3847/1538-4365/aaecd4ApJS. 2402Lee S.-Y., Chung C., Yoon S.-J., 2019, ApJS, 240, 2
. T J Maccarone, A Kundu, S E Zepf, K L Rhode, 10.1038/nature05434Nature. 445183Maccarone T. J., Kundu A., Zepf S. E., Rhode K. L., 2007, Nature, 445, 183
. J C A Miller-Jones, 10.1093/mnras/stv1869MNRAS. 4533918Miller-Jones J. C. A., et al., 2015, MNRAS, 453, 3918
. M Mohammadi, J Mutatiina, T Saifollahi, K Bunte, arXiv:2201.01604arXiv e-printsMohammadi M., Mutatiina J., Saifollahi T., Bunte K., 2022, arXiv e-prints, p. arXiv:2201.01604
. G Mountrichas, A Corral, V A Masoura, I Georgantopoulos, A Ruiz, A Georgakakis, F J Carrera, S Fotopoulou, 10.1051/0004-6361/201731762A&A. 60839Mountrichas G., Corral A., Masoura V. A., Georgantopoulos I., Ruiz A., Georgakakis A., Carrera F. J., Fotopoulou S., 2017, A&A, 608, A39
. L J Oldham, M W Auger, 10.1093/mnras/stv2244Monthly Notices of the Royal Astronomical Society. 455820Oldham L. J., Auger M. W., 2016, Monthly Notices of the Royal Astronomical Society, 455, 820
. V Pandya, J Mulchaey, J E Greene, 10.3847/0004-637x/819/2/162The Astrophysical Journal. 819162Pandya V., Mulchaey J., Greene J. E., 2016, The Astrophysical Journal, 819, 162
. R Pattnaik, K Sharma, K Alabarta, D Altamirano, M Chakraborty, A Kembhavi, M Méndez, J K Orwat-Kapola, 10.1093/mnras/staa3899MNRAS. 5013457Pattnaik R., Sharma K., Alabarta K., Altamirano D., Chakraborty M., Kem- bhavi A., Méndez M., Orwat-Kapola J. K., 2021, MNRAS, 501, 3457
. F Pedregosa, Journal of Machine Learning Research. 122825Pedregosa F., et al., 2011, Journal of Machine Learning Research, 12, 2825
. E W Peng, 10.1086/498210ApJ. 63995Peng E. W., et al., 2006, ApJ, 639, 95
. G Pérez, M Messa, D Calzetti, S Maji, D E Jung, A Adamo, M Sirressi, 10.3847/1538-4357/abcebaThe Astrophysical Journal. 907100Pérez G., Messa M., Calzetti D., Maji S., Jung D. E., Adamo A., Sirressi M., 2021, The Astrophysical Journal, 907, 100
. M Reina-Campos, J M D Kruijssen, J L Pfeffer, N Bastian, R A Crain, 10.1093/mnras/stz1236MNRAS. 4865838Reina-Campos M., Kruijssen J. M. D., Pfeffer J. L., Bastian N., Crain R. A., 2019, MNRAS, 486, 5838
. M Reina-Campos, S Trujillo-Gomez, A J Deason, J M D Kruijssen, J L Pfeffer, R A Crain, N Bastian, M E Hughes, arXiv:2106.07652Reina-Campos M., Trujillo-Gomez S., Deason A. J., Kruijssen J. M. D., Pfeffer J. L., Crain R. A., Bastian N., Hughes M. E., 2021, arXiv e-prints, p. arXiv:2106.07652
. M Reina-Campos, S Trujillo-Gomez, J L Pfeffer, A Sills, A J Deason, R A Crain, J M D Kruijssen, arXiv:2204.11861Reina-Campos M., Trujillo-Gomez S., Pfeffer J. L., Sills A., Deason A. J., Crain R. A., Kruijssen J. M. D., 2022, arXiv e-prints, p. arXiv:2204.11861
. K L Rhode, S E Zepf, A Kundu, A N Larner, 10.1086/521397AJ. 1341403Rhode K. L., Zepf S. E., Kundu A., Larner A. N., 2007, AJ, 134, 1403
. C L Rodriguez, S Chatterjee, F A Rasio, 10.1103/PhysRevD.93.084029Phys. Rev. D. 9384029Rodriguez C. L., Chatterjee S., Rasio F. A., 2016, Phys. Rev. D, 93, 084029
. T Saifollahi, 10.1093/mnras/stab1118Monthly Notices of the Royal Astronomical Society. 5043580Saifollahi T., et al., 2021, Monthly Notices of the Royal Astronomical Society, 504, 3580
. A C Seth, 10.1038/nature13762Nature. 513398Seth A. C., et al., 2014, Nature, 513, 398
. M F Skrutskie, 10.1086/498708AJ. 1311163Skrutskie M. F., et al., 2006, AJ, 131, 1163
. J Strader, 10.1088/0067-0049/197/2/33ApJS. 19733Strader J., et al., 2011, ApJS, 197, 33
. J Strader, L Chomiuk, T J Maccarone, J C A Miller-Jones, A C Seth, 10.1038/nature11490Nature. 49071Strader J., Chomiuk L., Maccarone T. J., Miller-Jones J. C. A., Seth A. C., 2012, Nature, 490, 71
. D A Thilker, 10.1093/mnras/stab3183MNRAS. 5094094Thilker D. A., et al., 2022, MNRAS, 509, 4094
. H Tranin, O Godet, N Webb, D Primorac, arXiv:2111.01489arXiv e-printsTranin H., Godet O., Webb N., Primorac D., 2021, arXiv e-prints, p. arXiv:2111.01489
. C Usher, J P Brodie, D A Forbes, A J Romanowsky, J Strader, J Pfeffer, N Bastian, 10.1093/mnras/stz2596MNRAS. 490491Usher C., Brodie J. P., Forbes D. A., Romanowsky A. J., Strader J., Pfeffer J., Bastian N., 2019, MNRAS, 490, 491
. N C Weatherford, S Chatterjee, K Kremer, F A Rasio, 10.3847/1538-4357/ab9f98ApJ. 898162Weatherford N. C., Chatterjee S., Kremer K., Rasio F. A., 2020, ApJ, 898, 162
Science with a Next Generation Very Large Array. J M Wrobel, J C A Miller-Jones, K E Nyland, T J Maccarone, Astronomical Society of the Pacific Conference Series. Murphy E.517743Wrobel J. M., Miller-Jones J. C. A., Nyland K. E., Maccarone T. J., 2018, in Murphy E., ed., Astronomical Society of the Pacific Conference Series Vol. 517, Science with a Next Generation Very Large Array. p. 743
. H X Zhang, 10.1088/0004-637X/802/1/30Astrophysical Journal. 802Zhang H. X., et al., 2015, Astrophysical Journal, 802
. Y Zhang, Y Zhao, X.-B Wu, 10.1093/mnras/stab744MNRAS. 5035263Zhang Y., Zhao Y., Wu X.-B., 2021, MNRAS, 503, 5263
Table A1. CADC MegaPipe observations covering M87. Table A1. CADC MegaPipe observations covering M87.
. Megapipe, 364203MegaPipe.364.203
. Megapipe, 365MegaPipe.365.201
. Megapipe, 365202MegaPipe.365.202
. Megapipe, 365203MegaPipe.365.203
. Megapipe, 365206MegaPipe.365.206
. Megapipe, 366MegaPipe.366.201
. Megapipe, 366202MegaPipe.366.202
. Megapipe, 366203MegaPipe.366.203
. Megapipe, 366204MegaPipe.366.204
. Megapipe, 366205MegaPipe.366.205
. Megapipe, 366206MegaPipe.366.206
. Megapipe, 367MegaPipe.367.201
. Megapipe, 367202MegaPipe.367.202
. Megapipe, 367203MegaPipe.367.203
. Megapipe, 367204MegaPipe.367.204
. Megapipe, 367205MegaPipe.367.205
. Megapipe, 367206MegaPipe.367.206
. Megapipe, 368MegaPipe.368.200
. Megapipe, 368MegaPipe.368.201
. Megapipe, 368202MegaPipe.368.202
. Megapipe, 368203MegaPipe.368.203
. Megapipe, 368204MegaPipe.368.204
. Megapipe, 368205MegaPipe.368.205
. Megapipe, 368206MegaPipe.368.206
. Megapipe, 369MegaPipe.369.198
. Megapipe, 369MegaPipe.369.199
. Megapipe, 369MegaPipe.369.200
. Megapipe, 369202MegaPipe.369.202
. Megapipe, 369203MegaPipe.369.203
. Megapipe, 369204MegaPipe.369.204
. Megapipe, 369205MegaPipe.369.205
. Megapipe, 369206MegaPipe.369.206
. Megapipe, 370MegaPipe.370.198
. Megapipe, 370MegaPipe.370.199
. Megapipe, 370MegaPipe.370.200
. Megapipe, 370202MegaPipe.370.202
. Megapipe, 370203MegaPipe.370.203
. Megapipe, 370204MegaPipe.370.204
. Megapipe, 370205MegaPipe.370.205
. Megapipe, 370206MegaPipe.370.206
. Megapipe, 371MegaPipe.371.198
. Megapipe, 371MegaPipe.371.199
. Megapipe, 371MegaPipe.371.200
. Megapipe, 371204MegaPipe.371.204
. Megapipe, 372MegaPipe.372.198
|
[] |
[
"NEW PROPERTIES OF THE EDELMAN-GREENE BIJECTION",
"NEW PROPERTIES OF THE EDELMAN-GREENE BIJECTION"
] |
[
"Svante Linusson ",
"Samu Potka "
] |
[] |
[] |
Edelman and Greene constructed a correspondence between reduced words of the reverse permutation and standard Young tableaux. We prove that for any reduced word the shape of the region of the insertion tableau containing the smallest possible entries evolves exactly as the upper-left component of the permutation's (Rothe) diagram. Properties of the Edelman-Greene bijection restricted to 132-avoiding and 2143-avoiding permutations are presented. We also consider the Edelman-Greene bijection applied to non-reduced words.
|
10.4310/joc.2020.v11.n2.a2
|
[
"https://arxiv.org/pdf/1804.10034v2.pdf"
] | 52,949,894 |
1804.10034
|
252ff004850a8c9a9e364ac9bf11fae61c7fbb79
|
NEW PROPERTIES OF THE EDELMAN-GREENE BIJECTION
Svante Linusson
Samu Potka
NEW PROPERTIES OF THE EDELMAN-GREENE BIJECTION
Edelman and Greene constructed a correspondence between reduced words of the reverse permutation and standard Young tableaux. We prove that for any reduced word the shape of the region of the insertion tableau containing the smallest possible entries evolves exactly as the upper-left component of the permutation's (Rothe) diagram. Properties of the Edelman-Greene bijection restricted to 132-avoiding and 2143-avoiding permutations are presented. We also consider the Edelman-Greene bijection applied to non-reduced words.
Introduction
In 1982, Richard Stanley conjectured, and later proved algebraically in [23] that the number of different reduced words for the reverse permutation in the symmetric group S n is equal to the number of staircase shape standard Young tableaux. Motivated to find a bijective proof, Edelman and Greene [9] constructed a correspondence based on the celebrated Robinson-Schensted-Knuth (RSK) algorithm and Schützenberger's jeu de taquin. See also the work of Haiman on dual equivalence [12]. Later, Little [19] found another bijection based on the Lascoux-Schützenberger tree, [16], which was proved to be equivalent to the Edelman-Greene (EG) correspondence by Hamaker and Young in [14]. Recently, reduced words of the reverse permutation have been studied under the name of sorting networks. Uniformly random sorting networks are the topic of, for example, [1] by Angel, Holroyd, Romik, and Virág, and the subsequent papers, in particular the recent work by Dauvergne and Virág [7] and Dauvergne [6] announcing proofs of the conjectures in [1]. See an example of a sorting network illustrated by its wiring diagram in Figure 1.
Our main result, Theorem 3.3, is that the shape of the empty area (Rothe diagram) in the upper left corner of the permutation matrix is exactly the same as a region in the tableaux generated by the EGcorrespondence which we call the frozen region. See Figure 2. One consequence of this is Conjecture 1, a reformulation of a part of [1,Conjecture 2] directly in terms of the EG-bijection. As a side-product of Theorem 3.3 we obtain some new observations and simple reproofs of previous results on the reduced words of 132-avoiding permutations in Corollary 3.8, Corollary 3.9 and Proposition 3.10. We also consider sorting networks whose intermediate permutations are required to be 132-avoiding. These can be viewed as chains of maximum length in the Tamari lattice [2], and have recently been studied by Fishel and Nelson [10], and Schilling, Thiéry, White and Williams [21]. The results in this paper are used to study limit phenomena of random 132-avoiding sorting networks in [18].
In Section 4 we consider the Edelman-Greene bijection applied to non-reduced words. In particular, we study the sets of words yielding the same pairs of Young tableaux under the Edelman-Greene correspondence and study a natural partial order on this set which turns out to have some nice and surprising properties. Note that there is a different generalization of the Edelman-Greene bijection for non-reduced words called Hecke insertion [3].
Preliminaries
This section briefly reviews the basic definitions and background of this paper.
2.1. Reduced words and the weak Bruhat order on S n . The symmetric group S n contains all permutations σ = σ(1) . . . σ(n) on [n] = {1, . . . , n}. The set of inversions of a permutation σ ∈ S n is defined as Inv(σ) = {(i, j) : 1 ≤ i < j ≤ n, σ(i) > σ(j)}. The weak Bruhat order is then defined by σ w τ for σ, τ ∈ S n if Inv(σ) ⊆ Inv(τ ). The reverse permutation n(n − 1) . . . 21 is the unique maximal element of (S n , w ) and the identity permutation id = (1, . . . , n) the unique minimal element.
Each σ ∈ S n can be written as a composition of at least inv(σ) = |Inv(σ)| adjacent tranpositions, s i = (i i + 1). Hence σ ∈ S n can be written as a word w = w 1 . . . w m , m ≥ inv(σ), with letters 1 ≤ w i ≤ n − 1 corresponding to transpositions s w i . The notation w ∈ N * means that w is a finite word with positive integer letters. We define len(w) = m, the length of w. When len(w) = inv(σ), we say that w is a reduced word of σ. Note that each reduced word w = w 1 w 2 . . . w m , m = inv(σ), of σ ∈ S n can be identified with a chain id w s w 1 w s w 1 s w 2 w · · · w s w 1 s w 2 · · · s wm = σ in the weak Bruhat order on S n . We denote the set of reduced words of σ ∈ S n by R(σ), and, for convenience, in the case of σ = n(n − 1) . . . 21 use the abbreviation R(n).
We will adopt the convention that the permutation matrix corresponding to σ ∈ S n has 1s in entries (σ(i), i), i = 1 . . . n, see the example below. It is important to note that we consider the transpositions acting on positions and perform the compositions of s w i corresponding to a word w = w 1 . . . w m from the left in our arguments. (Equivalently one could compose from the right and consider them acting on values.) As an example, consider S 4 and the reduced word 1213. Composing s 1 s 2 s 1 s 3 from the left yields the permutation 3241. In terms of permutation matrices, we would have, for example, where we can see that s i corresponds to swapping the columns i and i + 1.
s 1 = 2 1 3 4
Standard Young tableaux.
Recall that a partition λ of m ∈ N is a tuple (λ 1 , . . . , λ ) of positive integers λ 1 ≥ · · · ≥ λ > 0 such that λ 1 + · · · + λ = m. The length of λ is the number of parts in it: len(λ) = . A partition can be represented by its Young diagram (also called Ferrers diagram) which is the set {(i, j) ∈ N 2 : 1 ≤ i ≤ , 1 ≤ j ≤ λ i } and is often (in the so-called English notation) drawn as a collection of square boxes corresponding to the cells (i, j) with i increasing downwards and j to the right. A Young tableau T of shape λ = (λ 1 , . . . , λ ) is a filling of the Young diagram of λ, typically with positive integer entries, denoted T i,j . Such a tableau T is called standard if the entries 1, . . . , λ 1 + · · · + λ appear exactly once each, and the rows and columns of T are strictly increasing. We let SYT(λ) be the set of standard Young tableaux of the shape λ.
2.3. The Edelman-Greene bijection. The Edelman-Greene correspondence is a bijection between R(n), that is, maximal chains in the weak Bruhat order on S n , and standard Young tableaux of the staircase shape sc n = (n − 1, n − 2, . . . , 1).
Definition 2.1 (The Edelman-Greene insertion). Suppose that P is a Young tableau with strictly increasing rows P 1 , . . . , P and x 0 ∈ N is to be inserted in P . The insertion procedure is as follows for each 0 ≤ i ≤ :
• If x i > z for all z ∈ P i+1 , place x i at the end of P i+1 and stop.
•
If x i = z for some z ∈ P i+1 , insert x i+1 = z + 1 in P i+2 .
• Otherwise, x i < z for some z ∈ P i+1 , and we let z be the least such z, replace it by x i and insert x i+1 = z in P i+2 . In both this and the case above we say that x i bumps z . Repeat the insertion until for some i the x i is inserted at the end of P i+1 and the algorithm stops. This could be a previously empty row P +1 .
We should mention that our definition of the insertion differs from that of [9], where it is called the Coxeter-Knuth insertion. However, using for example the proof of [9, Lemma 6.23], one can show that the two definitions coincide for reduced words. In our formulation the tableaux are increasing in rows and columns also for non-reduced words. Note also that except for a difference in handling equal elements bumping, the Edelman-Greene insertion and the RSK insertion are the same.
Definition 2.2 (The Edelman-Greene correspondence). Suppose that w = w 1 . . . w i . . . w m ∈ N * . Initialize P (0) = ∅.
• For each 1 ≤ i ≤ m, insert w i in P (i−1) and denote the result by P (i) . Let P (m) = P (w) and let Q(w) be the Young tableau obtained by setting Q(w) i,j = k for the unique cell (i, j) ∈ P (k) \ P (k−1) . Set EG(w) = Q(w).
As an example, consider the reduced word w = 321232. Then the P (k) , 1 ≤ k ≤ 6, form the following sequence 3 .
The tableau P (w) is called the insertion tableau and the tableau Q(w) the recording tableau. Note that P (w) and Q(w) are always of the same shape for a fixed w. To state one of the main results of Edelman and Greene, let the reading word r(P ) of an insertion tableau P be the word obtained by collecting the entries of P row by row from left to right starting from the bottom row. is a bijection between ∪ σ∈Sn R(σ) and the set of pairs of tableaux (P, Q) such that P is row and column strict, r(P ) is reduced, P and Q have the same shape, and Q is standard.
Each of the P (k) , 1 ≤ k ≤ m, is going to contain some amount of entries such that P (k) i,j = i + j − 1. We call the region of P (k) formed by such entries the frozen region and say that an insertion tableau is frozen if the tableau is entirely a frozen region. The reason for using this terminology is that the frozen region does not change during the Edelman-Greene insertion. See P in Figure 2. The frozen region is white in the example. It turns out that P (w) is always frozen when w ∈ R(n), and in fact, as we will see later in Corollary 3.7, more generally if and only if w ∈ R(σ) with σ 132-avoiding. Frozen tableaux have previously appeared in the literature on the combinatorics of K-theory under the name minimal increasing tableaux, see, for example, [4] and [13].
Theorem 2.4 ([9, Theorem 6.26]). Suppose w ∈ R(n). Then P (w) is frozen and Q(w) ∈ SYT(sc n ). The map EG(w) : w → Q(w) is a bijection from R(n) to SYT(sc n ).
Continuing in the setting of Theorem 2.4, if w ∈ R(n), the inverse to the Edelman-Greene bijection takes a very special form. To define it, we have to introduce Schützenberger's jeu de taquin. For a good introduction, we refer to [24] or [20], although the terminology is slightly different.
Let T be a partially filled Young diagram with increasing rows and columns, and each entry 1 ≤ k ≤ max (i,j)∈T T i,j occurring exactly once. The evacuation path of T is a sequence of cells π 1 , . . . , π s such that • π 1 = (i max , j max ), the location of the largest entry of T ,
• if π k = (i, j), then π k+1 = (i , j ) ∈ T such that T i ,j = max{T i,j−1 , T i−1,j } > −∞ with the convention T i,j = −∞ for (i, j)
∈ T and for unlabeled (i, j) ∈ T . Next, define the tableau T ∂ by
• removing the label of T π 1 ,
• and sliding the labels along the evacuation path: T π 1 ← T π 2 ← · · · ← T πs . A single application of ∂ is called an elementary promotion. Whenever a label 1 ≤ ≤ T π 1 slides from some cell (i, j) to (i, j + 1) (respectively (i + 1, j)) in applying ∂ until all labels have been removed is referred to as a right slide (respectively downslide). For w ∈ R(n), the inverse to the Edelman-Greene bijection can then be defined as follows.
Theorem 2.5 ([9, Theorem 7.18]). Suppose Q ∈ SYT(sc n ). Apply ∂ until all labels have been cleared and say that π
(k) 1 = (i k , j k ) is the first cell of the evacuation path π (k) for the k:th iteration. Then EG −1 (Q) = j ( n 2 ) . . . j k . . . j 1 .
Consider again the example following Definition 2.2. Applying ∂ yields the sequence
Q = 1 4 5 2 6 3 ∂ −→ 1 5 2 4 3 ∂ −→ 1 2 4 3 ∂ −→ 1 2 3 ∂ −→ 1 2 ∂ −→ 1 ∂ −→ .
The largest entries are in the cells π
(1) 1 = (2, 2), π (2) 1 = (1, 3), π (3) 1 = (2, 2), π (4) 1 = (3, 1), π (5) 1 = (2, 2) and π (6) 1 = (1, 3). Hence, EG −1 (Q) = 321232 as expected.
Another important operator will be the so-called evacuation S, which is in some sense dual to promotion. If T is a standard Young tableau, T S is defined by setting T S i,j = k if and only if (i, j) is not labeled in T ∂ k but is labeled in T ∂ k−1 . Thus T S records when cells become empty in iterating the elementary promotion ∂ for T . Returning to the previous example, we would have
Q S = 1 2 6 3 5 4 .
In his original work [22], Schützenberger proved a remarkable property of the operator S: it is an involution.
Frozen regions and diagrams
This section aims to prove our main result. Before proceeding with the proof, we need to recall some additional properties of the Edelman-Greene bijection. The results below are due to Edelman and Greene.
Lemma 3.1 ([9, Lemma 6.22]). If P is row and column strict, then P (r(P )) = P . Lemma 3.2 ([9, a part of Lemma 6.23]). If w ∈ R(σ), then P (w) is row and column strict, and r(P (w)) ∈ R(σ).
Our goal is to show that the shape of the frozen region of P (k) corresponds to the shape of one part of the so-called diagram of σ = s w 1 s w 2 . . . s w k . The (Rothe) diagram D(σ) of a permutation σ is the set of cells left unshaded when we shade all the cells weakly to the east and south of 1-entries in the permutation matrix M (σ). In particular, we consider the (possibly empty) connected component of D(σ) containing (1, 1) which we call the top-left component of the diagram and denote by D (1,1) (σ). The top-left component induces a partition which is denoted by λ(σ). Similarly, the frozen region of the insertion tableau of a reduced word induces a partition λ f (w) since by Theorem 2.3 the tableau is row and column strict. See Figure 2 for an example. The following is one of our main results.
D(σ) = 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 0 1 0 0 0 0 P =Theorem 3.3. If w = w 1 · · · w is reduced, then λ(s w 1 . . . s w ) = λ f (w).
That is, the top-left component of the diagram of s w 1 . . . s w has the same shape as the frozen region of P (w).
Proof. By Lemma 3.1 and Lemma 3.2, it is enough to consider the case when w = r(P (w)) = p m . . . p 2 p 1 where p i is the word formed by the letters in P i (w), the i:th row of P (w). The remark below will be useful throughout.
Remark. Let σ(w) = s w 1 · · · s w k for a word w = w 1 . . . w k . Since r i , 1 ≤ i ≤ m,
is a strictly increasing word, a number in σ(w) can move at most one step to the left in σ(w)σ(r i ). The number in position j moves k steps to the right if j(j + 1) · · · (j + k − 1) is a subword of r i .
Let σ = s w 1 · · · s w . We will start by showing that λ f (w) 1 = λ(σ) 1 and len(λ f (w)) = len(λ(σ)). For the first statement, note that the topmost 1 of M (σ) is in the cell (1, λ(σ) + 1). On the other hand, none of the rows of P (w) below the first can contain a 1 as the columns of P (w) are strictly increasing. Hence, by the remark, the sequence s 1 · · · s λ f (w) 1 of transpositions coming from p 1 moves the number 1 to position λ f (w) 1 + 1 in σ. Note that this means the same thing as moving the 1 in the first row to column λ f (w) 1
+ 1 in M (σ). Thus, λ f (w) 1 = λ(σ) 1 .
Similarly, we can show that there is a 1 in the cell (len(λ f (w)) + 1, 1) in M (σ), where len(λ f (w)) is the index of the last row containing a frozen part, as follows. By column and row-strictness, no transpositions in p m . . . p i , i = len(λ f (w)) + 1, affect the number len(λ f (w)) + 1 in the permutation. Hence, the first numbers len(λ f (w)), . . . , 1 of the rows len(λ f (w)), . . . , 1 form a sequence of transpositions s len(λ f (w)) , . . . , s 1 moving the number len(λ f (w))+1 to position 1 in σ. Thus len(λ f (w)) = len(λ(σ)).
It remains to show that λ f (w) i = λ(σ) i for 1 < i ≤ len(λ f (w)). Consider the row i with frozen part of length λ f (w) i , and the permutation σ i corresponding to the reduced word w i = p m · · · p i . By row and column-strictness, the letter i does not appear in w i+1 . By the remark, the effect of the transpositions at indices (
p i ) 1 = i, . . . , (p i ) λ f (w) i = (i + λ f (w) i − 1) is to move the 1 in row i to column i + λ f (w) i in M (σ i ). Suppose λ f (w) j = λ f (w) i for i ≤ j ≤ i, and λ f (w) j > λ f (w) i for all j < i . We will now prove that λ(σ) j = λ f (w) i for i ≤ j ≤ i.
This situation is illustrated in Figure 3. First, note that by the remark, in fact, for all i ≤ j ≤ i, the 1 in row j is moved by
1 λ f (w) i λ f (w) i + 1 i 0 . . . 0 1 . . . . . . . . . . . . 0 . . . i 0 . . . 0 0 . . . . . . 1 . . . 0 . . .p j to column j + λ f (w) i in M (σ j ). Next, consider i . We have P (j,λ f (w) i +1) = j + λ f (w) i for j ≤ i since it is in the frozen part. These entries for j = i − 1, . . . , 1 will move the number i back to λ f (w) i + 1. Hence σ(i ) = λ f (w) i + 1.
Finally, by the remark, the number j, i < j ≤ i, can be moved at most j − 1 steps to the left by p j−1 , . . . , p 1 . Hence σ(j) > λ f (w) i + 1 for i < j ≤ i, and the claim follows. This implies that λ f (w) i = λ(σ) i for any 1 < i ≤ len(λ f (w)).
Given w = w 1 . . . w k ∈ R(σ), let w rev = w k . . . w 1 ∈ R(σ −1 )
. This corresponds to reflecting the wiring diagram in the vertical axis through the midpoint. Edelman and Greene proved the following lemmas.
• P (w rev ) = P (w) t , where t is the transpose, and • Q(w rev ) = Q(w) S .
A similar statement holds for taking complements. This time the wiring diagram picture would be to reflect the diagram in the horizontal axis through the middle. (1)), which corresponds to flipping the permutation matrix about its horizontal axis, and σ c = (n+1−σ(1), . . . , n+1−σ(n)), which corresponds to doing the same about the vertical axis.
Lemma 3.5 ([9, Corollary 7.21]). Suppose w = w 1 . . . w k ∈ R(n) and letw =w 1 . . .w k , wherew i = n − w i for 1 ≤ i ≤ k. Thenw ∈ R(n), and Q(w) = Q(w) t . Note that if s w 1 · · · s w k = σ, then sw 1 · · · sw k = (σ r ) c = (σ c ) r , where σ r = (σ(n), . . . , σ
The symmetries above yield the reformulation of a part of [1, Conjecture 2] below. We state it informally. The reader is referred to [1] for the details on their conjecture.
Conjecture 1 (Reformulation of a consequence of [1, Conjecture 2]).
Let w be a random sorting network. For all t ∈ (0, 1), the limit shape of the scaled frozen region
F t = {( 2j n − 1, 1 − 2i n ) ∈ R 2 : (i, j) ∈ λ f (w 1 . . . w t( n 2 ) )} is {(x, y) ∈ R 2 :
x ≤ − cos(πt), y ≥ cos(πt), sin 2 (πt) − 2xy cos(πt) − x 2 − y 2 = 0}.
A proof of the corresponding part of [1, Conjecture 2] has been announced recently in [7]. See also a stronger version in [6, Theorem 2]. Conjecture 1 and [1, Conjecture 2] are illustrated in Figure 4. 3.1. Pattern avoidance. Theorem 3.3 also connects our work with the study of pattern-avoiding permutations. The permutation σ ∈ S n contains the pattern p = p 1 . . . p k ∈ N * if there exist indices 1 ≤ i 1 < i 2 · · · < i k ≤ n such that σ(i) < σ(j) if and only if p i < p j for all i < j, i, j ∈ {i 1 , . . . , i k }. If σ does not contain p, it is called p-avoiding. Since the length of a reduced word of σ ∈ S n is exactly the number of inversions in σ, that is inv(σ), Lemma 3.6 suggests we also need the following well-known fact: If σ ∈ S n , then |D(σ)| = inv(σ). Note that by Lemma 3.6, this can also be stated as λ(σ) inv(σ) for σ ∈ S n (132), meaning that λ(σ) is a partition of inv(σ). We then obtain the characterization below. illustrating how the same shapes occur in both permutation matrices M (σ t ) and frozen regions λ f (w 1 . . . w t( n 2 ) ), where σ t is the permutation defined by w 1 . . . w t( n 2 ) for a random sorting network w. Somewhat related, Tenner showed in [25,Theorem 5.15] that the set of 132-avoiding permutations of any length with k inversions is in bijection with partitions of k. The proof is by constructing a bijection by filling the Young diagram of λ k in such a way that the result is a frozen tableau (it is called antidiagonal filling in the paper). Then the reading words of these tableaux are shown to be reduced, as also follows by Lemma 3.1 and Lemma 3.2, and moreover to yield the 132avoiding permutations. This would then imply the "only if"-direction of Corollary 3.7 by Lemma 3.2.
The corollary below is mostly a reproof of consequences of results by Stanley [23,Theorem 4.1], and Edelman and Greene [9, Theorem 8.1]. We have added the observation that each shape λ ⊂ sc n appears for exactly one σ ∈ S n (132) (and the consequent second bijection), which also follows from their works by properties of 132-avoiding permutations but is not discussed.
Corollary 3.8. If σ is 132-avoiding, then P (w) is frozen and has the same shape λ(σ) for all w ∈ R(σ). Furthermore, each shape λ ⊂ sc n appears for exactly one σ ∈ S n (132). Hence, EG(w) : w → Q(w) defines a bijection R(σ) → SYT(λ(σ)), and a bijection
R(σ) = λ⊂scn f λ , where p ∈ {132, 213}.
This is implied by Corollary 3.8 and symmetries proved by Edelman and Greene. However, we have not been able to simplify the sum on the right-hand side.
3.2. 132-avoiding sorting networks. Having in mind that the insertion tableau P (w) becomes frozen for any reduced word w of the reverse permutation, it could be interesting to restrict to 132-avoiding sorting networks, that is, those reduced words w = w 1 . . . w ( n 2 ) ∈ R(n) such that for any 1 ≤ i ≤ n 2 the permutation s w 1 · · · s w i is 132-avoiding, or, equivalently, P (w 1 . . . w i ) is frozen. This corresponds to considering the maximum length chains in the weak Bruhat order on S n restricted to 132-avoiding permutations. Björner and Wachs showed in [2] that the restriction yields a sublattice isomorphic to the Tamari lattice T n .
Using results from the next section, we can characterize 132-avoiding sorting networks in terms of shifted standard Young tableaux, which was first proved by Fishel and Nelson [10, Theorem 4.6]. These are standard Young tableaux for which each row i can be shifted (i − 1) steps to the right without breaking the rule that the columns are increasing downwards. For example, It is 132-avoiding if and only if Q i,j > Q i−1,j+1 for all (i, j), (i − 1, j + 1) ∈ Q, or in other words, Q is a shifted standard Young tableau of the shape sc n , where Q = EG(w).
It is 213-avoiding if and only if
Q i,j < Q i−1,j+1 for all (i, j), (i − 1, j + 1) ∈ Q where Q = EG(w). Proof. Suppose Q i,j > Q i−1,j+1 for all (i, j), (i − 1, j + 1) ∈ Q.
We shall see in Proposition 4.3 that then w = c 1 c 2 . . . c ( n 2 ) where c i is the number of the column of 1 ≤ i ≤ n 2 in Q. This implies that P (k) is frozen for all 1 ≤ k ≤ n 2 , since each letter c i is inserted in column number c i on the first row. For example, consider w = 121321. Its insertion forms the sequence
1 −→ 1 2 −→ 1 2 2 −→ 1 2 3 2 −→ 1 2 3 2 3 −→ 1 2 3 2 3 3 .
For the other direction, assume P (k) is frozen for all 1 ≤ k ≤ n 2 and suppose w is not of the form c 1 c 2 . . . c ( n 2 ) . Then some letter w i is inserted in column c j > c i on the first row. The letter w i bumps c j . Otherwise the insertion tableau was not frozen. This means c j + 1 is inserted in the second row. Either it is the largest on the row or bumps a c j + 1 since the insertion tableau has to be frozen. Using this argument inductively, we see that at no point in the insertion can a letter be inserted into a column other than c j . This is a contradiction. Hence w = c 1 c 2 . . . c ( n 2 ) , but then by Proposition 4.3,
Q i,j > Q i−1,j+1 for all (i, j), (i − 1, j + 1) ∈ Q.
The second statement follows from the first by symmetries.
This subclass of sorting networks has also been studied by Schilling, Thiéry, White and Williams in [21]. Note in particular the observation that 132-avoiding sorting networks form a commutation class, that is, each 132-avoiding sorting network is reachable from another by a sequence of commutations: s i s j → s j s i if |i − j| > 1. They also observed that by [21, Lemma 2.2] n-element 132-avoiding sorting networks are in bijection with reduced words of the signed permutation −(n − 1) −(n − 2) . . . −1 by s i → s i−1 .
Another characterization of 132-avoiding sorting networks is in terms of lattice words (also called lattice permutations or Yamanouchi words).
A lattice word of type λ = (λ 1 , . . . , λ m ) is a word w = w 1 . . . w m in which for each 2 ≤ i + 1 ≤ m there is at least one i before it, and i occurs λ i times in w.
Proposition 3.11. Let w = w 1 . . . w ( n 2 ) be a sorting network and let w =w 1 . . .w k , wherew i = n − w i for 1 ≤ i ≤ k. Then w is 132avoiding if and only if w (or equivalently, w rev ) is a lattice word of type sc n . It is 213-avoiding if and only ifw (or equivalently,w rev ) is a lattice word of type sc n .
Proof. The proof borrows from the proof of Proposition 3.10. Suppose w is a 132-avoiding sorting network. Then, by Proposition 3.10 and
Proposition 4.3, w = c 1 c 2 . . . c ( n 2 ) where c i is the column of 1 ≤ i ≤ n 2
in Q(w). This implies that w is a lattice word of type sc n . For the other direction, note that if w is a lattice word of type sc n , then the P (k) obtained in computing EG(w) are frozen for all 1 ≤ k ≤ n 2 . By Corollary 3.7, w is 132-avoiding.
The second statement follows from the first.
Fishel and Nelson proved the "⇒"-direction of Proposition 3.11 in [10,Corollary 4.5]. Note that if w = w 1 . . . w k is a 132-avoiding sorting network, w rev = w k . . . w 1 is a 132-avoiding sorting network as well, since Q(w rev ) can be obtained by shifting Q(w), reflecting the result anti-diagonally, complementing the entries: m → n 2 − m + 1, and (un)shifting back.
We should emphasize that 132-avoiding and 312-avoiding sorting networks coincide.
n 2 ! 1!2! . . . (n − 2)! 1!3! . . . (2n − 3)! .
The same holds for 213-avoiding sorting networks.
The study of 132-avoiding sorting networks is continued in [18].
3.3. Vexillary permutations. The proof method of [9, Theorem 8.1, part 2] would also lead to a proof of Corollary 3.7. Moreover, it in fact allows us to prove something stronger. A permutation is said to be vexillary if it is 2143-avoiding. For (i, j) ∈ D(σ), let r(i, j) be the rank of (i, j), the number of 1s north-west of (i, j) in M (σ). We have the following result.
Theorem 3.14. Let w ∈ R(σ). If σ is vexillary, then the cell (i, j) of P (w) contains the entry (i+j −1)+k, k ≥ 0, if and only if (i+k, j +k) is in D(σ), where k = r(i + k, j + k). Furthermore, if the set consisting of the cells (i + k, j + k) for entries (i + j − 1) + k, k ≥ 0, in cells (i, j) in P (w) is the diagram of a vexillary permutation, then σ is vexillary.
Proof. We prove this by using a modification of the construction in the proof of [9, Theorem 8.1, part 2]. The idea is as follows. For a permutation σ, create a row (with possible empty spaces) of cells, the columns x containing the positions x such that σ(y) < σ(x) for some y > x. Next, for each x in the row, add x + 1, . . . , x + r x (σ) − 1, where r x = |{y : y > x, σ(y) < σ(x)}|, below x in the same column. Note that r x is just the number of inversions whose smaller component is x. Denote this configuration of cells by T 0 (σ). Finally, left-justify the rows and call the resulting increasing tableau T (σ). It follows from [9, Theorem 8.1, part 1] and the proof of [9, Theorem 8.1, part 2] that for σ vexillary, T (σ) = P (w) for all w ∈ R(σ). As an example, σ = 813975246 is considered in Figure 5. Consider the connected component D (i+k,j+k) in the diagram of a vexillary permutation σ having its north-west corner in (i + k, j + k), where k is the number of 1s north-west of (i + k, j + k). Note that k is well-defined. Assume that D (i+k,j+k) has column lengths c 0 , . . . , c l−1 .
We first show that for 0 ≤ m ≤ l, column j + k + m of T 0 (σ) has at least c m entries weakly south of row i. These entries are then by construction (i + j − 1) + k + m, . . . , (i + j − 1) + k + m + c m − 1 as required in P (w). It is clear that there are at least c m 1s east of column j + k + m, north of row (i + k) + c m − 1 and weakly south of i + k, whereas the 1-entry of column j + k + m must lie weakly south of (i + k) + c m − 1. See Figure 7. Furthermore, there are exactly k 1s north-west of (i + k, j + k) in the permutation matrix. Hence i − 1 1s are strictly north-east of the component with north-west corner in T 0 (σ) = (i + k, j + k). Hence column j + k + m of T 0 (σ) contains at least the entries j + k + m, . . . (j + k) + (i − 2) + m, (i + j − 1) + k + m, . . . , (i + j − 1) + k + m + c m − 1. This proves the claim. Next we show that the part of T 0 (σ) corresponding to D (i+k,j+k) is shifted to the left by k steps in T (σ). No 1s appear west of the component. Hence all columns of T 0 (σ) left of j + k have either length shorter than i or longer than i + c 0 . Since r(i + k, j + k) = k, exactly k columns are shorter. This proves that (x, y) ∈ T (σ) contains (x + y − 1) + k for all (x + k, y + k) ∈ D (i+k,j+k) .
For the other direction, note that the map above is surjective from the set of all diagrams of vexillary permutations to insertion tableaux of reduced words of vexillary permutations. Furthermore, it is injective since D(w) : P (w) → D ⊂ N 2 defined by sending (i, j) with entry (i + j − 1) + k to (i + k, j + k) is the inverse. This proves the " ⇒ "direction. For the final statement, note that in the vexillary case no two of the k 1s north-west of (i + k, j + k) can form a decreasing subsequence. Hence (i, j) is the first cell of N 2 on the diagonal of (i + k, j + k), not in the set D obtained after the components of D(σ) with their north-west corners (i , j ) on the same diagonal have been shifted diagonally northwest to the first available cells in order of increasing (i , j ). This gives an alternative description of the map P (σ) : D(σ) → P (w): send the cells (i+k, j +k) ∈ D(σ) in increasing order along the same diagonal to the first available cell, (i, j), and put the label (i + j − 1) + k into (i, j). Then, P can be defined for any permutation σ, and for w ∈ R(σ), the map D(w) : P (w) → D ⊂ N 2 defined by sending (i, j) with entry (i + j − 1) + k to (i + k, j + k) is invertible with P as its inverse. This proves the last part.
Note that the entries with k = 0 are in the frozen region of P (w).
Non-reduced words
The Edelman-Greene bijection takes as its argument a reduced word. In order to understand the insertion better, we study its interaction with non-reduced words as well. Simultaneously, we obtain Proposition 4.3 which can be used to prove Proposition 3.10 and Proposition 3.11.
Fix a standard Young tableau Q and let W Q = {w ∈ N * : EG(w) = Q, P (w) frozen}. Recall that by Corollary 3.7 the reduced words in the sets W Q are reduced words of 132-avoiding permutations. Note that since the tableau Q(w) = EG(w) has len(w) entries, all words in W Q have the same length. Also, since the Edelman-Greene correspondence is a bijection between R(σ) and SYT(λ(σ)) for σ ∈ S n (132), W Q contains exactly one reduced word.
We define the poset P Q = (W Q , ) by setting v w for v, w ∈ W Q if v i ≤ w i for all 1 ≤ i ≤ len(v) = len(w). Figure 8 contains some examples. At times, in particular in the following proof, we refer to bumping paths. Consider constructing EG(w) for an arbitrary word w = w 1 · · · w m . When w k is inserted, some entries P (k−1) i,j of P (k−1) may be bumped. We let the bumping path p w w k of w k be the set of the corresponding cells (i, j) ∈ P (k−1) . Proof. The proof is based on an extension of Lemma 6.28 in [9] (which is analogous to a property of the RSK correspondence). Let w = w 1 . . . w i . . . w m . First suppose i ∈ D(w). In running the Edelman-Greene insertion, when x i and y i , x i ≥ y i , are inserted consecutively on row i, x i either becomes the last entry of that row or bumps some x ≥ x i , and y i bumps some y , x i ≥ y ≥ y i . Hence, x i+1 ≥ x i + 1 and y i+1 ≤ x i + 1. Using this argument inductively shows that p w w i+1 is weakly to the left of p w w i . Thus i + 1 ends up on a lower row than i in Q = EG(w), so a weak descent of the word becomes a descent in Q.
For the converse, suppose 1 ≤ i ≤ m − 1 is not a weak descent in w, which means w i < w i+1 . Again, when x i and y i , x i < y i , are inserted consecutively on row i, x i either becomes the last entry of that row or bumps some x ≥ x i . Next, since y i > x i , it is either inserted at the end of the row or bumps some y ≥ y i > x i . Furthermore, since the insertion tableaux always have increasing rows, y > x . Hence, x i+1 < y i+1 , except possibly in the case x i bumped an x i , and y = x i + 1. But then necessarily y i = x i + 1, meaning that x i+1 = x i + 1 and y i+1 = x i + 2, so x i+1 < y i+1 . Repeating this argument inductively, we get that p w w i+1 is strictly to the right of p w w i . Hence, i + 1 cannot end up in a lower row than i, and i is not a descent in Q. Proof. Since Q is a standard Young tableau, if c i are the columns of the entries i of Q, |{i ≤ j : c i = x}| ≥ |{i ≤ j :
c i = x + 1}| for 1 ≤ j ≤ n 2 , 1 ≤ x ≤ n − 2.
Otherwise there is a row of Q which is not increasing. Since c(Q) has this form, each letter x will end up in the x:th column in the Edelman-Greene insertion, P (w) will be of the frozen form, and the Q-tableau has the entry i in column c i . Hence, c(Q) ∈ P Q . By the same argument, if any of the c i 's is replaced by a smaller number, the shape of P (and Q) changes. Thus c(Q) is a minimal element in P Q . Since the columns of the insertion tableaux are always strictly increasing, the bumping paths in the Edelman-Greene insertion go down and to the left. If there is another minimal element w in P Q , then it has to have a letter w i < c i . But then w i is inserted to a cell strictly before the c i :th cell on the first row in the insertion tableau and i cannot end up in the column c i as it does in Q. Hence c(Q) is the unique minimal element in P Q .
We conjecture that EG −1 (Q) is maximal in P Q . However, in general it is not the unique maximal element. As an example, take a reduced word of the reverse permutation in S 6 starting 4521343 . . . and a nonreduced word 2431343 . . . in the same poset P Q , both ending with the same subword. They are incomparable in P Q .
The height h(P ) of a poset P is the length of its longest chain. Let [·, ·] denote an interval in P Q and Q = h([c(Q), EG −1 (Q)]). In other words, Q is the length of a maximum length chain from c(Q) to
EG −1 (Q). Then Q ≤ len(c(Q)) i=1 (EG −1 (Q) i − c(Q) i ).
However, computations suggest that we have equality for Q ∈ SYT(sc n ).
Conjecture 2. For Q ∈ SYT(sc n ), we conjecture that EG −1 (Q) is a maximal element in P Q and Q = len(c(Q)) i=1 (EG −1 (Q) i − c(Q) i ). Note that len(c(Q)) i=1 (EG −1 (Q) i − c(Q) i )
is the amount of right slides when performing EG −1 on Q. Hence Q ≤ n 3 for the shape sc n . Let η n,i denote the number of Q ∈ SYT(sc n ) such that Q = i, 0 ≤ i ≤ n 3 . Table 1 lists some of these values. Table 1. The values of η n,i for n = 3, 4, 5.
The tableaux Q contributing to η n,0 are simple to characterize. Then P Q only contains the column word c(Q). Proof. For the if-direction, assume Q i,j > Q i−1,j+1 for all (i, j), (i − 1, j + 1) ∈ Q. This means that all the anti-diagonals (sets of cells with sums of the coordinates fixed) have entries increasing downwards. Suppose that there is at least one right slide, and consider the first such occurrence. Then either x < y have moved to the same anti-diagonal, into some cells (i, j) and (i − 1, j + 1), respectively, or the right slide occurs at the top of column j + 1. Since this is the first occurrence of a right slide, no evacuation paths starting from columns c > j + 1 can have crossed to column j + 1. Hence both cases would imply that the evacuation paths have started from column j + 1 more often than from column j, a contradiction since the anti-diagonals of Q are increasing to the left. Hence, all evacuation paths are vertical and the labels only slide down, so EG −1 (Q) = c(Q) and Q = 0.
For the other direction, suppose there are some x = Q i,j < Q i−1,j+1 = y. If there are no right slides, then x, y, and the labels above them have to stay in the same columns. If at some point x and y are on the same row, then there must have been a right slide involving the some element in the column of x. Hence x and y have to end up on the bottom antidiagonal, but then an evacuation path has to start from y before x and some entry of the column of x slides right, a contradiction.
Staircase standard Young tableaux satisfying the transpose of the condition in Proposition 4.3 have been enumerated in [26] and can also be reinterpreted in terms of several other combinatorial objects, for example Gelfand-Tsetlin patterns (see the entry A003121 in the OEIS [15]). We end with some consequences of Conjecture 2.
Proposition 4.5. Assume Conjecture 2 holds and Q ∈ SYT(sc n ). Then a) Q t = n 3 − Q , so the sequence η n,i , 0 ≤ i ≤ n 3 , is symmetric, b) the Schützenberger involution S satisfies Q = Q S , c) the number η n,i is even for all n ≥ 4, 0 ≤ i ≤ n 3 , d) and Q = n 3 if and only if Q i,j < Q i−1,j+1 for all (i, j), (i − 1, j + 1) ∈ Q.
Proof. a) By Lemma 3.5,
Q t = len(w) i=1 (n − 1 − w i − c(Q) i ) = n 2 (n − 1) − n+1 3 − len(w) i=1 w i = n 3 − ( len(w) i=1 w i − n 3 ) = n 3 − Q .
b) Let w ∈ R(n) and Q = Q(w). By Lemma 3.4, Q(w rev ) = Q S , and by Conjecture 2, we have Q = len(w)
i=1 (w i − c(Q) i ) = len(w) i=1 w i − n+1 3 = len(w rev ) i=1 w rev i − n+1 3 = Q S .
c) By b), the involution S satisfies Q = Q S . Thus it suffices to prove that it is fixed-point-free for SYT(sc n ), n ≥ 4. This can be seen from Lemma 3.4: Q(w rev ) = Q(w) S for w ∈ R(n), so if S had a fixpoint, there would exist w ∈ R(n) such that w = w rev . We show by induction on the length of w that every w = w rev is a reduced word of the same permutation as (in other words, is Coxeter equivalent to) i (i + 1) . . . (j − 1) j (j − 1) . . . (i + 1) i for some 1 ≤ i < j ≤ n, in which case w ∈ R(n) only for n ≤ 3, when i = 1, j = n − 1. Clearly w has to be of odd length. The base case is then that w = (j − 1) j (j − 1) or (j + 1) j (j + 1) ≈ j (j + 1) j where ≈ denotes Coxeter equivalence. Consider adding the letter x: x i (i + 1) . . . (j − 1) j (j − 1) . . . (i + 1) i x. We have i < x < j, x = j, x = j +1, or x = i−1. Using commutations in the first case gives i (i + 1) . . . x (x − 1) x . . . (j − 1) j (j − 1) . . . x (x − 1) x . . . (i + 1) i, which is non-reduced. If x = j, we get j (j −1) j (j −1) j in the middle, which is also non-reduced. Hence either x = i − 1 and we are done, or x = j + 1, in which case we get (j + 1) j (j + 1) ≈ j (j + 1) j in the middle, and are also done. d) This follows from Proposition 4.3 by transposition.
Figure 1. The wiring diagram of 232123.

Figure 2. The diagram D(σ) and P = P(w) for any w ∈ R(σ) for σ = 561423. The top-left component D^{(1,1)}(σ) induces the partition λ(σ) = (2, 2, 2, 2) and the frozen region of P the partition λ_f(w) = (2, 2, 2, 2).

Figure 3. The top-left component of the diagram of σ.

Lemma 3.4 ([9, Corollary 7.22]). Suppose w = w_1 … w_k is a reduced word. Then …

The set of 132-avoiding permutations of [n], S_n(132), is of particular interest here. The reason is an observation of Fulton.

Lemma 3.6 ([11, Proposition 9.19]). Let σ ∈ S_n. Then σ is 132-avoiding if and only if D(σ) = D^{(1,1)}(σ).

Figure 4. A comparison at times t = 1/2 and 3/4.

Corollary 3.7. Let w ∈ R(σ). The insertion tableau P(w) is frozen if and only if σ is 132-avoiding.

Corollary 3.9. Let f^λ = |SYT(λ)|. Then \sum_{σ ∈ S_n(p)} …

Let w = w_1 … w_{\binom{n}{2}} be a sorting network.

Proposition 3.12. A sorting network is 132-avoiding if and only if it is 312-avoiding. Similarly, a sorting network is 213-avoiding if and only if it is 231-avoiding.

Proof. Suppose that a 132-avoiding sorting network is not 312-avoiding. This means that an intermediate permutation contains the pattern 312. It must have been created by swapping the 1 and the 3. Hence, a previous intermediate permutation contains the pattern 132, a contradiction. If a 312-avoiding sorting network is not 132-avoiding, an intermediate permutation contains the pattern 132. The 1 and the 3 are swapped in a later intermediate permutation, which leads to a contradiction. A similar argument applies to 213-avoiding and 231-avoiding sorting networks.

The following enumerative result was, stated in another form, first obtained by Fishel and Nelson [10, Corollary 3.4], who enumerated the maximum length chains in T_n using a different set of methods. However, it is also a reformulation of Corollary 4.4 by Proposition 3.10.

Figure 5. The construction used in the proof of Theorem 3.14 for σ = 813975246. Compare with Figure 6.

Figure 6. The diagram of σ = 813975246. Compare with Figure 5.

Figure 7. An example illustrating the proof of Theorem 3.14. The component D^{(i+k,j+k)}(σ) is in cyan and the shaded entries are not in D(σ).

Figure 8. Some examples of the 16 posets P_Q for Q ∈ SYT(sc_4). The bottom elements are the column words of the respective tableaux below.

4.1. Properties of P_Q. First, we extend a result of Edelman and Greene. The descents of a standard Young tableau T are entries k such that if T_{i,j} = k, then T_{i',j'} = k + 1 for i' > i, in other words k + 1 is strictly south of k. Let D(T) = {k : k is a descent of T} be the set of descents of T. Correspondingly, for w ∈ N^*, let D(w) = {1 ≤ i ≤ len(w) - 1 : w_i ≥ w_{i+1}}. The elements of D(w) are called the weak descents of w.

Proposition 4.1. For all w ∈ P_Q, D(w) = D(Q).

Suppose Q is a standard Young tableau with m entries. Define c(Q) = c_1 … c_i … c_m, where c_i is the column of i in Q for 1 ≤ i ≤ m. Then we say that c(Q) is the column word of Q. See Figure 8 for examples. Note that this term is used differently by other authors. Column words of standard Young tableaux are, by their definition, lattice words.

Proposition 4.2. For Q ∈ SYT(sc_n), 0̂ = c(Q) is the unique minimal element in P_Q.
Acknowledgements. This paper benefited greatly from experimentation with Sage [8] and its combinatorics features developed by the Sage-Combinat community [5].
[1] Omer Angel, Alexander E. Holroyd, Dan Romik, and Bálint Virág. Random sorting networks. Adv. in Math., 215(2):839-868, 2007. doi.org/10.1016/j.aim.2007.05.019.
[2] Anders Björner and Michelle L. Wachs. Shellable nonpure complexes and posets. II. Trans. Amer. Math. Soc., 349(10):3945-3975, 1997. doi.org/10.1090/S0002-9947-97-01838-2.
[3] Anders Skovsted Buch, Andrew Kresch, Mark Shimozono, Harry Tamvakis, and Alexander Yong. Stable Grothendieck polynomials and K-theoretic factor sequences. Math. Ann., 340(2):359-382, 2008. doi.org/10.1007/S00208-007-0155-6.
[4] Anders Skovsted Buch and Matthew J. Samuel. K-theory of minuscule varieties. J. Reine Angew. Math., 2016(719):133-171, 2016. doi.org/10.1515/crelle-2014-0051.
[5] The Sage-Combinat community. Sage-Combinat: enhancing Sage as a toolbox for computer exploration in algebraic combinatorics. http://combinat.sagemath.org, 2008.
[6] Duncan Dauvergne. The Archimedean limit of random sorting networks. arXiv:1802.08934, 2018.
[7] Duncan Dauvergne and Bálint Virág. Circular support in random sorting networks. arXiv:1802.08933, 2018.
[8] The Sage Developers. SageMath, the Sage Mathematics Software System (Version 8.1), 2017. http://www.sagemath.org.
[9] Paul Edelman and Curtis Greene. Balanced tableaux. Adv. in Math., 63(1):42-99, 1987. doi.org/10.1016/0001-8708(87)90063-6.
[10] Susanna Fishel and Luke Nelson. Chains of maximum length in the Tamari lattice. Proc. Amer. Math. Soc., 142(10):3343-3353, 2014. doi.org/10.1090/S0002-9939-2014-12069-7.
[11] William Fulton. Flags, Schubert polynomials, degeneracy loci, and determinantal formulas. Duke Math. J., 65(3):381-420, 1992. doi.org/10.1215/S0012-7094-92-06516-1.
[12] Mark D. Haiman. Dual equivalence with applications, including a conjecture of Proctor. Discrete Math., 99(1-3):79-113, 1992. doi.org/10.1016/0012-365X(92)90368-P.
[13] Zachary Hamaker, Adam Keilthy, Rebecca Patrias, Lillian Webster, Yinuo Zhang, and Shuqi Zhou. Shifted Hecke insertion and the K-theory of OG(n, 2n + 1). J. Comb. Theory Ser. A, 151:207-240, 2017. doi.org/10.1016/j.jcta.2017.04.002.
[14] Zachary Hamaker and Benjamin Young. Relating Edelman-Greene insertion to the Little map. J. Alg. Comb., 40(3):693-710, 2014. doi.org/10.1007/S10801-014-0503-z.
[15] OEIS Foundation Inc. The On-Line Encyclopedia of Integer Sequences. http://oeis.org, 2017.
[16] Alain Lascoux and Marcel-Paul Schützenberger. Schubert polynomials and the Littlewood-Richardson rule. Lett. Math. Phys., 10(2-3):111-124, 1985. doi.org/10.1007/BF00398147.
[17] Svante Linusson and Samu Potka. New properties of the Edelman-Greene bijection (extended abstract), 2018. Extended abstract at FPSAC'18.
[18] Svante Linusson, Samu Potka, and Robin Sulzgruber. On random shifted standard Young tableaux and 132-avoiding sorting networks. arXiv:1804.01795, 2018.
[19] David P. Little. Combinatorial aspects of the Lascoux-Schützenberger tree. Adv. in Math., 174(2):236-253, 2003. doi.org/10.1016/S0001-8708(02)00038-5.
[20] Bruce E. Sagan. The Symmetric Group: Representations, Combinatorial Algorithms, and Symmetric Functions, volume 203 of Graduate Texts in Mathematics. Springer-Verlag, 2001. doi.org/10.1007/978-1-4757-6804-6.
[21] Anne Schilling, Nicolas M. Thiéry, Graham White, and Nathan Williams. Braid moves in commutation classes of the symmetric group. Eur. J. Combin., 62:15-34, 2017. doi.org/10.1016/j.ejc.2016.10.008.
[22] Marcel-Paul Schützenberger. Quelques remarques sur une construction de Schensted. Math. Scand., 12(1):117-128, 1963. doi.org/10.7146/math.scand.a-10676.
[23] Richard P. Stanley. On the number of reduced decompositions of elements of Coxeter groups. European J. Combin., 5(4):359-372, 1984. doi.org/10.1016/S0195-6698(84)80039-6.
[24] Richard P. Stanley. Enumerative Combinatorics. Vol. 2, volume 62 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, 1999.
[25] Bridget E. Tenner. Reduced word manipulation: patterns and enumeration. J. Alg. Comb., 46(1):189-217, 2017. doi.org/10.1007/S10801-017-0752-8.
[26] Robert M. Thrall. A combinatorial problem. Michigan Math. J., 1(1):81-88, 1952. doi.org/10.1307/mmj/1028989731.
Department of Mathematics, KTH Royal Institute of Technology, Stockholm, Sweden. E-mail address: [email protected], [email protected]
arXiv:0812.4252, doi:10.1103/PhysRevB.79.214409, https://arxiv.org/pdf/0812.4252v2.pdf
A real-time study of diffusive and ballistic transport in spin-1/2 chains using the adaptive time-dependent density matrix renormalization group method
16 Jun 2009
S. Langer (Institut für Theoretische Physik C, RWTH Aachen University, 52056 Aachen, Germany; Jülich Aachen Research Alliance (JARA), Research Centre Jülich GmbH, 52425 Jülich, Germany)
F. Heidrich-Meisner (Institut für Theoretische Physik C, RWTH Aachen University, 52056 Aachen, Germany; JARA - Jülich Aachen Research Alliance, Forschungszentrum Jülich, Germany)
J. Gemmer (Institut für Theoretische Physik, Universität Osnabrück, Germany)
I. P. McCulloch (School of Physical Sciences, The University of Queensland, Brisbane, Queensland 4072, Australia)
U. Schollwöck (Physics Department and Arnold Sommerfeld Center for Theoretical Physics, Ludwig-Maximilians-Universität München, D-80333 München, Germany)
(Dated: June 16, 2009)
Using the adaptive time-dependent density matrix renormalization group method, we numerically study the spin dynamics and transport in one-dimensional spin-1/2 systems at zero temperature. Instead of computing transport coefficients from linear response theory, we study the real-time evolution of the magnetization starting from spatially inhomogeneous initial states. In particular, we are able to analyze systems far away from equilibrium with this set-up. By computing the timedependence of the variance of the magnetization, we can distinguish diffusive from ballistic regimes, depending on model parameters. For the example of the anisotropic spin-1/2 chain and at half filling, we find the expected ballistic behavior in the easy-plane phase, while in the massive regime the dynamics of the magnetization is diffusive. Our approach allows us to tune the deviation of the initial state from the ground state and the qualitative behavior of the dynamics turns out to be valid even for highly perturbed initial states in the case of easy-plane exchange anisotropies. We further cover two examples of nonintegrable models, the frustrated chain and the two-leg spin ladder, and we encounter diffusive transport in all massive phases. In the former system, our results indicate ballistic behavior in the critical phase. We propose that the study of the time-dependence of the spatial variance of particle densities could be instrumental in the characterization of the expansion of ultracold atoms in optical lattices as well.
I. INTRODUCTION
Transport in low-dimensional strongly correlated systems continues to excite theoretical and experimental physicists alike. For theorists, transport problems pose a formidable challenge, as established tools to work out ground-state properties of strongly correlated systems do not always provide an adequate description of transport as well (see Refs. 1 and 2 and references therein), which, in particular, pertains to systems driven out of equilibrium.
Within linear-response theory, one often distinguishes between ballistic and diffusive transport by invoking the notion of the Drude weight, 3,4 i.e., the prefactor of a delta-function at zero-frequency in the frequency dependent transport coefficient. A finite Drude weight defines ballistic transport, while in the case of a vanishing Drude weight, the zero-frequency limit of the conductivity's regular part determines the long-time behavior.
Significant theoretical attention has been devoted to one-dimensional spin systems (see Refs. 1,2,5 for a review). Open theoretical questions include, for instance, the finite temperature transport of the anisotropic spin-1/2 chain with nearest-neighbor interactions (the XXZ chain), with an unsettled debate on whether finite temperature transport in the Heisenberg chain is ballistic or not 6,7,8,9,10,11,12,13 as well as on the actual tem-perature dependence of the Drude weight. 8,12,14 For nonintegrable models and finite temperatures, one expects diffusive transport on general grounds, 2,5,15,16 and numerical studies have widely confirmed this picture in the high-temperature limit and massive phases of spin models. 6,10,11,17,18,19 A similar scenario has emerged for thermal transport. 10,20,21,22 Yet, the issue of (quasi)ballistic transport in gapless phases of, e.g., the frustrated chain, 10,13,23,24 at low temperatures is still under scrutiny. Moreover, the possibility of anomalous transport due to a diverging coefficient of the dc conductivity has been emphasized. 22,23,25 Much less is known about the non-linear transport at large external driving forces, or more generally, nonequilibrium properties. A recent study using a quantum master-equation approach has addressed the spin transport in the antiferromagnetic phase of the XXZ chain. 26 The time-evolution of magnetization profiles in analytically exactly solvable models has been the case of interest in Refs. 27,28,29,30. Besides the fundamental interest in understanding large-bias and out-of-equilibrium phenomena, research into transport properties of low-dimensional spin systems is strongly motivated by exciting experimental results on large thermal conductivities in spin ladder and chain materials (see, e.g., Refs. 31,32 for a review). Most evidently in spin ladder materials such as (La,Sr,Ca) 14 Cu 24 O 41 , such large thermal conductivities have been attributed to magnetic excitations. 33,34 In a more recent experiment on La 5 Ca 9 Cu 24 O 41 , 35 heat dynamics has been probed by time-of-flight measurements. In this set-up, the surface of a sample is covered with a thin fluorescent layer. After shining on that surface with a laser, one can then follow the propagation of heat in the surface by thermal imaging at different times. A pronounced difference is seen comparing a surface that contains ladders to one that is perpendicular to the ladder direction. In the former case, heat diffuses predominantly along the ladder direction, while little dynamics is seen in the latter case. These results support the notion of anisotropic heat transport in this material, 34 due to the contribution of magnetic excitations.
Besides thermal transport measurements, questions of diffusive versus ballistic transport have been experimentally probed in nuclear magnetic resonance 36,37 (see Ref. 38 for related theoretical work) as well as muon spin resonance experiments. 39 In a more recent development, transport properties of low-dimensional ultra-cold atom gases have gained attention as well, with experiments focusing on the detection of Anderson localization. 40,41 Interacting two-component Bose gases in optical lattices have been suggested to potentially realize spin-1/2 Hamiltonians. 42,43 As far as numerical approaches are concerned, on the technical side, full exact diagonalization (ED) studies are restricted to system sizes of about 24 sites in the case of a spin-1/2 chain, while in Quantum Monte Carlo simulations, the calculation of frequency dependent properties of more than two-point correlation functions remains difficult (see, e.g., Ref. 44 and references therein). The density matrix renormalization group (DMRG) method 45,46,47 has most successfully been applied to zero-temperature phenomena. With the advent of the adaptive time-dependent DMRG method (tDMRG), 48,49,50 the study of non-equilibrium and large bias phenomena has become possible. While the methods mentioned so far are designed for pure bulk systems, quantum-master approaches that account for dissipation by incorporating baths have been used as well in the study of transport in 1D spin systems. 51,52,53,54,55 Such methods are not constrained to the linear response either and circumvent the use of Kubo formulae, as they measure gradients and current expectation values directly.
In this work we will introduce and exploit an alternative approach based on zero-temperature tDMRG calculations. Instead of analyzing currents and their correlation functions directly, we study the magnetization dynamics after preparing the system in inhomogeneous initial states. For instance, we subject the system to an external magnetic field of Gaussian shape and, after releasing the confining field, follow the time evolution of the magnetization. Computing the variance of the magnetization allows us to distinguish ballistic from diffusive regimes, depending on model parameters: we consider the dynamics to be ballistic if the variance grows quadratically in time, which is the behavior of noninter-acting particles, while a diffusive behavior manifests itself in a linear increase of the magnetization's variance. Our approach has the advantage that we can characterize diffusion by following the time-evolution of a local quantity, the magnetization, as compared to the technically more difficult evaluation of the Kubo formula 56 or the measurement of time-dependent currents. 57 Moreover, we can control the deviation of the initial state from the ground state, thus scanning the regime of systems substantially driven out of equilibrium.
While we show that our results for the spin-1/2 chain with an exchange anisotropy in the regime of small perturbations over the ground-state are consistent with the picture established by analyzing Drude weights at zero temperature, namely ballistic transport in the massless and diffusive transport in the massive regime, 58 we, in particular, argue that this also applies to systems far from equilibrium. The dynamics is further sensitive to the overall filling, or average magnetization, as expected from linear-response theory results for the high-temperature limit. 59,60,61,62 Beyond the anisotropic spin-1/2 chain with nearestneighbor interactions only, we further consider nonintegrable systems such as the frustrated chain and the twoleg spin ladder. As a result, we find that in massive phases, the dynamics is typically diffusive, while in the massless one of the frustrated chain, the zero temperature dynamics are ballistic.
Transport in the XXZ chain has previously been studied in Ref. 57 using tDMRG, there following the time-evolution from a highly excited initial state of the |ψ = | ↑ . . . ↑↓ . . . ↓ form. The long-time behavior of the magnetization was found to be correlated with the phase transition from easy-plane to easy-axis symmetry. Further, the expansion of particle density packets of nearly Gaussian form has been looked at with tDMRG in the context of ultra-cold atomic gases, 63 modeled with the 1D Bose-Hubbard, as well as for short pieces of interacting spinless fermions. 64 The plan of the paper is the following. In Sec. II we define the spin models studied and we describe our numerical method, the tDMRG. We further motivate our definition of diffusive transport by discussing the solution of the diffusion equation in Sec. III. Section IV details the preparation of initial states. In Sec. V, we study the magnetization dynamics in the XXZ chain, with ballistic transport in the massless regime, and diffusive transport in the massive regime. Section VI summarizes our results for two nonintegrable 1D systems, the frustrated chain and the two-leg ladder. We conclude with a discussion in Sec. VII.
II. MODEL AND METHOD
A. 1D Spin-1/2 systems

Here we will first concern ourselves with the integrable XXZ chain:
H = \sum_{i=1}^{L-1} \left[ \tfrac{1}{2} \left( S^{+}_{i} S^{-}_{i+1} + \mathrm{H.c.} \right) + \Delta\, S^{z}_{i} S^{z}_{i+1} \right] ,    (1)
where S^µ_i, µ = x, y, z, are the components of a spin-1/2 operator acting on site i and S^±_i = S^x_i ± iS^y_i are the raising/lowering operators, respectively. We denote the number of sites by L and we introduce an exchange anisotropy ∆. Equation (1) can be re-expressed in terms of spinless fermions c_i:
H = \sum_{i=1}^{L-1} \left[ \tfrac{1}{2} \left( c^{\dagger}_{i} c_{i+1} + \mathrm{H.c.} \right) + \Delta \left( n_{i} - \tfrac{1}{2} \right) \left( n_{i+1} - \tfrac{1}{2} \right) \right] ,    (2)
with n i = c † i c i . Setting ∆ = 0 results in a noninteracting system. If not mentioned otherwise, we impose open boundary conditions. We denote the filling factor with n. The local magnetization is given by M i = S z i , and the total magnetization is S z = i S z i = L(n − 1/2). The ground-state phase diagram of the XXZ chain (see, e.g., Ref. 66 and references therein) exhibits quantum critical points at ∆ = ±1. A critical phase covers the |∆| ≤ 1 region, while the ground state for ∆ > 1 exhibits antiferromagnetic order. The region ∆ ≤ −1 has a ferromagnetic ground-state, yet we will restrict the discussion to ∆ ≥ 0. The model is integrable through the Bethe-ansatz. 67 In the second part of this work, we will focus on two nonintegrable models with isotropic interactions (i.e., ∆ = 1), the frustrated chain and the two-leg ladder (for a review on these models, see, e.g., Refs. 66,68). Both models can be understood as limiting cases of a single Hamiltonian that incorporates a dimerization δ and a frustration α:
H = J \sum_{i=1}^{L-1} \sum_{\mu=x,y,z} \left[ \left( 1 + (-1)^{i} \delta \right) S^{\mu}_{i} S^{\mu}_{i+1} + \alpha\, S^{\mu}_{i} S^{\mu}_{i+2} \right] .    (3)
The frustrated chain corresponds to δ = 0 and α > 0, while the two-leg ladder is the δ = 1 limit. In the latter case, we identify the coupling along the legs as J_∥ = αJ and the coupling along the rungs as J_⊥ = J.
The frustrated chain features a quantum phase transition at α ≈ 0.241, separating a gapless phase from a massive one. 66 The spectrum of the two-leg ladder is gapped for any J ⊥ /J > 0. 68,69
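For orientation, the Hamiltonian of Eq. (3) is easy to set up for small L by sparse exact diagonalization; the sketch below (Python with NumPy/SciPy, illustrative only and not the adaptive tDMRG code used for the L = 200 results in this work) builds it for given δ and α.

```python
import numpy as np
from scipy.sparse import csr_matrix, identity, kron

# single-site spin-1/2 operators
sz = csr_matrix(np.diag([0.5, -0.5]))
sp = csr_matrix(np.array([[0.0, 1.0], [0.0, 0.0]]))   # S^+
sm = sp.T                                             # S^-

def site_op(op, i, L):
    """Embed a single-site operator at site i (0-based) into the L-site Hilbert space."""
    out = identity(1, format="csr")
    for j in range(L):
        out = kron(out, op if j == i else identity(2, format="csr"), format="csr")
    return out

def bond(i, j, L):
    """Heisenberg coupling S_i . S_j written with raising/lowering operators."""
    return (0.5 * (site_op(sp, i, L) @ site_op(sm, j, L)
                   + site_op(sm, i, L) @ site_op(sp, j, L))
            + site_op(sz, i, L) @ site_op(sz, j, L))

def hamiltonian(L, J=1.0, delta=0.0, alpha=0.0):
    """Eq. (3): dimerized, frustrated spin-1/2 chain (small L only)."""
    H = csr_matrix((2**L, 2**L))
    for i in range(L - 1):               # nearest-neighbor bonds; (-1)^i with i starting at 1
        H = H + J * (1.0 + (-1)**(i + 1) * delta) * bond(i, i + 1, L)
    for i in range(L - 2):               # next-nearest-neighbor (frustrating) bonds
        H = H + J * alpha * bond(i, i + 2, L)
    return H

# example use: ground-state energy of the frustrated chain at alpha = 0.4 and L = 12
# from scipy.sparse.linalg import eigsh
# print(eigsh(hamiltonian(12, alpha=0.4), k=1, which="SA")[0])
```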
B. Methods
For the time-evolution of the noninteracting case (∆ = 0 in Eq. (2)), we use exact diagonalization which allows us to treat large systems. For all interacting cases, we employ the adaptive time-dependent DMRG method 48,49,50 with a Krylov-space-based time-evolution scheme 70,71 in the space of matrix product states. 72 The control parameters are the discarded weight δρ and the time-step δt.
Our simulations are canonical ones as we work in subspaces with a fixed total S z or particle number, respectively. We have performed an extended error analysis by (i) comparing ED and tDMRG results in the noninteracting case and (ii) by performing several runs with different time-steps and discarded weights at representative parameters in the interacting case. Specifically, in case (i), we have analyzed the relative errors in the local magnetization
\delta M = \frac{1}{L} \sum_{i=1}^{L} \left| M^{\mathrm{DMRG}}_{i} - M^{\mathrm{ED}}_{i} \right|

and its variance (see below). It turns out that typically, a time step of δt = 0.125/J and a discarded weight of δρ = 10^{-6} keeps the relative error δM in M below 10^{-4} for a chain of L = 200 sites at half filling and for times t ≤ 100/J.
III. BALLISTIC VS DIFFUSIVE TRANSPORT
Within linear response theory, one often separates the dynamical conductivity σ(ω) into a delta-function δ(ω) at zero frequency and a regular part at frequencies ω > 0, 59,65
\sigma(\omega) = 2\pi D\, \delta(\omega) + \sigma_{\mathrm{regular}}(\omega) .    (4)
These quantities derive from the Kubo formula that is based on evaluating current-current correlation functions. 65 We repeat that a finite Drude weight in a clean, one dimensional system at zero temperature defines an ideal conductor and thus ballistic behavior, while if D = 0, one has an insulator. 3,73 The dependence of the Drude weight on the exchange anisotropy ∆ in the case of the integrable XXZ chain and at zero temperature is well-known: 58
D = \frac{\pi}{4}\, \frac{\sin \nu}{\nu\, (\pi - \nu)} ,    (5)
where, in this equation, the anisotropy is parameterized through ∆ = cos(ν). We thus have D > 0 for |∆| ≤ 1, featuring a discontinuous drop to zero at ∆ = 1. Due to the excitation gap in the massive regime ∆ > 1, we have a true insulator with σ dc = 0 that can only transport magnetization once the gap has been exceeded by a sufficiently large external perturbation. Similarly, the Drude weight vanishes in the massive phases of both the spin ladder and the frustrated chain, while in the massless regime of the latter model D is finite at T = 0. 74,75 Strictly speaking, on all systems with open boundary conditions, the Drude weight vanishes identically. Yet, it turns out that the corresponding weight is just shifted to small but finite frequencies, 56,76 and thus the system is expected to still exhibit ballistic and anomalous transport properties.
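For orientation on the magnitudes involved, evaluating Eq. (5) at the two ends of the critical regime gives, with ∆ = cos(ν),

D(\Delta = 0) = \frac{\pi}{4} \cdot \frac{1}{(\pi/2)(\pi/2)} = \frac{1}{\pi} \approx 0.32 , \qquad D(\Delta \to 1^{-}) = \frac{\pi}{4} \lim_{\nu \to 0} \frac{\sin\nu}{\nu(\pi-\nu)} = \frac{1}{4} ,

so the Drude weight stays finite throughout |∆| ≤ 1 and then drops discontinuously to zero in the massive regime, consistent with the discussion above.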
To justify and motivate our way of analyzing ballistic and diffusive transport, let us, for pedagogical reasons, consider a 1D system that obeys the diffusion equation:
\partial_{t}\, \rho(x,t) = \nabla \cdot \left( D \nabla \rho(x,t) \right) .    (6)
Here, ρ(x, t) denotes, e.g., a particle density, and D the diffusion constant. The Green's function associated with this equation is, in a d-dimensional setup, given by
G(x, \bar{x}, t) = \frac{1}{(4\pi D t)^{d/2}}\, e^{-\frac{(x - \bar{x})^{2}}{4 D t}} .    (7)
Therefore, we can calculate the expectation values
\langle x \rangle (t) = \bar{x} \quad \mathrm{and} \quad \langle x^{2} \rangle = |\bar{x}|^{2} + 2 d D t    (8)

and see that the variance \sigma^{2}_{x} = \langle x^{2} \rangle - \langle x \rangle^{2} is linear in t for normal diffusive transport. On the contrary, for ballistic dynamics, one expects the variance \sigma^{2}_{x} to grow quadratically in time, as is well known from elementary quantum mechanics for free particles.
Given a distribution of M i (t) = n i (t) − 1/2 at a time t, we find it most straightforward to compute the variance from the corresponding particle density distribution n i (t) = M i (t) + 1/2 as this is a positive quantity. We then compute the variance from
\sigma^{2}_{M}(t) = \frac{1}{L/2} \sum_{i=1}^{L} (i - \mu_{n})^{2}\, \langle n_{i}(t) \rangle .    (9)
where µ n is the first moment of the normalized distribution n i . Note that we normalize n i on the actual number of fermions rather than the system size.
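As a quick numerical illustration of Eqs. (7)-(9) (a Python sketch; here the profile is normalized to unit total weight rather than to L/2 particles, and D and the lattice window are illustrative), sampling the d = 1 kernel of Eq. (7) on a lattice reproduces the linear growth of the variance, σ_x² = 2Dt:

```python
import numpy as np

def moments(x, n):
    """First moment and variance of a non-negative density profile n on positions x."""
    N = n.sum()
    mu = (x * n).sum() / N
    var = ((x - mu) ** 2 * n).sum() / N
    return mu, var

D, xbar = 0.7, 0.0                      # illustrative diffusion constant and initial position
x = np.arange(-400, 401, dtype=float)   # lattice positions

for t in [5.0, 10.0, 20.0, 40.0]:
    n = np.exp(-(x - xbar) ** 2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)
    mu, var = moments(x, n)
    print(t, var, 2 * D * t)            # the variance grows as 2*D*t, Eq. (8) with d = 1
```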
IV. PREPARATION OF INITIAL STATES
We consider three different ways of preparing the initial state, which we now illustrate for the case of the XXZ chain, Eq. (1), and at half filling n = 0.5, i.e., at S z = 0. With the exception of Sec. V B 2, our simulations are always performed in subspaces with these quantum numbers.
Technically, we add a term
H_{B} = -\sum_{i} B_{i} S^{z}_{i}    (10)
to the Hamiltonian. By choosing the B_i appropriately, this realizes (i) a Gaussian magnetic field, or (ii) a box-shaped magnetic field, both applied during a ground-state DMRG run and turned off at time t = 0:

B_{i}(t=0) = B_{0} \exp\!\left( -(i - i_{0})^{2} / (2\sigma_{B}^{2}) \right) ,    (11)

B_{i}(t=0) = B_{0}\, \Theta\!\left( i - \tfrac{L - s_{B}}{2} \right) \Theta\!\left( \tfrac{L + s_{B}}{2} - i \right) .    (12)

Here Θ(i) is the Heaviside function. In Eq. (11), σ_B is the variance of the external Gaussian field, B_0 its amplitude, and we set i_0 = L/2 + 0.5, while in Eq. (12), s_B denotes the width of a box in which a constant field B_0 is applied.
The third initial state (iii) is realized by finding the ground state of a system at filling n = (L/2 - 1)/L and then applying a single spin flip S^+_i on a site i. The time-evolution is then performed at half filling, as in cases (i) and (ii).
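In code, the two field profiles of Eqs. (11) and (12) can be generated as follows (a short Python sketch with illustrative values of B_0, σ_B and s_B; the precise centering convention assumed for the box of Eq. (12) is the symmetric one around the chain center):

```python
import numpy as np

L, B0 = 200, 1.0
i = np.arange(1, L + 1)                 # site index, 1-based as in the text

# Gaussian field, Eq. (11), centered at i0 = L/2 + 0.5
sigma_B, i0 = 5.0, L / 2 + 0.5
B_gauss = B0 * np.exp(-(i - i0) ** 2 / (2 * sigma_B ** 2))

# box-shaped field, Eq. (12): constant field B0 on s_B sites around the chain center
s_B = 5
B_box = np.where((i >= (L - s_B) / 2) & (i <= (L + s_B) / 2), B0, 0.0)
```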
The typical shape of the induced density ⟨n_i(t = 0)⟩, or equivalently, the magnetization profile, is illustrated in Figs. 1(a)-(c) at ∆ = 0 (all panels) and ∆ = 0.5 (panel (a) only), for the three initial states (i)-(iii), respectively. Inherent to the fermionic nature of the model Eq. (2), we first observe Friedel oscillations with a 2k_F period, where k_F = π/2 is the Fermi momentum at half filling. Second, there are slower spatial oscillations that are more evident in the case of ∆ = 0.5 [Fig. 1(b)]. These oscillations' characteristic wave-length depends, as we have checked, on σ_B as well as the system size. As we shall see later, for the purpose of qualitatively analyzing the time-dependence of the variance, the presence of the long-ranged oscillations is irrelevant, as the dynamics stems from the central peak dispersing, while away from the center, the oscillations contribute subdominantly to the time-dependence after turning off H_B. We will nevertheless sometimes find it illustrative and useful to work with a density

\langle \tilde{n}_{x}(t) \rangle = \left[ \langle n_{2i-1}(t) \rangle + \langle n_{2i}(t) \rangle \right] / 2    (13)

averaged over adjacent sites, with i = 1, …, L/2 and x = 2i - 1/2. To recover the variance of the non-averaged density, we multiply σ̃²_M by a factor of 2. The averaged density ⟨ñ_i(t = 0)⟩ is plotted with dashed lines in Fig. 1(a) for ∆ = 0 and 0.5, and we see that this averaging results in quite smooth curves.
For ñ i (t = 0) and a Gaussian external B i , we can now further address the question whether the induced density profile follows a Gaussian as well. We find that this is the case at small ∆, in good approximation. In the massive phase ∆ > 1, deviations of ñ i (t = 0) from a Gaussian profile are substantial. We will nevertheless refer to initial states prepared with Eq.(11) as Gaussian initial states throughout.
As we go from ∆ = 0 into the massive regime ∆ > 1, a gap opens, and qualitatively, despite the field B i being inhomogeneous, we expect the existence of the gap to affect the average deviation from half filling at a given set of (B 0 , σ B ). To illustrate this point, we display this average deviation δñ, defined as
\delta \tilde{n} = \sum_{i} \left( \langle \tilde{n}_{i} \rangle - 1/2 \right)^{2} / N ,    (14)
in Fig. 2 (∆ = 0, 0.5, 1, 1.6). For ∆ = 1.6, we observe a steep increase in δñ at B_0/J ≈ 0.4, and we will use B_0/J > 0.45 for the time-evolution, as below this value little dynamics is seen during the time-evolution. Note that Fig. 2 has a logarithmic scale on the δñ axis, and below B_0/J ≈ 0.4, δñ ∝ exp(-c/B_0) in the case of ∆ = 1.6. We find c = 0.4J, and this roughly coincides with twice the spin gap for this value of ∆. 77
V. THE XXZ CHAIN
After detailing the way of preparing initial states, we now come to the analysis of M_i(t) = ⟨n_i(t)⟩ - 1/2, focusing on the variance σ²_M(t). In this section, we will first discuss σ²_M(t) in the massless regime in Sec. V A, and show that we clearly see ballistic behavior with σ²_M(t) = const + D_2 t², independently of the initial state. We will then analyze the dependence of the coefficient D_2 on B_0, scanning the full range of perturbations from a linear one with D_2 ∝ B_0 to the largest perturbations possible. These regimes are distinguished by a different finite-size scaling behavior, to be discussed below. As we illustrate in the case of a Gaussian magnetic field Eq. (11), ballistic dynamics is found for ∆ ≤ 1. This holds independently of the actual choice of (B_0, σ_B) and thus also far from equilibrium, except for the most extreme initial states considered here (see the discussion in Sec. V A 2 below).
We then, in Sec. V B, discuss the transition from ballistic to diffusive behavior, which is expected to occur at ∆ = 1. Our data are consistent with this picture, as we find strong evidence for diffusive transport for ∆ ≳ 1.5. We will present results for several B_0 at ∆ = 1.5 to substantiate that the observation of diffusive transport is independent of the initial state.
A. XXZ chain: Massless regime
Time-dependence of the variance
We now turn to the analysis of M i (t) = n i (t) − 1/2 in the massless regime. Figures 3(a) and (b) show n i (t) = M i (t) + 1/2 for ∆ = 0 and ∆ = 0.5, respectively. The initial density profile first melts and then, at times t J ≈ 5, splits into two packets that travel into opposite directions. 63 Figure 4 shows snapshots of n i (t) at times t J = 0, 15, 25 for ∆ = 0.5. It is noteworthy that, while substantial oscillations are present far away from the central peak, these oscillations are frozen in and do not contribute to the increase in σ 2 M since, far away from the center of the chain, H ≈ H + H B .
The variance σ 2 M (t) − σ 2 M (t = 0), plotted vs. time, is displayed in Fig. 5(a), for ∆ = 0 (solid line), ∆ = 0.5 (dashed line), and ∆ = 1 (dotted line) for the evolution from an initial state of the type (i), enforced by a Gaussian magnetic field with B 0 = J/2 and σ B = 5. The circles represent the time-evolution ofσ 2 M (t) −σ 2 M (t = 0) computed from the averaged density ñ(t) (see Sec. IV) for ∆ = 0, 0.5, 1. For ∆ = 0 and 0.5, the averaged results very well coincide with σ 2 M (t). For the purpose of characterizing ballistic or diffusive behavior, it therefore does not matter whether the pure fermionic density n i (t) or the averaged quantity ñ i (t) is used. In what follows, we will present results extracted from the former, unless stated otherwise. We mention, though, that the quantitative difference in the variance extracted from the averaged as compared to the bare density becomes more pronounced the larger ∆ and the smaller B 0 is. This becomes evident in the case of ∆ = 1, included in Fig. 5(a).
The key observation from Figs. 5(a) and (b) is the quadratic increase of the variance with time observed for ∆ = 0, 0.5 and 1, which confirms the expected ballistic behavior in the critical regime. For the isotropic chain (∆ = 1), we find that the best fit of a power-law to σ²_M(t) - σ²_M(t = 0) yields σ²_M(t) - σ²_M(t = 0) ∝ t^{1.98}, which is thus slightly below the behavior expected for ballistic transport. However, a deviation of just one percent from the expected exponent of 2 is very much within the accuracy of our numerical calculations.
Moreover, this behavior, as we show in Fig. 5(b) for the example of ∆ = 0.5, is independent of the shape of the initial state: all three types of states studied here -Gaussian field, box shape, and application of S + i -result in ballistic dynamics at half filling.
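For the noninteracting point ∆ = 0, this protocol can be reproduced with a few lines of free-fermion exact diagonalization; the sketch below (Python/NumPy, with illustrative values of B_0 and σ_B, energies in units of J) prepares the ground state of Eq. (2) at ∆ = 0 in the Gaussian field of Eq. (11), propagates the correlation matrix with the field switched off, and evaluates Eq. (9). The growth of the printed variance should be close to quadratic in t, as in Fig. 5(a).

```python
import numpy as np

L, B0, sigma_B = 200, 0.5, 5.0

# single-particle hopping matrix of Eq. (2) at Delta = 0 (hopping amplitude 1/2)
h = np.zeros((L, L))
for i in range(L - 1):
    h[i, i + 1] = h[i + 1, i] = 0.5

# Gaussian field of Eq. (11); H_B = -sum_i B_i S^z_i acts as an on-site potential -B_i n_i
sites = np.arange(1, L + 1, dtype=float)
B = B0 * np.exp(-(sites - (L / 2 + 0.5)) ** 2 / (2 * sigma_B ** 2))

# ground state at half filling in the presence of the field: occupy the L/2 lowest orbitals
_, v0 = np.linalg.eigh(h - np.diag(B))
occ = v0[:, : L // 2]
C0 = occ @ occ.T                         # correlation matrix <c_i^dagger c_j> at t = 0

# evolve with the field switched off: C(t) = e^{i h t} C(0) e^{-i h t}
eps, v = np.linalg.eigh(h)

def density(t):
    U = v @ np.diag(np.exp(-1j * eps * t)) @ v.T
    return np.real(np.diag(U.conj() @ C0 @ U.T))   # <n_i(t)>

def variance(n):                          # Eq. (9)
    mu = (sites * n).sum() / n.sum()
    return ((sites - mu) ** 2 * n).sum() / (L / 2)

for t in [0.0, 10.0, 20.0, 30.0]:
    print(t, variance(density(t)))
```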
From small to large perturbations
So far, we have worked at a fixed pair of B 0 and σ B . We will now explore the dependence of the dynamics on how far the initial state is perturbed away from the actual ground state. This deviation is measured through the average deviation δñ from half filling (compare Fig. 2). For that purpose, we fix σ B = 5, and perform a set of simulations at different B 0 , at ∆ = 0, 0.5 and 1. Independently of B 0 , we always find a quadratic increase of σ 2 M (t) with time, i.e., σ 2 M (t) = const + D 2 (B 0 )t 2 . As a main result, we therefore conclude that the qualitative behavior of the dynamics is independent of the external perturbation, it is always ballistic in the massless regime.
On a more quantitative level, it is instructive to plot D_2 vs B_0, shown in Fig. 5(c) for ∆ = 0 and 0.5, as it allows us to distinguish two regimes: first, the linear regime, in which D_2 is linear in B_0 and D_2 L/2 does not depend on L at a fixed B_0. For ∆ = 0 and 0.5, the linear regime extends up to B_0/J ≈ 0.5. Second, at larger B_0, effects of both the band curvature and the finite band-width start to play a role, with significant finite-size effects, as illustrated in Fig. 6 for B_0 = 2J. In the case of ∆ = 0 and B_0 = 2J, we are able to access system sizes of L ≈ 10^5, and at large L, the scaling is of the form D_2 L/2 ∝ 1/L, allowing for an extrapolation to L → ∞. In the interacting case, the accessible system sizes are too small to establish such scaling and we thus have not attempted any extrapolation in the case of ∆ = 0.5.
Let us next discuss the limiting cases of first, the linear regime, i.e., D_2 ∝ B_0, and second, the limit of B_0 → ∞. Starting with the former, the linear regime, D_2 L/2 = γB_0, we find that the prefactor is γ ≈ 4J for values of ∆ ≲ 0.5. At large ∆, this reduces to γ ≈ 3.4J, as the results for ∆ = 1 displayed in Fig. 5(c) show (circles). The interpretation of D_2 being linear in B_0 for ∆ ≤ 1 is based on the observation that the area A_peak(B_0) under the initial Gaussian-like magnetization profile increases linearly in B_0, which we can strictly confirm in the ∆ = 0 case and chains with up to L = 1000 sites. This analysis requires an estimate of the background density, which can best be done in the ∆ = 0 case, but suffers from finite-size effects at a nonzero ∆. In the ∆ = 0 case, we find that γB_0/A_peak(B_0) = 4J², and for ∆ = 0.5, γB_0/A_peak(B_0) ≈ 1.7J². We may therefore conclude that in the linear regime,

\gamma B_{0} = A_{\mathrm{peak}}(B_{0})\, v_{g}^{2}(\Delta) ,    (15)

where v_g(∆) is the group velocity (see, e.g., Ref. 78), with ∆ = cos(ν). From our numerical data for L = 200 sites and at ∆ > 0.5, we obtain D_2 L/(2A_peak) > v_g²(∆) due to finite-size effects, but qualitatively, D_2 L/(2A_peak) increases with ∆ at a fixed B_0 in the linear regime, as expected from Eq. (15) for the behavior of the velocity v_g². At sufficiently large B_0, we expect the band-curvature to play a role as well, which we are able to verify in the case of ∆ = 0. D_2 L/(2A_peak) then decreases below v_g²(∆ = 0) as B_0 increases.
While for the parameters of the main panel of Fig. 5(c), i.e., ∆ = 0, 0.5, 1 and B 0 /J ≤ 2.4, the variance always follows a power law with exponent two, curiously, this is not the case for the aforementioned Fock states encountered in the B 0 → ∞ limit and ∆ = 0.5, 1. There, we find exponents that are consistently below two for ∆ > 0 and L = 40, 80, 200. This behavior was also observed in Ref. 57 for the evolution from similar Fock states. A full analysis of the evolution from Fock states will be presented elsewhere.
We mention that the time-dependent evolution from Fock states or, more generally, the evolution of particles originally trapped in a confined region into an empty lattice, has been intensely studied with numerical methods in Refs. 79,80,81,82, for the cases of hardcore bosons 79,80 , soft-core bosons, 81 and the interacting two-component Fermi gas. 82 These studies have a fundamental interest in out-of-equilibrium phenomena, with a perspective onto experiments with ultra-cold atoms in optical lattices. Among these we mention the experimental efforts directed at detecting Anderson localization in cold atom gases, precisely by utilizing such expansion set-ups. 40 Finally, while all results discussed here were obtained from chains with L = 200, we stress that we have carefully studied the finite-size scaling of D 2 (see Figs. 5(c) and6). By plotting D 2 L/2 in Fig. 5(c), we account for a trivial size-dependence, and curves obtained from different L but the same ∆ indeed collapse onto a single one in the linear regime. At larger B 0 , D 2 tends to decrease with system size L as less particles per B 0 can be accumulated in the initial Gaussian peak, due to S z i ≤ 1/2 (see Fig. 6).
B. XXZ chain: Massive phase
Transition from ballistic to diffusive behavior
Linear-response theory predicts a sharp transition from ballistic spin transport, characterized by a finite Drude weight D, to diffusive spin transport at ∆ = 1. 58 We have carried out several simulations with different ∆ = 1, 1.2, 1.3, 1.4, 1.5 at fixed parameters B_0 = 2J and σ_B = 5 to check whether an analysis of the variance captures this transition. As the data for σ²_M(t) displayed in Fig. 7(a) show, we clearly find a linear increase of σ²_M(t) at large times in the case of ∆ = 1.5, which, in the sense of Sec. III, we interpret as evidence for diffusive transport. At smaller 1 < ∆ ≲ 1.4, our data do not allow for this conclusion, yet at least we can state that the data for ∆ = 1.3 and 1.4 do not follow a power-law, indicative of non-ballistic transport. Also note that there is no contradiction with Eq. (5), as it is well-known that finite-size effects in the vicinity of ∆ = 1 are severe and come along with a logarithmically slow convergence with system size of quantities such as the Drude weight or the spin stiffness. 10,83 We suspect that larger system sizes, hence access to longer simulation times, are necessary to fully capture the sharp transition from ballistic to diffusive behavior at ∆ = 1. We stress that with B_0 = 2J, we work with highly perturbed initial states, and thus the observation of diffusive transport for ∆ ≥ 1.5 is a nontrivial one, going beyond the case studied in Ref. 59. Our results for ∆ ≥ 1.5 thus establish an example of diffusive dynamics with σ²_M(t) = const + D_1 t in this model for the out-of-equilibrium situation. Note that a recent tDMRG work on transport in spin chains incorporating baths has reported similar results, derived from current and spin profiles in the steady state. 55

It is noteworthy that at short times, obviously, the dynamics is always ballistic, independently of ∆, as can be seen in Fig. 7(a). Even the value of σ²_M(t) is roughly the same for all ∆ at short times. In the long time limit, which is the relevant one to characterize the system as diffusive or ballistic, we find that σ²_M(t) systematically decreases with increasing ∆. The reason is that, in the ∆/J → ∞ limit, no dynamics is possible at all.
We have further studied the dependence on B_0 at ∆ = 1.5. In Sec. IV we hinted at the fact that at ∆ > 1, a reasonably large B_0 is necessary to observe a significant change in the magnetization profile over the time scales simulated. We attribute this to the existence of a spin gap in the antiferromagnetic phase ∆ > 1 and focus on B_0 ≳ J. Our results of several runs at ∆ = 1.5, scanning the B_0 dependence, are displayed in Fig. 7(b). We find that the variance increases with B_0, which we mainly attribute to more particles accumulated in the central peak. Furthermore, the figure suggests that the time scale at which diffusive behavior sets in depends on B_0 such that for larger B_0, we are able to observe a linear increase of σ²_M(t) earlier in time.
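The exponents quoted in this section (and the power-law fits of Sec. V A) can be extracted from σ²_M(t) data by a simple log-log linear fit; a generic sketch (Python, shown here on synthetic data only) reads:

```python
import numpy as np

def growth_exponent(t, var):
    """Fit var(t) - var(0) = a * t**alpha on a log-log scale and return alpha."""
    y = var - var[0]
    mask = (t > 0) & (y > 0)
    alpha, _ = np.polyfit(np.log(t[mask]), np.log(y[mask]), 1)
    return alpha

t = np.linspace(0.0, 40.0, 81)
print(growth_exponent(t, 3.0 + 0.2 * t**2))   # close to 2: ballistic growth
print(growth_exponent(t, 3.0 + 0.5 * t))      # close to 1: diffusive growth
```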
Restoring ballistic transport
While so far we have restricted ourselves to the case of half filling, we now address the magnetization dynamics at incommensurate filling. The initial states are now created by applying the external fields in subspaces that already have a finite magnetization, i.e., S^z ≠ 0. Results for the variance at ∆ = 1.5 are displayed in Fig. 7(c). The z-component of the total spin is S^z = 0, 5, 10, 15, 20. We find that any nonzero S^z is sufficient to render the dynamics ballistic again, on the time scales accessible to our simulations. According to our data, the variance follows σ²_M(t) - σ²_M(t = 0) ∝ t^α, with α ≈ 2.05 ± 0.02.
05±0.02. This observation is in agreement with the infinite-temperature behavior of the Drude weight, 59,60 which in the XXZ chain is finite at any ∆ away from half filling. Note that ∆ > 1 and S z > 0 but below saturation is in the easy-plane phase of the XXZ chain, with gapless excitations. 66 Note that the prefactor in σ 2 M (t) − σ 2 M (t = 0) = D 2 (S z )t α depends on S z in a non-monotonous way: it is the largest at around S z = 10 and then decreases as saturation is reached.
VI. NONINTEGRABLE MODELS
We finally move on to the discussion of the dynamics in two nonintegrable models, the two-leg ladder and the frustrated chain, both limiting cases of Eq. (3). Numerical studies of the high-temperature limit, based on the Kubo formula, conclude that spin and thermal transport in the massive phases of these models (see Sec. II) are normal, with a vanishing Drude weight. 2,10,21,22 The conclusions on the massless phase of the frustrated chain are not unambiguous, 2,10,23,24 and it has been pointed out that the energy-current operator, to first order in the next-nearest-neighbor interaction αJ, is conserved. 23 One scenario is that the high-temperature Drude weight vanishes, 10,21 while it is still possible to find anomalous transport properties in the low-temperature regime, e.g., in the form of a peculiar low-frequency behavior of σ regular (ω). Exact diagonalization results show that the Drude weight is finite at zero temperature in the massless phase of the frustrated chain. 74,75 We here use the approach outlined in the previous sections to show that the zero-temperature dynamics of twoleg ladders and frustrated chains with a spin gap is of diffusive nature. To this end, we prepare initial states with Gaussian magnetic fields Eq. (11). We emphasize that, in the case of the spin ladder, both sites on a rung experience the same field. In these two cases, and similar to the discussion of initial states for ∆ > 1 in the XXZ chain (compare Fig. 2 in Sec. IV), the amplitude of the Gaussian field, B 0 , needs to be large enough to induce a substantial perturbation in the magnetization M i that will actually propagate through the system. We thus here probe the magnetization dynamics and transport at large external perturbations.
Starting with the example of a spin ladder with J_⊥ = J_∥, we display the magnetization profile M_i(t) = ⟨n_i(t)⟩ - 1/2 in Fig. 8 as a contour plot. The time-dependence of the corresponding variance is shown in Fig. 9(a) [solid line, squares], and we find a linear increase in σ²_M(t) for times t ≳ 17/J, clearly establishing the notion of diffusive dynamics in the ladder system.
A more involved picture emerges in the case of the frustrated chain. For this model we present results for α = 0.2 (circles) and α = 0.4 (stars) in Fig. 9(b). While on the time scales simulated, the variance for the α = 0.2 curve perfectly follows the form σ²_M(t) = const + D_2 t² (a least-square fit to this function is displayed by a thin solid line), in the case of α = 0.4, we observe that the data do not follow a power law σ²_M(t) = const + D_α t^α, which supports the notion of a time-dependent crossover from ballistic to diffusive dynamics. In fact, the numerical results yield a variance that clearly increases linearly in time for t ≳ 30/J. Note that the transition from the massless to the massive phase in this model is of the Berezinskii-Kosterlitz-Thouless type, 84,85,86 with an exponentially growing correlation length as the critical point α_crit is approached from α > α_crit. This renders it very difficult to see a sharp transition in the transport behavior using exact diagonalization or DMRG as α_crit is crossed.
We mention, though, that for other values of α > α crit ≈ 0.241 (results not shown here), on similar timescales, no diffusive behavior is seen. Moreover, the time-scale at which diffusive dynamics emerges seems to strongly depend on B 0 , i.e., on how far the initial state is perturbed over the actual ground state. The qualitative trend is that the larger B 0 , the faster diffusive transport is established. Unfortunately, the larger B 0 , the worse is the entanglement growth which renders tDMRG simulations more difficult (see, e.g., Ref. 87).
Keeping in mind these remarks, we are in a position to conjecture that in general, in massive phases of one-dimensional spin models, spin transport is diffusive. Combined with the existing results for the high-temperature limit, 2,10,22 our work suggests that this observation applies independently of temperature.
Note that a recent exact diagonalization study 88 has promoted a different behavior, namely evidence for ballistic spin transport at zero temperature in the frustrated spin chain at α = 1. This conclusion is based on the presence of certain oscillations (dubbed a bounc-
VII. SUMMARY AND DISCUSSION
In this work we studied the nonequilibrium magnetization dynamics in one-dimensional spin models at zero temperature using the adaptive time-dependent DMRG method on system sizes as large as L = 200 sites. We considered several models: the integrable spin-1/2 XXZ chain, the frustrated chain, and the two-leg spin ladder. Based on the analysis of the time-dependence of the spatial variance of the magnetization during the time evolution starting from initial states with an inhomogeneous magnetization profile, we conclude that in the critical regime of the XXZ chain, the magnetization dynamics is ballistic. In contrast to that, in the massive regime, our results indicate diffusive transport at half filling, while ballistic transport is restored away from half filling. A major aspect of our work is that we scanned the entire regime going from small to very strong perturbations over the ground state. This substantially extends previous studies of linear-response functions as we clearly enter into a regime with the system driven out of equilibrium. In the case of the massless regime of the XXZ chain, ballistic transport is seen for substantially perturbed initial states, while for the most extreme initial states, i.e., pure Fock states, we still find a power-law for the time-dependence of the variance, but, on the time scales simulated, with an exponent below two. 57 As for the nonintegrable models, the frustrated chain and the ladder, the numerical data clearly support the notion of diffusive dynamics in the ladder system. In the case of the frustrated chain, our data are consistent with a transition from ballistic to diffusive behavior as the quantum critical point α_crit ≈ 0.241 is crossed. In the limit of small perturbations, this result conforms to the general picture that massless phases support ballistic, and massive ones diffusive dynamics, at zero temperature and irrespective of integrability. 4 Overall, a difference between the low- and high-temperature behavior is then evident: numerical results for the high-temperature limit 2 consistently support the notion of vanishing Drude weights in nonintegrable models, and thus normal transport behavior, irrespective of what the ground state phases are. Conversely, exact diagonalization studies find a finite Drude weight in the gapless phase of the frustrated spin-1/2 chain at zero temperature. 74,75,90 In the low, or more extremely, zero temperature case, effective low-energy theories are expected to give a valid description, which typically predict diverging transport coefficients of clean spin systems (see, e.g., Ref. 91). As for the heat transport measurements on spin chains and ladders (see Refs. 31,32 for a survey), a dominant magnetic contribution is usually evident in the high-temperature regime, where the validity of effective low-energy theories for the description of transport is not obvious.
Finally, the approach of distinguishing ballistic from diffusive transport by analyzing the spatial variance of a density-like quantity could be instrumental in characterizing ultracold atomic gases in optical lattices as well. There, one typically realizes the expansion of particles into an empty lattice, and experimentally it is possible to measure the expanding cloud's radius. It would thus be very interesting to identify conditions for ballistic as compared to diffusive dynamics for model systems typically encountered in ultracold atomic gases such as the Bose-Hubbard model or a two-component Fermi gas in an optical lattice.
FIG. 1: (Color online) Density profiles n_i(t = 0) = M_i(t) + 1/2 in the initial state (time t = 0), induced by (a) a Gaussian magnetic field, Eq. (11) (B0 = J, σB = 5); (b) a box-shaped magnetic field, Eq. (12) (B0 = J, sB = 5); and (c) application of S^+_{L/2+1}, all at half filling n = 0.5. Thin solid lines in (a)-(c): ∆ = 0; thick solid lines in (a): ∆ = 0.5; thin and thick dashed lines in (a): averaged density ñ_i(t = 0) for ∆ = 0 and 0.5, respectively (see the text in Sec. IV for details on the averaging and Eq. (13)). Results at ∆ = 0 are obtained with ED, all others with DMRG.
FIG. 2: (Color online) Average deviation δñ from half filling [see Eq. (14)] in the initial state prepared by applying a Gaussian magnetic field [Eq. (11)] with σB = 5 as a function of B0 for ∆ = 0, 0.5, 1 and 1.6.
FIG. 3: (a), (b) Time-evolution of the density (magnetization) profile n_i(t) = M_i(t) + 1/2 for a Gaussian initial state with B0 = J, σB = 5 for (a): ∆ = 0 (ED data) and (b): ∆ = 0.5 (DMRG data, L = 200 sites).
FIG. 4: (Color online) Time-evolution of the density n_i(t) (or magnetization profile M_i(t) = n_i(t) − 1/2) for a Gaussian initial state with B0 = J, σB = 5 at ∆ = 0.5 (DMRG data, L = 200 sites): snapshots from Fig. 3 at times tJ = 0, 15, 25.
FIG. 5: (Color online) (a) Time-dependence of the variance σ²_M(t) − σ²_M(t = 0) for ∆ = 0 (solid lines) and ∆ = 0.5 (dashed lines), both at B0 = J/2 and σB = 5. Circles denote the variance σ̃²_M(t) − σ̃²_M(t = 0) of the averaged density ñ_i(t), Eq. (13). (b) Time-dependence of the variance σ²_M(t) − σ²_M(t = 0) for the evolution from a box-like initial state (solid line), after the application of S^+_{L/2} (dot-dashed line), and from a Gaussian state (dashed line) (all at ∆ = 0.5). The squares denote the perfect fit of all sets to σ²_M(t) − σ²_M(t = 0) = D2 t². (c) B0-dependence of the coefficient D2 of the time-dependent variance for ∆ = 0 (solid lines), ∆ = 0.5 (dashed lines), and ∆ = 1 (circles) at a fixed σB = 5 and L = 200. The dotted lines display D2 L/2 = 4B0J. Inset: D2(B0) in the limit of large B0 = 10^i J for ∆ = 0. The thin dashed line is the result for the expansion from a Fock state of width L/2 with n_i = 1 at the center of the chain.
Here v_g(∆) is the group velocity (see, e.g., Ref. 78), v_g = cos(ν). From our numerical data for L = 200 sites and at ∆ > 0.5, we obtain D2 L/(2A_peak) > v²_g(∆) due to finite-size effects, but qualitatively, D2 L/(2A_peak) increases with ∆ at a fixed B0 in the linear regime.

FIG. 6: (Color online) Finite-size scaling of D2 for ∆ = 0 and 0.5 in the large B0 regime (B0 = 2J, σB = 5).
FIG. 7: (a) Time dependence of the variance (solid lines). The parameters are B0 = 2J, σB = 5, L = 200. The thick dashed line shows the asymptotic behavior for ∆ = 1.5, i.e., σ²_M(t) = const + D1 t. (b) Variance for ∆ = 1.5 and several B0/J = 1, 1.5, 2 (L = 200, σB = 5). (c) Variance for ∆ = 1.5 at S^z = 0, 5, 10, 15, 20 (see the legend) and B0 = 2J, σB = 5: ballistic transport is restored away from half filling.
FIG. 8: Two-leg spin ladder with J = J⊥ [α = 1, δ = 1 in Eq. (3)]: Contour plot of n_i(t) = M_i(t) + 1/2. In the initial state, a Gaussian external field B_i with B0 = J and σB = 5 is applied [see Eq. (11)]. DMRG data, L = 200 sites.
FIG. 9: (Color online) Time dependence of the variance σ²_M(t) − σ²_M(t = 0). (a) Spin ladder, solid line with squares, compiled from the data of Fig. 8, B0 = 1.7J. (b) Frustrated chain with α = 0.2 (circles, B0 = J) and a frustrated chain with α = 0.4 (stars, B0 = 1.5J). In all cases, σB = 5. The dashed lines in (a) and (b) are a fit of const + D1 t to the variance of the spin ladder and the frustrated chain with α = 0.4 at long times, while the thin, solid line in (b) is a least-square fit of D2 t² to the result for the frustrated chain with α = 0.2. DMRG data, L = 200 sites.
Acknowledgments

We are grateful to W. Brenig, A.
* Corresponding author: [email protected]
1. X. Zotos and P. Prelovšek, in: Strong Interactions in Low Dimensions, chapter 11, Physics and Chemistry of Materials with Low-Dimensional Structures, Kluwer Academic Publishers (Dordrecht), 2004.
2. F. Heidrich-Meisner, A. Honecker, and W. Brenig, Eur. J. Phys. Special Topics 151, 135 (2007).
3. W. Kohn, Phys. Rev. 133, A171 (1964).
4. D. J. Scalapino, S. R. White, and S. Zhang, Phys. Rev. Lett. 68, 2830 (1992).
5. X. Zotos, J. Phys. Soc. Jpn. Suppl. 74, 173 (2005).
6. B. N. Narozhny, A. J. Millis, and N. Andrei, Phys. Rev. B 58, R2921 (1998).
7. K. Fabricius and B. M. McCoy, Phys. Rev. B 57, 8340 (1998).
8. X. Zotos, Phys. Rev. Lett. 82, 1764 (1999).
9. J. V. Alvarez and C. Gros, Phys. Rev. Lett. 88, 077203 (2002).
10. F. Heidrich-Meisner, A. Honecker, D. C. Cabra, and W. Brenig, Phys. Rev. B 68, 134436 (2003).
11. D. A. Rabson, B. N. Narozhny, and A. J. Millis, Phys. Rev. B 69, 054403 (2004).
12. J. Benz, T. Fukui, A. Klümper, and C. Scheeren, J. Phys. Soc. Jpn. Suppl. 74, 181 (2005).
13. D. Heidarian and S. Sorella, Phys. Rev. B 75, 241104(R) (2007).
14. S. Fujimoto and N. Kawakami, Phys. Rev. Lett. 90, 197202 (2003).
15. H. Castella, X. Zotos, and P. Prelovšek, Phys. Rev. Lett. 74, 972 (1995).
16. A. Rosch and N. Andrei, Phys. Rev. Lett. 85, 1092 (2000).
17. X. Zotos and P. Prelovšek, Phys. Rev. B 53, 983 (1996).
18. S. Mukerjee, V. Oganesyan, and D. Huse, Phys. Rev. B 73, 035113 (2006).
19. S. Mukerjee and B. S. Shastry, Phys. Rev. B 77, 245131 (2008).
20. F. Heidrich-Meisner, A. Honecker, D. C. Cabra, and W. Brenig, Phys. Rev. B 66, 140406(R) (2002).
21. F. Heidrich-Meisner, A. Honecker, D. C. Cabra, and W. Brenig, Phys. Rev. Lett. 92, 069703 (2004).
22. X. Zotos, Phys. Rev. Lett. 92, 067202 (2004).
23. P. Jung, R. W. Helmes, and A. Rosch, Phys. Rev. Lett. 96, 067202 (2006).
24. P. Jung and A. Rosch, Phys. Rev. B 76, 245108 (2007).
25. P. Prelovšek, S. ElShawish, X. Zotos, and M. Long, Phys. Rev. B 70, 205129 (2004).
26. G. Benenti, G. Casati, T. Prosen, and D. Rossini, Europhys. Lett. 85, 37001 (2009).
27. T. Antal, Z. Rácz, A. Rákos, and G. M. Schütz, Phys. Rev. E 59, 4912 (1999).
28. G. O. Berim, S. I. Berim, and G. G. Cabrera, Phys. Rev. B 66, 094401 (2002).
29. Y. Ogata, Phys. Rev. E 66, 066123 (2002).
30. V. Hunyadi, Z. Rácz, and L. Sasvari, Phys. Rev. E 69, 066103 (2004).
31. A. V. Sologubenko, T. Lorenz, H. R. Ott, and A. Freimuth, J. Low Temp. Phys. 147, 387 (2007).
32. C. Hess, Eur. Phys. J. Spec. Topics 151, 73 (2007).
33. A. V. Sologubenko, K. Gianno, H. R. Ott, U. Ammerahl, and A. Revcolevschi, Phys. Rev. Lett. 84, 2714 (2000).
34. C. Hess, C. Baumann, U. Ammerahl, B. Büchner, F. Heidrich-Meisner, W. Brenig, and A. Revcolevschi, Phys. Rev. B 64, 184305 (2001).
35. M. Otter, V. Krasnikov, D. Fishman, M. Pshenichnikov, R. Saint-Martin, A. Revcolevschi, and P. van Loodsrecht, J. Mag. Mag. Mat. 321, 796 (2009).
36. M. Takigawa, N. Motoyama, H. Eisaki, and S. Uchida, Phys. Rev. Lett. 76, 4612 (1996).
37. K. R. Thurber, A. W. Hunt, T. Imai, and F. C. Chou, Phys. Rev. Lett. 87, 247202 (2001).
38. J. Sirker, Phys. Rev. B 73, 224424 (2006).
39. F. L. Pratt, S. J. Blundell, T. Lancaster, C. Baines, and S. Takagi, Phys. Rev. Lett. 96, 247203 (2006).
40. C. Fort, L. Fallani, V. Guarrera, J. E. Lye, M. Modugno, D. S. Wiersma, and M. Inguscio, Phys. Rev. Lett. 95, 170410 (2005).
41. D. Clément, A. F. Varón, M. Hugbart, J. A. Retter, P. Bouyer, L. Sanchez-Palencia, D. M. Gangardt, G. V. Shlyapnikov, and A. Aspect, Phys. Rev. Lett. 95, 170409 (2005).
42. S. Trotzky, P. Cheinet, S. Fölling, M. Feld, U. Schnorrberger, A. M. Rey, A. Polkovnikov, E. A. Demler, M. D. Lukin, and I. Bloch, Science 319, 295 (2008).
43. T. Barthel, C. Kasztelan, I. P. McCulloch, and U. Schollwöck, Phys. Rev. A 79, 053627 (2009).
44. S. Grossjohann and W. Brenig, Phys. Rev. B 79, 094409 (2009).
45. S. R. White, Phys. Rev. Lett. 69, 2863 (1992).
46. S. R. White, Phys. Rev. B 48, 10345 (1993).
47. U. Schollwöck, Rev. Mod. Phys. 77, 259 (2005).
48. A. Daley, C. Kollath, U. Schollwöck, and G. Vidal, J. Stat. Mech.: Theory Exp., P04005 (2004).
49. S. R. White and A. E. Feiguin, Phys. Rev. Lett. 93, 076401 (2004).
50. G. Vidal, Phys. Rev. Lett. 93, 040502 (2004).
51. K. Saito, Europhys. Lett. 61, 34 (2003).
52. R. Steinigeweg, J. Gemmer, and M. Michel, Europhys. Lett. 75, 406 (2006).
53. C. Mejia-Monasterio and H. Wichterich, Eur. Phys. J. Spec. Topics 151, 113 (2007).
54. M. Michel, O. Hess, H. Wichterich, and J. Gemmer, Phys. Rev. B 77, 104303 (2008).
55. T. Prosen and M. Znidaric, J. Stat. Mech.: Theor. Exp., P02035 (2009).
56. T. D. Kühner, S. R. White, and H. Monien, Phys. Rev. B 61, 12474 (2000).
57. D. Gobert, C. Kollath, U. Schollwöck, and G. Schütz, Phys. Rev. E 71, 036102 (2005).
58. B. S. Shastry and B. Sutherland, Phys. Rev. Lett. 65, 243 (1990).
59. X. Zotos, F. Naef, and P. Prelovšek, Phys. Rev. B 55, 11029 (1997).
60. F. Heidrich-Meisner, A. Honecker, and W. Brenig, Phys. Rev. B 71, 184415 (2005).
61. K. Louis and C. Gros, Phys. Rev. B 67, 224410 (2003).
62. K. Sakai and A. Klümper, J. Phys. Soc. Jpn. Suppl. 74, 196 (2005).
63. C. Kollath, U. Schollwöck, J. von Delft, and W. Zwerger, Phys. Rev. A 71, 053606 (2005).
64. P. Schmitteckert, Phys. Rev. B 70, 121302(R) (2004).
65. G. D. Mahan, Many-Particle Physics, Plenum Press, New York, 1990.
66. H.-J. Mikeska and A. K. Kolezhuk, Lect. Notes Phys. 645, 1 (2004).
67. A. Klümper, Lect. Notes Phys. 645, 349 (2004).
68. E. Dagotto, Rep. Prog. Phys. 62, 1525 (1999).
69. E. Dagotto and T. M. Rice, Science 271, 618 (1996).
70. T. Park and J. Light, J. Chem. Phys. 85, 5870 (1986).
71. M. Hochbruck and C. Lubich, SIAM J. Numer. Anal. 34, 1911 (1997).
72. I. P. McCulloch, J. Stat. Mech.: Theory Exp., P10014 (2007).
73. D. J. Scalapino, S. R. White, and S. C. Zhang, Phys. Rev. B 47, 7995 (1993).
74. J. Bonča, J. P. Rodriguez, J. Ferrer, and K. S. Bedell, Phys. Rev. B 50, 3415 (1994).
75. S. Furukawa, M. Sato, Y. Saiga, and S. Onoda, J. Phys. Soc. Jpn. 77, 123712 (2008).
76. M. Rigol and B. S. Shastry, Phys. Rev. B 77, 161101(R) (2008).
77. J. des Cloizeaux and M. Gaudin, J. Math. Phys. 7, 1384 (1966).
78. D. C. Cabra and P. Pujol, Lect. Notes Phys. 645, 253 (2004).
79. M. Rigol and A. Muramatsu, Phys. Rev. Lett. 93, 230404 (2004).
80. M. Rigol and A. Muramatsu, Phys. Rev. Lett. 94, 240403 (2005).
81. K. Rodriguez, S. Manmana, M. Rigol, R. Noack, and A. Muramatsu, New J. Phys. 8, 169 (2006).
82. F. Heidrich-Meisner, M. Rigol, A. Muramatsu, A. E. Feiguin, and E. Dagotto, Phys. Rev. A 78, 013620 (2008).
83. N. Laflorencie, S. Capponi, and E. S. Sørensen, Eur. Phys. J. B 24, 77 (2001).
84. F. D. M. Haldane, Phys. Rev. B 25, 4925 (1982).
85. K. Okamoto and N. Nomura, Phys. Lett. 169A, 433 (1992).
86. S. R. White and I. Affleck, Phys. Rev. B 54, 9862 (1996).
87. G. D. Chiara, S. Montangero, P. Calabrese, and R. Fazio, J. Stat. Mech.: Theory Exp., P03001 (2006).
88. L. F. Santos, Phys. Rev. E 78, 031125 (2008).
89. P. Barmettler, M. Punk, V. Gritsev, E. Demler, and E. Altman, Phys. Rev. Lett. 102, 130603 (2009).
90. V. R. Vieira, N. Guihéry, J. P. Rodriguez, and P. D. Sacramento, Phys. Rev. B 63, 224417 (2001).
91. K. Saito, Phys. Rev. B 67, 064410 (2003).
|
[] |
[
"Primitive Recursive Presentations of Transducers and their Products",
"Primitive Recursive Presentations of Transducers and their Products"
] |
[
"Victor Yodaiken [email protected] \nFSMLabs Inc. 2718 Creeks Edge Parkway Austin Texas\n78733USA\n"
] |
[
"FSMLabs Inc. 2718 Creeks Edge Parkway Austin Texas\n78733USA"
] |
[] |
Methods for specifying Moore type state machines (transducers) abstractly via primitive recursive string functions are discussed. The method is mostly of interest as a concise and convenient way of working with the complex state systems found in computer programming and engineering, but a short section indicates connections to algebraic automata theory and the theorem of Krohn and Rhodes. The techniques are shown to allow concise definition of system architectures and the compositional construction of parallel and concurrent systems.
| null |
[
"https://arxiv.org/pdf/0907.4169v2.pdf"
] | 14,608,383 |
0907.4169
|
021322c7712f0c5a47c0be2cd5783e00258524b8
|
Primitive Recursive Presentations of Transducers and their Products
10 Jan 2010
Victor Yodaiken [email protected]
FSMLabs Inc. 2718 Creeks Edge Parkway Austin Texas
78733USA
Primitive Recursive Presentations of Transducers and their Products
10 Jan 2010transducerMoore machineprimitive recursioncomposi- tionparallel
Methods for specifying Moore type state machines (transducers) abstractly via primitive recursive string functions are discussed. The method is mostly of interest as a concise and convenient way of working with the complex state systems found in computer programming and engineering, but a short section indicates connections to algebraic automata theory and the theorem of Krohn and Rhodes. The techniques are shown to allow concise definition of system architectures and the compositional construction of parallel and concurrent systems.
Introduction
The engineering disciplines of programming and computer system design have been handicapped by the practical limitations of mathematical techniques for specifying complex discrete state systems. While finite automata are the natural basis for such efforts, the traditional state-set presentations of automata are convenient only for the simplest systems and for reasoning about classes of systems, but become awkward when state sets are large, when behavior is only partially specified, and for compositional systems. Furthermore, it would be nice to be able to parameterize automata so that we can treat, for example, an 8-bit memory as differing from a 64-bit memory in only one or a few parameters. These problems can all be addressed by the recursive function presentation of automata that is introduced here.
General automata have long been understood to be a class of functions from finite strings of input symbols to finite strings of output symbols [1] but for specifying computer systems it is more useful to consider functions from finite strings of inputs to individual outputs. The intuition is that each string describes a path from the initial state to some "current" state and the value of the function is the output of the system in the "current" state. If A is an alphabet of input events and X is a set of possible outputs, let A * be the set of finite strings over A including the empty string Λ and then a function f : A * → X defines a relationship between input sequences and outputs. These functions can be shown to be strongly equivalent to (not necessarily finite) Moore type automata [5] while abstracting out details that are not interesting for our purposes here. If a is an input and w is a string, wa is the result of appending a to w and by defining f (Λ) = x 0 and f (wa) = h(a, f (w)), we can completely specify the operation of f .
Correspondence between a transducer M and a string function f .
Input: w ⇒ Machine: M ⇒ Output: x,   i.e., f(w) = x
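As an illustration of this recursion scheme, here is a minimal Python sketch (the helper name make_string_function and the mod-4 counter are illustrative choices, not taken from the paper): the string function is realized as a fold of the step function h over the input word, mirroring f(Λ) = x0 and f(wa) = h(a, f(w)).

```python
def make_string_function(x0, h):
    """Return f with f(Λ) = x0 and f(wa) = h(a, f(w)).
    Input words are any Python sequences; the empty sequence plays Λ."""
    def f(w):
        out = x0
        for a in w:          # fold h over the word, left to right
            out = h(a, out)
        return out
    return f

# Example: a counter mod 4 driven by 'tick' events.
T4 = make_string_function(0, lambda a, x: (x + 1) % 4 if a == "tick" else x)
assert T4([]) == 0 and T4(["tick"] * 6) == 2
```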
It turns out that a type of simultaneous recursion can be used to specify automata products that model composition and parallel (and concurrent) state change. Suppose that f_1, ..., f_n are previously defined string functions, f_i : A_i* → X_i, and we wish to combine these into a system where inputs from some alphabet A drive the components forward. At each step an input a to the composite system will be used to generate an input sequence z_i for each component f_i. The input sequence for the component is a function of both a and the feedback, the outputs of the components. The composition builds a new function f from f_1, ..., f_n plus a communication map g and an output map h. Let f(w) = h(f_1(u_1), ..., f_n(u_n)) where the u_i are themselves primitive recursive functions of f and w. I will write u_i when w is clear from context and use functional form u_i(w) otherwise. We always require that u_i(Λ) = Λ, so that in the initial state of the composite system every component is in its own initial state. Let w • z be the string obtained by concatenating w and z. The communication map is used as follows: u_i(wa) = u_i(w) • g(i, a, f(w)). The idea is that appending a to w causes the string g(i, a, f(w)) to be concatenated to u_i.
Outline. In what follows, I'll give two examples of parallel composition and then make the correspondence between string functions and transducers precise and prove the correspondence between the simultaneous recursion scheme given above to a "general product" of automata. The concluding section looks at some implications for the study of automata structure and algebraic automata theory. Companion technical reports describe practical use.
The two "factors" case is illustrative.
Input: w → g; g produces Input: u_1 → M_1 → Output: x_1 and Input: u_2 → M_2 → Output: x_2; the outputs feed h → Output: x and are fed back to g.

F(w) = h(f_1(u_1), f_2(u_2)),   u_i(Λ) = Λ,   u_i(wa) = u_i(w) • g(i, a, F(w))
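A direct, if inefficient, way to execute this simultaneous recursion is to keep the component input words u_i explicitly and extend them with g at each step. The sketch below is a reading of the definitions rather than code from the paper; it assumes the components are given as string functions and that g returns a possibly empty list of events per component. It recomputes each f_i from scratch on every step, which is quadratic in |w| but keeps the correspondence with the definitions transparent.

```python
def general_product(factors, g, h):
    """f(w) = h(f_1(u_1(w)), ..., f_n(u_n(w))) with
       u_i(Λ) = Λ  and  u_i(wa) = u_i(w) . g(i, a, f(w)).
    `factors` are string functions; g(i, a, x) returns a list of events."""
    def f(w):
        us = [[] for _ in factors]                                # u_i(Λ) = Λ
        for a in w:
            x = h(*[fi(ui) for fi, ui in zip(factors, us)])       # feedback f(w)
            for i in range(len(factors)):
                us[i] = us[i] + list(g(i, a, x))
        return h(*[fi(ui) for fi, ui in zip(factors, us)])
    return f
```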
Example: Stack

By way of illustration consider a parallel implementation of a stack.
Stack_n(w) = (S(u_1), . . . , S(u_n))    (1)
where each S(za) = a, so that the n factors are simple storage cells. Let's have a special value so we can spot empty cells, S(Λ) = EMPTY, and have some a = EMPTY in the storage cell alphabet. The alphabet of the stack is
{PUSH[v] : v ∈ A_storage} and POP. Then define the u_i:

u_i(Λ) = Λ    (2)

u_i(wa) = u_i(w) • z, where    (3)
    z = v                if i = 1 and a = PUSH[v]
    z = EMPTY            if i = n and a = POP
    z = S(u_{i−1}(w))    if i > 1 and a = PUSH[v]
    z = S(u_{i+1}(w))    if i < n and a = POP
Then define Top(w) = S(u_1) and

Empty(w) = 1 if S(u_1) = EMPTY and 0 otherwise,   Full(w) = 0 if S(u_n) = EMPTY and 1 otherwise.
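Using the general_product sketch given earlier, the stack equations (1)-(3) can be run directly. The encoding of PUSH[v] and POP as tuples and the 0-based component indices are illustrative choices; the shift rules are those of Eq. (3).

```python
EMPTY = None                     # stand-in for the EMPTY marker

def S(z):
    """Storage cell: output the last symbol received, EMPTY initially."""
    return z[-1] if z else EMPTY

def make_stack(n):
    def g(i, a, x):              # x is the current tuple of cell outputs
        kind, v = a              # a is ('PUSH', v) or ('POP', None)
        if kind == 'PUSH':
            return [v] if i == 0 else [x[i - 1]]        # shift contents down
        return [EMPTY] if i == n - 1 else [x[i + 1]]    # POP: shift back up
    return general_product([S] * n, g, lambda *cells: cells)

stack = make_stack(3)
w = [('PUSH', 'a'), ('PUSH', 'b'), ('POP', None)]
assert stack(w)[0] == 'a'        # Top(w) = S(u_1)
```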
Example: Network

A computer on a network might, from the outside, appear to have an alphabet consisting of RECV[m] and TRANSMIT[m], for m in a set of possible messages, and TICK to indicate passage of time. Say D is a networked computer if D(w) ∈ {(m, c) : m ∈ Messages ∪ {NULL}, c ∈ {ready, busy}}, where D(w) = (x, y) tells us that D is trying to send message x (or not sending any message if x = NULL) and that D is or is not ready to accept a message. For simplicity assume a broadcast network and then define
N(w) = (D_1(u_1), . . . , D_n(u_n), R(v))
where each D i is a network node and R is an arbiter we can define to pick which, if any, node gets to send a message next. Each D i may be distinct as long as it satisfies the specifications of output values.
R(z) ∈ {1 . . . n}
The alphabet of N can just consist of the single symbol TICK.
Let u_i(wa) = u_i(w) • RECV[m] • TICK if R(v(w)) = j and D_j(u_j(w)) = (m, c) and D_i(u_i(w)) = (k, ready); otherwise, just append TICK to u_i.
If D_j is itself a product, say D_j(w) = (OS(r_os), APP(r_app)), then if w is the string parameter to N, we can look inside at the value of OS(r_os(u_j(w))).
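The broadcast network can be sketched in the same style. The version below is a simplification and not the paper's construction: the arbiter is taken to be a function of the current node outputs rather than of its own history v, nodes are plain string functions returning (message-or-None, 'ready'/'busy'), and each global TICK either delivers the arbitrated message to every ready node or simply advances time.

```python
def make_network(nodes, arbiter):
    """N(w) ~ (D_1(u_1), ..., D_n(u_n), R(...)) over a broadcast medium."""
    def step_outputs(us):
        return [D(u) for D, u in zip(nodes, us)]

    def N(w):
        us = [[] for _ in nodes]
        for _ in w:                              # the alphabet of N is just TICK
            outs = step_outputs(us)
            msg, _ = outs[arbiter(outs)]         # message of the chosen node
            us = [u + ([('RECV', msg), ('TICK',)]
                       if msg is not None and status == 'ready'
                       else [('TICK',)])
                  for u, (_, status) in zip(us, outs)]
        outs = step_outputs(us)
        return tuple(outs) + (arbiter(outs),)
    return N

def silent_node(u):
    """A trivial node: never transmits, always ready to receive."""
    return (None, 'ready')
```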
Basics
A Moore machine or transducer is usually given by a 6-tuple
M = (A, X, S, start, δ, γ)
where A is the alphabet, X is a set of outputs, S is a set of states, start ∈ S is the initial state, δ : S × A → S is the transition function and γ : S → X is the output function.
Given M , use primitive recursion on sequences to extend the transition function δ to A * by:
δ*(s, Λ) = s   and   δ*(s, wa) = δ(δ*(s, w), a).    (4)
So γ(δ * (start , w)) is the output of M in the state reached by following w from M 's initial state. Call f M (w) = γ(δ * (start, w)) the representing function of M .
If f M is the representing function of M , then f ′ (w) = g(f (w)) represents M ′ obtained by replacing γ with γ ′ (s) = g(γ(s)). The state set of M and transition map remain unchanged.
The transformation from string function to transducer is also simple. Given f : A* → X, define f_w(u) = f(w • u). Let S_f = {f_w : w ∈ A*}. Say f is finite if and only if S_f is finite. Define δ_f(f_w, a) = f_{wa} and define γ(f_w) = f_w(Λ) = f(w).
Then with start f = f Λ we have a Moore machine
M(f ) = {S f , start f , δ f , γ f }
and, by construction f is the representing function for M(f ).
A similar construction can be used to produce a monoid from a string function as discussed below in section 3.1.
Any M_2 that has f as a representing function can differ from M_1 = M(f) only in names of states and by including unreachable and/or duplicative states. That is, there may be some w for which the state δ*_1(start_1, w) differs from δ*_2(start_2, w), but since both states determine the same residual function f_w, it must be the case that the states are identical in output and in the output of any states reachable from them. If we are using Moore machines to represent the behavior of digital systems, these differences are not particularly interesting and we can treat M(f) as the Moore machine represented by f.
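The state set S_f = {f_w} can be explored mechanically. Exact equality of residual functions is not decidable from black-box access, so the sketch below (an illustration, not part of the paper) merges two prefixes when their residuals agree on all suffixes up to a chosen depth; for small machines whose states are distinguished by short suffixes this recovers M(f), and in general it only approximates it.

```python
from itertools import product

def explore_states(f, alphabet, depth=3, max_words=500):
    """Approximate S_f by signatures: f_w restricted to suffixes of length <= depth."""
    suffixes = [s for d in range(depth + 1) for s in product(alphabet, repeat=d)]
    signature = lambda w: tuple(f(list(w) + list(s)) for s in suffixes)

    states, frontier, visited = {}, [()], 0
    while frontier and visited < max_words:
        w = frontier.pop(0)                    # breadth-first over prefixes
        visited += 1
        sig = signature(w)
        if sig in states:
            continue
        states[sig] = w                        # representative prefix of a state
        frontier.extend(w + (a,) for a in alphabet)
    return states

# With T4 from the earlier sketch: len(explore_states(T4, ["tick"])) == 4.
```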
While finite string functions are the only ones that can directly model digital computer devices or processes 1 , infinite ones are often useful in describing system properties. For example, we may want L(Λ) = 0 and L(wa) = L(w) + 1 and then seek to prove for some P that there is a t 0 so that whenever L(w • z) ≥ L(w) + t 0 there is a prefix v of z so that P (w • v) = 0. In this case, L is an ideal measuring device, not necessarily something we could actually build.
Products
Suppose we have a collection of (not necessarily distinct) Moore machines M_i = (A_i, X_i, S_i, start_i, δ_i, γ_i) for (0 < i ≤ n) that are to be connected to construct a new machine with alphabet A using a connection map g. The intuition is that when an input a is applied to the system, the connection map computes a string of inputs for M_i from the input a and the outputs of the factors (feedback). The general product here is described by Gecseg [2]. I have made the connection maps generate strings instead of single events so that the factors can run at non-uniform rates. If g(i, a, x) = Λ, then M_i skips a turn.
Definition 21 General product of automata. Given M_i = (A_i, X_i, S_i, start_i, δ_i, γ_i) and h and g define the Moore machine:

M = A^n_{i=1}[M_i, g, h] = (A, X, S, start, δ, γ)

- S = {(s_1, . . . , s_n) : s_i ∈ S_i} and start = (start_1, . . . , start_n)
- X = {h(x_1, . . . , x_n) : x_i ∈ X_i} and γ((s_1, . . . , s_n)) = h(γ_1(s_1), . . . , γ_n(s_n)).
- δ((s_1, . . . , s_n), a) = (δ*_1(s_1, g(1, a, γ(s))), . . . , δ*_n(s_n, g(n, a, γ(s)))).
One thing to note is that the general product, in fact any product of automata, is likely to produce a state set that contains unreachable states. The string function created by simultaneous recursion represents the minimized state machine as well. The possible "blow up" of unreachable and duplicate states is not a problem for composite recursion although it vastly complicates work with state-set representations.
Theorem 1 If each f_i represents M_i and f(w) = h(f_1(u_1), . . . , f_n(u_n)) and u_i(Λ) = Λ and u_i(wa) = u_i(w) • g(i, a, f(w)) and M = A^n_{i=1}[M_i, g, h], then f represents M.

Proof: Each f_i represents M_i, so

f_i(z) = γ_i(δ*_i(start_i, z))    (5)
But γ(δ * (start , w)) = h(γ(s)) = h(. . . γ i (δ * i (start i , w i )) . . .) for some w i . All we have to show is that
δ * (start , w) = (. . . δ * i (start i , u i (w)) . . .)(6)
and then we have
γ(δ * (start , w)) = h(. . . γ i (δ * i (start i , u i (w))) . . .).
It follows immediately that
γ(δ * (start , w)) = h(. . . f i (u i (w))) . . .) = f (w)
Equation 6 can be proved by induction on w. Since u i (Λ) = Λ the base case is obvious. Now suppose that equation 6 is correct for w and consider wa. Let δ(start , w) = s = (s 1 . . . , s n ) and let u i (w) = z i . Then, by the induction hypothesis s i = δ * i (start i , z i ), and, by the argument above γ(δ * (start , w)) = f (w). So:
δ*(start, wa) = δ(δ*(start, w), a)    (7)
             = δ(s, a)    (8)
             = (. . . , δ*_i(s_i, g(i, a, γ(s))), . . .)    (9)
             = (. . . , δ*_i(δ*_i(start_i, u_i(w)), g(i, a, f(w))), . . .)    (10)
             = (. . . , δ*_i(start_i, u_i(w) • g(i, a, f(w))), . . .)    (11)
             = (. . . , δ*_i(start_i, u_i(wa)), . . .)    (12)
proving 6 for wa. It follows directly that if M is represented by f , and f is defined by simultaneous recursion, then f can also be defined by single recursion -although such a definition may be impractical because of the large state set size.
More on Representation and Some Algebra
A number of results follow from theorem 1.
Theorem 2
For M and f constructed as products as above in theorem 1.
- There are an infinite number of distinct products M′ = A^k_{i=1}[N_i, g_i] so that f represents M′ as well as M.
- If all of the M_i are finite state, M is finite state (by construction).
- If all of the f_i are finite state, f is finite state (since it represents a finite state Moore machine).
- If f is finite state then there is some M′ = A^k_{i=1}[Z_i, g, h] where f represents M′ and each Z_i is a 2 state Moore machine. In fact k = ⌈log_2(|S_M′|)⌉. This is simple binary encoding.
Monoids
If f : A * → X then say w ≡ f u iff f (z • w • y) = f (z • u • y) for all z, y ∈ A * . Let [w] /f = {u ∈ A * , u ≡ w}. Then define [w] /f · [z] /f = [w • z] /f . The set of these classes with · comprises a monoid where [w] /f · [Λ] /f = [w] /f
for the required identity. Say that this monoid is the monoid determined by f . Recall the construction of states from string functions above and the set S f consisting of all the functions f w so that f w (z) = f (w • z). Note that if v, z ∈ [w] /f it must be the case that for any string r f r•z = f r•v . So it is possible to associate each [w] /f with a map from S f → S f where f r → f r•z for any z in [w] /f . As a result, whenever S f is finite, there are only a finite number of maps S f → S f so the monoid determined by f must also be finite.
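When S_f is finite, the monoid determined by f can be generated mechanically as the closure, under composition, of the maps on S_f induced by single input symbols. The sketch below is illustrative rather than taken from the paper; it represents a state map as a tuple of indices into a fixed list of (hashable) states and assumes the one-step transition function δ and the finite state list are already known, for instance from the exploration sketch above.

```python
def transition_monoid(states, alphabet, delta):
    """All maps S -> S induced by input words, given one-step transitions."""
    index = {s: i for i, s in enumerate(states)}
    generators = [tuple(index[delta(s, a)] for s in states) for a in alphabet]

    identity = tuple(range(len(states)))       # the class of Λ
    elements, frontier = {identity}, [identity]
    while frontier:
        m = frontier.pop()
        for gmap in generators:
            composed = tuple(gmap[i] for i in m)   # act by m, then by the generator
            if composed not in elements:
                elements.add(composed)
                frontier.append(composed)
    return elements
```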
Suppose f (w) = h(f 1 (u 1 ) . . . , f n (u n )) so that u i (wa) = u i (w) • z i where z i only depends on the feedback from factors indexed by j < i. That is, there are r 1 . . . r n so that z 1 = r 1 (a) and z i+1 = r i+1 (a, f (w, 1) . . . , f (w, i)). In this case f is constructed in cascade where information flows only in one direction and the results of Krohn-Rhodes theory [4,3] will apply.
If f is finite and represents a state machine with k states and each of the f_i are finite with k_i states in the represented state machine, then if Σ_{j=1}^{n} k_j < k the factorization is an implementation of f by essentially simpler string functions, and it corresponds to a factorization of the monoid of f into simpler monoids.
Let T_n(Λ) = 0 and T_n(wa) = T_n(w) + 1 mod n. Now define G_n as a cascade of T_2's as follows:
G_n(w) = (T_2(u_1), . . . , T_2(u_n))    (13)

u_1(wa) = u_1(w) • a = wa    (14)

u_{i+1}(wa) = u_{i+1}(w) • Λ if ∃ j < i with T_2(u_j(w)) = 0, and u_{i+1}(wa) = u_{i+1}(w) • a otherwise    (15)
This is called a "ripple carry adder" in digital circuit engineering: each counter increments only if the "carry" is propagating through all lower order counters. Put H_n(w) = Σ_{i=1}^{n} T_2(u_i) × 2^{i−1} where the u_i are as defined for G_n. Then H_n = T_{2^n} and you cannot make a G_n which counts mod any number other than 2^n. Otherwise, the underlying monoid of T_k has a simple group factor (a prime cyclic group) and those cannot be factored into smaller elements without some feedback.
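A direct execution of the ripple-carry cascade makes the claim H_n = T_{2^n} easy to check on small n. The sketch below is illustrative; following the prose description above, it lets counter i+1 advance exactly when every lower-order counter currently reads 1 (the displayed Eq. (15) writes the complementary skip condition with j < i, so this carry condition is an interpretation).

```python
def make_H(n):
    """H_n(w) = sum_i T_2(u_i(w)) * 2^(i-1) for the cascade G_n."""
    T2 = lambda u: len(u) % 2                  # a mod-2 counter on its own word

    def H(w):
        us = [[] for _ in range(n)]
        for a in w:
            outs = [T2(u) for u in us]
            us[0] = us[0] + [a]                              # u_1 always advances
            for i in range(1, n):
                if all(outs[j] == 1 for j in range(i)):      # carry from below
                    us[i] = us[i] + [a]
        return sum(T2(u) << i for i, u in enumerate(us))
    return H

H3 = make_H(3)
assert all(H3(["tick"] * k) == k % 8 for k in range(32))     # behaves like T_8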
While the cascade decompositions may simplify the interconnect in one way, they do not necessarily indicate the most efficient or interesting decomposition in practice. Cascades are good designs for "pipelined" execution but may be slow if we have to wait for the data to propagate to the terminal element. And group qualities in data structures can correspond to "undo" properties. For example, consider a circular buffer -like those commonly used for UNIX type fifos/pipes. The idea is that "write" operations push data into the pipe and "read" operations remove data in order of the "writes". The memory used to hold the data is allocated in a cycle. One way to implement such a buffer is to decompose it into an array of k memory locations and a mod k counter. A write operation causes an increment of the counter and a store of data in the appropriate memory location. The increment has an inverse, the write does not. But the result is that a write can be "forgotten". Perhaps factoring off group-like components will reveal other possibilities for this type of partial inverse.
⋆ This paper replaces multiple earlier rough drafts.
There is confusion on this subject for reasons I cannot fathom, but processes executing on real computers are not Turing machines because real computers do not have infinite tapes, and the possibility of removable tapes doesn't make any difference.
[1] Michael A. Arbib. Theories of Abstract Automata. Prentice-Hall, 1969.
[2] Ferenc Gecseg. Products of Automata. Monographs in Theoretical Computer Science. Springer Verlag, 1986.
[3] A. Ginzburg. Algebraic Theory of Automata. Academic Press, 1968.
[4] W.M.L. Holcombe. Algebraic Automata Theory. Cambridge University Press, 1983.
[5] E.F. Moore, editor. Sequential Machines: Selected Papers. Addison-Wesley, Reading MA, 1964.
|
[] |